AI Glossary
AI conversations are full of terms that sound intimidating but are simpler than they seem. This glossary breaks down every concept in plain English.
Each entry gives a one-line plain-English definition along with the other names (aka) the term commonly goes by.
33 terms
A
Agent
aka: AI Agent, Autonomous Agent
An AI system that can use tools, make decisions, and take actions to complete tasks autonomously rather than just answering questions.
AI (Artificial Intelligence)
aka: Artificial Intelligence, AI
Making machines perform tasks that typically require human intelligence—like understanding language, recognizing patterns, or making decisions.
API (Application Programming Interface)
aka: REST API, API Endpoint
A way for different software programs to talk to each other—like a menu of requests you can make to get AI to do something.
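As a concrete illustration, here is a minimal sketch of what a request body to a chat-style AI API might look like. The field names (`model`, `messages`, `role`, `content`) follow a common convention, but they are an assumption here; every provider defines its own schema, so check the actual API documentation.

```python
import json

def build_chat_request(model: str, user_message: str) -> str:
    """Build a JSON request body for a hypothetical chat-style LLM API."""
    payload = {
        "model": model,
        "messages": [
            {"role": "user", "content": user_message},
        ],
    }
    return json.dumps(payload)

# The string below is what you would send as the HTTP request body.
request_body = build_chat_request("example-model", "Summarise this article.")
```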
C
Chain-of-Thought
aka: CoT, Chain-of-Thought Prompting, Step-by-Step Reasoning
A prompting technique that asks AI to show its reasoning step-by-step, improving accuracy on complex tasks like maths, logic, and multi-step problem-solving.
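In practice, the technique can be as simple as appending a reasoning cue to the question. "Let's think step by step" is one well-known trigger phrase; the exact wording varies, and this sketch only shows how the two prompts differ.

```python
# The same question, phrased two ways. The second nudges the model
# to show its reasoning before answering, which often improves
# accuracy on maths and logic problems.
question = "A shop sells pens at 3 for $2. How much do 12 pens cost?"

plain_prompt = question
cot_prompt = f"{question}\nLet's think step by step."
```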
Constitutional AI
aka: CAI, Constitutional Training, Self-Critique
A safety technique where an AI is trained using a set of principles (a 'constitution') to critique and revise its own outputs, making them more helpful, honest, and harmless without human feedback on every response.
Context Engineering
aka: Context Design, Context Architecture
The discipline of designing everything an AI model sees — system prompts, retrieved documents, tool definitions, conversation history, and examples — to produce reliable, high-quality outputs.
Context Window
aka: Context, Context Length, Window Size
The maximum amount of text an AI model can process at once—including both what you send and what it generates. Once the window fills up, the AI loses access to earlier parts of the conversation.
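Applications often trim old conversation turns to stay inside the window. A minimal sketch, assuming one token per word as a rough stand-in (real systems count tokens with the model's own tokenizer):

```python
def trim_history(messages: list[str], max_tokens: int) -> list[str]:
    """Keep the most recent messages that fit within a token budget."""
    kept: list[str] = []
    total = 0
    # Walk backwards so the most recent messages survive.
    for msg in reversed(messages):
        cost = len(msg.split())  # crude proxy: one token per word
        if total + cost > max_tokens:
            break
        kept.append(msg)
        total += cost
    return list(reversed(kept))

history = ["first message here", "second message", "third and final message"]
trimmed = trim_history(history, max_tokens=6)  # oldest message gets dropped
```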
E
Embedding
aka: Vector, Vector Representation, Embeddings
A list of numbers that represents the meaning of text, images, or other data. Similar meanings produce similar numbers, so computers can measure how 'close' two concepts are.
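The "closeness" of two embeddings is usually measured with cosine similarity. The 3-dimensional vectors below are made up for illustration; real embeddings have hundreds or thousands of dimensions, but the idea is the same: similar meanings point in similar directions.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy embeddings: "cat" and "kitten" point roughly the same way,
# "car" points elsewhere.
cat = [1.0, 0.9, 0.1]
kitten = [0.9, 1.0, 0.2]
car = [0.1, 0.2, 1.0]

cat_kitten = cosine_similarity(cat, kitten)
cat_car = cosine_similarity(cat, car)
```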
Evaluation (Evals)
aka: Evals, Model Evaluation, Testing
Systematically testing an AI system to measure how well it performs on specific tasks, criteria, or safety requirements.
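At its core an eval is a loop: run the system on test cases and score the outputs. This sketch uses exact-match grading and a stand-in function instead of a real model; production evals use richer graders (rubrics, LLM judges), but the loop looks the same.

```python
def toy_system(question: str) -> str:
    """Stand-in for a call to a real AI model."""
    canned = {"2+2": "4", "capital of France": "Paris"}
    return canned.get(question, "unknown")

test_cases = [
    ("2+2", "4"),
    ("capital of France", "Paris"),
    ("3*3", "9"),  # toy_system will fail this one
]

def run_eval(system, cases) -> float:
    """Return the fraction of cases where the system's answer matches exactly."""
    passed = sum(1 for question, expected in cases if system(question) == expected)
    return passed / len(cases)

accuracy = run_eval(toy_system, test_cases)
```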
F
Few-Shot Learning
aka: Few-Shot Prompting, In-Context Learning
Teaching an AI model by including a few examples in your prompt, without any formal training—the model learns the pattern from the examples you show it.
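Concretely, a few-shot prompt interleaves example inputs and outputs, then leaves the last output blank for the model to fill in. The `Review:`/`Sentiment:` labels below are illustrative, not a fixed format.

```python
# Two labelled examples teach the pattern; the final line asks the
# model to continue it.
examples = [
    ("I loved this film!", "positive"),
    ("Total waste of time.", "negative"),
]

def few_shot_prompt(examples: list[tuple[str, str]], new_input: str) -> str:
    """Assemble examples plus a new input into one prompt string."""
    blocks = [f"Review: {text}\nSentiment: {label}" for text, label in examples]
    blocks.append(f"Review: {new_input}\nSentiment:")
    return "\n\n".join(blocks)

prompt = few_shot_prompt(examples, "Best purchase I've made all year.")
```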
Fine-Tuning
aka: Model Fine-Tuning, Transfer Learning
Taking a pre-trained AI model and training it further on your specific data to make it better at your particular task or adopt a specific style.
G
Grounding
aka: Grounded Generation, Factual Grounding
Connecting AI outputs to verified sources or real data to reduce hallucinations and ensure responses are factually accurate and verifiable.
Guardrails
aka: Safety Guardrails, AI Guardrails, Policy Guardrails
Rules and filters that prevent AI from generating harmful, biased, or inappropriate content. They act as safety boundaries that keep AI systems operating within acceptable limits.
L
LangChain
aka: LangChain.js, LangChain Python
An open-source framework for building applications powered by large language models, providing tools for chaining prompts, managing memory, connecting to external tools, and creating AI agents.
Latency
aka: Response Time, Inference Time
The time delay between sending a request to an AI model and receiving the first part of its response. Lower latency means faster replies.
LlamaIndex
aka: LlamaIndex.ai, GPT Index
An open-source framework designed for building LLM applications that connect to your own data, with particular strength in retrieval-augmented generation (RAG) systems and data indexing.
LLM (Large Language Model)
aka: Large Language Model, Foundation Model
A type of AI trained on massive amounts of text data to understand, generate, and reason about human language. LLMs power chatbots, writing tools, coding assistants, and many other applications.
M
Machine Learning (ML)
aka: ML, Statistical Learning
A branch of artificial intelligence where computers learn patterns from data and improve at tasks through experience, rather than following explicitly programmed rules.
Model
aka: AI Model, ML Model
The trained AI system that contains all the patterns and knowledge learned from data. It's the end product of training—the 'brain' that takes inputs and produces predictions, decisions, or generated content.
P
Parameters
aka: Model Parameters, Weights
The internal numerical values within an AI model that are adjusted during training to capture patterns in data. More parameters generally mean a more capable model, but also higher costs and slower inference.
Prompt
aka: Prompting, Input, Query
The text instruction you give to an AI model to get a response. The quality and specificity of your prompt directly determines the quality of the AI's output.
Prompt Injection
aka: Prompt Attack, Jailbreaking
A security vulnerability where malicious users craft inputs designed to override an AI system's instructions, bypass safety filters, or extract hidden information from the system prompt.
R
RAG (Retrieval-Augmented Generation)
aka: Retrieval-Augmented Generation, RAG
A technique where AI searches your documents for relevant information first, then uses what it finds to generate accurate, grounded answers.
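The retrieve-then-generate flow can be sketched in a few lines. Real systems retrieve with embeddings and a vector database; word overlap stands in here so the example runs on its own, and the document texts are invented.

```python
documents = [
    "Our refund policy allows returns within 30 days of purchase.",
    "The office is open Monday to Friday, 9am to 5pm.",
    "Shipping is free on orders over $50.",
]

def retrieve(query: str, docs: list[str]) -> str:
    """Pick the document sharing the most words with the query (toy retrieval)."""
    query_words = set(query.lower().split())
    return max(docs, key=lambda d: len(query_words & set(d.lower().split())))

def build_rag_prompt(query: str) -> str:
    """Step 1: retrieve relevant text. Step 2: ground the prompt in it."""
    context = retrieve(query, documents)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_rag_prompt("refund policy for returning a purchase")
```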
RLHF (Reinforcement Learning from Human Feedback)
aka: Reinforcement Learning from Human Feedback, RLHF, Human Feedback Training
A training method where humans rate AI outputs to teach the model which responses are helpful, harmless, and accurate.
ROI (Return on Investment)
aka: AI ROI, Return on AI Investment, AI Value Measurement
The measure of value gained from AI investments compared to their costs. In AI, this includes time saved, quality improvements, revenue increases, and cost reductions versus implementation and ongoing expenses.
T
Temperature
aka: Sampling Temperature, Randomness, Creativity Setting
A setting that controls how creative or random an AI's responses are. Low temperature produces predictable, focused answers. High temperature produces varied, more creative outputs.
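Under the hood, temperature divides the model's raw scores (logits) before they are turned into probabilities. The logits below are made up for illustration; real models produce one per vocabulary entry.

```python
import math

def softmax_with_temperature(logits: list[float], temperature: float) -> list[float]:
    """Convert logits to probabilities, sharpened or flattened by temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
cold = softmax_with_temperature(logits, temperature=0.2)  # top choice dominates
hot = softmax_with_temperature(logits, temperature=2.0)   # probabilities flatten out
```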
Token
aka: Tokens, Tokenization
A chunk of text — usually a word or part of a word — that AI models process as a single unit. Most English words are one token, but longer or uncommon words get split into pieces.
Tokenizer
aka: Tokenisation, Token Encoding
A tool that breaks text into smaller pieces (tokens) that an AI model can process. Different models use different tokenizers, affecting how they count and understand text.
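A deliberately naive tokenizer makes the idea concrete: split on whitespace, then break long words into fixed-size chunks. Real tokenizers (BPE, WordPiece) learn their splits from data, but the effect is similar: common short words stay whole, longer ones get split into pieces.

```python
def naive_tokenize(text: str, chunk: int = 4) -> list[str]:
    """Split text on whitespace, then break long words into 4-char chunks."""
    tokens: list[str] = []
    for word in text.split():
        if len(word) <= chunk:
            tokens.append(word)
        else:
            tokens.extend(word[i:i + chunk] for i in range(0, len(word), chunk))
    return tokens

tokens = naive_tokenize("the tokenizer splits uncommon words")
# "the" stays whole; "tokenizer" becomes "toke", "nize", "r".
```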
Tool (Function Calling)
aka: Function, Function Calling, Tool Use
A capability that allows an AI model to call external functions or APIs — like searching the web, querying databases, or running calculations — instead of relying solely on its training data.
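On the application side, the model emits a structured request ("call this tool with these arguments") and your code runs the matching function. The JSON shape and the `get_weather` function below are illustrative; each provider defines its own tool-call format.

```python
import json

def get_weather(city: str) -> str:
    """Stand-in for a real weather API call."""
    return f"Sunny in {city}"

# Registry mapping tool names the model can request to real functions.
TOOLS = {"get_weather": get_weather}

def dispatch(tool_call_json: str) -> str:
    """Parse a model-emitted tool call and run the matching function."""
    call = json.loads(tool_call_json)
    func = TOOLS[call["name"]]
    return func(**call["arguments"])

# What a model's tool-call output might look like (hypothetical format).
model_output = '{"name": "get_weather", "arguments": {"city": "Oslo"}}'
result = dispatch(model_output)
```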
Top-p (Nucleus Sampling)
aka: Nucleus Sampling, Top-p Sampling
A parameter that controls randomness in AI text generation by choosing from the smallest set of words whose combined probability reaches a threshold p. Lower values make output more focused; higher values make it more creative.
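The filtering step can be sketched directly: keep the most likely tokens until their cumulative probability reaches p, drop the rest, and renormalise. Sampling then happens only within that "nucleus". The probabilities below are invented for illustration.

```python
def top_p_filter(probs: dict[str, float], p: float) -> dict[str, float]:
    """Keep the smallest top-probability set summing to >= p, renormalised."""
    nucleus: dict[str, float] = {}
    cumulative = 0.0
    for token, prob in sorted(probs.items(), key=lambda kv: -kv[1]):
        nucleus[token] = prob
        cumulative += prob
        if cumulative >= p:
            break
    total = sum(nucleus.values())
    return {token: prob / total for token, prob in nucleus.items()}

probs = {"cat": 0.5, "dog": 0.3, "fish": 0.15, "rock": 0.05}
nucleus = top_p_filter(probs, p=0.9)  # "rock" falls outside the nucleus
```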
Training
aka: Model Training, AI Training
The process of feeding large amounts of data to an AI system so it learns patterns, relationships, and rules, enabling it to make predictions or generate output.
Training Data
aka: Training Set, Training Dataset, Training Corpus
The collection of examples an AI system learns from. The quality, quantity, and diversity of training data directly determines what the AI can and cannot do.
Transformer
aka: Transformer Architecture, Transformer Model
A neural network architecture that revolutionised AI by using attention mechanisms to understand relationships between all words in a text simultaneously, enabling modern LLMs like GPT and Claude.