AI Glossary
Quick, friendly definitions of AI terms from A to Z.
A
Agent
aka: AI Agent, Autonomous Agent
An AI system that can use tools, make decisions, and take actions to complete tasks autonomously rather than just answering questions.
AI (Artificial Intelligence)
aka: Artificial Intelligence, AI
Making machines perform tasks that typically require human intelligence—like understanding language, recognizing patterns, or making decisions.
API (Application Programming Interface)
aka: REST API, API Endpoint
A way for different software programs to talk to each other—like a menu of requests you can make to get AI to do something.
B
C
Constitutional AI
aka: CAI, Constitutional Training, Self-Critique
A safety technique where an AI is trained using a set of principles (a 'constitution') to critique and revise its own outputs, making them more helpful, honest, and harmless without human feedback on every response.
Context Window
aka: Context, Context Length, Window Size
How much text an AI can 'see' or 'remember' at once. Older messages fall off when the window fills up.
E
Embedding
aka: Vector, Vector Representation, Embeddings
A list of numbers that represents the meaning of text. Similar meanings have similar numbers, so computers can compare by 'closeness'.
Embeddings
aka: Vector Embeddings, Semantic Embeddings, Text Embeddings
Collections of numerical representations that capture meaning. When you have multiple embeddings, you can compare them to find similar content, power search systems, and enable AI to understand relationships between concepts.
Evaluation (Evals)
aka: Evals, Model Evaluation, Testing
Systematically testing an AI system to measure how well it performs on specific tasks or criteria.
F
Few-Shot Learning
aka: Few-Shot Prompting, In-Context Learning
Teaching an AI model by including a few examples in your prompt, without any formal training—the model learns the pattern from the examples you show.
Fine-Tuning
aka: Model Fine-Tuning, Transfer Learning
Taking a pre-trained AI model and training it further on your specific data to make it better at your particular task.
G
Grounding
aka: Grounded Generation, Factual Grounding
Connecting AI outputs to verified sources or real data to reduce hallucinations and ensure responses are factually accurate.
Guardrails
aka: Safety Guardrails, AI Guardrails, Policy Guardrails
Rules or filters that prevent AI from generating harmful, biased, or inappropriate content. Like safety bumpers.
H
I
L
LangChain
aka: LangChain.js, LangChain Python
An open-source framework for building applications with LLMs, providing tools for chaining prompts, managing memory, connecting to external tools, and building AI agents.
Latency
aka: Response Time, Inference Time
How long it takes for an AI model to generate a response after you send a request.
LlamaIndex
aka: LlamaIndex.ai, GPT Index
An open-source framework for building LLM applications with data connectors, indexing, and retrieval—particularly strong for RAG (Retrieval Augmented Generation) systems.
LLM (Large Language Model)
aka: Large Language Model, Language Model, LLM
AI trained on massive amounts of text to understand and generate human-like language. Powers chatbots, writing tools, and more.
M
Machine Learning (ML)
aka: ML, Machine Learning
A way to train computers to learn from examples and data, instead of programming every rule manually.
Model
aka: AI Model, ML Model
The trained AI system that contains all the patterns it learned from data. Think of it as the 'brain' that makes predictions or decisions.
O
P
Parameters
aka: Model Parameters, Weights
Numbers inside an AI model that get adjusted during training to improve accuracy. More parameters usually mean more capability.
Prompt
aka: Prompting, Prompt Engineering
The question or instruction you give to an AI. A good prompt is clear, specific, and gives context.
Prompt Injection
aka: Prompt Attack, Jailbreaking
A security vulnerability where users trick an AI into ignoring its instructions by inserting malicious commands into their prompts.
Q
R
RAG (Retrieval-Augmented Generation)
aka: Retrieval-Augmented Generation, RAG
A technique where AI searches your documents for relevant info, then uses it to generate accurate, grounded answers.
RLHF (Reinforcement Learning from Human Feedback)
aka: Reinforcement Learning from Human Feedback, RLHF, Human Feedback Training
A training method where humans rate AI outputs to teach the model which responses are helpful, harmless, and accurate.
ROI (Return on Investment)
aka: AI ROI, Return on AI Investment, AI Value Measurement
The measure of value gained from AI investments compared to their costs. In AI, this includes time saved, quality improvements, revenue increases, and cost reductions versus implementation and ongoing expenses.
S
T
Temperature
aka: Sampling Temperature, Randomness
A setting that controls how creative or random AI outputs are. Low = predictable and focused. High = creative and varied.
Token
aka: Tokens, Tokenization
A chunk of text (usually a word or part of a word) that AI processes. 'Chatbot' might be one token or split into 'chat' and 'bot'.
Tokenizer
aka: Tokenization, Token Encoding
A tool that breaks text into smaller pieces (tokens) that an AI model can process. Different models use different tokenizers, affecting how they count and understand text.
Tool (Function Calling)
aka: Function, Function Calling, Tool Use
A capability that allows an AI to call external functions or APIs—like searching the web, querying databases, or running calculations.
Top-p (Nucleus Sampling)
aka: Nucleus Sampling, Top-k, Sampling Strategy
A parameter that controls randomness in AI text generation by choosing from the smallest set of words whose probabilities add up to p%. Lower values (0.1-0.5) make output more focused; higher values (0.9-1.0) make it more creative.
Training
aka: Model Training, AI Training
The process of feeding data to an AI system so it learns patterns and improves its predictions over time.
Training Data
aka: Training Set, Training Dataset, Training Corpus
The collection of examples an AI system learns from. The quality, quantity, and diversity of training data directly determines what the AI can and cannot do.
Transformer
aka: Transformer Architecture, Transformer Model
A neural network architecture that revolutionized AI by using attention mechanisms to understand relationships between words, enabling modern LLMs.