Fine-Tuning
Also known as: Model Fine-Tuning, Transfer Learning
In one sentence
Taking a pre-trained AI model and training it further on your specific data so it performs better on your particular task or adopts a specific style.
Explain like I'm 12
Like teaching a smart student who already knows a lot, but giving them extra lessons on exactly what you need them to be good at—like training a general doctor to become a heart specialist.
In context
Fine-tuning starts with a foundation model (like GPT-4 or Llama) that already understands language, then trains it on your specific dataset. A legal firm might fine-tune a model on thousands of contracts to make it better at legal language. OpenAI offers fine-tuning through their API where you upload training examples as input-output pairs. The process typically requires hundreds to thousands of examples and costs more than standard API usage, but produces a customised model that performs better on your specific tasks and can be cheaper to run at scale.
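The training examples mentioned above are typically supplied as JSONL, one example per line. A minimal sketch in OpenAI's chat fine-tuning format follows; the file name, example content, and model name are illustrative, and the API calls (shown commented out) assume the official openai Python package and a valid API key:

```python
import json

# Each training example is a short chat transcript ending with the
# desired assistant reply; the fine-tuning API expects one JSON object
# per line (JSONL).
examples = [
    {"messages": [
        {"role": "system", "content": "You are a contract-review assistant."},
        {"role": "user", "content": "Summarise the termination clause in plain English."},
        {"role": "assistant", "content": "Either party may end the agreement with 30 days' written notice."},
    ]},
]

def write_jsonl(examples, path):
    """Write training examples as one JSON object per line."""
    with open(path, "w") as f:
        for ex in examples:
            f.write(json.dumps(ex) + "\n")

write_jsonl(examples, "train.jsonl")

# Uploading the file and starting the job would look roughly like this
# (hypothetical values; requires the openai package and an API key):
# from openai import OpenAI
# client = OpenAI()
# f = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
# job = client.fine_tuning.jobs.create(training_file=f.id, model="gpt-4o-mini-2024-07-18")
```

In practice you would repeat this structure for hundreds or thousands of examples, since very small training sets rarely shift a model's behaviour meaningfully.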
See also
Related Guides
Learn more about Fine-Tuning in these guides:
Fine-Tuning vs RAG: Which Should You Use? (Intermediate, 12 min read)
Compare fine-tuning and RAG to customize AI. Learn when each approach works best, how they differ, and how to combine them.
Transfer Learning Explained: Building on What AI Already Knows (Intermediate, 9 min read)
Understand transfer learning and why it matters. Learn how pre-trained models accelerate AI development and reduce data requirements.
Fine-Tuning Fundamentals: Customizing AI Models (Intermediate, 8 min read)
Fine-tuning adapts pre-trained models to your specific use case. Learn when to fine-tune, how it works, and alternatives.