
Fine-Tuning

Also known as: Model Fine-Tuning, Transfer Learning

In one sentence

Taking a pre-trained AI model and training it further on your own data so that it performs better on your particular task or adopts a specific style.

Explain like I'm 12

Like teaching a smart student who already knows a lot, but giving them extra lessons on exactly what you need them to be good at—like training a general doctor to become a heart specialist.

In context

Fine-tuning starts with a foundation model (like GPT-4 or Llama) that already understands language, then trains it further on your specific dataset. A legal firm might fine-tune a model on thousands of contracts to make it better at legal language. OpenAI, for example, offers fine-tuning through its API: you upload training examples as input-output pairs. The process typically requires hundreds to thousands of examples and costs more than standard API usage, but it produces a customised model that performs better on your specific tasks and can be cheaper to run at scale.
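To make the "input-output pairs" concrete, here is a minimal sketch in Python of preparing training data in the JSONL chat format that OpenAI's fine-tuning API expects. The legal-style examples are invented for illustration; in practice you would write the JSONL to a file, upload it with the Files API, and start a fine-tuning job.

```python
import json

# Hypothetical input-output pairs for a legal-language task.
examples = [
    {"prompt": "Summarise the indemnification clause.",
     "completion": "The supplier indemnifies the client against third-party claims."},
    {"prompt": "What is the notice period for termination?",
     "completion": "Either party may terminate with 30 days' written notice."},
]

def to_jsonl(examples):
    """Convert prompt/completion pairs into OpenAI's chat fine-tuning
    format: one JSON object per line, each a short conversation the
    model should learn to reproduce."""
    lines = []
    for ex in examples:
        record = {"messages": [
            {"role": "user", "content": ex["prompt"]},
            {"role": "assistant", "content": ex["completion"]},
        ]}
        lines.append(json.dumps(record))
    return "\n".join(lines)

jsonl = to_jsonl(examples)
print(jsonl.splitlines()[0])

# From here (sketch only, requires an API key):
#   client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
#   client.fine_tuning.jobs.create(training_file=file_id, model=base_model)
```

Note that a real training set would need hundreds to thousands of such lines, as the section above describes, for the fine-tuned model to reliably pick up the target style.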
