TL;DR

Function calling lets LLMs invoke external tools (APIs, databases, calculators). The model decides which function to call and generates its parameters as JSON; your code executes the call and returns the result to the model.

How it works

  1. Define available functions (name, parameters, description)
  2. LLM decides if/which function to call
  3. LLM generates function call with parameters (JSON)
  4. Your code executes function
  5. Return results to LLM
  6. LLM incorporates results into response
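The loop above can be sketched in Python. The registry, the stub `get_weather`, and the handler are illustrative, not an SDK API; note that the arguments arrive as a JSON string to parse.

```python
import json

def get_weather(location: str) -> dict:
    # Stand-in for a real weather API call.
    return {"location": location, "temp_c": 18}

# Local registry mapping function names to callables (steps 1 and 4).
TOOLS = {"get_weather": get_weather}

def handle_tool_call(call: dict) -> str:
    """Steps 3-5: parse the model's call, execute it, return a JSON result."""
    func = TOOLS[call["name"]]            # the model chose this function
    args = json.loads(call["arguments"])  # arguments arrive as a JSON string
    result = func(**args)                 # your code executes it
    return json.dumps(result)             # serialized result goes back to the model

# Simulated model output, for illustration:
model_call = {"name": "get_weather", "arguments": '{"location": "Paris"}'}
weather_json = handle_tool_call(model_call)
```

In a real loop, `weather_json` would be appended to the conversation and the model queried again so it can use the result (step 6).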

Use cases

  • Query databases for real-time data
  • Call APIs (weather, search, CRM)
  • Perform calculations
  • Send emails, create tasks
  • Multi-step workflows

Implementation (OpenAI)

Define functions:

{
  "name": "get_weather",
  "description": "Get current weather",
  "parameters": {
    "type": "object",
    "properties": {
      "location": {"type": "string"}
    }
  }
}
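For reference, this is how such a schema is wrapped when sent to the OpenAI Chat Completions API. The "function" envelope is the current API shape; "required" is included here so the model must always supply a location.

```python
# Tool definitions passed alongside the conversation.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather",
            "parameters": {
                "type": "object",
                "properties": {"location": {"type": "string"}},
                "required": ["location"],
            },
        },
    }
]
```

This list is then passed as `tools=tools` to `client.chat.completions.create(...)`.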

Model generates a tool call (in the OpenAI API the field is "name", and "arguments" arrives as a JSON string you must parse):

{
  "name": "get_weather",
  "arguments": "{\"location\": \"Paris\"}"
}

You execute the function, return the result to the model in a tool message, and the model incorporates it into its final response.
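A minimal sketch of that hand-off, assuming the Chat Completions message shapes; the call id and the stub `get_weather` are illustrative.

```python
import json

def get_weather(location: str) -> dict:
    # Stand-in for the real lookup.
    return {"location": location, "temp_c": 18}

# Suppose the model's reply contained this tool call (id is illustrative):
tool_call = {
    "id": "call_123",
    "function": {"name": "get_weather", "arguments": '{"location": "Paris"}'},
}

# Execute it, then package the result as a tool-role message tied to the
# call id. Appending this to the conversation and requesting another
# completion lets the model fold the result into its answer.
args = json.loads(tool_call["function"]["arguments"])
tool_message = {
    "role": "tool",
    "tool_call_id": tool_call["id"],
    "content": json.dumps(get_weather(**args)),
}
```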

Best practices

  • Validate function calls before executing
  • Handle errors gracefully
  • Set timeouts
  • Limit expensive operations
  • Log all function calls
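Several of these practices can be sketched in one wrapper. The registry, the 5-second budget, and the error shapes are illustrative choices, not a prescribed API.

```python
import json
import logging
from concurrent.futures import ThreadPoolExecutor, TimeoutError

log = logging.getLogger("tool-calls")

def get_weather(location: str) -> dict:
    return {"location": location, "temp_c": 18}  # stand-in implementation

REGISTRY = {"get_weather": get_weather}  # known functions only
TIMEOUT_S = 5                            # illustrative time budget

def safe_call(name: str, raw_args: str) -> str:
    """Validate, execute with a timeout, log, and return errors as data."""
    log.info("tool call: %s(%s)", name, raw_args)   # log every call
    if name not in REGISTRY:                        # validate before executing
        return json.dumps({"error": f"unknown function: {name}"})
    pool = ThreadPoolExecutor(max_workers=1)
    try:
        args = json.loads(raw_args)
        result = pool.submit(REGISTRY[name], **args).result(timeout=TIMEOUT_S)
        return json.dumps(result)
    except TimeoutError:
        return json.dumps({"error": "timed out"})   # enforce the time budget
    except Exception as exc:                        # fail gracefully, never crash the loop
        return json.dumps({"error": str(exc)})
    finally:
        pool.shutdown(wait=False)
```

Returning errors as JSON rather than raising lets the model see what went wrong and retry or apologize.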

Security considerations

  • Whitelist allowed functions
  • Validate parameters against the declared schema
  • Rate-limit calls per user and per conversation
  • Never give the model unrestricted access (e.g. arbitrary shell or SQL execution)
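A hand-rolled sketch of parameter validation plus a sliding-window rate limiter. The expected-types dict and the limits are illustrative; a production system might validate against the JSON Schema itself.

```python
import json
import time

# Expected parameters and their Python types for get_weather (illustrative).
EXPECTED = {"location": str}

def validate_args(raw: str) -> dict:
    """Reject unknown keys and wrong types before anything executes."""
    args = json.loads(raw)
    if set(args) != set(EXPECTED):
        raise ValueError(f"unexpected parameters: {sorted(args)}")
    for key, typ in EXPECTED.items():
        if not isinstance(args[key], typ):
            raise ValueError(f"{key} must be {typ.__name__}")
    return args

class RateLimiter:
    """Allow at most `limit` calls per `window` seconds (kept per user in practice)."""
    def __init__(self, limit: int, window: float):
        self.limit, self.window, self.calls = limit, window, []

    def allow(self) -> bool:
        now = time.monotonic()
        self.calls = [t for t in self.calls if now - t < self.window]
        if len(self.calls) >= self.limit:
            return False
        self.calls.append(now)
        return True
```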