
Also known as: tool calling, structured tool use

Function Calling

A mechanism that lets an LLM emit structured JSON specifying a function name and arguments for external execution.

What Is Function Calling?

Function calling is a specific mechanism — first popularized by OpenAI's API and now supported by most major LLM providers — that allows a language model to produce a structured JSON object describing a function to call, rather than freeform text.

The model does not execute the function itself. Instead, it signals intent: "I want to call this function with these arguments." The calling application is responsible for actually executing the function and returning the result.

Why Structured Output Matters

Before function calling, developers who wanted LLMs to trigger actions had to prompt-engineer the model to produce parseable output and write their own fragile parsers. Function calling solves this cleanly:

  • The model is given a formal schema for each available function (name, description, parameter types).
  • When the model decides a function is appropriate, it outputs a JSON object that conforms to that schema (most providers enforce this, though validating on your side remains good practice).
  • The application parses it reliably and calls the real function.

This removes ambiguity and makes tool-using agents far more robust.
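To make the parsing step concrete, here is a minimal sketch of validating the model's argument string against a declared parameter schema before executing anything. It checks only a small subset of JSON Schema (required keys and basic types) by hand; a real application might use a full validator library instead:

```python
import json

# Parameter schema for the extract_page tool (a JSON Schema subset).
SCHEMA = {
    "type": "object",
    "properties": {"url": {"type": "string"}},
    "required": ["url"],
}

def validate_arguments(raw_json: str, schema: dict) -> dict:
    """Parse the model's argument string and check required keys
    and declared types against the schema."""
    args = json.loads(raw_json)
    for key in schema.get("required", []):
        if key not in args:
            raise ValueError(f"missing required argument: {key}")
    type_map = {
        "string": str, "number": (int, float), "boolean": bool,
        "object": dict, "array": list,
    }
    for key, value in args.items():
        declared = schema["properties"].get(key, {}).get("type")
        if declared and not isinstance(value, type_map[declared]):
            raise TypeError(f"argument {key!r} should be {declared}")
    return args

args = validate_arguments('{"url": "https://example.com/product"}', SCHEMA)
```

Because the model's output is schema-shaped rather than freeform prose, this check either succeeds deterministically or fails loudly; there is no brittle regex parsing involved.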

How It Works in Practice

Here is a simplified example. You define a tool schema:

{
  "name": "extract_page",
  "description": "Extracts structured data from a URL using KnowledgeSDK.",
  "parameters": {
    "type": "object",
    "properties": {
      "url": { "type": "string", "description": "The URL to extract." }
    },
    "required": ["url"]
  }
}

You send this schema along with the user's message to the LLM. If the model decides extraction is needed, it responds with:

{
  "name": "extract_page",
  "arguments": { "url": "https://example.com/product" }
}

Your application intercepts this, calls KnowledgeSDK's /v1/extract endpoint with that URL, and returns the result to the model as a tool message. The model then reasons over the extracted data to produce its final response.
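The round-trip above can be sketched as a small dispatch loop. Everything below is illustrative: the `extract_page` function body is a stand-in for a real HTTP request to an extraction endpoint, and the model's response is stubbed as a plain dict in the shape shown above:

```python
import json

# Stand-in for a real call to an extraction endpoint.
def extract_page(url: str) -> dict:
    return {"url": url, "title": "Example Product", "price": "$19.99"}

# Registry mapping tool names the model may emit to real implementations.
TOOLS = {"extract_page": extract_page}

def handle_tool_call(call: dict) -> str:
    """Execute the function the model asked for and serialize the result
    so it can be sent back to the model as a tool message."""
    fn = TOOLS[call["name"]]
    result = fn(**call["arguments"])
    return json.dumps(result)

# A stubbed model response in the shape shown above.
model_output = {
    "name": "extract_page",
    "arguments": {"url": "https://example.com/product"},
}
tool_message = handle_tool_call(model_output)
```

The key point is the division of labor: the model only names a function and supplies arguments, while your code owns execution, error handling, and deciding what gets sent back.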

Function Calling vs. Tool Use

The terms are often used interchangeably. Technically:

  • Function calling refers to the specific JSON-structured output mechanism.
  • Tool use is the broader capability of an agent calling any external resource — which may use function calling under the hood.

In practice, most modern agent frameworks use function calling as the underlying primitive for all tool use.

Parallel Function Calls

Many LLMs now support calling multiple functions in a single response. This is useful when an agent needs several pieces of independent information simultaneously — for example, scraping three URLs at once rather than one at a time. Parallel function calls can dramatically reduce the number of round-trips in an agent loop.
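Handling a batch of independent calls is a natural fit for a thread pool, since tool execution is usually I/O-bound. A minimal sketch (again with a stubbed `extract_page` standing in for real network requests):

```python
import json
from concurrent.futures import ThreadPoolExecutor

# Stand-in for a real, I/O-bound extraction request.
def extract_page(url: str) -> dict:
    return {"url": url, "status": "ok"}

TOOLS = {"extract_page": extract_page}

def run_parallel(calls: list[dict]) -> list[str]:
    """Execute independent tool calls concurrently; results come back
    in the same order as the calls, ready to return as tool messages."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(TOOLS[c["name"]], **c["arguments"]) for c in calls]
        return [json.dumps(f.result()) for f in futures]

calls = [
    {"name": "extract_page", "arguments": {"url": f"https://example.com/page{i}"}}
    for i in range(3)
]
results = run_parallel(calls)
```

Three extractions complete in roughly the time of the slowest one, instead of the sum of all three, which is exactly the round-trip saving parallel function calls provide inside an agent loop.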

Supported Providers

Function calling is supported by OpenAI (GPT-4o, GPT-4 Turbo), Anthropic (Claude 3+ via tool use), Google (Gemini), Mistral, and others. While the exact API shape differs slightly, the concept is consistent: the model outputs structured intent, and your code executes it.

Related Terms

  • Tool Use: The ability of an LLM-powered agent to call external functions, APIs, or services to gather information or take actions.
  • JSON Schema: A vocabulary for describing and validating the structure of JSON data, widely used to define the expected output format for LLM function calls.
  • Structured Output: LLM responses constrained to a specific format (typically JSON) by using function calling, grammar constraints, or guided generation.
  • AI Agent: An AI system that perceives its environment, reasons about it, and takes autonomous actions to complete goals.
