
Function calling

Function calling lets the model request work from your own code. You describe possible functions as tools, the model decides when to call them, and you send the results back so it can continue the conversation.

sequenceDiagram
  participant App
  participant Model
  participant Tool

  App->>Model: messages + available tools
  Model-->>App: assistant message with tool_call parts
  App->>Tool: execute(args)
  Tool-->>App: tool result payload
  App->>Model: append tool message + call again
  Model-->>App: final assistant content

The loop continues until the model returns regular assistant content (no tool call parts) or you stop the interaction. Each tool call is explicit, so you stay in control of side effects and error handling.

Create one Tool entry per callable function. The parameters schema tells the model which arguments the function expects.

types.ts
interface Tool {
  /**
   * The name of the tool.
   */
  name: string;
  /**
   * A description of the tool.
   */
  description: string;
  /**
   * The JSON schema of the parameters that the tool accepts. The type must be "object".
   */
  parameters: JSONSchema;
}
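As an illustration, here is what an entry shaped like the Tool interface might look like for a hypothetical get_weather function (the name, description, and schema are invented for the example):

```typescript
// A hypothetical weather-lookup tool; the name and schema below are
// illustrative, not part of the SDK.
const getWeatherTool = {
  name: "get_weather",
  description: "Get the current weather for a city",
  parameters: {
    type: "object",
    properties: {
      city: {
        type: "string",
        description: "The city to look up",
      },
      unit: {
        type: "string",
        enum: ["celsius", "fahrenheit"],
        description: "The temperature unit",
      },
    },
    required: ["city"],
    additionalProperties: false,
  },
};
```

Marking `city` as required while leaving `unit` optional lets the model omit arguments the user did not mention.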

Include these tools when you call the model. In the streaming APIs, pass them on every turn.

When the model wants to call something, the assistant message carries one or more tool-call parts. Each part includes the tool name and JSON arguments, which you must parse before executing your own code.

types.ts
interface ToolCallPart {
  type: "tool-call";
  /**
   * The ID of the tool call, used to match the tool result with the tool call.
   */
  tool_call_id: string;
  /**
   * The name of the tool to call.
   */
  tool_name: string;
  /**
   * The arguments to pass to the tool.
   */
  args: Record<string, unknown>;
  /**
   * The ID of the tool call part, if applicable.
   * This is different from tool_call_id which is used to match tool results.
   */
  id?: string;
}
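A minimal sketch of pulling the tool-call parts out of a response content array might look like this. The Part union here is a trimmed-down stand-in for the SDK's types, included only so the example is self-contained:

```typescript
// Trimmed-down stand-ins for the SDK types, for illustration only.
interface TextPart {
  type: "text";
  text: string;
}
interface ToolCallPart {
  type: "tool-call";
  tool_call_id: string;
  tool_name: string;
  args: Record<string, unknown>;
}
type Part = TextPart | ToolCallPart;

// Keep only the parts that request a tool call; the type predicate
// narrows the array element type for the caller.
function extractToolCalls(content: Part[]): ToolCallPart[] {
  return content.filter(
    (part): part is ToolCallPart => part.type === "tool-call",
  );
}

const content: Part[] = [
  { type: "text", text: "Let me place that order." },
  {
    type: "tool-call",
    tool_call_id: "call_1",
    tool_name: "trade",
    args: { action: "buy", quantity: 50, symbol: "NVDA" },
  },
];

const calls = extractToolCalls(content);
```

An empty result means the model produced a regular assistant message and the loop can stop.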

Execute each requested function in your application. Handle failures yourself: if you return an error payload, the model can choose a different strategy or ask for clarification.

After execution, respond with a single tool message that contains a tool-result part for every call you serviced. The model uses these results to continue reasoning or to compose the final answer.

types.ts
interface ToolMessage {
  role: "tool";
  content: Part[];
}

interface ToolResultPart {
  type: "tool-result";
  /**
   * The ID of the tool call from previous assistant message.
   */
  tool_call_id: string;
  /**
   * The name of the tool that was called.
   */
  tool_name: string;
  /**
   * The content of the tool result.
   */
  content: Part[];
  /**
   * Marks the tool result as an error.
   */
  is_error?: boolean;
}
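One way to report a failed execution is to catch the exception and mark the result with is_error so the model can recover. This sketch uses a hypothetical runTool helper and simplified local types, not SDK code:

```typescript
// Simplified stand-ins for the SDK types, for illustration only.
interface TextPart {
  type: "text";
  text: string;
}
interface ToolResultPart {
  type: "tool-result";
  tool_call_id: string;
  tool_name: string;
  content: TextPart[];
  is_error?: boolean;
}

// Hypothetical helper: runs a tool function and wraps the outcome in a
// tool-result part, marking failures with is_error instead of throwing.
function runTool(
  tool_call_id: string,
  tool_name: string,
  fn: () => unknown,
): ToolResultPart {
  try {
    const result = fn();
    return {
      type: "tool-result",
      tool_call_id,
      tool_name,
      content: [{ type: "text", text: JSON.stringify(result) }],
    };
  } catch (error) {
    // Returning the error message lets the model choose a different
    // strategy or ask the user for clarification.
    return {
      type: "tool-result",
      tool_call_id,
      tool_name,
      content: [{ type: "text", text: String(error) }],
      is_error: true,
    };
  }
}
```

Throwing out of your tool loop instead would abort the conversation; returning an is_error result keeps the model in the loop.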

Append both the assistant message that requested the tool and your tool message to the conversation, then call the model again. Repeat until you get a regular assistant response.

Below is a minimal loop that wires everything together. For larger flows, delegating to the Agent library keeps lifecycle management and error handling consistent.

tool-use.ts
import type {
  Message,
  ModelResponse,
  Tool,
  ToolMessage,
} from "@hoangvvo/llm-sdk";
import { getModel } from "./get-model.ts";

let MY_BALANCE = 1000;
const STOCK_PRICE = 100;

function trade({
  action,
  quantity,
  symbol,
}: {
  action: "buy" | "sell";
  quantity: number;
  symbol: string;
}) {
  console.log(
    `[TOOLS trade()] Trading ${String(quantity)} shares of ${symbol} with action: ${action}`,
  );
  const balanceChange =
    action === "buy" ? -quantity * STOCK_PRICE : quantity * STOCK_PRICE;
  MY_BALANCE += balanceChange;
  return {
    success: true,
    balance: MY_BALANCE,
    balance_change: balanceChange,
  };
}

let MAX_TURN_LEFT = 10;

const model = getModel("openai", "gpt-4o");

const tools: Tool[] = [
  {
    name: "trade",
    description: "Trade stocks",
    parameters: {
      type: "object",
      properties: {
        action: {
          type: "string",
          enum: ["buy", "sell"],
          description: "The action to perform",
        },
        quantity: {
          type: "number",
          description: "The number of stocks to trade",
        },
        symbol: {
          type: "string",
          description: "The stock symbol",
        },
      },
      required: ["action", "quantity", "symbol"],
      additionalProperties: false,
    },
  },
];

const messages: Message[] = [
  {
    role: "user",
    content: [
      {
        type: "text",
        text: "I would like to buy 50 NVDA stocks.",
      },
    ],
  },
];

let response: ModelResponse;
do {
  response = await model.generate({
    messages,
    tools,
  });
  // Always append the assistant message, even when it contains tool calls.
  messages.push({
    role: "assistant",
    content: response.content,
  });

  const toolCallParts = response.content.filter((c) => c.type === "tool-call");
  if (toolCallParts.length === 0) {
    break;
  }

  // Collect every tool result into a single tool message.
  let toolMessage: ToolMessage | undefined;
  for (const toolCallPart of toolCallParts) {
    const { tool_call_id, tool_name, args } = toolCallPart;
    let toolResult: unknown;
    switch (tool_name) {
      case "trade": {
        toolResult = trade(args as Parameters<typeof trade>[0]);
        break;
      }
      default:
        throw new Error(`Tool ${tool_name} not found`);
    }
    toolMessage = toolMessage ?? {
      role: "tool",
      content: [],
    };
    toolMessage.content.push({
      type: "tool-result",
      tool_name,
      tool_call_id,
      content: [
        {
          type: "text",
          text: JSON.stringify(toolResult),
        },
      ],
    });
  }
  if (toolMessage) messages.push(toolMessage);
} while (MAX_TURN_LEFT-- > 0);

console.dir(response, { depth: null });