Function calling
Function calling lets the model request work from your own code. You describe possible functions as tools, the model decides when to call them, and you send the results back so it can continue the conversation.
Request/response loop
sequenceDiagram
    participant App
    participant Model
    participant Tool
    App->>Model: messages + available tools
    Model-->>App: assistant message with tool_call parts
    App->>Tool: execute(args)
    Tool-->>App: tool result payload
    App->>Model: append tool message + retry
    Model-->>App: final assistant content
The loop continues until the model returns regular assistant content (no tool call parts) or you stop the interaction. Each tool call is explicit, so you stay in control of side effects and error handling.
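In TypeScript terms, the whole protocol is a short loop. The following is only a sketch: it assumes `model`, `messages`, and `tools` are set up as in the end-to-end example below, and `runToolCalls` is a hypothetical helper standing in for your own dispatch code.

let response = await model.generate({ messages, tools });

// Keep going while the model is still asking for tool work.
while (response.content.some((part) => part.type === "tool-call")) {
  // Record the assistant turn that requested the calls.
  messages.push({ role: "assistant", content: response.content });
  // `runToolCalls` (hypothetical) executes each call and returns a
  // tool message with one tool-result part per call.
  messages.push(await runToolCalls(response.content));
  response = await model.generate({ messages, tools });
}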
Describe the tools you expose
Create one entry per callable function. The schema tells the model which arguments are expected.
interface Tool {
  /**
   * The name of the tool.
   */
  name: string;
  /**
   * A description of the tool.
   */
  description: string;
  /**
   * The JSON schema of the parameters that the tool accepts. The type must be "object".
   */
  parameters: JSONSchema;
}

pub struct Tool {
    /// The name of the tool.
    pub name: String,
    /// A description of the tool.
    pub description: String,
    /// The JSON schema of the parameters that the tool accepts. The type must
    /// be "object".
    pub parameters: JSONSchema,
}

type Tool struct {
    // The name of the tool.
    Name string `json:"name"`
    // A description of the tool.
    Description string `json:"description"`
    // The JSON schema of the parameters that the tool accepts. The type must be "object".
    Parameters JSONSchema `json:"parameters"`
}
Include these tools when you call the model. In the streaming APIs, pass them on every turn.
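For instance, a tools array with a single entry might look like this in TypeScript (the `get_time` tool and its schema are made up for illustration; the end-to-end example below defines a `trade` tool the same way):

const tools: Tool[] = [
  {
    name: "get_time",
    description: "Get the current time in a given IANA timezone",
    parameters: {
      type: "object",
      properties: {
        timezone: {
          type: "string",
          description: 'An IANA timezone name such as "America/New_York"',
        },
      },
      required: ["timezone"],
      additionalProperties: false,
    },
  },
];

// Pass the tools alongside the conversation on every call.
const response = await model.generate({ messages, tools });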
Detect tool calls from the model
When the model wants to call something, the assistant message carries one or more tool_call parts. Each part includes the tool name and JSON arguments that you must parse before executing your own code.
interface ToolCallPart {
  type: "tool-call";
  /**
   * The ID of the tool call, used to match the tool result with the tool call.
   */
  tool_call_id: string;
  /**
   * The name of the tool to call.
   */
  tool_name: string;
  /**
   * The arguments to pass to the tool.
   */
  args: Record<string, unknown>;
  /**
   * The ID of the tool call part, if applicable.
   * This is different from tool_call_id which is used to match tool results.
   */
  id?: string;
}

pub struct ToolCallPart {
    /// The ID of the tool call, used to match the tool result with the tool
    /// call.
    pub tool_call_id: String,
    /// The name of the tool to call.
    pub tool_name: String,
    /// The arguments to pass to the tool.
    pub args: Value,
    /// The ID of the tool call part, if applicable.
    /// This is different from `tool_call_id`, which is the ID used to match the
    /// tool result with the tool call.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub id: Option<String>,
}

type ToolCallPart struct {
    // The ID of the tool call, used to match the tool result with the tool call.
    ToolCallID string `json:"tool_call_id"`
    // The name of the tool to call.
    ToolName string `json:"tool_name"`
    // The arguments to pass to the tool.
    Args json.RawMessage `json:"args"`
    // The ID of the part, if applicable.
    // This is different from ToolCallID which is used to match tool results.
    ID *string `json:"id,omitempty"`
}
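In TypeScript this amounts to filtering the response content for tool-call parts and dispatching on the tool name. A sketch, assuming the `trade` function from the end-to-end example below:

const toolCallParts = response.content.filter(
  (part) => part.type === "tool-call",
);

const results = new Map<string, unknown>();
for (const { tool_call_id, tool_name, args } of toolCallParts) {
  // `args` is model-generated JSON; validate it before acting on it.
  switch (tool_name) {
    case "trade":
      results.set(tool_call_id, trade(args as any));
      break;
    default:
      throw new Error(`Tool ${tool_name} not found`);
  }
}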
Execute each requested function in your application. Handle failures yourself: if you return an error payload, the model can choose a different strategy or ask for clarification.
Return the results as a tool message
After execution, respond with a single tool message that contains a tool_result part for every call you serviced. The model uses these results to continue reasoning or to compose the final answer.
interface ToolMessage {
  role: "tool";
  content: Part[];
}

interface ToolResultPart {
  type: "tool-result";
  /**
   * The ID of the tool call from the previous assistant message.
   */
  tool_call_id: string;
  /**
   * The name of the tool that was called.
   */
  tool_name: string;
  /**
   * The content of the tool result.
   */
  content: Part[];
  /**
   * Marks the tool result as an error.
   */
  is_error?: boolean;
}

pub struct ToolMessage {
    pub content: Vec<Part>,
}

pub struct ToolResultPart {
    /// The ID of the tool call from the previous assistant message.
    pub tool_call_id: String,
    /// The name of the tool that was called.
    pub tool_name: String,
    /// The content of the tool result.
    pub content: Vec<Part>,
    /// Marks the tool result as an error.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub is_error: Option<bool>,
}

type ToolMessage struct {
    Content []Part `json:"content"`
}

type ToolResultPart struct {
    // The ID of the tool call from the previous assistant message.
    ToolCallID string `json:"tool_call_id"`
    // The name of the tool that was called.
    ToolName string `json:"tool_name"`
    // The content of the tool result.
    Content []Part `json:"content"`
    // Marks the tool result as an error.
    IsError bool `json:"is_error,omitempty"`
}
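Put together in TypeScript, a tool message that answers one successful call and one failed call could look like this (`okCall`, `badCall`, and the payloads are placeholders; the shapes mirror the interfaces above):

const toolMessage: ToolMessage = {
  role: "tool",
  content: [
    {
      type: "tool-result",
      tool_call_id: okCall.tool_call_id,
      tool_name: okCall.tool_name,
      content: [{ type: "text", text: JSON.stringify({ success: true }) }],
    },
    {
      type: "tool-result",
      tool_call_id: badCall.tool_call_id,
      tool_name: badCall.tool_name,
      content: [
        { type: "text", text: JSON.stringify({ error: "insufficient balance" }) },
      ],
      // Flag the failure so the model can recover instead of assuming success.
      is_error: true,
    },
  ],
};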
Append both the assistant message that requested the tool and your tool message to the conversation, then call the model again. Repeat until you get a regular assistant response.
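That turn of the loop is just two pushes and another generate call. A sketch, assuming the variables from the previous snippets:

messages.push({ role: "assistant", content: response.content }); // the tool_call request
messages.push(toolMessage); // your tool_result parts
response = await model.generate({ messages, tools });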
End-to-end example
Below is a minimal loop that wires everything together. For larger flows, delegating to the Agent library keeps lifecycle management and error handling consistent.
import type {
  Message,
  ModelResponse,
  Tool,
  ToolMessage,
} from "@hoangvvo/llm-sdk";
import { getModel } from "./get-model.ts";

let MY_BALANCE = 1000;
const STOCK_PRICE = 100;

function trade({
  action,
  quantity,
  symbol,
}: {
  action: "buy" | "sell";
  quantity: number;
  symbol: string;
}) {
  console.log(
    `[TOOLS trade()] Trading ${String(quantity)} shares of ${symbol} with action: ${action}`,
  );
  const balanceChange =
    action === "buy" ? -quantity * STOCK_PRICE : quantity * STOCK_PRICE;

  MY_BALANCE += balanceChange;

  return {
    success: true,
    balance: MY_BALANCE,
    balance_change: balanceChange,
  };
}

let MAX_TURN_LEFT = 10;

const model = getModel("openai", "gpt-4o");

const tools: Tool[] = [
  {
    name: "trade",
    description: "Trade stocks",
    parameters: {
      type: "object",
      properties: {
        action: {
          type: "string",
          enum: ["buy", "sell"],
          description: "The action to perform",
        },
        quantity: {
          type: "number",
          description: "The number of stocks to trade",
        },
        symbol: {
          type: "string",
          description: "The stock symbol",
        },
      },
      required: ["action", "quantity", "symbol"],
      additionalProperties: false,
    },
  },
];

const messages: Message[] = [
  {
    role: "user",
    content: [
      {
        type: "text",
        text: "I would like to buy 50 NVDA stocks.",
      },
    ],
  },
];

let response: ModelResponse;

do {
  response = await model.generate({
    messages,
    tools,
  });

  messages.push({
    role: "assistant",
    content: response.content,
  });

  const toolCallParts = response.content.filter((c) => c.type === "tool-call");

  if (toolCallParts.length === 0) {
    break;
  }

  let toolMessage: ToolMessage | undefined;

  for (const toolCallPart of toolCallParts) {
    const { tool_call_id, tool_name, args } = toolCallPart;

    let toolResult: any;
    switch (tool_name) {
      case "trade": {
        toolResult = trade(args as any);
        break;
      }
      default:
        throw new Error(`Tool ${tool_name} not found`);
    }

    toolMessage = toolMessage ?? {
      role: "tool",
      content: [],
    };

    toolMessage.content.push({
      type: "tool-result",
      tool_name,
      tool_call_id,
      content: [
        {
          type: "text",
          text: JSON.stringify(toolResult),
        },
      ],
    });
  }

  if (toolMessage) messages.push(toolMessage);
} while (MAX_TURN_LEFT-- > 0);

console.dir(response, { depth: null });
use dotenvy::dotenv;
use llm_sdk::{LanguageModelInput, Message, Part, Tool, ToolCallPart, ToolResultPart};
use serde::Deserialize;
use serde_json::{json, Value};

mod common;

const STOCK_PRICE: i64 = 100;

#[derive(Debug)]
struct Account {
    balance: i64,
}

impl Account {
    fn new(balance: i64) -> Self {
        Self { balance }
    }

    fn trade(&mut self, args: &TradeArgs) -> Value {
        println!(
            "[TOOLS trade()] Trading {} shares of {} with action: {}",
            args.quantity, args.symbol, args.action
        );

        let change = match args.action.as_str() {
            "buy" => -args.quantity * STOCK_PRICE,
            "sell" => args.quantity * STOCK_PRICE,
            _ => 0,
        };

        self.balance += change;

        json!({
            "success": true,
            "balance": self.balance,
            "balance_change": change
        })
    }
}

#[derive(Debug, Clone, Deserialize)]
struct TradeArgs {
    action: String, // "buy" | "sell"
    quantity: i64,
    symbol: String,
}

#[tokio::main]
async fn main() {
    dotenv().ok();

    let mut account = Account::new(1000);

    let model = common::get_model("openai", "gpt-4o");

    let tools: Vec<Tool> = vec![Tool {
        name: "trade".into(),
        description: "Trade stocks".into(),
        parameters: json!({
            "type": "object",
            "properties": {
                "action": {
                    "type": "string",
                    "enum": ["buy", "sell"],
                    "description": "The action to perform"
                },
                "quantity": {
                    "type": "number",
                    "description": "The number of stocks to trade"
                },
                "symbol": {
                    "type": "string",
                    "description": "The stock symbol"
                }
            },
            "required": ["action", "quantity", "symbol"],
            "additionalProperties": false
        }),
    }];

    let mut messages = vec![Message::user(vec![Part::text(
        "I would like to buy 50 NVDA stocks.",
    )])];

    let mut response;
    let mut max_turn_left = 10;

    loop {
        response = model
            .generate(LanguageModelInput {
                messages: messages.clone(),
                tools: Some(tools.clone()),
                ..Default::default()
            })
            .await
            .unwrap();

        messages.push(Message::assistant(response.content.clone()));

        let tool_calls: Vec<ToolCallPart> = response
            .content
            .iter()
            .filter_map(|p| match p {
                Part::ToolCall(tc) => Some(tc.clone()),
                _ => None,
            })
            .collect();

        if tool_calls.is_empty() {
            break;
        }

        let mut tool_results: Vec<Part> = Vec::new();

        for call in tool_calls {
            let result_json = match call.tool_name.as_str() {
                "trade" => {
                    let args: TradeArgs = serde_json::from_value(call.args.clone())
                        .expect("Failed to parse tool call arguments");
                    account.trade(&args)
                }
                other => panic!("tool {other} not found"),
            };

            let result_str = result_json.to_string();

            tool_results.push(
                ToolResultPart {
                    tool_name: call.tool_name.clone(),
                    tool_call_id: call.tool_call_id.clone(),
                    content: vec![Part::text(result_str)],
                    is_error: Some(false),
                }
                .into(),
            );
        }

        messages.push(Message::tool(tool_results));

        max_turn_left -= 1;
        if max_turn_left <= 0 {
            break;
        }
    }

    println!("{response:#?}");
}
package main

import (
    "context"
    "encoding/json"
    "fmt"
    "log"

    llmsdk "github.com/hoangvvo/llm-sdk/sdk-go"
    "github.com/hoangvvo/llm-sdk/sdk-go/examples"
    "github.com/sanity-io/litter"
)

var myBalance = 1000

const stockPrice = 100

type tradeArgs struct {
    Action   string `json:"action"`
    Quantity int    `json:"quantity"`
    Symbol   string `json:"symbol"`
}

type tradeResult struct {
    Success       bool `json:"success"`
    Balance       int  `json:"balance"`
    BalanceChange int  `json:"balance_change"`
}

func trade(args tradeArgs) tradeResult {
    fmt.Printf("[TOOLS trade()] Trading %d shares of %s with action: %s\n", args.Quantity, args.Symbol, args.Action)

    var balanceChange int
    if args.Action == "buy" {
        balanceChange = -args.Quantity * stockPrice
    } else {
        balanceChange = args.Quantity * stockPrice
    }

    myBalance += balanceChange

    return tradeResult{
        Success:       true,
        Balance:       myBalance,
        BalanceChange: balanceChange,
    }
}

func main() {
    model := examples.GetModel("openai", "gpt-4o")

    maxTurnLeft := 10

    tools := []llmsdk.Tool{
        {
            Name:        "trade",
            Description: "Trade stocks",
            Parameters: llmsdk.JSONSchema{
                "type": "object",
                "properties": map[string]any{
                    "action": map[string]any{
                        "type":        "string",
                        "enum":        []string{"buy", "sell"},
                        "description": "The action to perform",
                    },
                    "quantity": map[string]any{
                        "type":        "number",
                        "description": "The number of stocks to trade",
                    },
                    "symbol": map[string]any{
                        "type":        "string",
                        "description": "The stock symbol",
                    },
                },
                "required":             []string{"action", "quantity", "symbol"},
                "additionalProperties": false,
            },
        },
    }

    messages := []llmsdk.Message{
        llmsdk.NewUserMessage(
            llmsdk.NewTextPart("I would like to buy 50 NVDA stocks."),
        ),
    }

    var response *llmsdk.ModelResponse
    var err error

    for maxTurnLeft > 0 {
        response, err = model.Generate(context.Background(), &llmsdk.LanguageModelInput{
            Messages: messages,
            Tools:    tools,
        })
        if err != nil {
            log.Fatalf("Generation failed: %v", err)
        }

        messages = append(messages, llmsdk.NewAssistantMessage(response.Content...))

        var hasToolCalls bool
        var toolMessage *llmsdk.ToolMessage

        for _, part := range response.Content {
            if part.ToolCallPart != nil {
                hasToolCalls = true

                toolCallPart := part.ToolCallPart
                fmt.Printf("Tool call: %s(%s)\n", toolCallPart.ToolName, toolCallPart.Args)

                var toolResult any
                switch toolCallPart.ToolName {
                case "trade":
                    var args tradeArgs
                    argsBytes, _ := json.Marshal(toolCallPart.Args)
                    if err := json.Unmarshal(argsBytes, &args); err != nil {
                        log.Fatalf("Failed to parse trade args: %v", err)
                    }
                    toolResult = trade(args)
                default:
                    log.Fatalf("Tool %s not found", toolCallPart.ToolName)
                }

                if toolMessage == nil {
                    toolMessage = &llmsdk.ToolMessage{Content: []llmsdk.Part{}}
                }

                resultBytes, _ := json.Marshal(toolResult)
                toolMessage.Content = append(toolMessage.Content,
                    llmsdk.NewToolResultPart(
                        toolCallPart.ToolCallID,
                        toolCallPart.ToolName,
                        []llmsdk.Part{
                            llmsdk.NewTextPart(string(resultBytes)),
                        },
                        false,
                    ),
                )
            }
        }

        if !hasToolCalls {
            break
        }

        if toolMessage != nil {
            messages = append(messages, llmsdk.NewToolMessage(toolMessage.Content...))
        }

        maxTurnLeft--
    }

    litter.Dump(response)
}