
Building AI Agents with Tool Use and Function Calling

How to design, implement, and deploy AI agents that can take real-world actions — from architecture patterns to safety guardrails.

Amar Singh Nov 22, 2025 13 min read
AI Agents LLM Function Calling Tool Use

AI agents are LLMs that don't just generate text — they take actions. They can search databases, call APIs, send emails, write code, and orchestrate multi-step workflows. The shift from chat-based AI to agent-based AI is the most significant evolution in AI application development since the introduction of ChatGPT. At Vaarak, we've built agents for customer support automation, code review, data analysis, and infrastructure management.

AI agents bridge the gap between language understanding and real-world action

The Agent Architecture

An AI agent follows a loop: perceive (read input and context), reason (decide what to do), act (call a tool), and observe (process the tool's result). This loop continues until the agent determines it has enough information to produce a final answer or has completed the task.

agent/core.ts
interface Tool {
  name: string;
  description: string;
  parameters: Record<string, unknown>;
  execute: (params: Record<string, unknown>) => Promise<string>;
}

async function agentLoop(query: string, tools: Tool[], maxSteps = 10): Promise<string> {
  const messages: Message[] = [
    { role: "system", content: SYSTEM_PROMPT },
    { role: "user", content: query },
  ];

  for (let step = 0; step < maxSteps; step++) {
    const response = await llm.chat({
      messages,
      tools: tools.map(t => ({ name: t.name, description: t.description, parameters: t.parameters })),
    });

    // If the model requested tool calls, run each one and feed the results back
    if (response.toolCalls?.length) {
      // Record the assistant's tool-call message so the model sees its own request on the next turn
      messages.push({ role: "assistant", content: response.content ?? "", toolCalls: response.toolCalls });
      for (const call of response.toolCalls) {
        const tool = tools.find(t => t.name === call.name);
        // Report failures as data, not exceptions, so the agent can recover
        const result = tool
          ? await tool.execute(call.arguments)
          : JSON.stringify({ error: `Unknown tool: ${call.name}` });
        messages.push({ role: "tool", content: result, toolCallId: call.id });
      }
      continue;
    }

    // Model is done — return final answer
    return response.content;
  }

  return "I wasn't able to complete this task within the allowed steps.";
}

Designing Effective Tools

  • Tools should be atomic: one action per tool. 'search_database' and 'update_record' should be separate tools, not a 'database_operation' tool with a mode parameter.
  • Tool descriptions are prompts: write them as if explaining the tool to a junior developer. Include when to use it, what parameters mean, and what the output looks like.
  • Return structured, predictable output. The agent needs to parse tool results reliably — JSON with consistent fields, not free-form text.
  • Include error messages in tool responses, not exceptions. The agent should be able to reason about failures and try alternative approaches.
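As a concrete sketch, here is a tool that follows these guidelines, using the `Tool` interface from earlier. The `search_database` schema, field names, and the in-memory `db` stub are illustrative, not a real client:

```typescript
interface Tool {
  name: string;
  description: string;
  parameters: Record<string, unknown>;
  execute: (params: Record<string, unknown>) => Promise<string>;
}

// Illustrative stand-in for a real database client
const db = {
  search: async (q: string) =>
    [{ id: "c1", name: "Ada Lovelace", email: "ada@example.com" }]
      .filter(r => r.name.toLowerCase().includes(q.toLowerCase())),
};

const searchDatabase: Tool = {
  name: "search_database",
  // The description doubles as a prompt: when to use the tool, what the
  // parameter means, and what the output looks like.
  description:
    "Search the customer database by keyword. Use when the user asks about " +
    "an existing customer. `query` is a free-text search string. Returns " +
    'JSON: { "results": [{ "id", "name", "email" }], "count" }.',
  parameters: {
    type: "object",
    properties: { query: { type: "string", description: "Free-text search string" } },
    required: ["query"],
  },
  execute: async (params) => {
    try {
      const results = await db.search(String(params.query));
      // Structured, predictable output the agent can parse reliably
      return JSON.stringify({ results, count: results.length });
    } catch (err) {
      // Failures come back as data so the agent can reason about them
      return JSON.stringify({ error: err instanceof Error ? err.message : String(err) });
    }
  },
};
```

Note that the tool is atomic (search only, no write mode), and both the happy path and the failure path return JSON with consistent fields.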

Safety Guardrails

AI agents that can take real-world actions need safety guardrails. A customer support agent that can issue refunds needs spending limits. A code agent that can deploy needs approval workflows for production changes. We implement three layers of safety:

  1. Permission scoping: Each agent has an explicit list of allowed tools and actions. A read-only research agent cannot call write tools.
  2. Confirmation gates: Destructive or high-impact actions (delete, deploy, refund > $100) require human approval before execution.
  3. Rate limiting: Cap the number of tool calls per session and per time window. Prevents runaway agents from calling APIs thousands of times.

Never give an agent unrestricted access to production systems. Always scope permissions to the minimum needed, implement confirmation gates for destructive actions, and maintain detailed audit logs of every tool call.
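One way to wire these layers in is a wrapper around tool execution, so the agent loop itself can never bypass them. This is a minimal sketch; the `GuardrailConfig` shape and the `requestApproval` callback are assumptions, not a specific framework API:

```typescript
interface Tool {
  name: string;
  description: string;
  parameters: Record<string, unknown>;
  execute: (params: Record<string, unknown>) => Promise<string>;
}

interface GuardrailConfig {
  allowedTools: Set<string>;           // layer 1: permission scoping
  confirmTools: Set<string>;           // layer 2: confirmation gates
  maxCallsPerSession: number;          // layer 3: rate limiting
  requestApproval: (tool: string, params: unknown) => Promise<boolean>;
}

function withGuardrails(tool: Tool, cfg: GuardrailConfig, session: { calls: number }): Tool {
  return {
    ...tool,
    execute: async (params) => {
      if (!cfg.allowedTools.has(tool.name)) {
        return JSON.stringify({ error: `Tool '${tool.name}' is not permitted for this agent` });
      }
      if (++session.calls > cfg.maxCallsPerSession) {
        return JSON.stringify({ error: "Tool-call budget for this session is exhausted" });
      }
      if (cfg.confirmTools.has(tool.name) && !(await cfg.requestApproval(tool.name, params))) {
        return JSON.stringify({ error: "Action rejected by human reviewer" });
      }
      return tool.execute(params); // all gates passed — run the real tool
    },
  };
}
```

Denials come back as tool results rather than exceptions, so the agent can explain the refusal to the user or try a safer alternative.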

Multi-Agent Orchestration

For complex workflows, a single agent isn't enough. We use multi-agent patterns where a coordinator agent delegates subtasks to specialist agents: a research agent, a coding agent, a review agent. Each specialist has its own tools and system prompt optimized for its role. The coordinator manages the workflow, synthesizes results, and handles failures.
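A minimal sketch of that pattern, assuming a `runAgent` helper that wraps the earlier `agentLoop` with a role-specific system prompt and tool set (the specialist prompts here are placeholders):

```typescript
type RunAgent = (systemPrompt: string, input: string) => Promise<string>;

// Coordinator: runs specialists in sequence, feeding each one the
// accumulated context, and returns the reviewer's synthesis.
async function coordinate(task: string, runAgent: RunAgent): Promise<string> {
  const research = await runAgent(
    "You are a research agent. Gather the facts needed for the task.",
    task,
  );
  const draft = await runAgent(
    "You are a coding agent. Implement the task using the findings.",
    `Task: ${task}\n\nFindings:\n${research}`,
  );
  const review = await runAgent(
    "You are a review agent. Check the work and produce the final answer.",
    `Task: ${task}\n\nDraft:\n${draft}`,
  );
  return review;
}
```

A production coordinator would also retry or reroute when a specialist fails; this sketch shows only the delegation flow.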

The best AI agents are invisible. Users don't care that an AI is searching databases, calling APIs, and synthesizing information. They just want the right answer, fast. Design your agents around user outcomes, not technical capabilities.

Amar Singh, Vaarak Engineering

Amar Singh

Founder & Lead Engineer