Intermediate · 3 min read

How Do You Decide What Tools to Give an AI Agent?

A framework for deciding which tools to give an AI agent — covering granularity, safety boundaries, observability, and the principle of minimal tool sets.

Daily tips, confessions & AI news. Unsubscribe anytime. Questions? [email protected]

Why This Is Asked

Tool design is one of the most underrated aspects of agent engineering. Give an agent too few tools and it cannot complete tasks. Give it too many and it gets confused, makes mistakes, or takes unintended actions.

Key Concepts to Cover

  • Minimal tool set — give only the tools needed for the task
  • Tool granularity — atomic tools vs. composite tools
  • Read vs. write tools — different risk levels require different handling
  • Tool descriptions — how you describe tools matters as much as what they do
  • Safety constraints — rate limits, scope restrictions, confirmation requirements
  • Observability — every tool call should be logged

How to Approach This

1. Start With the Task

Before designing tools, fully understand what the agent needs to accomplish:

  • What are the inputs and outputs?
  • What real-world systems does it need to interact with?
  • What actions are reversible vs. irreversible?

2. The Minimal Tool Set Principle

Give the agent the minimum set of tools needed — no more. Extra tools:

  • Confuse the model (which tool do I use?)
  • Increase the surface area for mistakes
  • Make the agent harder to reason about and test
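As a sketch, this principle can be enforced mechanically by scoping a shared tool registry per task type. The registry contents, task names, and allowlist below are all illustrative assumptions, not a prescribed API:

```python
# Hypothetical tool registry; the lambdas are stand-ins for real tools.
TOOL_REGISTRY = {
    "read_file": lambda path: f"<contents of {path}>",
    "write_file": lambda path, content: None,
    "run_tests": lambda: "ok",
    "send_email": lambda to, body: None,
}

# Each task type gets only the tools it needs -- nothing else.
TASK_ALLOWLIST = {
    "code_review": ["read_file"],
    "bug_fix": ["read_file", "write_file", "run_tests"],
}

def tools_for_task(task_type: str) -> dict:
    """Return the minimal tool set for a task; unknown tasks get no tools."""
    return {name: TOOL_REGISTRY[name] for name in TASK_ALLOWLIST.get(task_type, [])}
```

With this shape, a bug-fix agent never even sees `send_email`, so it cannot misuse it.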

3. Tool Granularity

Too atomic: read_file_line(path, line_number) — requires thousands of calls to read a file.

Too composite: analyze_codebase_and_fix_all_bugs() — hides too much logic, making behavior opaque.

Just right: read_file(path) + write_file(path, content) + run_tests() — flexible yet efficient.
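The "just right" set above might be declared like this, assuming an OpenAI-style function-calling schema (the exact schema format depends on your model provider; this is a sketch, not a definitive spec):

```python
# Illustrative tool declarations at a workable granularity.
TOOLS = [
    {
        "name": "read_file",
        "description": "Read the full contents of the file at the given path.",
        "parameters": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    },
    {
        "name": "write_file",
        "description": "Overwrite the file at `path` with `content`.",
        "parameters": {
            "type": "object",
            "properties": {
                "path": {"type": "string"},
                "content": {"type": "string"},
            },
            "required": ["path", "content"],
        },
    },
    {
        "name": "run_tests",
        "description": "Run the project's test suite and return the results.",
        "parameters": {"type": "object", "properties": {}},
    },
]
```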

4. Separate Read and Write Tools

Read tools: Low risk. The agent can call these freely.

Write tools: Higher risk. Consider:

  • Requiring explicit confirmation before calling
  • Adding rate limits
  • Logging every call with who authorized it
  • Returning a "dry run" result before actually executing

5. Write Tool Descriptions Carefully

Bad: send_email(to, body) — "Sends an email"

Good: send_email(to, body) — "Sends an email to a single recipient. Use ONLY after the user has explicitly asked to send an email and confirmed the recipient and content. Do not use this for drafting or previewing."

6. Handle Tool Failures Gracefully

Return structured errors:

{"success": false, "error": "rate_limit_exceeded", "retry_after_seconds": 60}
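Structured errors only pay off if the calling side interprets them. A sketch of an agent-side caller that retries on rate limits and surfaces everything else (the field names match the payload above; `max_attempts` and the injectable `sleep` are assumptions for testability):

```python
import time

def call_tool_with_retry(tool, args, max_attempts=3, sleep=time.sleep):
    """Call a tool that returns structured errors; back off on rate
    limits, surface non-retryable errors to the agent unchanged."""
    for attempt in range(max_attempts):
        result = tool(**args)
        if result.get("success"):
            return result
        if result.get("error") == "rate_limit_exceeded":
            sleep(result.get("retry_after_seconds", 1))
            continue
        return result  # non-retryable: let the agent reason about it
    return {"success": False, "error": "max_attempts_exceeded"}
```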

Common Follow-ups

  1. "What if the agent needs a capability you do not want it to use autonomously?" Create a "request human approval" tool: request_approval(action, rationale). The agent can plan around sensitive actions without taking them directly.
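A minimal sketch of such a tool, using an in-memory list as a stand-in for a real review queue (the queue and ticket shape are assumptions):

```python
# In-memory stand-in for a real human-review queue.
APPROVAL_QUEUE: list[dict] = []

def request_approval(action: str, rationale: str) -> dict:
    """Hypothetical tool: record a sensitive action for human review
    instead of executing it directly."""
    ticket = {
        "id": len(APPROVAL_QUEUE) + 1,
        "action": action,
        "rationale": rationale,
        "status": "pending",
    }
    APPROVAL_QUEUE.append(ticket)
    # The agent gets a ticket id it can reference while it keeps planning.
    return {"success": True, "ticket_id": ticket["id"], "status": "pending"}
```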

  2. "How do you prevent an agent from using tools in unintended ways?" Clear descriptions, output validation, rate limiting, audit logging, and system-level guardrails.

  3. "Should tools be stateful or stateless?" Prefer stateless tools for simplicity and testability. If state is needed, maintain it in explicit session state passed to the tool.
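The stateless pattern can be sketched as a tool that takes session state in and returns updated state out, never storing anything itself (the pagination tool and its fields are illustrative):

```python
def search_next_page(query: str, session: dict) -> tuple[dict, dict]:
    """Illustrative stateless tool: pagination state lives in the
    caller-supplied session dict, never inside the tool itself."""
    page = session.get("page", 0)
    results = {
        "query": query,
        "page": page,
        "items": [f"result-{page}-{i}" for i in range(3)],  # fake results
    }
    # Return an updated copy of the state instead of mutating the input.
    return results, {**session, "page": page + 1}
```

Because the tool holds no hidden state, every call is reproducible from its inputs, which makes testing and debugging straightforward.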
