What Is Tool Use in AI Agents
Tool use is the mechanism by which AI agents invoke structured functions to interact with external systems. Rather than only generating text, an agent with tool use can read files, run commands, query databases, send messages, and modify infrastructure. osModa provides 83 structured tools via the Model Context Protocol.
How Tool Use Works
Tool use follows a three-step process. First, the agent is presented with a catalog of available tools, each described by a name, description, and JSON Schema defining its input parameters. Second, during reasoning, the LLM decides to invoke a tool by outputting a structured JSON object matching the tool's schema -- this is often called "function calling." Third, the runtime executes the function, captures the result, and feeds it back to the LLM as context for the next reasoning step.
This loop can repeat multiple times within a single task. An agent might read a configuration file (tool call 1), analyze its contents (reasoning), write an updated version (tool call 2), verify the changes (tool call 3), and deploy the update (tool call 4). Each tool call provides new information that shapes the agent's next decision.
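The three-step loop above can be sketched in a few lines of Python. This is a minimal, self-contained illustration, not osModa's implementation: the `read_file` tool, the in-memory `FAKE_FS` stand-in for a filesystem, and the hard-coded `llm_output` (standing in for what a real model would emit) are all hypothetical.

```python
import json

# Step 1: the agent is shown a tool catalog -- name, description, and a
# JSON-Schema description of the input parameters (hypothetical example tool).
TOOLS = {
    "read_file": {
        "description": "Read a file and return its contents",
        "input_schema": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    },
}

# In-memory stand-in for a filesystem so the sketch is self-contained.
FAKE_FS = {"/etc/app.conf": "port=8080\n"}

def execute_tool(call: dict) -> dict:
    """Step 3: the runtime executes the function and returns a structured result."""
    if call["name"] == "read_file":
        content = FAKE_FS.get(call["arguments"]["path"])
        return {"ok": content is not None, "content": content}
    return {"ok": False, "error": f"unknown tool {call['name']}"}

# Step 2: during reasoning, the LLM emits a structured JSON invocation
# matching the tool's schema (here hard-coded in place of a real model call).
llm_output = json.loads('{"name": "read_file", "arguments": {"path": "/etc/app.conf"}}')

# Step 3: execute it; the result is fed back to the LLM as context.
result = execute_tool(llm_output)
print(result)  # → {'ok': True, 'content': 'port=8080\n'}
```

In a real agent, `result` would be appended to the conversation and the model would decide whether to reason further or issue the next tool call.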
Tool Use via MCP
The Model Context Protocol (MCP) standardizes tool use across AI agent frameworks. Before MCP, each framework (LangChain, CrewAI, AutoGen, custom implementations) defined its own tool interface. A tool written for LangChain could not be used in CrewAI without rewriting the integration layer. MCP provides a single JSON-RPC protocol that any framework can implement, making tools portable across agents and platforms.
On osModa, tool use is managed by the osmoda-mcpd daemon. This Rust daemon handles tool registration, discovery, invocation routing, and result delivery. When an agent requests the list of available tools, osmoda-mcpd returns the full catalog with schemas. When the agent invokes a tool, osmoda-mcpd validates the input, executes the function, and returns the structured result.
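The wire format behind this exchange is JSON-RPC 2.0, which MCP uses for both discovery (`tools/list`) and invocation (`tools/call`). The sketch below shows the shape of those two requests; the `read_file` tool name and its arguments are hypothetical examples, not part of the MCP specification.

```python
import json

def jsonrpc_request(method: str, params: dict, req_id: int) -> str:
    """Build a JSON-RPC 2.0 request envelope as used by MCP."""
    return json.dumps(
        {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    )

# Ask the server for its full tool catalog with schemas.
list_req = jsonrpc_request("tools/list", {}, 1)

# Invoke a tool by name with arguments conforming to its JSON Schema.
call_req = jsonrpc_request(
    "tools/call",
    {"name": "read_file", "arguments": {"path": "/etc/app.conf"}},
    2,
)
print(call_req)
```

Because both messages are plain JSON-RPC, any MCP client can talk to any MCP server: this framing is what makes tools portable across frameworks.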
osModa's 83-Tool Catalog
osModa ships with 83 built-in tools organized across seven categories. Every tool is defined with a JSON Schema, making it compatible with any MCP client. The tools are accessible to Claude Opus, Sonnet, Haiku, GPT-4o, and o3-mini through the osModa dashboard.
System Operations
Process management, service control, system metrics, resource monitoring, and OS-level operations on the NixOS host.
File Management
Read, write, move, copy, delete, search, and watch files and directories on the server filesystem.
Deployment
NixOS configuration management, service deployment, rollback via SafeSwitch, and environment provisioning.
Automation
Scheduled routines, cron-like task execution, event-driven triggers, and workflow orchestration.
Storage
Persistent key-value storage, structured data operations, and memory management for agent state.
Communication
Send and receive messages via Telegram, WhatsApp, Discord, and Slack. Inter-agent communication through osmoda-mesh.
Cryptography
ETH and SOL wallet operations via osmoda-keyd with policy-gated signing. Hash operations, signature verification, and encryption utilities.
Controlling Tool Access: The Trust Model
Not every agent should have access to every tool. An agent that reads log files should not be able to deploy configuration changes. osModa enforces this through three trust tiers:
- Tier 0 (Unrestricted) -- Full access to all 83 tools. For production agents that have been tested and vetted.
- Tier 1 (Sandboxed) -- Access only to tools explicitly declared in the agent's capability manifest. The agent cannot discover or invoke tools outside its declared scope.
- Tier 2 (Max Isolation) -- Most restrictive permissions. For untrusted or experimental agents that need minimal tool access in a fully isolated environment.
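The tier check itself can be sketched as a simple allow-list lookup. The manifest format below is hypothetical (osModa's actual capability-manifest schema is not shown here); it only illustrates the enforcement rule: Tier 0 sees everything, higher tiers see only declared tools.

```python
# Hypothetical capability manifest for a Tier 1 (sandboxed) agent.
MANIFEST = {
    "agent": "log-reader",
    "tier": 1,
    "allowed_tools": ["read_file", "search_files"],
}

def is_allowed(manifest: dict, tool_name: str) -> bool:
    """Tier 0 is unrestricted; sandboxed tiers only see declared tools."""
    if manifest["tier"] == 0:
        return True
    return tool_name in manifest["allowed_tools"]

print(is_allowed(MANIFEST, "read_file"))      # declared in the manifest → True
print(is_allowed(MANIFEST, "deploy_config"))  # outside the declared scope → False
```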
Every tool invocation, regardless of trust tier, is recorded in the SHA-256 hash-chained audit ledger. This provides a complete, tamper-evident record of every action taken by every agent.
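The tamper-evidence property of a hash-chained ledger comes from each entry's hash covering the previous entry's hash, so altering any record breaks every link after it. The following is a minimal sketch of that idea, not osModa's ledger format.

```python
import hashlib
import json

def append_entry(ledger: list, entry: dict) -> None:
    """Append an entry whose SHA-256 hash covers its payload plus the previous hash."""
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    payload = json.dumps(entry, sort_keys=True)
    digest = hashlib.sha256((prev + payload).encode()).hexdigest()
    ledger.append({"entry": entry, "prev": prev, "hash": digest})

def verify(ledger: list) -> bool:
    """Recompute every link; any modification to any record breaks the chain."""
    prev = "0" * 64
    for rec in ledger:
        payload = json.dumps(rec["entry"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True

ledger = []
append_entry(ledger, {"agent": "log-reader", "tool": "read_file", "args": {"path": "/var/log/app.log"}})
append_entry(ledger, {"agent": "log-reader", "tool": "search_files", "args": {"pattern": "ERROR"}})
print(verify(ledger))  # intact chain → True

ledger[0]["entry"]["tool"] = "deploy_config"  # tamper with the first record
print(verify(ledger))  # every later hash no longer matches → False
```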
Frequently Asked Questions
What is tool use in AI?
Tool use is the ability of an AI agent to invoke external functions to perform actions beyond text generation. Instead of just producing a written answer, the agent calls a structured function -- reading a file, executing a command, sending a message, or querying a database. The function has a defined input schema and returns a structured result that the agent incorporates into its reasoning.
What is the difference between tool use and function calling?
The terms are used interchangeably. "Function calling" was popularized by OpenAI to describe GPT-4's ability to output structured JSON matching a function schema. "Tool use" is the broader term used in AI agent research and the MCP specification. Both refer to the same mechanism: the LLM generates a structured invocation, the runtime executes it, and the result is fed back to the LLM.
How many tools does osModa provide?
osModa provides 83 built-in tools spanning seven categories: system operations, file management, deployment, automation, storage, communication, and cryptography. These tools are exposed through the Model Context Protocol via osmoda-mcpd and are accessible to any MCP-compatible agent running on the platform.
Can I add custom tools to osModa?
Yes. You can deploy custom MCP servers alongside osModa's built-in tools. osmoda-mcpd manages both built-in and custom MCP servers with the same lifecycle controls: health monitoring, watchdog restart, HTTP endpoint configuration, and audit logging. Custom tools are registered through standard MCP tool discovery.
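At its core, a custom MCP server is a program that answers `tools/list` and `tools/call` JSON-RPC requests. The handler below is a deliberately minimal sketch: a real server also implements the MCP initialization handshake and speaks over stdio or HTTP, and the `ping` tool is a hypothetical example.

```python
import json

# Hypothetical catalog for a custom MCP server exposing one tool.
CUSTOM_TOOLS = {
    "ping": {
        "description": "Return 'pong' to confirm the server is alive",
        "inputSchema": {"type": "object", "properties": {}},
    }
}

def handle(request: str) -> str:
    """Answer the two core MCP tool methods over JSON-RPC 2.0."""
    req = json.loads(request)
    if req["method"] == "tools/list":
        result = {"tools": [{"name": n, **spec} for n, spec in CUSTOM_TOOLS.items()]}
    elif req["method"] == "tools/call" and req["params"]["name"] == "ping":
        result = {"content": [{"type": "text", "text": "pong"}]}
    else:
        return json.dumps({"jsonrpc": "2.0", "id": req["id"],
                           "error": {"code": -32601, "message": "method not found"}})
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

print(handle('{"jsonrpc": "2.0", "id": 1, "method": "tools/list", "params": {}}'))
```

Because the server speaks standard MCP, its tools appear in the same discovery catalog as the built-in ones once registered.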
How does osModa control which tools an agent can access?
osModa uses a three-tier trust model. Tier 0 gives agents unrestricted access to all tools. Tier 1 sandboxes agents with declared capability limits -- the agent can only use tools explicitly allowed in its configuration. Tier 2 enforces maximum isolation with the most restrictive permissions. Every tool invocation is logged in the SHA-256 hash-chained audit ledger regardless of tier.
Which LLMs support tool use on osModa?
osModa's dashboard supports Claude Opus, Sonnet, and Haiku as well as GPT-4o and o3-mini. All of these models support tool use / function calling and can invoke the 83 built-in MCP tools plus any custom tools you deploy. The tool schemas are model-agnostic -- the same tool definitions work across all supported models.
Access 83 Tools on Dedicated Infrastructure
Spawn a server with the full MCP tool catalog, watchdog supervision, and audit logging. Plans from $14.99/month.
Spawn Server