Multi-Agent Architecture on osModa
Running one agent is straightforward. Running multiple agents that collaborate, share tools, and operate at different trust levels is an architecture problem. This guide covers both single-server multi-agent setups and multi-server fleets connected via osmoda-mesh encrypted peer-to-peer networking.
Last updated: March 2026
Key concepts
- Single server: run multiple agent processes supervised by osmoda-watch, each as a separate systemd service.
- Multi-server: spawn additional servers, osmoda-mesh auto-connects them for encrypted P2P communication.
- Tool sharing: osmoda-mcpd manages MCP servers that multiple agents can access for shared capabilities.
- Isolation: trust tiers (0, 1, 2) control what each agent can access — from unrestricted root to sandboxed with allowlisted network only.
Pattern 1: Multiple Agents on One Server
The simplest multi-agent setup runs multiple agent processes on a single osModa server. Each agent is a separate systemd service, independently supervised by osmoda-watch. Agents communicate via local IPC, shared files, or local network ports.
Example: Two agents on one server
```ini
# Agent 1: Research agent — scrapes and summarizes data
# /etc/systemd/system/research-agent.service
[Unit]
Description=Research Agent
After=network-online.target agentd.service

[Service]
Type=simple
WorkingDirectory=/opt/agents/research
EnvironmentFile=/opt/agents/research/.env
ExecStart=/opt/agents/research/.venv/bin/python main.py
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```

```ini
# Agent 2: Writer agent — generates content from research
# /etc/systemd/system/writer-agent.service
[Unit]
Description=Writer Agent
After=network-online.target agentd.service

[Service]
Type=simple
WorkingDirectory=/opt/agents/writer
EnvironmentFile=/opt/agents/writer/.env
ExecStart=/opt/agents/writer/.venv/bin/python main.py
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```
Enable and start both:
```shell
systemctl daemon-reload
systemctl enable research-agent writer-agent
systemctl start research-agent writer-agent

# Verify osmoda-watch is supervising both
journalctl -u osmoda-watch --since "2 minutes ago" | grep -E "research|writer"
```
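The unit files above point `ExecStart` at a `main.py`. osModa doesn't mandate any particular agent structure; as a hypothetical sketch, a restart-friendly entry point might look like the following (the `OUTBOX` variable, the placeholder work in `run_once`, and the SIGTERM handling are all illustrative assumptions, not osModa requirements):

```python
# main.py — hypothetical agent skeleton. osModa only requires that systemd
# can start and restart the process; the structure here is illustrative.
import os
import signal
import time

RUNNING = True

def handle_term(signum, frame):
    """Flip the loop flag so systemd's stop signal exits the agent cleanly."""
    global RUNNING
    RUNNING = False

def run_once(outbox: str) -> str:
    """One unit of work: here, write a placeholder research result."""
    os.makedirs(outbox, exist_ok=True)
    path = os.path.join(outbox, f"result-{int(time.time() * 1000)}.txt")
    with open(path, "w") as f:
        f.write("summary of scraped data\n")
    return path

def main():
    signal.signal(signal.SIGTERM, handle_term)
    # OUTBOX would come from the EnvironmentFile named in the unit.
    outbox = os.environ.get("OUTBOX", "/opt/shared/research")
    while RUNNING:
        run_once(outbox)
        time.sleep(60)   # crash recovery is systemd's job (Restart=always)

# A real main.py would end with:  if __name__ == "__main__": main()
```

Because `Restart=always` already handles crashes, the loop can stay simple: do one unit of work, sleep, repeat. The SIGTERM handler lets `systemctl stop` interrupt the loop between iterations instead of killing the process mid-task.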
When to use this pattern: agents that collaborate tightly on the same data, share a local database, or need minimal inter-agent latency. Works well for 2-5 agents on Pro or Team plans.
Resource planning: each agent that calls external LLM APIs typically needs 512 MB to 1 GB of RAM and minimal CPU. Agents running local models need significantly more. Choose your plan accordingly.
Resource allocation per plan:
| Plan | Price/mo | Resources | Agents |
|------|----------|-----------|--------|
| Solo | $14.99 | 2 CPU / 4 GB RAM / 40 GB disk | 1-2 |
| Pro | $34.99 | 4 CPU / 8 GB RAM / 80 GB disk | 2-4 |
| Team | $62.99 | 8 CPU / 16 GB RAM / 160 GB disk | 5-10 |
| Scale | $125.99 | 16 CPU / 32 GB RAM / 320 GB disk | 10-20+ |
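The agent counts above are driven mostly by RAM. As a rough back-of-envelope check (the 1 GB-per-agent figure and the reserve for the OS and osModa daemons are assumptions, not osModa specifications, and CPU can bound the count lower than RAM does):

```python
# Back-of-envelope capacity estimate. The per-agent and reserve figures are
# assumptions; treat the result as an upper bound, not a guarantee.
def max_agents(plan_ram_gb: float,
               per_agent_mb: int = 1024,        # API-calling agent, high end
               system_reserve_mb: int = 1536    # OS + osModa daemons (assumed)
               ) -> int:
    """Estimate how many API-calling agents fit in a plan's RAM."""
    usable_mb = plan_ram_gb * 1024 - system_reserve_mb
    return max(0, int(usable_mb // per_agent_mb))

print(max_agents(4))   # Solo plan (4 GB) -> 2, matching the 1-2 in the table
```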
Pattern 2: Multi-Server Fleet with osmoda-mesh
When you need more resources or want to isolate agent workloads, spawn additional osModa servers. osmoda-mesh is the Rust daemon that provides encrypted peer-to-peer networking between servers. It handles discovery, connection management, and message routing — no central broker required.
When you spawn a new server under the same osModa account, osmoda-mesh automatically discovers it and establishes encrypted connections. Agents on different servers can then communicate as if they were on the same machine.
Check mesh status:
```shell
# View connected peers
systemctl status osmoda-mesh

# Check mesh connectivity
journalctl -u osmoda-mesh --since "10 minutes ago"

# Example output shows connected peers:
# [osmoda-mesh] peer connected: 203.0.113.43 (server-2)
# [osmoda-mesh] peer connected: 203.0.113.44 (server-3)
# [osmoda-mesh] mesh healthy: 3 nodes, all reachable
```
Architecture examples:
Specialization by function
- Server 1: research agents that scrape and index data.
- Server 2: analysis agents that process research output.
- Server 3: action agents that execute decisions.

Each server is sized for its workload. osmoda-mesh connects them for cross-server agent communication.
Isolation by trust level
- Server 1: Tier 0 agents with unrestricted access for internal operations.
- Server 2: Tier 2 agents handling untrusted user input, sandboxed with osmoda-egress domain allowlisting and declared-only capabilities.

Physical server isolation adds a hardware boundary on top of software sandboxing.
Geographic distribution
Servers in different regions for latency optimization or data residency requirements. osmoda-mesh maintains encrypted connections across regions. Agents route requests to the nearest server with the required capabilities.
Tool Sharing with osmoda-mcpd
In multi-agent setups, agents often need access to the same tools — database queries, file operations, browser automation, API integrations. osmoda-mcpd is the daemon that manages MCP (Model Context Protocol) server lifecycle: starting, stopping, auto-restarting, and registering tools that agents can use.
Instead of each agent running its own copy of every tool, you register MCP servers with osmoda-mcpd and multiple agents connect to them. This reduces resource usage and ensures consistent tool behavior across agents.
Check registered MCP servers:
```shell
# View osmoda-mcpd managed servers
systemctl status osmoda-mcpd

# Check registered MCP servers and their status
journalctl -u osmoda-mcpd --since "5 minutes ago"
```

osModa ships with 83 built-in tools across these categories:

- File operations (read, write, search, watch)
- Shell execution (sandboxed and unrestricted)
- Service management (start, stop, restart, status)
- SafeSwitch deployment (atomic deploy + rollback)
- Vector + keyword memory (store, search, recall)
- MCP management (register, connect, disconnect)
- Sandbox execution (isolated process runs)
- Fleet coordination (multi-server orchestration)
For a deep dive on MCP server configuration, see the MCP Server Setup guide.
Trust Tiers for Agent Isolation
Not all agents should have the same privileges. osModa's trust model defines three tiers that control what each agent can access. In a multi-agent system, different agents typically operate at different trust levels.
Tier 0 — Unrestricted (osModa Agent)
Full root access, unrestricted network, all 83 tools available. This is the osModa platform agent itself. Use for internal automation that you fully control — deployment scripts, system maintenance, infrastructure management.
Tier 1 — Sandboxed with Declared Capabilities
Runs in a sandbox with explicitly declared capabilities. Can access only the tools and network endpoints you specify. Use for agents that need some system access but should not have unrestricted root — third-party framework agents, customer-facing bots, agents processing semi-trusted input.
Tier 2 — Maximum Isolation
Maximum sandboxing. Network access restricted to domains allowlisted via osmoda-egress. Cannot access the broader filesystem or system services. Use for agents handling untrusted user input, running third-party code, or operating in regulated environments where blast radius must be minimal.
Practical example: A customer support system might use a Tier 0 agent for internal database operations, a Tier 1 agent for processing customer queries through an LLM, and a Tier 2 agent for executing any code or scripts that customers provide. Each tier limits the blast radius if an agent is compromised or misbehaves.
For detailed security configuration, see the Agent Security guide.
Agent Coordination Patterns
How agents communicate depends on your architecture. osModa supports several patterns, and osmoda-mesh enables all of them across server boundaries.
Shared filesystem
Agents on the same server read and write to shared directories. Simple but effective for pipeline architectures. Agent A writes research to /opt/shared/research, Agent B watches the directory and processes new files. File operations are logged in the audit ledger.
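A minimal sketch of that pipeline using only the standard library (the polling approach and the `process` step are illustrative assumptions; a production writer agent might use inotify or the audit ledger instead of polling):

```python
# Pipeline sketch: Agent A drops files into a shared directory,
# Agent B picks up anything new on each polling cycle.
import os
import tempfile

def poll_new_files(directory: str, seen: set) -> list:
    """Return paths added to `directory` since the last call."""
    new = []
    for name in sorted(os.listdir(directory)):
        path = os.path.join(directory, name)
        if path not in seen and os.path.isfile(path):
            seen.add(path)
            new.append(path)
    return new

def process(path: str) -> str:
    """Writer-agent stand-in: turn research output into a draft."""
    with open(path) as f:
        research = f.read()
    return f"DRAFT based on: {research.strip()}"

# Demo of one polling cycle, with a temp dir standing in for /opt/shared/research:
shared = tempfile.mkdtemp()
with open(os.path.join(shared, "r1.txt"), "w") as f:
    f.write("Q3 findings\n")

seen = set()
for path in poll_new_files(shared, seen):
    print(process(path))   # DRAFT based on: Q3 findings
```

Tracking `seen` across cycles means each file is processed exactly once; in a long-running agent the loop body above would run inside the supervised service's main loop.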
Local network ports
Agents expose HTTP or WebSocket endpoints on localhost. Agent A calls Agent B's API at localhost:8081. Works for request-response patterns and real-time streaming between agents on the same server.
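As a stdlib-only sketch of that request-response flow (the `/draft` endpoint and JSON payload shape are hypothetical; a real deployment would pin a fixed port such as 8081 rather than letting the OS pick one as this demo does):

```python
# Agent B exposes an HTTP endpoint; Agent A calls it over localhost.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class WriterAgentHandler(BaseHTTPRequestHandler):
    """Agent B: accepts research JSON, returns a draft."""
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        research = json.loads(self.rfile.read(length))
        body = json.dumps({"draft": f"Summary of: {research['topic']}"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):   # keep the demo's output quiet
        pass

# Port 0 asks the OS for a free port; production would pin 8081 as in the text.
server = HTTPServer(("127.0.0.1", 0), WriterAgentHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Agent A calls Agent B's API.
req = urllib.request.Request(
    f"http://127.0.0.1:{port}/draft",
    data=json.dumps({"topic": "Q3 revenue"}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.loads(resp.read())
server.shutdown()
print(reply)   # {'draft': 'Summary of: Q3 revenue'}
```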
osmoda-mesh for cross-server
For agents on different servers, osmoda-mesh handles encrypted message passing. Agents send messages to peers identified by server ID. The mesh handles routing, connection management, and reconnection on failure. Fleet coordination tools enable orchestrating tasks across all servers.
Vector memory sharing
osModa provides vector and keyword memory as built-in tools (part of the 83-tool set). Multiple agents can store and search a shared memory space, enabling collective knowledge without each agent maintaining its own index.
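osModa's memory tools are built in, so the sketch below is not the osModa API; it only illustrates the pattern of several agents writing to and searching one shared space, here as a toy in-process keyword index:

```python
# Generic illustration of shared keyword memory (NOT the osModa memory API).
from collections import defaultdict

class SharedMemory:
    """Keyword index that multiple agents store into and search."""
    def __init__(self):
        self._docs = {}                    # doc_id -> text
        self._index = defaultdict(set)     # token -> {doc_id, ...}

    def store(self, doc_id: str, text: str) -> None:
        self._docs[doc_id] = text
        for token in text.lower().split():
            self._index[token].add(doc_id)

    def search(self, query: str) -> list:
        """Return doc_ids containing every query token."""
        tokens = query.lower().split()
        if not tokens:
            return []
        hits = set.intersection(*(self._index.get(t, set()) for t in tokens))
        return sorted(hits)

# Research agent stores; writer agent searches the same space.
mem = SharedMemory()
mem.store("r1", "Q3 revenue grew twelve percent")
mem.store("r2", "churn dropped in Q3")
print(mem.search("q3"))           # ['r1', 'r2']
print(mem.search("q3 revenue"))   # ['r1']
```

The key property is that neither agent owns the index: both read and write the same store, so knowledge one agent gathers is immediately searchable by the others.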
Frequently Asked Questions
How many agents can I run on one osModa server?
It depends on the plan and agent resource usage. A Solo plan (2 CPU, 4 GB RAM) handles 1-2 lightweight agents calling external LLM APIs. Pro (4 CPU, 8 GB) handles 2-4 agents. Team (8 CPU, 16 GB) handles 5-10. Scale (16 CPU, 32 GB) handles 10-20+. Agents running local models consume significantly more resources.
Is osmoda-mesh encrypted?
Yes. osmoda-mesh uses encrypted peer-to-peer networking for all inter-server communication. There is no central broker — servers connect directly to each other. Traffic between agents on different servers is encrypted end-to-end.
Can agents on different servers share MCP tools?
Yes. osmoda-mcpd manages MCP server lifecycle on each server. Agents on different servers can connect to MCP servers running on any server in the mesh, provided the trust tier configuration allows it. This enables tool specialization — one server hosts database tools, another hosts browser tools.
What happens if one server in the mesh goes down?
osmoda-mesh handles server disconnections gracefully. Agents on other servers continue operating independently. When the server comes back online, the mesh reconnects automatically. There is no single point of failure in the mesh topology.
Can I mix trust tiers on the same server?
Yes. You can run a Tier 0 (unrestricted) agent alongside a Tier 2 (max isolation) agent on the same server. Each agent operates within its own trust boundary. Tier 2 agents are sandboxed with restricted network access (controlled by osmoda-egress) and declared capabilities only.
How do I scale from one server to multiple servers?
Spawn additional servers via the dashboard or API. osmoda-mesh automatically discovers and connects servers in your account. No manual networking configuration is needed. Deploy agents to the new servers and they can communicate with agents on existing servers immediately.
Build Your Multi-Agent Fleet
Start with one server, scale to many. osmoda-mesh connects them automatically with encrypted P2P networking. From $14.99/month per server.