A comprehensive multi-agent communication framework supporting multiple protocols for distributed AI systems. This framework enables seamless interaction between AI agents across different communication paradigms with built-in security, monitoring, and scalability features.
- Multi-Protocol Support: ANP, A2A, ACP, Agora, and custom protocols
- Modular Architecture: Protocol-agnostic design with pluggable backends
- Security-First: DID authentication, E2E encryption, privacy protection
- Real-time Monitoring: Performance metrics, health checks, and observability
- Distributed Systems: Support for complex multi-agent workflows
- Testing Frameworks: Comprehensive testing suites for each protocol
- High Performance: Async/await patterns, concurrent execution
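The async/concurrency pattern referenced above is the usual asyncio fan-out. Below is a minimal sketch; query_agent is a hypothetical stand-in and not part of this framework's API:

# Minimal concurrency sketch; query_agent is a hypothetical stand-in, not a framework API.
import asyncio

async def query_agent(agent_id: int, prompt: str) -> str:
    await asyncio.sleep(0.1)          # placeholder for a real network / LLM call
    return f"agent-{agent_id}: reply to '{prompt}'"

async def main() -> None:
    # Fan out one prompt to several agents concurrently and gather the replies.
    replies = await asyncio.gather(*(query_agent(i, "status?") for i in range(4)))
    print("\n".join(replies))

asyncio.run(main())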
- Quick Start
- Supported Scenarios & Getting Started
- Protocol Guide
- Installation
- Configuration
- Architecture
- Development
- Monitoring & Observability
- Contributing
- Chinese Documentation (Simplified Chinese)
# Required environment
Python 3.11+
OpenAI API Key (for LLM-based agents)

# Clone the repository
git clone https://github.com/MultiagentBench/Multiagent-Protocol.git
cd Multiagent-Protocol
# Install dependencies
conda create -n map python=3.11 -y
conda activate map
pip install -r requirements.txt
# Set environment variables
export OPENAI_API_KEY='sk-your-openai-api-key-here'
export OPENAI_BASE_URL='https://your-base-url-here'

GAIA Scenario
Purpose: Task execution and coordination across distributed AI agents
All Available Runners:
# Run with different protocols
python -m scenarios.gaia.runners.run_anp # ANP Protocol
python -m scenarios.gaia.runners.run_a2a # A2A Protocol
python -m scenarios.gaia.runners.run_acp # ACP Protocol
python -m scenarios.gaia.runners.run_agora # Agora Protocol
# Protocol Router coordination
python -m scenarios.gaia.runners.run_meta_protocol

Streaming Queue Scenario
Purpose: High-throughput message processing with coordinator-worker patterns
Quick Start:
# 1. Set up environment
export OPENAI_API_KEY='sk-your-key'
# 2. Run streaming queue with A2A
python -m scenarios.streaming_queue.runner.run_a2a
# 3. Observe coordinator-worker message processing
# Expected: High-frequency message exchange with load balancing

All Available Runners:
# Stream processing with different protocols
python -m scenarios.streaming_queue.runner.run_anp # ANP Streaming
python -m scenarios.streaming_queue.runner.run_a2a # A2A Streaming
python -m scenarios.streaming_queue.runner.run_acp # ACP Streaming
python -m scenarios.streaming_queue.runner.run_agora # Agora Streaming
# Protocol Router coordination
python -m scenarios.streaming_queue.runner.run_meta_network
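The coordinator-worker pattern behind these runners can be illustrated with a minimal, self-contained asyncio sketch; the function names and queue layout below are hypothetical and not part of the framework's API:

# Minimal coordinator-worker sketch (hypothetical names, not the framework's API).
import asyncio

async def worker(name: str, queue: asyncio.Queue, results: list) -> None:
    # Each worker pulls messages until the coordinator signals shutdown with None.
    while True:
        msg = await queue.get()
        if msg is None:
            queue.task_done()
            break
        results.append(f"{name} processed {msg}")
        queue.task_done()

async def coordinator(num_workers: int = 3, num_messages: int = 10) -> list:
    queue: asyncio.Queue = asyncio.Queue()
    results: list = []
    workers = [asyncio.create_task(worker(f"worker-{i}", queue, results))
               for i in range(num_workers)]
    # Load balancing falls out of the shared queue: idle workers pick up the next message.
    for i in range(num_messages):
        await queue.put(f"task-{i}")
    for _ in workers:
        await queue.put(None)          # one shutdown signal per worker
    await queue.join()                 # wait until every message is processed
    await asyncio.gather(*workers)
    return results

if __name__ == "__main__":
    print(asyncio.run(coordinator()))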
Safety Tech Scenario
Purpose: Privacy-preserving agent communication and security testing

Quick Start:
# 1. Set up environment
export OPENAI_API_KEY='sk-your-key'
# 2. Run privacy-aware security tests
python -m scenarios.safety_tech.runners.run_unified_security_test_anp
# 3. Review privacy protection mechanisms
# Expected: Encrypted communication with privacy compliance reports

All Available Runners:
# Unified security testing
python -m scenarios.safety_tech.runners.run_unified_security_test_anp
python -m scenarios.safety_tech.runners.run_unified_security_test_a2a
python -m scenarios.safety_tech.runners.run_unified_security_test_acp
python -m scenarios.safety_tech.runners.run_unified_security_test_agora
# Protocol Router security analysis
python -m scenarios.safety_tech.runners.run_s2_meta
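As a rough illustration of the kind of end-to-end message protection these tests exercise, here is a minimal sketch using symmetric encryption from the cryptography package; the helper functions and key handling are assumptions for the demo and do not reflect the framework's internal security layer:

# Minimal E2E-style message encryption sketch (illustrative only; not the
# framework's actual security layer). Requires: pip install cryptography
from cryptography.fernet import Fernet

# In a real deployment the key would be negotiated or derived per agent pair,
# e.g. after DID-based authentication; here we just generate one for the demo.
shared_key = Fernet.generate_key()
channel = Fernet(shared_key)

def send_secure(plaintext: str) -> bytes:
    # Sender side: encrypt before the message ever touches the transport.
    return channel.encrypt(plaintext.encode("utf-8"))

def receive_secure(ciphertext: bytes) -> str:
    # Receiver side: only holders of the shared key can decrypt.
    return channel.decrypt(ciphertext).decode("utf-8")

if __name__ == "__main__":
    wire_msg = send_secure("patient record: redacted")
    print("on the wire:", wire_msg[:32], "...")
    print("decrypted:", receive_secure(wire_msg))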
Fail Storm Recovery Scenario
Purpose: Fault-tolerant systems with automatic recovery mechanisms

Supported Protocols: ANP, A2A, ACP, Agora, Protocol Router
Usage:
export OPENAI_API_KEY='sk-your-key-here'
# Fault tolerance testing
python -m scenarios.fail_storm_recovery.runners.run_anp
python -m scenarios.fail_storm_recovery.runners.run_a2a
python -m scenarios.fail_storm_recovery.runners.run_acp
python -m scenarios.fail_storm_recovery.runners.run_agora
# Protocol Router coordination
python -m scenarios.fail_storm_recovery.runners.run_meta # no adapter
python -m scenarios.fail_storm_recovery.runners.run_meta_network # with adapter
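The recovery behaviour these runners stress can be approximated by a small retry-with-backoff wrapper. Everything below (function names, timings) is a hypothetical sketch, not the scenario's actual recovery logic:

# Retry-with-exponential-backoff sketch (hypothetical; for illustration only).
import asyncio
import random

async def flaky_agent_call(payload: str) -> str:
    # Stand-in for a remote agent call that fails intermittently.
    if random.random() < 0.5:
        raise ConnectionError("simulated agent failure")
    return f"ok: {payload}"

async def call_with_recovery(payload: str, max_attempts: int = 5) -> str:
    delay = 0.1
    for attempt in range(1, max_attempts + 1):
        try:
            return await flaky_agent_call(payload)
        except ConnectionError as exc:
            if attempt == max_attempts:
                raise
            # Back off exponentially before retrying, as a crude recovery policy.
            print(f"attempt {attempt} failed ({exc}); retrying in {delay:.1f}s")
            await asyncio.sleep(delay)
            delay *= 2
    raise RuntimeError("unreachable")

if __name__ == "__main__":
    print(asyncio.run(call_with_recovery("health-check")))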
Router Bench
Purpose: Benchmark and evaluate protocol routing performance and decision-making

Quick Start:
# 1. Set up environment
export OPENAI_API_KEY='sk-your-key'
# 2. Run the benchmark test
python routerbench/run_benchmark.py
# 3. Review benchmark results
# Expected: Protocol routing accuracy, latency metrics, and performance analysis
# Results saved to: routerbench/results/benchmark_results.json

Features:
- Protocol selection accuracy testing
- Routing latency benchmarking
- Multi-protocol comparison
- Detailed performance reports
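To illustrate how such results might be consumed, the sketch below loads the results file and prints a per-protocol summary. It assumes the file holds a list of per-query records with fields named protocol, correct, and latency_ms; these field names are illustrative assumptions, not the benchmark's documented schema:

# Hypothetical results-inspection sketch; the JSON field names are assumed, not documented.
import json
from collections import defaultdict
from pathlib import Path

results_path = Path("routerbench/results/benchmark_results.json")
records = json.loads(results_path.read_text())

by_protocol: dict = defaultdict(lambda: {"total": 0, "correct": 0, "latency_ms": []})
for rec in records:
    stats = by_protocol[rec["protocol"]]
    stats["total"] += 1
    stats["correct"] += int(rec["correct"])
    stats["latency_ms"].append(rec["latency_ms"])

for protocol, stats in by_protocol.items():
    accuracy = stats["correct"] / stats["total"]
    avg_latency = sum(stats["latency_ms"]) / len(stats["latency_ms"])
    print(f"{protocol}: accuracy={accuracy:.2%}, avg latency={avg_latency:.1f} ms")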
ANP
- Features: DID authentication, E2E encryption, WebSocket communication
- Use Cases: Secure agent networks, identity-verified communications
- Dependencies: agentconnect_src/ (AgentConnect SDK)
A2A
- Features: Direct peer communication, JSON-RPC messaging, event streaming
- Use Cases: High-performance agent coordination, real-time messaging
- Dependencies: a2a-sdk, a2a-server
ACP
- Features: Session management, conversation threads, message history
- Use Cases: Conversational agents, multi-turn interactions
- Dependencies: acp-sdk
Agora
- Features: Tool orchestration, LangChain integration, function calling
- Use Cases: Tool-enabled agents, LLM-powered workflows
- Dependencies: agora-protocol, langchain
Protocol Router
- Features: Protocol abstraction, adaptive routing, multi-protocol support
- Use Cases: Protocol-agnostic applications, seamless migration
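The protocol-abstraction idea can be sketched in a few lines: every backend exposes the same send interface, and a router picks one per message. The class names and routing policy below are hypothetical, not the framework's API:

# Toy protocol-abstraction sketch (hypothetical interfaces; not the framework's API).
from typing import Protocol

class CommBackend(Protocol):
    name: str
    def send(self, recipient: str, message: str) -> str: ...

class EchoBackend:
    # Stand-in for a concrete backend such as ANP, A2A, ACP, or Agora.
    def __init__(self, name: str) -> None:
        self.name = name
    def send(self, recipient: str, message: str) -> str:
        return f"[{self.name}] delivered to {recipient}: {message}"

class ProtocolRouter:
    """Routes each message to a backend chosen by a simple policy."""
    def __init__(self, backends: dict[str, CommBackend]) -> None:
        self.backends = backends
    def route(self, recipient: str, message: str, prefer: str | None = None) -> str:
        # Adaptive routing would inspect message size, security needs, etc.;
        # here we just honour an optional preference and fall back to the first backend.
        backend = self.backends.get(prefer or "", next(iter(self.backends.values())))
        return backend.send(recipient, message)

router = ProtocolRouter({p: EchoBackend(p) for p in ("anp", "a2a", "acp", "agora")})
print(router.route("agent-2", "hello", prefer="a2a"))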
# Required
export OPENAI_API_KEY='sk-your-openai-api-key'
# Optional
export ANTHROPIC_API_KEY='your-anthropic-key'
export OPENAI_BASE_URL='https://api.openai.com/v1' # Custom endpoint
export LOG_LEVEL='INFO' # DEBUG, INFO, WARNING, ERROR

Each scenario uses YAML configuration files located in scenario/{scenario}/config/:
# Example: scenario/gaia/config/anp.yaml
model:
  type: "openai"
  name: "gpt-4o"
  temperature: 0.0
  api_key: "${OPENAI_API_KEY}"

network:
  host: "127.0.0.1"
  port_range:
    start: 9000
    end: 9010

agents:
  - id: 1
    name: "Agent1"
    tool: "create_chat_completion"
    max_tokens: 500

workflow:
  type: "sequential"
  max_steps: 5
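As a rough illustration of how a config like this could be consumed, the sketch below loads the YAML and expands ${OPENAI_API_KEY} from the environment. The loader shown is an assumption for illustration, not the framework's actual configuration code:

# Hypothetical config-loading sketch; not the framework's actual loader.
# Requires: pip install pyyaml
import os
import yaml

def load_config(path: str) -> dict:
    with open(path, "r", encoding="utf-8") as fh:
        raw = fh.read()
    # Expand ${VAR} references such as ${OPENAI_API_KEY} from the environment.
    expanded = os.path.expandvars(raw)
    return yaml.safe_load(expanded)

config = load_config("scenario/gaia/config/anp.yaml")
print(config["model"]["name"], config["network"]["port_range"])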
┌───────────────────────────────────────────────────────────────┐
│                      Multiagent-Protocol                       │
├───────────────────────────────────────────────────────────────┤
│                           Scenarios                            │
│  ┌──────────┐ ┌─────────────┐ ┌─────────────┐ ┌────────────┐  │
│  │   GAIA   │ │  Streaming  │ │ Safety Tech │ │ Fail Storm │  │
│  │          │ │    Queue    │ │             │ │  Recovery  │  │
│  └──────────┘ └─────────────┘ └─────────────┘ └────────────┘  │
├───────────────────────────────────────────────────────────────┤
│                       Protocol Backends                        │
│  ┌─────┐ ┌─────┐ ┌─────┐ ┌───────┐ ┌─────────────────┐        │
│  │ ANP │ │ A2A │ │ ACP │ │ Agora │ │ Protocol Router │        │
│  └─────┘ └─────┘ └─────┘ └───────┘ └─────────────────┘        │
├───────────────────────────────────────────────────────────────┤
│                      Core Infrastructure                       │
│  ┌─────────────┐ ┌─────────────┐ ┌─────────────┐              │
│  │   Network   │ │   Agents    │ │ Monitoring  │              │
│  │    Layer    │ │    Layer    │ │    Layer    │              │
│  └─────────────┘ └─────────────┘ └─────────────┘              │
└───────────────────────────────────────────────────────────────┘
- Create the protocol backend in scenario/{scenario}/protocol_backends/{protocol_name}/
- Implement the required interfaces: agent.py, network.py, comm.py (a minimal sketch of such interfaces follows the directory layout below)
- Add configuration in scenario/{scenario}/config/{protocol_name}.yaml
- Create a runner in scenario/{scenario}/runners/run_{protocol_name}.py
scenario/
├── {scenario}/                  # Scenario implementation
│   ├── config/                  # Configuration files
│   ├── protocol_backends/       # Protocol implementations
│   │   └── {protocol}/
│   │       ├── agent.py         # Agent implementation
│   │       ├── network.py       # Network coordinator
│   │       └── comm.py          # Communication backend
│   ├── runners/                 # Entry point scripts
│   └── tools/                   # Scenario-specific tools
├── common/                      # Shared utilities
└── requirements.txt             # Dependencies
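A minimal sketch of what the three interfaces might look like; the base-class names and method signatures below are hypothetical, not the framework's actual API:

# Hypothetical interface sketch for a new protocol backend
# (class and method names are illustrative, not the framework's actual API).
from abc import ABC, abstractmethod

class CommBackendBase(ABC):
    """Transport layer for one protocol (comm.py)."""
    @abstractmethod
    async def send(self, recipient_id: str, payload: dict) -> dict: ...
    @abstractmethod
    async def receive(self) -> dict: ...

class AgentBase(ABC):
    """Protocol-specific agent wrapper (agent.py)."""
    def __init__(self, agent_id: str, comm: CommBackendBase) -> None:
        self.agent_id = agent_id
        self.comm = comm

    @abstractmethod
    async def handle(self, message: dict) -> dict:
        """Produce a reply for an incoming message."""

class NetworkBase(ABC):
    """Coordinator that wires agents together (network.py)."""
    @abstractmethod
    async def run(self, task: str) -> dict: ...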
The framework includes comprehensive monitoring capabilities:
- Performance Metrics: Message throughput, latency, success rates
- Health Monitoring: Agent status, network connectivity, resource usage
- Security Auditing: Authentication events, encryption status, privacy compliance
- Custom Dashboards: Protocol-specific visualizations and alerts
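As a rough sketch of the kind of metric collection described above, the snippet below tracks message throughput, latency, and success rate in-process; the class name and fields are hypothetical, not the framework's monitoring API:

# Hypothetical in-process metrics sketch (not the framework's monitoring API).
import time
from dataclasses import dataclass, field

@dataclass
class MessageMetrics:
    latencies: list = field(default_factory=list)
    successes: int = 0
    failures: int = 0
    started_at: float = field(default_factory=time.monotonic)

    def record(self, latency_s: float, ok: bool) -> None:
        self.latencies.append(latency_s)
        if ok:
            self.successes += 1
        else:
            self.failures += 1

    def summary(self) -> dict:
        total = self.successes + self.failures
        elapsed = time.monotonic() - self.started_at
        return {
            "throughput_msg_per_s": total / elapsed if elapsed else 0.0,
            "avg_latency_ms": 1000 * sum(self.latencies) / total if total else 0.0,
            "success_rate": self.successes / total if total else 0.0,
        }

metrics = MessageMetrics()
metrics.record(0.12, ok=True)
metrics.record(0.30, ok=False)
print(metrics.summary())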
We welcome contributions! Please see our Contributing Guide for details.
- Fork the repository
- Create a feature branch: git checkout -b feature/amazing-feature
- Make your changes and add tests
- Ensure all tests pass: pytest
- Commit your changes: git commit -m 'Add amazing feature'
- Push to the branch: git push origin feature/amazing-feature
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- Documentation: Wiki
- Issues: GitHub Issues
- Discussions: GitHub Discussions
Built with ❤️ by the Multi-Agent Systems Community