# SWARM-AgentXiv Bridge
Map research papers to SWARM scenarios for empirical validation.
## Overview
AgentXiv is a repository of multi-agent AI research. The SWARM-AgentXiv bridge enables:
- Paper annotation with risk profiles
- Scenario generation from paper claims
- Validation reports comparing predictions to simulations
## Installation
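No install command is given in this section. Assuming the bridge is distributed as a Python package named `swarm-agentxiv` (an assumption, not confirmed by this document), installation would look like:

```shell
# Assumed package name; adjust if the actual distribution differs.
pip install swarm-agentxiv

# Or, from a source checkout of the repository:
pip install -e .
```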
## Quick Start
```python
from swarm_agentxiv import PaperAnnotator, ScenarioGenerator

# Annotate a paper
annotator = PaperAnnotator()
metadata = annotator.annotate("arxiv:2502.14143")
print(metadata.risk_profile)
# {'interaction_density': 'high', 'failure_modes': ['miscoordination', 'collusion']}

# Generate a SWARM scenario
generator = ScenarioGenerator()
scenario = generator.from_paper(metadata)

# Run validation
from swarm.core import Orchestrator
metrics = Orchestrator.from_scenario(scenario).run()
```
## Paper Metadata Schema
```yaml
paper_id: "agentxiv:2025-0042"
arxiv_id: "2502.14143"
title: "Multi-Agent Market Dynamics"
risk_profile:
  interaction_density: high
  failure_modes:
    - miscoordination
    - conflict
    - collusion
assumptions:
  - assumes-honest-majority
  - static-eval-only
claims:
  - claim: "Adverse selection emerges without governance"
    testable: true
    metric: quality_gap
    expected: negative
swarm_scenarios:
  baseline:
    name: hammond_baseline
    agent_roles: {honest: 4, opportunistic: 1}
    metrics: [quality_gap, toxicity_rate]
```
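A minimal sketch of consuming an annotation file, assuming PyYAML and the schema fields shown above; any field access beyond those fields is illustrative:

```python
import yaml

# Inline copy of the example annotation above; in practice this would
# be read from a YAML file in the metadata repository.
ANNOTATION = """
paper_id: "agentxiv:2025-0042"
arxiv_id: "2502.14143"
title: "Multi-Agent Market Dynamics"
risk_profile:
  interaction_density: high
  failure_modes: [miscoordination, conflict, collusion]
claims:
  - claim: "Adverse selection emerges without governance"
    testable: true
    metric: quality_gap
    expected: negative
"""

meta = yaml.safe_load(ANNOTATION)
# Keep only claims marked testable; these drive scenario generation.
testable_claims = [c for c in meta["claims"] if c["testable"]]
print(meta["paper_id"], len(testable_claims))
```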
## Validation Workflow
1. **Annotate**: extract testable claims from the paper
2. **Generate**: create SWARM scenarios matching the paper's setup
3. **Run**: execute the scenarios with multiple seeds
4. **Compare**: check whether results match the paper's predictions
5. **Report**: generate a validation summary
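The Compare step can be sketched in plain Python. The `expected`/`metric` names follow the schema above; the decision rule itself (mean metric across seeds must have the predicted sign) is an assumption for illustration, not SWARM's actual comparison logic:

```python
from statistics import mean

def claim_supported(expected: str, seed_values: list[float]) -> bool:
    """True if the mean metric across seeds has the predicted sign."""
    m = mean(seed_values)
    return m < 0 if expected == "negative" else m > 0

# quality_gap from five hypothetical seeded runs
runs = [-0.12, -0.08, -0.15, -0.03, -0.11]
print(claim_supported("negative", runs))  # True for these values
```

Averaging over seeds before checking the sign keeps a single outlier run from flipping the verdict.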
## Web Interface
Browse annotated papers:
Features:

- Paper search by topic and risk profile
- Scenario download
- Validation results
## Contributing Annotations
1. Fork the AgentXiv metadata repository
2. Add a YAML annotation file
3. Run validation locally
4. Submit a PR with the results
## Status
**In development**: metadata schema defined; 10 papers annotated.