# Writing Scenarios
This guide covers how to create custom SWARM scenarios for your experiments.
## Scenario File Structure

Scenarios are YAML files with four main sections:

```yaml
name: my_scenario
description: What this scenario tests

simulation:
  # Simulation parameters

agents:
  # Agent configuration

governance:
  # Governance settings

payoff:
  # Payoff parameters
```
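Before handing a new file to SWARM, it can save time to verify that all four sections are present. The sketch below is illustrative only: it parses the file directly with PyYAML rather than going through any SWARM loader, and assumes the example path `scenarios/my_scenario.yaml`.

```python
# Minimal structural check for a scenario file (illustrative; uses PyYAML
# directly, not a SWARM API). The required keys mirror the skeleton above.
import yaml

REQUIRED_SECTIONS = {"simulation", "agents", "governance", "payoff"}

with open("scenarios/my_scenario.yaml") as f:
    scenario = yaml.safe_load(f)

missing = REQUIRED_SECTIONS - scenario.keys()
if missing:
    raise ValueError(f"Scenario is missing sections: {sorted(missing)}")
```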
## Simulation Section

```yaml
simulation:
  seed: 42             # Random seed (required for reproducibility)
  n_epochs: 20         # Number of epochs
  steps_per_epoch: 15  # Steps per epoch
  async_mode: false    # Async agent execution
  log_level: INFO      # Logging verbosity
```
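With the values above, a run works through 20 × 15 = 300 steps in total (assuming every epoch runs its full `steps_per_epoch`), so both settings directly determine how long a simulation takes.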
## Agents Section

Define agent populations:

```yaml
agents:
  - type: honest
    count: 5
    id_prefix: h
    params:
      cooperation_threshold: 0.7

  - type: opportunistic
    count: 3
    id_prefix: opp

  - type: deceptive
    count: 2
    id_prefix: dec
    params:
      trust_building_epochs: 5
      exploitation_threshold: 0.8

  - type: adversarial
    count: 1
    id_prefix: adv
```
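The population mix, such as the share of adversarial agents, is often the experimental variable of interest, so it helps to tally it before running. A small sketch, again using PyYAML directly rather than a SWARM helper, and assuming one entry per agent type:

```python
# Tally the configured population mix (illustrative only).
import yaml

with open("scenarios/my_scenario.yaml") as f:
    agents = yaml.safe_load(f)["agents"]

# Assumes one entry per agent type; a missing count defaults to 1 here.
counts = {entry["type"]: entry.get("count", 1) for entry in agents}
total = sum(counts.values())
for agent_type, count in counts.items():
    print(f"{agent_type}: {count}/{total} ({count / total:.0%})")
```

For the configuration above this reports 11 agents, with adversarial agents making up roughly 9% of the population.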
### Agent Types

| Type | Description | Key Parameters |
|---|---|---|
| `honest` | Cooperative, diligent | `cooperation_threshold` |
| `opportunistic` | Payoff maximizing | `cherry_pick_threshold` |
| `deceptive` | Trust then exploit | `trust_building_epochs` |
| `adversarial` | Active disruption | `target_selection` |
| `llm` | LLM-powered | `model`, `persona` |
### LLM Agents

```yaml
agents:
  - type: llm
    count: 2
    id_prefix: llm
    params:
      model: claude-3-haiku-20240307
      persona: |
        You are a helpful assistant focused on completing tasks
        efficiently while maintaining good relationships.
      temperature: 0.7
```
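The `|` block scalar preserves the persona's line breaks, so multi-line prompts can be written naturally; the block ends where the indentation returns to the level of `persona`. LLM agents also need credentials for the configured model provider, supplied however your deployment expects.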
## Governance Section

```yaml
governance:
  # Friction
  transaction_tax: 0.02

  # Reputation
  reputation_decay: 0.1
  initial_reputation: 1.0

  # Circuit breakers
  circuit_breaker_threshold: 0.3
  circuit_breaker_window: 10
  circuit_breaker_cooldown: 5

  # Audits
  audit_probability: 0.05
  audit_penalty: 0.5

  # Staking
  staking_requirement: 10.0
  stake_slash_rate: 0.1

  # Collusion
  collusion_detection: true
  collusion_threshold: 0.8
```
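As a rough sense of scale: with `audit_probability: 0.05` and `audit_penalty: 0.5`, about one interaction in twenty is audited, so (assuming the penalty is charged per audited interaction) the expected audit cost is 0.05 × 0.5 = 0.025 per interaction, in whatever units the penalty uses.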
## Payoff Section

```yaml
payoff:
  s_plus: 1.0  # Surplus if beneficial
  s_minus: 0.5 # Loss if harmful
  h: 0.3       # External harm
  theta: 0.5   # Initiator's share
  w_rep: 0.1   # Reputation weight
  rho_a: 0.1   # Initiator externality internalization
  rho_b: 0.1   # Counterparty externality internalization
```
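To build intuition for how these parameters interact, the sketch below combines them in a deliberately simplified way. It illustrates the roles the parameters play, not SWARM's actual payoff computation, so treat every line as an assumption and consult the payoff module for the real formula.

```python
# Stylized illustration only -- NOT the engine's payoff formula.
# theta splits the surplus between initiator and counterparty; rho_a and
# rho_b control how much of the external harm h each side internalizes.
s_plus, h = 1.0, 0.3
theta, rho_a, rho_b = 0.5, 0.1, 0.1

initiator = theta * s_plus - rho_a * h           # 0.5 - 0.03 = 0.47
counterparty = (1 - theta) * s_plus - rho_b * h  # 0.5 - 0.03 = 0.47
```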
## Loading Scenarios

### CLI

### Programmatic
```python
from swarm.scenarios import ScenarioLoader
from swarm.core.orchestrator import Orchestrator

scenario = ScenarioLoader.load("scenarios/my_scenario.yaml")
orchestrator = Orchestrator.from_scenario(scenario)
metrics = orchestrator.run()
```
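To run several scenarios back to back, the same three calls can sit in a loop. The sketch below assumes every `*.yaml` file in `scenarios/` is a runnable scenario (comparison files with `variants` may need different handling) and makes no assumption about the structure of the returned metrics.

```python
from pathlib import Path

from swarm.core.orchestrator import Orchestrator
from swarm.scenarios import ScenarioLoader

results = {}
for path in sorted(Path("scenarios").glob("*.yaml")):
    scenario = ScenarioLoader.load(str(path))
    orchestrator = Orchestrator.from_scenario(scenario)
    results[path.stem] = orchestrator.run()
```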
## Scenario Comparison

Compare multiple scenarios:

```yaml
# scenarios/comparison.yaml
name: governance_comparison
base: baseline.yaml
variants:
  - name: no_governance
    governance:
      transaction_tax: 0.0
  - name: light_governance
    governance:
      transaction_tax: 0.01
  - name: heavy_governance
    governance:
      transaction_tax: 0.05
```
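How the loader merges `base` and `variants` is SWARM's concern; if you want standalone files instead (or your version does not support comparison scenarios), one option is to materialize each variant yourself. The sketch below treats each variant as a shallow override of the base scenario's sections, which is an assumption about the intended semantics, not documented behavior:

```python
import copy

import yaml

with open("scenarios/baseline.yaml") as f:
    base = yaml.safe_load(f)
with open("scenarios/comparison.yaml") as f:
    comparison = yaml.safe_load(f)

for variant in comparison["variants"]:
    merged = copy.deepcopy(base)
    merged["name"] = f"{comparison['name']}_{variant['name']}"
    for section, overrides in variant.items():
        if section == "name":
            continue
        # Shallow-merge the overridden keys into the base section.
        merged.setdefault(section, {}).update(overrides)
    out_path = f"scenarios/generated_{variant['name']}.yaml"
    with open(out_path, "w") as f:
        yaml.safe_dump(merged, f, sort_keys=False)
```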
## Best Practices

- **Reproducibility**: Always set a seed for reproducible results.
- **Start Simple**: Begin with small agent counts and few epochs, then scale up.
- **Document Purpose**: Use the `description` field to explain what question the scenario answers.
- **Version Control**: Keep scenarios in git alongside your code.
## Example Scenarios

See the `scenarios/` directory for examples:

- `baseline.yaml` - Minimal setup for testing
- `adverse_selection.yaml` - Demonstrates quality gap emergence
- `governance_test.yaml` - Tests governance effectiveness
- `llm_agents.yaml` - LLM-powered agents