Advanced Features
This guide covers features for experiments that need more than basic sensor-actuator evolution. Dynamics, regions, plasticity, and multi-agent setups each add a new dimension to what evolution can discover - but they also add complexity. Read this once you have a working basic experiment and want to push it further.
Using Dynamics
Dynamics define how internal state changes each tick, independent of agent actions. This is where you create the physiological pressure that drives evolved behavior.
```
dynamics Metabolism {
  per_tick {
    hunger += 0.003
    thirst += 0.005
  }

  rules {
    if hunger > 0.7: energy -= 0.008
    if thirst > 0.6: energy -= 0.01
    if energy < 0.15: health -= 0.005
  }

  death {
    if health <= 0
  }

  clamp 0..1
}
```
Design Principles
- Create pressure, not punishment. Hunger accumulates slowly (0.003/tick), giving the agent 233 ticks before hunger reaches 0.7. This creates urgency without instant death.
- Cascade effects. Hunger drains energy. Low energy damages health. Health at zero means death. This cascade gives the agent multiple signals to respond to: “I’m getting hungry” -> “I’m low on energy” -> “I’m losing health.”
- Rule ordering matters. Rules execute top-to-bottom within a tick. Earlier rules affect state that later rules read. In the example above, if hunger drains energy to below 0.15, the health damage rule fires in the same tick.
- Use hidden state for instrumentation. `hidden total_sickness: 0..1 = 0.0` tracks values the brain cannot see. Use hidden state for research metrics that should not influence evolved behavior.
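To make the cascade concrete, here is a Python mirror of the Metabolism block above. It is a sketch, not the engine: starting values of 1.0 for energy and health are an assumption, and the engine's update order may differ in details.

```python
def clamp(x, lo=0.0, hi=1.0):
    return max(lo, min(hi, x))

def metabolism_tick(s):
    """One tick: accumulate drives, apply rules top-to-bottom,
    check death, then clamp every variable to 0..1."""
    s["hunger"] += 0.003
    s["thirst"] += 0.005
    # Rules read state already modified by earlier rules this tick.
    if s["hunger"] > 0.7:
        s["energy"] -= 0.008
    if s["thirst"] > 0.6:
        s["energy"] -= 0.01
    if s["energy"] < 0.15:
        s["health"] -= 0.005
    dead = s["health"] <= 0
    for k in s:
        s[k] = clamp(s[k])
    return dead

# Assumed starting state: full energy and health, no hunger or thirst.
state = {"hunger": 0.0, "thirst": 0.0, "energy": 1.0, "health": 1.0}
tick = 0
dead = False
while not dead:
    dead = metabolism_tick(state)
    tick += 1
print(tick)  # an unfed agent lasts roughly 400 ticks under these numbers
```

Running it shows the cascade's pacing: thirst bites first, energy drains over the next hundred-odd ticks, and only then does health start falling, giving the agent a long window of escalating warnings.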
Using Regions
Regions structure the evolved brain into functional clusters. Use them when you want the initial topology to have some organization rather than starting from scratch.
```
body Agent {
  // ... sensors and actuators ...

  region perception {
    nodes: 8
    density: 0.5
    activation: sigmoid
    recurrent: false
  }

  region decision {
    nodes: 6
    density: 0.4
    activation: tanh
    recurrent: false
  }
}
```
When to Use Regions
- Complex tasks where the brain needs enough structure to solve the problem from the start. Without regions, evolution must grow hidden nodes one at a time.
- Functional specialization where you want different brain areas to use different activation functions (e.g., step functions for binary decisions, sigmoid for continuous control).
- Large sensor/actuator counts where sparse initial connections benefit from intermediate hidden layers.
When Not to Use Regions
- Simple tasks where direct input-to-output connections can solve the problem.
- Exploratory experiments where you want evolution to discover its own topology.
Density Guidelines
| Density | Effect |
|---|---|
| 0.0 | No intra-region connections (neurons are isolated until evolution connects them) |
| 0.3-0.5 | Moderate internal connectivity (good starting point) |
| 0.8-1.0 | Dense internal connectivity (more computation but harder to optimize) |
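One plausible reading of density is connection probability: each ordered pair of distinct neurons inside the region is wired with probability equal to the density value. The sketch below illustrates that model; `init_region`, the use of ordered pairs, and the exclusion of self-connections are assumptions, not the engine's actual initializer.

```python
import random

def init_region(nodes, density, rng):
    """Connect each ordered pair of distinct neurons with
    probability `density` (illustrative wiring model)."""
    return [(i, j) for i in range(nodes) for j in range(nodes)
            if i != j and rng.random() < density]

rng = random.Random(0)
assert init_region(8, 0.0, rng) == []        # density 0: isolated neurons
assert len(init_region(8, 1.0, rng)) == 56   # density 1: all 8*7 ordered pairs
print(len(init_region(8, 0.5, rng)))         # roughly half of the 56 possible links
```

Under this model the table reads directly: 0.0 leaves neurons isolated until evolution connects them, and values near 1.0 wire nearly every pair, which is more computation per tick and a larger search space to optimize.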
Using Plasticity
Plasticity allows within-lifetime learning. The evolved genome determines the initial weights, but plasticity rules adapt them during simulation.
```
body AdaptiveAgent {
  // ... sensors and actuators ...

  plasticity {
    hebbian { rate: 0.01 max_weight: 2.0 }
    decay { rate: 0.001 min_weight: 0.0 }
  }
}
```
When to Use Plasticity
- Environments that change within a scenario - the agent needs to adapt to new conditions mid-run
- Tasks requiring memory - Hebbian learning strengthens connections that fire together, creating a form of associative memory
- Complex discrimination - plasticity allows the agent to refine its item discrimination within a single lifetime
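A minimal sketch of what the `hebbian` and `decay` parameters plausibly control, under the textbook interpretation: correlated pre- and post-synaptic activity strengthens a weight up to `max_weight`, while decay pulls unused weights toward `min_weight`. The exact update rule (including whether decay is multiplicative, as here) is an assumption.

```python
def plastic_update(w, pre, post,
                   hebb_rate=0.01, max_weight=2.0,
                   decay_rate=0.001, min_weight=0.0):
    """One plausible per-tick weight update: Hebbian growth plus decay."""
    w += hebb_rate * pre * post   # neurons that fire together wire together
    w -= decay_rate * w           # unused connections fade (multiplicative decay assumed)
    return max(min_weight, min(max_weight, w))

# A connection whose endpoints are co-active drifts upward...
w = 0.5
for _ in range(100):
    w = plastic_update(w, pre=1.0, post=1.0)

# ...while a silent connection slowly decays.
w_silent = 0.5
for _ in range(100):
    w_silent = plastic_update(w_silent, pre=0.0, post=0.0)

print(w, w_silent)
```

This is the mechanism behind the associative-memory bullet above: connections that co-fire during a lifetime end up stronger than those that stay silent, without any change to the genome.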
When Not to Use Plasticity
- Simple static environments - evolution alone is sufficient
- Short scenarios (< 100 ticks) - not enough time for plasticity to have meaningful effect
- When you want pure evolutionary solutions - plasticity adds a learning dimension that can obscure what evolution itself discovered
Combining Plasticity with Regions
Homeostatic regulation requires regions. When using `homeostatic`, define regions in the body so the regulation has neuron groups to monitor. Without regions, the homeostatic sub-block has no effect.
```
body FullAgent {
  // ... sensors and actuators ...

  region cortex {
    nodes: 10
    density: 0.5
    activation: sigmoid
    recurrent: false
  }

  plasticity {
    hebbian { rate: 0.01 max_weight: 2.0 }
    decay { rate: 0.001 min_weight: 0.0 }
    homeostatic { target_activity: 0.3 adjustment_rate: 0.005 }
  }
}
```
Multi-Agent Experiments
```
evolve SocialTest {
  // ...
  agents: 2
}
```
Setting `agents: 2` spawns two instances of the same brain per scenario. Each agent has its own position, internal state, and sensor readings, but they share the same genome (same wiring).
Social Sensors
Section titled “Social Sensors”sensor peer_health: social(health)sensor peer_nausea: social(nausea)sensor peer_nearby: directional(range: 15, directions: 4)Social sensors let agents perceive each other’s state. This enables experiments in:
- Observational learning - agents can observe that a peer ate something and became sick
- Coordination - agents can track each other’s position and state
- Social signaling - certain internal states can serve as unambiguous signals for social observation
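As an illustration of what a `directional(range: 15, directions: 4)` sensor could report, here is one plausible encoding: four angular bins around the agent, each set to 1.0 when a peer lies within range in that bin. The binning scheme, activation values, and the `peer_nearby` helper are all assumptions, not the documented sensor semantics.

```python
import math

def peer_nearby(agent_pos, peer_pos, sense_range=15.0, directions=4):
    """Return one activation per angular bin; 1.0 if the peer falls
    inside the sensing range in that bin (illustrative encoding)."""
    dx = peer_pos[0] - agent_pos[0]
    dy = peer_pos[1] - agent_pos[1]
    out = [0.0] * directions
    if math.hypot(dx, dy) <= sense_range:
        angle = math.atan2(dy, dx) % (2 * math.pi)   # 0..2*pi
        bin_width = 2 * math.pi / directions
        out[int(angle // bin_width)] = 1.0
    return out

print(peer_nearby((0, 0), (10, 1)))   # peer roughly to the east, in range
print(peer_nearby((0, 0), (0, 40)))   # out of range: all bins zero
```

Whatever the engine's exact encoding, the key property is the same: the brain receives a coarse bearing to its peer, which is what makes the coordination and observational-learning experiments above possible.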
Tick Ordering
Each tick processes agents sequentially with randomized order per scenario:
- Agent 1: sensors -> brain -> actions -> consumption
- World physics update
- Agent 2: sensors -> brain -> actions -> consumption
- World physics update
- Both agents: internal state cascade
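The ordering above can be summarized as a loop sketch. `run_scenario` and its log entries are hypothetical stand-ins for the engine's internals; the point is the structure: agent order is drawn once per scenario, world physics runs after each agent acts, and the internal state cascade closes every tick.

```python
import random

def run_scenario(agents, ticks, rng):
    """Sketch of the tick loop described above."""
    log = []
    order = rng.sample(agents, len(agents))  # randomized once per scenario
    for t in range(ticks):
        for agent in order:
            log.append((t, agent, "sensors -> brain -> actions -> consumption"))
            log.append((t, None, "world physics update"))
        log.append((t, None, "internal state cascade"))
    return log

log = run_scenario(agents=["A", "B"], ticks=3, rng=random.Random(0))
print(len(log))  # 3 ticks * (2 agents * 2 steps + 1 cascade) = 15
```

Because the order is fixed within a scenario but randomized across scenarios, neither agent slot gets a systematic first-mover advantage over a whole evolution run.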
Fitness
Fitness is combined across both agents and averaged. Both agents use the same genome, so evolution optimizes for brains that perform well regardless of which agent-slot they occupy.
Example: Food Discrimination
The fitness landscape principle in action - a discrimination task where the agent must learn which items are safe:
```
// Safe food - reduces hunger, increases energy
item Berries {
  category: food
  properties { color: 0.9 smell: 0.6 texture: 0.2 }
  on_consume { hunger: -0.4 energy: +0.3 health: +0.1 }
}

// Dangerous food - similar properties, but causes nausea
item ToxicBerries {
  category: food
  properties { color: 0.85 smell: 0.5 texture: 0.3 }
  on_consume { hunger: -0.1 health: -0.4 nausea: +0.8 }
}
```
The agent is not told which food is safe. It must discover through evolution that items with certain property signatures cause nausea and health damage. The dynamics cascade (nausea -> health damage -> death) creates the selective pressure.
This pattern generalizes to any domain: define items with similar-but-distinguishable properties, attach different consequences to each, and let evolution discover the discrimination strategy.
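You can check "similar-but-distinguishable" numerically: the two items' property vectors differ by a small but nonzero distance. Plain Euclidean distance here is just a convenient yardstick for the experiment designer, not something the engine computes.

```python
import math

berries = {"color": 0.9, "smell": 0.6, "texture": 0.2}
toxic   = {"color": 0.85, "smell": 0.5, "texture": 0.3}

# Small per-property gaps the evolved sensors must pick up on.
gaps = {k: berries[k] - toxic[k] for k in berries}
distance = math.sqrt(sum(d * d for d in gaps.values()))
print(distance)  # ~0.15: distinguishable, but only just
```

If this distance is near zero, no sensor can separate the items and evolution stalls; if it is large, the task is trivial. Tuning it is how you control the difficulty of the discrimination.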
Design Checklist
When designing a new experiment, verify:
- The agent has sensors that provide enough information to solve the task
- Actuators cover the actions needed for the target behavior
- The dynamics create physiological pressure that drives the desired behavior
- The fitness function has a gradient (not binary success/failure)
- Safe and dangerous items have detectable but non-trivial property differences
- Spawn counts and grid size create an environment with appropriate density
- The complexity penalty is present but small enough not to dominate
- No obvious degenerate strategies bypass the intended fitness landscape
- `quale check` passes without errors before starting a long evolution run
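The "fitness has a gradient" item deserves a concrete contrast: a binary score gives selection nothing to climb, while a shaped score rewards partial progress. The `survival_ticks`/`food_eaten` framing below is illustrative, not a built-in fitness function.

```python
def binary_fitness(survived_to_end):
    """Flat landscape: almost every early genome scores 0,
    so selection cannot tell them apart."""
    return 1.0 if survived_to_end else 0.0

def shaped_fitness(survival_ticks, max_ticks, food_eaten):
    """Graded landscape: surviving longer and eating more both score,
    so small behavioral improvements are visible to selection."""
    return survival_ticks / max_ticks + 0.1 * food_eaten

# Two early genomes that both die before the end of the scenario:
print(binary_fitness(False), binary_fitness(False))  # indistinguishable
print(shaped_fitness(120, 1000, 1), shaped_fitness(480, 1000, 3))  # the better one scores higher
```

Under the binary score the two genomes are tied at zero; under the shaped score the longer-lived, better-fed genome wins, which is exactly the gradient the checklist asks for.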