
Advanced Features

This guide covers features for experiments that need more than basic sensor-actuator evolution. Dynamics, regions, plasticity, and multi-agent setups each add a new dimension to what evolution can discover - but they also add complexity. Read this once you have a working basic experiment and want to push it further.

Dynamics define how internal state changes each tick, independent of agent actions. This is where you create the physiological pressure that drives evolved behavior.

dynamics Metabolism {
  per_tick {
    hunger += 0.003
    thirst += 0.005
  }
  rules {
    if hunger > 0.7: energy -= 0.008
    if thirst > 0.6: energy -= 0.01
    if energy < 0.15: health -= 0.005
  }
  death {
    if health <= 0
  }
  clamp 0..1
}
  1. Create pressure, not punishment. Hunger accumulates slowly (0.003/tick), giving the agent 233 ticks before hunger reaches 0.7. This creates urgency without instant death.

  2. Cascade effects. Hunger drains energy. Low energy damages health. Health at zero means death. This cascade gives the agent multiple signals to respond to: “I’m getting hungry” -> “I’m low on energy” -> “I’m losing health.”

  3. Rule ordering matters. Rules execute top-to-bottom within a tick. Earlier rules affect state that later rules read. In the example above, if hunger drains energy to below 0.15, the health damage rule fires in the same tick.

  4. Use hidden state for instrumentation. hidden total_sickness: 0..1 = 0.0 tracks values the brain cannot see. Use hidden state for research metrics that should not influence evolved behavior.
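The per-tick semantics above can be sketched as a plain Python loop. This is an illustration of the update order (accumulation, then rules top-to-bottom, then clamp), not the actual quale runtime:

```python
def metabolism_tick(state):
    """One tick of the Metabolism dynamics: per_tick accumulation,
    then rules top-to-bottom, then clamp to 0..1."""
    state["hunger"] += 0.003
    state["thirst"] += 0.005
    # Rules read state already modified earlier in the same tick.
    if state["hunger"] > 0.7:
        state["energy"] -= 0.008
    if state["thirst"] > 0.6:
        state["energy"] -= 0.01
    if state["energy"] < 0.15:
        state["health"] -= 0.005
    for key in state:
        state[key] = min(1.0, max(0.0, state[key]))
    return state["health"] > 0.0  # False once the death condition fires

# An agent that never eats or drinks survives a few hundred ticks
# before the hunger/thirst -> energy -> health cascade kills it.
state = {"hunger": 0.0, "thirst": 0.0, "energy": 1.0, "health": 1.0}
ticks_survived = 0
while metabolism_tick(state) and ticks_survived < 2000:
    ticks_survived += 1
```

Running this shows the pressure-not-punishment principle numerically: the thresholds stagger the cascade so death arrives only after hundreds of ticks of escalating signals.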


Regions structure the evolved brain into functional clusters. Use them when you want the initial topology to have some organization rather than starting from scratch.

body Agent {
  // ... sensors and actuators ...
  region perception {
    nodes: 8
    density: 0.5
    activation: sigmoid
    recurrent: false
  }
  region decision {
    nodes: 6
    density: 0.4
    activation: tanh
    recurrent: false
  }
}
Use regions for:

  • Complex tasks where the brain needs enough structure to solve the problem from the start. Without regions, evolution must grow hidden nodes one at a time.
  • Functional specialization where you want different brain areas to use different activation functions (e.g., step functions for binary decisions, sigmoid for continuous control).
  • Large sensor/actuator counts where sparse initial connections benefit from intermediate hidden layers.

Skip regions for:

  • Simple tasks where direct input-to-output connections can solve the problem.
  • Exploratory experiments where you want evolution to discover its own topology.
  Density   Effect
  0.0       No intra-region connections (neurons are isolated until evolution connects them)
  0.3-0.5   Moderate internal connectivity (good starting point)
  0.8-1.0   Dense internal connectivity (more computation but harder to optimize)
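If density is the probability that each directed, non-self neuron pair inside a region is initially wired (an assumption about the sampling rule, not documented behavior), the expected initial connection count is easy to estimate:

```python
def expected_connections(nodes: int, density: float) -> float:
    """Expected intra-region connections, assuming each directed,
    non-self neuron pair is wired with probability `density`."""
    possible = nodes * (nodes - 1)  # directed pairs, no self-loops
    return possible * density

# The perception region above (8 nodes, density 0.5):
print(expected_connections(8, 0.5))  # 28.0
```

This is why high densities get expensive quickly: possible connections grow quadratically with node count.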

Plasticity allows within-lifetime learning. The evolved genome determines the initial weights, but plasticity rules adapt them during simulation.

body AdaptiveAgent {
  // ... sensors and actuators ...
  plasticity {
    hebbian { rate: 0.01 max_weight: 2.0 }
    decay { rate: 0.001 min_weight: 0.0 }
  }
}
Use plasticity for:

  • Environments that change within a scenario - the agent needs to adapt to new conditions mid-run
  • Tasks requiring memory - Hebbian learning strengthens connections that fire together, creating a form of associative memory
  • Complex discrimination - plasticity allows the agent to refine its item discrimination within a single lifetime

Avoid plasticity for:

  • Simple static environments - evolution alone is sufficient
  • Short scenarios (< 100 ticks) - not enough time for plasticity to have a meaningful effect
  • Pure evolutionary solutions - plasticity adds a learning dimension that can obscure what evolution itself discovered
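A sketch of what the hebbian and decay rules could compute each tick. The exact update form in the quale runtime is an assumption here; this just shows the interplay of the two rules and their bounds:

```python
def plastic_update(w, pre, post,
                   hebb_rate=0.01, max_weight=2.0,
                   decay_rate=0.001, min_weight=0.0):
    """One per-tick weight update: Hebbian growth, then passive decay,
    clamped to [min_weight, max_weight]."""
    w += hebb_rate * pre * post  # neurons that fire together wire together
    w -= decay_rate * w          # unused connections fade
    return min(max_weight, max(min_weight, w))

# A connection with correlated pre/post activity saturates at max_weight...
w = 0.5
for _ in range(1000):
    w = plastic_update(w, pre=1.0, post=1.0)

# ...while a silent connection decays toward min_weight.
w2 = 0.5
for _ in range(1000):
    w2 = plastic_update(w2, pre=0.0, post=0.0)
```

The decay rule is what keeps Hebbian growth from saturating every weight: only connections whose activity keeps paying the decay tax stay strong.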

Homeostatic regulation requires regions. When using homeostatic, define regions in the body so the regulation has neuron groups to monitor. Without regions, the homeostatic sub-block has no effect.

body FullAgent {
  // ... sensors and actuators ...
  region cortex {
    nodes: 10
    density: 0.5
    activation: sigmoid
    recurrent: false
  }
  plasticity {
    hebbian { rate: 0.01 max_weight: 2.0 }
    decay { rate: 0.001 min_weight: 0.0 }
    homeostatic { target_activity: 0.3 adjustment_rate: 0.005 }
  }
}
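Homeostatic regulation can be pictured as a per-neuron negative feedback loop. The form below is an assumption for illustration, not the engine's actual mechanism:

```python
def homeostatic_step(excitability, measured_activity,
                     target_activity=0.3, adjustment_rate=0.005):
    """Nudge a neuron's excitability so its activity drifts toward target.
    Over-active neurons are damped, under-active neurons are boosted."""
    return excitability + adjustment_rate * (target_activity - measured_activity)
```

This is why the sub-block needs regions: the feedback signal is an activity level measured over a group of neurons, and regions are what define those groups.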

Multi-agent experiments are configured with the agents setting in the evolve block:

evolve SocialTest {
  // ...
  agents: 2
}

Setting agents: 2 spawns two instances of the same brain per scenario. Each agent has its own position, internal state, and sensor readings, but they share the same genome (same wiring).

sensor peer_health: social(health)
sensor peer_nausea: social(nausea)
sensor peer_nearby: directional(range: 15, directions: 4)

Social sensors let agents perceive each other’s state. This enables experiments in:

  • Observational learning - agents can observe that a peer ate something and became sick
  • Coordination - agents can track each other’s position and state
  • Social signaling - certain internal states can serve as unambiguous signals for social observation

Each tick processes agents sequentially with randomized order per scenario:

  1. Agent 1: sensors -> brain -> actions -> consumption
  2. World physics update
  3. Agent 2: sensors -> brain -> actions -> consumption
  4. World physics update
  5. Both agents: internal state cascade
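A minimal Python sketch of this schedule, with stub classes standing in for the engine (all names here are illustrative, not quale API):

```python
import random

class StubAgent:
    """Stand-in for an evolved agent (illustrative only)."""
    def __init__(self):
        self.log = []
    def step(self, world):
        # sensors -> brain -> actions -> consumption, collapsed into one call
        self.log.append("act")
    def cascade(self):
        self.log.append("cascade")

class StubWorld:
    def __init__(self):
        self.physics_updates = 0
    def update_physics(self):
        self.physics_updates += 1

def run_tick(agents, world, order):
    """One tick of the multi-agent schedule described above."""
    for i in order:                # sequential, in the scenario's shuffled order
        agents[i].step(world)
        world.update_physics()     # physics runs after each agent acts
    for agent in agents:           # then every agent's internal state cascades
        agent.cascade()

agents = [StubAgent(), StubAgent()]
world = StubWorld()
order = [0, 1]
random.shuffle(order)  # order is randomized once per scenario, not per tick
run_tick(agents, world, order)
```

Note that physics updates twice per tick (once after each agent), so the second agent in the order sees a world already changed by the first.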

Fitness is combined across both agents and averaged. Both agents use the same genome, so evolution optimizes for brains that perform well regardless of which agent-slot they occupy.
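Assuming the combination is a simple mean over agent slots (the exact rule is not specified here), the shared-genome incentive looks like this:

```python
def scenario_fitness(per_agent_scores):
    """Combined fitness: mean over all agent slots (assumed simple mean)."""
    return sum(per_agent_scores) / len(per_agent_scores)

# A brain that only performs well in one slot is dragged down by the other:
print(scenario_fitness([1.0, 0.5]))  # 0.75
```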


The fitness landscape principle in action - a discrimination task where the agent must learn which items are safe:

// Safe food - reduces hunger, increases energy
item Berries {
category: food
properties { color: 0.9 smell: 0.6 texture: 0.2 }
on_consume { hunger: -0.4 energy: +0.3 health: +0.1 }
}
// Dangerous food - similar properties, but causes nausea
item ToxicBerries {
category: food
properties { color: 0.85 smell: 0.5 texture: 0.3 }
on_consume { hunger: -0.1 health: -0.4 nausea: +0.8 }
}

The agent is not told which food is safe. It must discover through evolution that items with certain property signatures cause nausea and health damage. The dynamics cascade (nausea -> health damage -> death) creates the selective pressure.

This pattern generalizes to any domain: define items with similar-but-distinguishable properties, attach different consequences to each, and let evolution discover the discrimination strategy.
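One way to sanity-check "similar-but-distinguishable" is to measure the separation of the two items in property space. The metric below is an illustration for experiment design, not something the engine computes:

```python
import math

# Property vectors from the two items above
berries       = {"color": 0.9,  "smell": 0.6, "texture": 0.2}
toxic_berries = {"color": 0.85, "smell": 0.5, "texture": 0.3}

def property_distance(a, b):
    """Euclidean distance in property space."""
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))

d = property_distance(berries, toxic_berries)
# Small but nonzero: detectable by sensors, yet non-trivial to discriminate.
```

If the distance is near zero, no sensor can separate the items and evolution will plateau; if it is large, the task becomes trivial and you learn little about discrimination.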


When designing a new experiment, verify:

  • The agent has sensors that provide enough information to solve the task
  • Actuators cover the actions needed for the target behavior
  • The dynamics create physiological pressure that drives the desired behavior
  • The fitness function has a gradient (not binary success/failure)
  • Safe and dangerous items have detectable but non-trivial property differences
  • Spawn counts and grid size create an environment with appropriate density
  • The complexity penalty is present but small enough not to dominate
  • No obvious degenerate strategies bypass the intended fitness landscape
  • quale check passes without errors before starting a long evolution run