Body
Defines an agent’s sensors (brain inputs) and actuators (brain outputs). The body IS the schema - all other definitions validate against it.
Think of the body as your agent’s physical interface with the world. Sensors are its eyes, ears, and feelings - everything it can perceive. Actuators are its muscles - everything it can do. The evolution engine will wire these together through a neural network, but the body defines what’s available to wire.
```
body Forager {
  // Internal state sensors - one brain input node each
  sensor hunger: internal(0..1)
  sensor energy: internal(0..1)

  // Directional sensors - expand to N brain input nodes (one per direction)
  sensor food_nearby: directional(range: 20, directions: 4)
  // creates: food_nearby_n, food_nearby_s, food_nearby_e, food_nearby_w
  //
  // An 8-directional variant would use directions: 8, adding diagonals:
  // food_nearby_n, food_nearby_ne, food_nearby_e, food_nearby_se,
  // food_nearby_s, food_nearby_sw, food_nearby_w, food_nearby_nw

  // Item property sensors - one brain input node
  sensor item_color: item_property(color)

  // Social sensors (multi-agent only) - one brain input node
  sensor peer_health: social(health)

  // Actuators - brain output nodes
  actuator move: directional(threshold: 0.5, directions: 4)
  // creates: move_n, move_s, move_e, move_w
  actuator eat: trigger(threshold: 0.5)
}
```
Sensor Types
| Type | Syntax | Brain Nodes | Description |
|---|---|---|---|
| Internal | internal(0..1) | 1 | Agent’s own state value, clamped to range |
| Directional | directional(range: N, directions: 4) | 4 | N/S/E/W distance detection |
| Directional | directional(range: N, directions: 8) | 8 | N/NE/E/SE/S/SW/W/NW distance detection |
| Item Property | item_property(field) | 1 | Observable property of the nearest item |
| Social | social(field) | 1 | Peer agent’s visible state (requires agents: 2) |
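The directional expansion in the table can be sketched as a simple name-expansion step. This is a Python illustration based on the creates: comments in the body example above; `expand_directional` is a hypothetical helper, not part of the language.

```python
# One brain-input node per direction, named <sensor>_<direction>.
# The 4- and 8-direction orderings follow the creates: comments above.
DIRECTIONS = {
    4: ["n", "s", "e", "w"],
    8: ["n", "ne", "e", "se", "s", "sw", "w", "nw"],
}

def expand_directional(name, directions):
    """Return one brain-input node name per direction."""
    return [f"{name}_{d}" for d in DIRECTIONS[directions]]

print(expand_directional("food_nearby", 4))
# ['food_nearby_n', 'food_nearby_s', 'food_nearby_e', 'food_nearby_w']
```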
Actuator Types
| Type | Syntax | Brain Nodes | Description |
|---|---|---|---|
| Directional | directional(threshold: F, directions: 4) | 4 | Winner-take-all direction selection |
| Trigger | trigger(threshold: F) | 1 | Fires when activation exceeds threshold |
Note: The threshold value is stored in the compiled project and available to domains. The engine returns raw actuator output values; domains interpret thresholds per their own logic.
Note: Parameters must be named. Write trigger(threshold: 0.5) not trigger(0.5).
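Since the engine returns raw actuator values and leaves threshold interpretation to domains, a domain-side reading of these outputs might look like the following Python sketch. The function names and the exact tie-breaking/comparison conventions are assumptions, not the engine's API.

```python
def pick_direction(outputs, threshold):
    """Winner-take-all: the highest output wins, if it clears the threshold."""
    name, value = max(outputs.items(), key=lambda kv: kv[1])
    return name if value >= threshold else None

def trigger_fired(output, threshold):
    """A trigger actuator fires when activation exceeds its threshold."""
    return output > threshold

moves = {"move_n": 0.2, "move_s": 0.7, "move_e": 0.4, "move_w": 0.1}
print(pick_direction(moves, 0.5))   # move_s
print(trigger_fired(0.6, 0.5))      # True
```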
Regions
In a nutshell: Regions give your agent’s brain structure before evolution begins. Instead of starting with an empty brain and hoping evolution builds useful groupings, you pre-define clusters of neurons with different properties - fast binary reflexes, slow graded reasoning, state tracking. Evolution still wires everything together, but it starts with a structured foundation rather than a blank slate.
Regions define clusters of hidden neurons inside a body. They give structure to the evolved brain by grouping neurons with shared properties - a specific activation function, internal connectivity density, and optional recurrence. Without regions, the evolution engine starts with a direct input-to-output topology and grows hidden neurons one at a time. With regions, the initial genome already contains structured hidden layers.
Regions are declared inside body blocks, after sensors and actuators.
```
body Agent {
  sensor energy: internal(0..1)
  sensor hunger: internal(0..1)
  actuator move: directional(threshold: 0.5, directions: 4)
  actuator eat: trigger(threshold: 0.3)

  region reflex {
    nodes: 8
    density: 0.6
    activation: step
    recurrent: false
  }

  region planning {
    nodes: 12
    density: 0.4
    activation: sigmoid
    recurrent: false
  }
}
```
Fields
| Field | Type | Required | Description |
|---|---|---|---|
| nodes | integer | yes | Number of hidden neurons in this region |
| density | float | yes | Internal connectivity density in [0.0, 1.0]. A value of 1.0 means fully connected within the region; 0.0 means no intra-region connections |
| activation | identifier | yes | Activation function for all neurons in the region |
| recurrent | boolean | yes | Parsed and stored but not enforced in v0.2. Cycles are always rejected by the topological sort. Recurrence enforcement is deferred to v0.3 |
Activation Functions
| Name | Description |
|---|---|
| sigmoid | S-curve, output in (0, 1) |
| tanh | Hyperbolic tangent, output in (-1, 1) |
| relu | Rectified linear, output in [0, inf) |
| leaky_relu | Leaky rectified linear, small negative slope |
| step | Binary threshold, output is 0 or 1 |
| gaussian | Bell curve centered at 0 |
| linear | Identity function, output equals input |
| softplus | Smooth approximation of ReLU |
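The listed functions match standard definitions; the Python sketch below gives reference implementations. The engine's exact numerics - the leaky_relu slope, the step threshold location, and the gaussian width - are assumptions.

```python
import math

# Reference implementations of the table above (a sketch, not the engine's code).
ACTIVATIONS = {
    "sigmoid":    lambda x: 1.0 / (1.0 + math.exp(-x)),
    "tanh":       math.tanh,
    "relu":       lambda x: max(0.0, x),
    "leaky_relu": lambda x: x if x > 0 else 0.01 * x,  # 0.01 slope: a common default
    "step":       lambda x: 1.0 if x >= 0 else 0.0,    # threshold at 0: assumed
    "gaussian":   lambda x: math.exp(-x * x),          # unit width: assumed
    "linear":     lambda x: x,
    "softplus":   lambda x: math.log1p(math.exp(x)),
}

print(ACTIVATIONS["sigmoid"](0.0))  # 0.5
print(ACTIVATIONS["relu"](-2.0))    # 0.0
```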
How Regions Affect Evolution
- Initial topology: Each region’s neurons are pre-allocated in the initial genome. Intra-region connections are created at the specified density. Sparse connections (~10%) link inputs to region nodes and region nodes to outputs.
- Structural mutations: When evolution adds a new hidden node (via the `add_node` mutation), it inherits a region assignment from neighboring nodes. New connections preferentially stay within the same region (80% of `add_connection` attempts try intra-region first).
- Homeostatic regulation: When combined with a `plasticity` block containing a `homeostatic` sub-block, each region tracks the fraction of active neurons and adjusts a modulatory gain to maintain the target activity level.
- Region names are contextual identifiers - they only need to be unique within the body
- Multiple regions are allowed per body
- A body with zero regions is valid; the initial genome starts with direct input-to-output wiring
- Region names do not appear in the `evolve` block - they are part of the body definition
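The density semantics can be illustrated with a small wiring sketch in Python. Treating density as a per-connection probability is an assumption, chosen to be consistent with the stated endpoints (1.0 fully connected, 0.0 no intra-region connections); `wire_region` is a hypothetical helper.

```python
import random

def wire_region(node_ids, density, rng):
    """Sample directed intra-region connections at the given density."""
    return [(src, dst)
            for src in node_ids
            for dst in node_ids
            if src != dst and rng.random() < density]

rng = random.Random(42)
edges = wire_region(list(range(8)), 0.6, rng)  # e.g. the 8-node 'reflex' region
print(len(edges))  # roughly 0.6 * 8 * 7 ~= 34 connections
```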
Plasticity
In a nutshell: Plasticity lets an agent’s brain change during its lifetime, not just between generations. Without plasticity, a brain is fixed once it’s born - it can only improve through evolution across generations. With plasticity, connections strengthen when they’re useful and weaken when they’re not, letting the agent adapt within a single scenario. This is the difference between instinct (evolved) and learning (plastic).
Plasticity enables runtime weight adaptation during an agent’s lifetime. Connection weights in the evolved brain can change during simulation, not just between generations. This allows agents to learn within a single scenario rather than relying entirely on evolutionary selection.
Plasticity is declared inside body blocks and contains up to three independently optional sub-blocks.
```
body Learner {
  sensor energy: internal(0..1)
  actuator act: trigger(threshold: 0.5)

  plasticity {
    hebbian {
      rate: 0.01
      max_weight: 2.0
    }
    decay {
      rate: 0.001
      min_weight: 0.0
    }
    homeostatic {
      target_activity: 0.3
      adjustment_rate: 0.005
    }
  }
}
```
Hebbian Learning
Hebbian learning is the simplest form of neural learning: “neurons that fire together wire together.” When two connected neurons are both active at the same time, the connection between them gets stronger. This means the brain reinforces pathways that are actually being used during the simulation.
```
hebbian {
  rate: 0.01       // weight update magnitude per tick
  max_weight: 2.0  // absolute ceiling for weights (symmetric: [-2.0, 2.0])
}
```
Strengthens connections between co-active neurons (“neurons that fire together wire together”). Each tick, when both the source and target of a connection are active (output > 0.1), the connection weight increases by `rate * source_output * target_output`. Weights are clamped to [-max_weight, max_weight].
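The update rule just described can be sketched directly in Python. This is a minimal illustration of the stated rule, not the engine's implementation.

```python
ACTIVE = 0.1  # activity cutoff stated in the text

def hebbian_update(weight, src_out, tgt_out, rate=0.01, max_weight=2.0):
    """Strengthen the connection when both endpoints are active, then clamp."""
    if src_out > ACTIVE and tgt_out > ACTIVE:
        weight += rate * src_out * tgt_out
    return max(-max_weight, min(max_weight, weight))

w = hebbian_update(0.5, src_out=0.8, tgt_out=0.9)   # both active: strengthened
print(round(w, 4))  # 0.5072
w = hebbian_update(w, src_out=0.05, tgt_out=0.9)    # source inactive: unchanged
```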
Weight Decay
Weight decay is the opposite of Hebbian learning - connections that aren’t being used gradually weaken toward zero. This prevents the brain from accumulating useless connections and keeps it lean. Think of it as “use it or lose it.”
```
decay {
  rate: 0.001      // multiplicative decay factor per tick
  min_weight: 0.0  // absolute floor below which weights snap to zero
}
```
Gradually reduces the weight of inactive connections toward zero. Connections that carry active signal resist decay via an activity trace. This prevents runaway weight growth and prunes connections that are not contributing to the agent’s behavior.
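One possible shape of this rule, as a Python sketch: the multiplicative decay and the min_weight snap follow the text, while the exact form of the activity-trace resistance (scaling the decay rate by the trace) is an assumption about the mechanism.

```python
def decay_update(weight, trace, rate=0.001, min_weight=0.0):
    """Decay a weight toward zero, resisted in proportion to its activity trace."""
    effective_rate = rate * (1.0 - trace)   # a fully refreshed trace (1.0) blocks decay
    weight *= 1.0 - effective_rate
    if abs(weight) < min_weight:            # snap negligible weights to exactly zero
        weight = 0.0
    return weight

w = 1.0
for _ in range(1000):            # 1000 ticks with no activity on the connection
    w = decay_update(w, trace=0.0)
print(round(w, 3))  # ~0.368, i.e. 0.999 ** 1000
```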
Homeostatic Regulation
Homeostatic regulation prevents regions from going silent or exploding with activity. It’s like a thermostat for each brain region - if too many neurons are firing, it dampens them; if too few are active, it amplifies signals. This keeps the brain in a productive operating range.
```
homeostatic {
  target_activity: 0.3    // desired fraction of active neurons per region
  adjustment_rate: 0.005  // gain adaptation speed
}
```
Maintains stable activity levels within each region by adjusting a per-region modulatory gain. When a region’s average activity exceeds the target, the gain decreases (dampening signals). When activity falls below the target, the gain increases (amplifying signals). The gain is clamped to [0.1, 3.0] to prevent runaway modulation.
Homeostatic regulation requires regions to be defined in the body. Without regions, the homeostatic sub-block has no effect.
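The gain adjustment can be sketched as a proportional controller in Python. The proportional form below is an assumption - the text fixes only the direction of adjustment and the [0.1, 3.0] clamp.

```python
GAIN_MIN, GAIN_MAX = 0.1, 3.0   # clamp range from the text

def update_gain(gain, avg_activity, target_activity=0.3, adjustment_rate=0.005):
    """Nudge the region's modulatory gain opposite to the activity error, then clamp."""
    gain += adjustment_rate * (target_activity - avg_activity)
    return max(GAIN_MIN, min(GAIN_MAX, gain))

g = update_gain(1.0, avg_activity=0.8)   # region too active: gain decreases
print(round(g, 4))  # 0.9975
g = update_gain(g, avg_activity=0.1)     # region too quiet: gain increases
```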
- All three sub-blocks are independently optional - you can use any combination
- The `plasticity` block itself is optional; omitting it means static weights (no runtime learning)
- Plasticity operates during simulation ticks, after signal propagation and before actuator output reading
- The evolved genome determines the initial weights; plasticity adapts them during an agent’s lifetime
- Plasticity changes persist within an evaluation (across scenarios) but reset between genomes. A single brain instance is built per genome evaluation, so weight adaptations from earlier scenarios carry into later ones within the same evaluation.