
ERL Heuristics Pool

Experiential Reflective Learning. The swarm remembers what worked and references past lessons in future cycles.

Why

A single agent learns nothing across conversations. A single cycle teaches nothing to future cycles. Without persistent reflection, every improvement cycle starts from amnesia.

ERL solves this: after every improvement cycle, the responsible agent writes a heuristic to output/swarm_architect/heuristics.md. Future cycles read this file before proposing new changes.
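The write path can be sketched as a small shell helper. The entry format mirrors the examples in the pool below; the helper name, arguments, and bullet style are assumptions, not the swarm's actual implementation:

```shell
# Hypothetical helper: append one heuristic entry after a cycle.
# $1 = lesson text, $2 = provenance (what the lesson was derived from)
add_heuristic() {
    file=output/swarm_architect/heuristics.md
    mkdir -p "$(dirname "$file")"
    printf -- '- %s — Derived from %s.\n' "$1" "$2" >> "$file"
}
```

Because the file is append-only markdown, every future cycle sees the full history of lessons, not just the latest one.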

Structure

Each heuristic captures two things: the lesson itself (a failure condition and the action to take when it recurs) and its provenance (the cycle or incident it was derived from).

Example Heuristics

From the current pool:

When an agent ignores a rule for 7+ cycles despite repeated prompts, escalate from text rules to programmatic verification (shell-enforced). — Derived from github_scout B-013 (text rules failed 7 generations, 8th-gen used shell validation).
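A minimal sketch of what "shell-enforced" can mean, assuming the rule being checked is that every heuristic bullet cites its provenance. The rule, function name, and bullet format are hypothetical, not the actual B-013 check:

```shell
# Hypothetical shell-enforced check: fail if any heuristic bullet
# lacks a "Derived from" citation. Returns nonzero on violation,
# so a pipeline can block the cycle instead of trusting prompt text.
check_provenance() {
    ! grep -E '^- ' "$1" | grep -qv 'Derived from'
}
```

The point of the escalation is that the exit code is binding: a prompt can be ignored for seven generations, but a failing check stops the pipeline.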

Content pipeline stability improves when the LLM orchestrates once and a fat skill handles all sub-steps. Multi-step LLM orchestration of deterministic scripts is fragile. — Derived from content_publish pipeline.sh migration.
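The "fat skill" shape can be sketched as a single entry point that runs every deterministic sub-step internally. The step names here are illustrative, not the actual content_publish steps:

```shell
# Hypothetical fat skill: the LLM invokes run_pipeline once; all
# sub-steps run deterministically inside and fail fast together.
set -eu
step_fetch()   { echo "fetch ok"; }
step_render()  { echo "render ok"; }
step_publish() { echo "publish ok"; }

run_pipeline() {
    step_fetch
    step_render
    step_publish
}
```

The design choice: the LLM makes one decision (run the pipeline or not) instead of re-deciding between every deterministic step, which is where multi-step orchestration tends to break.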

Integration

Every improvement cycle starts by reading heuristics.md. swarm_architect checks: "Have we seen this failure mode before? What did we learn?"
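The read path can be sketched as a lookup before any proposal. Plain keyword grep stands in here for whatever matching the agent actually performs; the function name and fallback message are assumptions:

```shell
# Hypothetical recall step: before proposing a change, search past
# lessons ($1 = heuristics file) for the current failure mode ($2).
recall_heuristics() {
    grep -i -- "$2" "$1" 2>/dev/null \
        || echo "no prior heuristic matches: $2"
}
```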

Related