Formal Specification • Living research companion
A tighter architectural companion to the journal reference, focused on invariants, lifecycle, growth rules, routing, and implementation-guiding language.
Prepared as a structured research draft for implementation guidance, architectural clarity, and long-horizon refinement.
Status: living internal specification; some sections are normative, others remain research hypotheses.
Working definition
GrowNet is a growth-based neural architecture that begins with minimal structure, learns locally, interprets input through serial focus and anchoring, allocates new capacity only when novelty and saturation justify it, favors deterministic organization with proximity-biased routing, and aims toward active, continuously adapting intelligence rather than passive input-output inference.
Companion reading
The journal reference remains the narrative companion. This page is the tighter architectural surface for invariants, growth rules, and implementation-guiding language, while the changelog tracks only GrowNet research-facing changes.
This specification uses the following language:

| Term | Meaning |
|---|---|
| MUST | Core architectural invariant or intended default behavior |
| SHOULD | Strong design preference, overridden only with good reason |
| MAY | Optional or experimental capability that can evolve with the research |
This document is not the same thing as the cross-language contract in the repository. The repository contract defines public APIs and language parity. This document defines the higher-level architecture, growth philosophy, structural lifecycle, and research direction.
GrowNet exists because the current mainstream AI paradigm leaves several foundational issues unresolved. The primary dissatisfactions, in order, are:
The architecture therefore aims to satisfy the following high-level goals:
Golden Rule: When something truly new shows up, make room. If it is not truly new, improve what already exists.
This yields the following operational rule:
| Condition | Preferred action | Meaning |
|---|---|---|
| Input matches an existing focused pattern | Adapt | Reinforce or refine existing structure |
| Input is new but local capacity still exists | Allocate slot | Add the smallest possible new local memory cell |
| A new slot is needed but the neuron is already saturated | Fallback and mark pressure | Reuse deterministically for now, but record novelty pressure |
| Fallback persists and cooldown allows it | Grow neuron | Add new same-kind local capacity exactly where pressure exists |
| Layer pressure becomes structurally meaningful | Grow layer | Add depth within the current region |
| A region becomes too deep or functionally overloaded | Create or connect to a new region | Expand into a new organizational container |
The Golden Rule is the heart of GrowNet. The system SHOULD always attempt the cheapest valid adaptation first.
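The decision ladder above can be sketched as a single ordered check: try the cheapest valid adaptation first, fall back deterministically while recording pressure, and grow only when pressure persists and cooldown allows. This is a minimal illustration, not the implementation; the flag names (`matches_existing`, `has_free_slot`, and so on) are hypothetical.

```python
from enum import Enum, auto

class GrowthAction(Enum):
    """Possible responses to an incoming pattern, cheapest first."""
    ADAPT = auto()          # reinforce or refine an existing slot
    ALLOCATE_SLOT = auto()  # add the smallest new local memory cell
    FALLBACK = auto()       # reuse deterministically, record novelty pressure
    GROW_NEURON = auto()    # add new same-kind local capacity

def choose_action(matches_existing: bool,
                  has_free_slot: bool,
                  fallback_persists: bool,
                  cooldown_ok: bool) -> GrowthAction:
    """Hypothetical sketch of the Golden Rule ladder: always attempt
    the cheapest valid adaptation before creating new structure."""
    if matches_existing:
        return GrowthAction.ADAPT
    if has_free_slot:
        return GrowthAction.ALLOCATE_SLOT
    if fallback_persists and cooldown_ok:
        return GrowthAction.GROW_NEURON
    return GrowthAction.FALLBACK
```

Layer and region growth would extend the same ladder with their own pressure signals; they are omitted here to keep the sketch small.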
GrowNet currently operates across four scales of structural organization.
| Entity | Meaning | Primary role |
|---|---|---|
| Slot | Small local memory cell inside a neuron | Local pattern storage and routing |
| Neuron | Local computational unit | Specialized local computation |
| Layer | Organized collection of neurons inside a region | Feature and abstraction depth |
| Region | Higher-order organizational container | Functional or modality-level domain |
GrowNet currently defines three neuron types:
| Type | Primary purpose |
|---|---|
| Excitatory | Carry signal, support active patterns, drive downstream computation |
| Inhibitory | Dampen instability, suppress runaway activation, stabilize loops |
| Modulatory | Regulate learning, attention-like pressure, growth, and higher-level state |
A complete GrowNet system MUST be able to support all three types, even if some experiments initially emphasize only a subset.
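The four structural scales and three neuron types can be captured with a few dataclasses. This is a toy sketch for orientation only; all class and field names are illustrative, not part of the repository contract.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class NeuronType(Enum):
    EXCITATORY = auto()  # carry signal, drive downstream computation
    INHIBITORY = auto()  # dampen instability, stabilize loops
    MODULATORY = auto()  # regulate learning, attention-like pressure, growth

@dataclass
class Slot:
    """Small local memory cell inside a neuron."""
    pattern_id: int

@dataclass
class Neuron:
    """Local computational unit holding a set of slots."""
    kind: NeuronType
    slots: list = field(default_factory=list)

@dataclass
class Layer:
    """Organized collection of neurons inside a region."""
    neurons: list = field(default_factory=list)

@dataclass
class Region:
    """Higher-order organizational container."""
    layers: list = field(default_factory=list)
```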
Growth in GrowNet is not free. The architecture assumes increasing energy cost as structural scope increases.
| Creation event | Relative cost | Intended use |
|---|---|---|
| New Slot | Lowest | Cheapest local accommodation of novelty |
| New Neuron | Low | New local computation when slots are insufficient |
| New Layer | Medium | More organized representational depth inside a region |
| New Region | Highest | New domain when a region becomes too deep or functionally overloaded |
This ordering creates a structural economy:
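One minimal way to express the structural economy is an explicit cost ordering. The numeric values below are placeholders; only the ordering (slot < neuron < layer < region) comes from the table above.

```python
# Relative creation costs, cheapest first. The exact numbers are
# illustrative placeholders -- only the ordering is architectural.
CREATION_COST = {
    "slot": 1,
    "neuron": 2,
    "layer": 3,
    "region": 4,
}

def affordable_changes(budget: int) -> list:
    """Toy illustration: which structural changes a given energy
    budget allows, cheapest first."""
    ordered = sorted(CREATION_COST.items(), key=lambda kv: kv[1])
    return [kind for kind, cost in ordered if cost <= budget]
```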
GrowNet is novelty-first.
The primary trigger for early growth is novelty in input patterns, not global prediction error. Error correction and goal-directed optimization may arrive later in development, but the first question GrowNet asks is:
Have I seen something like this before, and do I still have room for it?
This means GrowNet follows:
novelty first -> structure first -> optimization later
rather than:
error first -> global weight update -> fixed structure forever
GrowNet learning SHOULD remain local whenever possible. The architecture is explicitly motivated by dissatisfaction with fully global backpropagation. This does not forbid future error-based components, but they SHOULD be layered on top of local growth and local adaptation rather than replacing them as the central principle.
Repository materials distinguish several useful concepts:
This separation SHOULD be maintained because it allows human-readable interpretation while preserving stable internal routing semantics.
GrowNet focus SHOULD be treated as a serial active process rather than a dense simultaneous weighting operation. At any instant, the system SHOULD have one active focus point or one active focused frame, even if multiple candidate points are available.
The system MAY maintain multiple candidate focus points at once. Candidate points MAY be generated from:
The exact focus policy remains a research parameter, but the architecture SHOULD make the policy explicit.
GrowNet SHOULD distinguish between the currently active focus point and the more persistent anchor state used to interpret future input. A system MAY maintain a small anchor map of currently meaningful locations, patterns, or frames. This allows GrowNet to inspect one point at a time while preserving a remembered map of multiple important locations.
Focus SHOULD influence slot selection before local routing decisions are made. Novelty, fallback pressure, and growth decisions SHOULD therefore be interpretable relative to focus and anchor state, not only relative to raw input in isolation.
GrowNet SHOULD support covert / field focus, meaning a shift in processing priority without mechanical movement. Embodied or robotic GrowNet systems MAY additionally support overt / mechanical focus, meaning a physical reorientation of sensors or effectors toward a selected target.
Overt focus SHOULD be treated as more expensive than covert focus and MAY be reserved for cases where recentering perception or action is useful.
To avoid pathological fixation on one salient point, GrowNet MAY implement bounded revisit suppression or inhibition of return. This would bias the focus policy away from immediately reselecting the same point unless task relevance or persistent evidence justifies it.
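The focus machinery described above (one active focus at a time, a persistent anchor map, bounded revisit suppression) can be sketched as a small policy object. Everything here, including the `suppress_last_n` window, is an illustrative assumption rather than a specified mechanism.

```python
from collections import deque

class FocusPolicy:
    """Sketch of serial focus with an anchor map and bounded revisit
    suppression (inhibition of return). All names are illustrative."""

    def __init__(self, suppress_last_n: int = 3):
        self.anchors = {}                        # persistent meaningful points
        self.recent = deque(maxlen=suppress_last_n)
        self.active = None                       # one active focus at a time

    def select(self, candidates):
        """Pick the most salient (point, salience) candidate that was
        not recently visited; fall back to the best if all were."""
        ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
        for point, _salience in ranked:
            if point not in self.recent:
                self.active = point
                self.recent.append(point)
                return point
        if ranked:  # every candidate suppressed: allow the best anyway
            self.active = ranked[0][0]
            return self.active
        return None

    def anchor(self, point, state):
        """Remember a meaningful location beyond the current focus."""
        self.anchors[point] = state
```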
GrowNet currently supports proximity connections. The default rule is:
If nearby capacity exists, connect to it first.
This rule is important for three reasons:
Connection routing SHOULD tend toward determinism, but the very first connection does not need to be perfectly predetermined. A practical working view is:
This mirrors the observation that biological growth looks exploratory at first but stabilizes later.
Regions MUST support internal dense structure and MAY support sparse cross-region connections. Long-range cross-region connectivity is expected to be important later, but SHOULD be introduced carefully to avoid global instability.
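The proximity-first rule can be illustrated with a toy 2D nearest-target lookup. The deterministic coordinate tie-break is an assumption consistent with the stated preference for deterministic routing.

```python
import math

def proximity_first(source, targets):
    """Connect to the nearest existing capacity first. Ties are broken
    deterministically by coordinates. A toy 2D sketch of the rule:
    'If nearby capacity exists, connect to it first.'"""
    if not targets:
        return None
    return min(targets, key=lambda t: (math.dist(source, t), t))
```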
Repository materials describe a two-phase tick discipline, which is useful for keeping behavior deterministic.
A complete tick SHOULD conceptually contain:
Growth SHOULD occur at the end of a tick, not in the middle of arbitrary signal propagation.
At the region level, GrowNet SHOULD enforce one growth action per region per tick unless a future design explicitly relaxes that rule under controlled conditions.
This is a major stability safeguard.
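A minimal sketch of the two-phase tick discipline, assuming growth requests are merely recorded during propagation and at most one is applied per region at tick end. The class name `TickRegion` and the queueing scheme are hypothetical.

```python
class TickRegion:
    """Minimal region honoring the two-phase tick discipline:
    signal propagation first, at most one growth action at tick end."""

    def __init__(self):
        self.pending_growth = []  # growth requests collected mid-tick
        self.grown = []           # growth actions actually applied

    def request_growth(self, what: str) -> None:
        # Phase 1: during propagation, growth is only *recorded*.
        self.pending_growth.append(what)

    def end_of_tick(self) -> None:
        # Phase 2: apply at most one growth action per region per tick.
        if self.pending_growth:
            self.grown.append(self.pending_growth.pop(0))
```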
The following are the current intended rules.
A neuron SHOULD allocate a new slot when:
If the input can be handled by an existing slot, GrowNet SHOULD adapt that slot instead.
A new neuron SHOULD be created when:
Neuron growth SHOULD create a neuron of the same kind unless a future policy explicitly states otherwise.
A new layer SHOULD be created when neuron-level pressure has become structurally meaningful within a region. This is expected to reflect repeated local saturation, not a single novelty event.
A new region SHOULD be created or connected when one region has accumulated too many layers, or when a distinct input / functional domain justifies a new organizational boundary.
A newly created region SHOULD begin with a minimal scaffold, not a full mature architecture.
Slots, neurons, layers, or regions MAY be pre-created if an experiment benefits from scaffolding. Pre-creation is a practical option, not a rejection of GrowNet's self-organizing philosophy.
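The fallback-pressure and cooldown conditions for neuron growth can be sketched as a small gate. The thresholds below are illustrative placeholders, not specified values.

```python
class NeuronGrowthGate:
    """Sketch of the neuron-growth trigger: grow only after fallback
    pressure persists and a cooldown has elapsed."""

    def __init__(self, fallback_threshold: int = 3, cooldown_ticks: int = 10):
        self.fallback_threshold = fallback_threshold
        self.cooldown_ticks = cooldown_ticks
        self.fallback_streak = 0
        self.last_growth_tick = -10**9  # effectively "never grown"

    def observe_fallback(self) -> None:
        self.fallback_streak += 1       # novelty pressure accumulates

    def observe_handled(self) -> None:
        self.fallback_streak = 0        # pressure relieved by adaptation

    def should_grow(self, tick: int) -> bool:
        if self.fallback_streak < self.fallback_threshold:
            return False
        if tick - self.last_growth_tick < self.cooldown_ticks:
            return False
        self.last_growth_tick = tick
        self.fallback_streak = 0
        return True
```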
A major open design theme in GrowNet is how novelty-driven growth turns into goal-directed control.
Balancing a stick is the canonical example. A stable system must:
Humans balance through nested feedback loops. GrowNet is expected to form such loops rather than having them entirely hand-designed.
Feedback loops are expected to emerge when three ingredients exist:
Once those exist, local sensor -> action -> result circuits can become specialized microcircuits.
An inverted pendulum or similar balancing problem SHOULD be treated as a primary early control benchmark for GrowNet.
A region represents something qualitatively different from just another layer stack.
Regions may emerge for two reasons:
So regions are both organizational and functional.
A region SHOULD be treated as a compute domain with dense internal organization and more selective external routing.
The observation that other brain areas can sometimes take over after damage is conceptually important for GrowNet. This suggests that functions can migrate or be re-established elsewhere when the architecture preserves reusable latent structure.
GrowNet SHOULD therefore value reorganization and takeover over immediate deletion.
Connections that remain unused for a long time SHOULD be pruned.
Use it or lose it.
Pruning applies first to connections, not directly to neurons.
| State | Meaning |
|---|---|
| Active | Participating in live circuits |
| Dormant | Neuron still exists but currently lacks useful active connections |
| Reused | Dormant neuron is reconnected and participates again |
| Long-idle | Dormant for a very long time, never reclaimed |
| Late death | Optional later-stage removal after extreme inactivity |
A dormant neuron MAY be reused if conditions are met. Importantly, when a neuron is reused, it keeps its previous internal state rather than being wiped clean.
This is a defining design choice.
Neuron death is not the default outcome of pruning. Late death MAY occur only much later if a neuron remains dormant, never reconnects, and continued retention no longer makes structural or energy sense.
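The lifecycle table above can be sketched in code, emphasizing the defining choice that reuse restores a dormant neuron without wiping its internal state. All names are illustrative.

```python
from enum import Enum, auto

class NeuronState(Enum):
    ACTIVE = auto()    # participating in live circuits
    DORMANT = auto()   # exists, but lacks useful active connections

class LifecycleNeuron:
    """Sketch of the pruning lifecycle: connections are pruned first,
    the neuron goes dormant, and reuse restores it *with* its previous
    internal state intact."""

    def __init__(self, internal_state: dict):
        self.state = NeuronState.ACTIVE
        self.internal_state = internal_state
        self.connections = []

    def prune_connections(self) -> None:
        self.connections.clear()
        self.state = NeuronState.DORMANT  # the neuron itself survives

    def reuse(self, new_connections) -> None:
        self.connections = list(new_connections)
        self.state = NeuronState.ACTIVE   # internal_state is NOT wiped
```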
One guiding intuition behind GrowNet is that memory is not just storage, but also routing.
A system may fail to retrieve something not because the underlying memory vanished, but because the path to it degraded.
This is used here as a systems intuition rather than a clinical claim.
GrowNet's dormant-neuron model aligns with this intuition by preserving latent substrate whenever possible. This may allow reactivation, faster relearning, or takeover by other circuits.
The repository introduces Knowledge Units (KU) and Bad Knowledge Units (BKU) as evaluation concepts.
A Knowledge Unit measures how much correct, generalizable structure the model extracts from a training example.
A Bad Knowledge Unit measures how much incorrect or harmful structure the model extracts from a sample, such as hallucinated facts or biased stereotypes.
GrowNet should aim for:
A useful derived measure is knowledge precision:
Knowledge Precision ≈ KU / (KU + BKU)
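A direct reading of the knowledge-precision formula; handling the zero-denominator case as 0.0 is an assumption, since the specification does not define it.

```python
def knowledge_precision(ku: float, bku: float) -> float:
    """Knowledge precision = KU / (KU + BKU). Returns 0.0 when no
    structure (good or bad) was extracted at all (an assumed convention)."""
    total = ku + bku
    return ku / total if total > 0 else 0.0
```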
Because GrowNet literally allocates new structure when novelty appears, evaluation should not only ask whether it performs well after training. It should also ask:
The most realistic first successful GrowNet prototypes are expected to be in simulated 3D environments.
A simple 3D object moving around a 3D space, possibly in Blender, is the preferred early prototype. The environment may provide:
Early experiments SHOULD focus on:
Robotics and world models are expected to benefit the most from GrowNet because they stress:
GrowNet is aimed at more than passive intelligence.
Active intelligence means the system remains aware of time, carries state, maintains a world model, and does not simply wake up for a single query and then disappear.
In GrowNet, emotions are best viewed not as human-style subjective feelings, but as repeated activation of regulatory circuits or regions that bias behavior and learning.
This is powerful and potentially dangerous. It suggests a future class of systems that are more agent-like than today's tools.
Future GrowNet systems MAY include sleep-like or consolidation phases during which circuits are stabilized, reorganized, or replayed.
The long-term future is not expected to be one giant AI, but many different AIs living among us with different strengths. GrowNet is one possible path toward more active, structured, and adaptive forms of artificial intelligence.
The following are explicitly not current requirements:
GrowNet, in one paragraph
GrowNet is a novelty-driven, growth-based neural architecture in which local structure is the primary adaptive medium. It begins with minimal scaffold, allocates new capacity only when local saturation and persistent novelty justify it, organizes computation through slots, neurons, layers, and regions, interprets incoming structure through serial focus and anchoring, regulates behavior through excitatory, inhibitory, and modulatory dynamics, prunes unused connections while preserving dormant reusable substrate, and aims toward active, continuously adapting intelligence for agents, world models, and robotics.
The following are the current minimal invariants for the architecture:
The journal remains the reflective source document. This formal specification is the tighter architectural companion to it.
Both documents should evolve together.