NektronAI

Formal Specification • Living research companion

GrowNet Formal Specification

A tighter architectural companion to the journal reference, focused on invariants, lifecycle, growth rules, routing, and implementation-guiding language.

Prepared as a structured research draft for implementation guidance, architectural clarity, and long-horizon refinement.

Status: living internal specification; some sections are normative, others remain research hypotheses.

Document type: Research draft formal specification
Purpose: Architecture and invariants reference
Scope: Structure, growth, routing, and lifecycle
Companion: The GrowNet journal reference

Working definition

GrowNet is a growth-based neural architecture that begins with minimal structure, learns locally, interprets input through serial focus and anchoring, allocates new capacity only when novelty and saturation justify it, favors deterministic organization with proximity-biased routing, and aims toward active, continuously adapting intelligence rather than passive input-output inference.

Companion reading

The journal reference remains the narrative companion. This page is the tighter architectural surface for invariants, growth rules, and implementation-guiding language, while the changelog tracks only GrowNet research-facing changes.

1. Specification posture

This specification uses the following language:

  • MUST indicates a core architectural invariant or intended default behavior.
  • SHOULD indicates a strong design preference that can be overridden only with good reason.
  • MAY indicates an optional or experimental capability.

This document is distinct from the cross-language contract in the repository. The repository contract defines public APIs and language parity. This document defines the higher-level architecture, growth philosophy, structural lifecycle, and research direction.

2. Architectural thesis

GrowNet exists because the current mainstream AI paradigm leaves several foundational issues unresolved. The primary dissatisfactions, in order of importance, are:

  1. Backprop everywhere
  2. Fixed network size
  3. Lack of local learning
  4. Weak biological plausibility
  5. Huge training cost

The architecture therefore aims to satisfy the following high-level goals:

  • Learning SHOULD be primarily local rather than globally backpropagated.
  • Capacity MUST begin small and grow only where justified.
  • Structure MUST matter as much as weights or parameters.
  • Growth MUST be bounded by energy, capacity, cooldown, and deterministic rules.
  • The long-term architecture SHOULD support active intelligence, world models, robotics, and continuous operation.
  • GrowNet is not required to outperform current systems at every task. It is intended to be stronger in domains where adaptation, structural development, and ongoing control matter most.

3. The Golden Rule

Golden Rule: When something truly new shows up, make room. If it is not truly new, improve what already exists.

This yields the following operational rule:

| Condition | Preferred action | Meaning |
|---|---|---|
| Input matches an existing focused pattern | Adapt | Reinforce or refine existing structure |
| Input is new but local capacity still exists | Allocate slot | Add the smallest possible new local memory cell |
| A new slot is needed but the neuron is already saturated | Fallback and mark pressure | Reuse deterministically for now, but record novelty pressure |
| Fallback persists and cooldown allows it | Grow neuron | Add new same-kind local capacity exactly where pressure exists |
| Layer pressure becomes structurally meaningful | Grow layer | Add depth within the current region |
| A region becomes too deep or functionally overloaded | Create or connect to a new region | Expand into a new organizational container |

The Golden Rule is the heart of GrowNet. The system SHOULD always attempt the cheapest valid adaptation first.
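The ladder above can be sketched as a single decision function. This is an illustrative sketch, not the repository contract: the state fields, action labels, and ordering below are assumptions chosen to mirror the table.

```python
from dataclasses import dataclass

# Hypothetical state flags mirroring the Golden Rule table; these names
# are illustrative, not part of the cross-language contract.
@dataclass
class NeuronState:
    has_matching_slot: bool    # input matches an existing focused pattern
    slots_free: bool           # strict slot capacity not yet reached
    fallback_persistent: bool  # fallback pressure has been recorded repeatedly
    cooldown_elapsed: bool     # cooldown and energy rules allow growth

def golden_rule(state: NeuronState) -> str:
    """Return the cheapest valid adaptation, tried in ladder order."""
    if state.has_matching_slot:
        return "adapt"          # reinforce or refine existing structure
    if state.slots_free:
        return "allocate_slot"  # smallest possible new local memory cell
    if state.fallback_persistent and state.cooldown_elapsed:
        return "grow_neuron"    # add same-kind capacity where pressure exists
    return "fallback"           # reuse deterministically, record novelty pressure
```

Layer and region growth are omitted here because they are region-level decisions, not per-neuron ones.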

4. Core entities

GrowNet currently operates across four scales of structural organization.

| Entity | Meaning | Primary role |
|---|---|---|
| Slot | Small local memory cell inside a neuron | Local pattern storage and routing |
| Neuron | Local computational unit | Specialized local computation |
| Layer | Organized collection of neurons inside a region | Feature and abstraction depth |
| Region | Higher-order organizational container | Functional or modality-level domain |

4.1 Neuron types

GrowNet currently defines three neuron types:

| Type | Primary purpose |
|---|---|
| Excitatory | Carry signal, support active patterns, drive downstream computation |
| Inhibitory | Dampen instability, suppress runaway activation, stabilize loops |
| Modulatory | Regulate learning, attention-like pressure, growth, and higher-level state |

A complete GrowNet system MUST be able to support all three types, even if some experiments initially emphasize only a subset.

5. Resource economics and growth ladder

Growth in GrowNet is not free. The architecture assumes increasing energy cost as structural scope increases.

| Creation event | Relative cost | Intended use |
|---|---|---|
| New Slot | Lowest | Cheapest local accommodation of novelty |
| New Neuron | Low | New local computation when slots are insufficient |
| New Layer | Medium | More organized representational depth inside a region |
| New Region | Highest | New domain when a region becomes too deep or functionally overloaded |

This ordering creates a structural economy:

  • GrowNet MUST prefer reuse before creation.
  • GrowNet MUST prefer smaller structural changes before larger ones.
  • Pre-allocation MAY be used for practical experiments.
  • The more regions are pre-created, the less likely it is that new regions will need to be created later.
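This economy can be made concrete with relative cost weights. Only the ordering (slot < neuron < layer < region) is architecturally meaningful; the numeric values and function name below are illustrative assumptions.

```python
# Illustrative relative costs; only the ordering is specified by the
# architecture, not these particular numbers.
GROWTH_COST = {"slot": 1, "neuron": 2, "layer": 4, "region": 8}

def cheapest_affordable(candidates, energy_budget):
    """Pick the cheapest structural change that fits the energy budget,
    or None when no candidate is affordable."""
    affordable = [c for c in candidates if GROWTH_COST[c] <= energy_budget]
    return min(affordable, key=GROWTH_COST.get) if affordable else None
```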

6. Learning model

GrowNet is novelty-first.

6.1 Novelty before error

The primary trigger for early growth is novelty in input patterns, not global prediction error. Error correction and goal-directed optimization may arrive later in development, but the first question GrowNet asks is:

Have I seen something like this before, and do I still have room for it?

This means GrowNet follows:

novelty first -> structure first -> optimization later

rather than:

error first -> global weight update -> fixed structure forever

6.2 Local learning

GrowNet learning SHOULD remain local whenever possible. The architecture is explicitly motivated by dissatisfaction with fully global backpropagation. This does not forbid future error-based components, but they SHOULD be layered on top of local growth and local adaptation rather than replacing them as the central principle.

6.3 Focus anchor and reference anchor

Repository materials distinguish several useful concepts:

  • Focus Anchor (conceptual): the currently active point or frame that the system is inspecting in behavioral terms.
  • Reference Anchor (implementation): the stable anchor used to compute deltas and bin novelty deterministically.
  • Anchor Map (working extension): a bounded remembered set of meaningful locations or frames that remains available while active focus moves serially.

This separation SHOULD be maintained because it allows human-readable interpretation while preserving stable internal routing semantics.
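A minimal sketch of deterministic delta binning against a Reference Anchor, assuming a percent-delta scheme with a fixed bin width. Both the scheme and the parameter values are hypothetical research choices, not contract behavior.

```python
def novelty_bin(value: float, reference_anchor: float,
                bin_width_pct: float = 10.0) -> int:
    """Map the percent delta from the reference anchor to an integer bin.

    The same (value, anchor) pair always yields the same bin, which is
    the determinism property the Reference Anchor exists to guarantee.
    """
    if reference_anchor == 0.0:
        return 0  # degenerate anchor; a real policy would treat this explicitly
    delta_pct = abs(value - reference_anchor) / abs(reference_anchor) * 100.0
    return int(delta_pct // bin_width_pct)
```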

6.4 Serial focus and candidate generation

GrowNet focus SHOULD be treated as a serial active process rather than a dense simultaneous weighting operation. At any instant, the system SHOULD have one active focus point or one active focused frame, even if multiple candidate points are available.

The system MAY maintain multiple candidate focus points at once. Candidate points MAY be generated from:

  • energy or saliency,
  • novelty relative to current anchors,
  • familiarity or recognized usefulness,
  • task relevance,
  • bounded random choice among strong candidates,
  • deterministic sequential scan.

The exact focus policy remains a research parameter, but the architecture SHOULD make the policy explicit.
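One way to make the policy explicit is to score candidates and select exactly one winner per instant, with a deterministic tie-break so replay stays reproducible. The scoring terms and their equal weighting below are assumptions for illustration; in the specification's terms they are research parameters.

```python
def select_focus(candidates):
    """candidates: list of (point_id, saliency, novelty, relevance) tuples.

    Returns the single active focus point. Ties break on the lower
    point_id so the policy stays deterministic under replay.
    """
    def score(candidate):
        point_id, saliency, novelty, relevance = candidate
        # Equal weighting is an illustrative choice, not a specified one.
        return (saliency + novelty + relevance, -point_id)
    return max(candidates, key=score)[0]
```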

6.5 Anchoring and anchor maps

GrowNet SHOULD distinguish between the currently active focus point and the more persistent anchor state used to interpret future input. A system MAY maintain a small anchor map of currently meaningful locations, patterns, or frames. This allows GrowNet to inspect one point at a time while preserving a remembered map of multiple important locations.

Focus SHOULD influence slot selection before local routing decisions are made. Novelty, fallback pressure, and growth decisions SHOULD therefore be interpretable relative to focus and anchor state, not only relative to raw input in isolation.

6.6 Covert vs overt focus

GrowNet SHOULD support covert / field focus, meaning a shift in processing priority without mechanical movement. Embodied or robotic GrowNet systems MAY additionally support overt / mechanical focus, meaning a physical reorientation of sensors or effectors toward a selected target.

Overt focus SHOULD be treated as more expensive than covert focus and MAY be reserved for cases where recentering perception or action is useful.

6.7 Revisit and inhibition of return

To avoid pathological fixation on one salient point, GrowNet MAY implement bounded revisit suppression or inhibition of return. This would bias the focus policy away from immediately reselecting the same point unless task relevance or persistent evidence justifies it.
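A bounded sketch of such revisit suppression, assuming a fixed recency horizon and a flat score penalty; both are hypothetical knobs.

```python
from collections import deque

class InhibitionOfReturn:
    """Penalize recently focused points for a bounded number of selections."""

    def __init__(self, horizon: int = 3, penalty: float = 0.5):
        self.recent = deque(maxlen=horizon)  # most recently focused points
        self.penalty = penalty

    def adjusted_score(self, point_id, raw_score: float) -> float:
        """Suppress, but never forbid, immediate reselection."""
        return raw_score - self.penalty if point_id in self.recent else raw_score

    def note_focus(self, point_id) -> None:
        self.recent.append(point_id)
```

Because the penalty is subtractive rather than absolute, strong task relevance can still win back focus, as the rule above requires.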

7. Connectivity and routing

7.1 Default routing principle

GrowNet currently supports proximity connections. The default rule is:

If nearby capacity exists, connect to it first.

This rule is important for three reasons:

  • It reduces latency and structural sprawl.
  • It encourages clustered microcircuits.
  • It supports stable specialization.
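The default rule can be sketched as nearest-with-capacity selection. The Euclidean metric and the id tie-break are illustrative assumptions; the very first connection MAY be more exploratory than this deterministic form.

```python
def route_connection(source_pos, targets):
    """targets: list of (target_id, position, has_capacity) tuples.

    Returns the id of the nearest target with free capacity, or None.
    Ties break on target_id so routing stays deterministic.
    """
    def distance(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    usable = [(tid, pos) for tid, pos, has_capacity in targets if has_capacity]
    if not usable:
        return None
    return min(usable, key=lambda t: (distance(source_pos, t[1]), t[0]))[0]
```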

7.2 Determinism vs initial exploration

Connection routing SHOULD tend toward determinism, but the very first connection does not need to be perfectly predetermined. A practical working view is:

  • Initial connection formation MAY involve bounded exploratory choice.
  • Once a usable routing pattern is established, replay and future routing SHOULD be deterministic.

This mirrors the observation that biological growth looks exploratory at first but stabilizes later.

7.3 Cross-region connections

Regions MUST support internal dense structure and MAY support sparse cross-region connections. Long-range cross-region connectivity is expected to be important later, but SHOULD be introduced carefully to avoid global instability.

8. Tick and state semantics

Repository materials describe a two-phase tick discipline, which is useful for keeping behavior deterministic.

8.1 Tick structure

A complete tick SHOULD conceptually contain:

  1. Phase A: deliver and integrate input, choose or reinforce slot, possibly fire
  2. Phase B: propagate resulting events
  3. End-of-tick checks: evaluate growth and apply bus decay / state decay

8.2 Growth timing

Growth SHOULD occur at the end of a tick, not in the middle of arbitrary signal propagation.

8.3 Safety invariant

At the region level, GrowNet SHOULD enforce one growth action per region per tick unless a future design explicitly relaxes that rule under controlled conditions.

This is a major stability safeguard.
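The tick discipline can be summarized in one function. The neuron and growth-request representations here are deliberately simplified scaffolding, not the repository API; what matters is the phase ordering and the single-growth safeguard.

```python
def run_tick(neurons, growth_requests):
    """One two-phase tick over simplified neuron dicts.

    neurons: list of dicts with 'id', 'input', and 'threshold' keys.
    growth_requests: list of (relative_cost, request_name) tuples.
    Returns a log of what happened during the tick.
    """
    log = {"fired": [], "growth": None}
    # Phase A: deliver and integrate input, decide firing.
    fired = [n for n in neurons if n["input"] >= n["threshold"]]
    # Phase B: propagate resulting events (recorded here rather than routed).
    log["fired"] = [n["id"] for n in fired]
    # End of tick: at most ONE growth action per region per tick,
    # preferring the cheapest pending request (Golden Rule ordering).
    if growth_requests:
        log["growth"] = min(growth_requests)[1]
    return log
```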

9. Formal growth rules

The following are the current intended rules.

9.1 Slot allocation

A neuron SHOULD allocate a new slot when:

  • the input pattern maps to a genuinely new local bin or concept, and
  • strict slot capacity has not yet been reached.

If the input can be handled by an existing slot, GrowNet SHOULD adapt that slot instead.

9.2 Neuron growth

A new neuron SHOULD be created when:

  • a new slot is required,
  • the seed neuron is already at strict slot capacity,
  • fallback or overflow pressure persists, and
  • cooldown and energy rules allow growth.

Neuron growth SHOULD create a neuron of the same kind unless a future policy explicitly states otherwise.
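The four conditions above are conjunctive; a neuron grows only when all of them hold at once. A sketch follows, with an illustrative persistence threshold for fallback pressure.

```python
def should_grow_neuron(needs_new_slot: bool, at_slot_capacity: bool,
                       fallback_streak: int, cooldown_ok: bool,
                       min_streak: int = 3) -> bool:
    """All four growth conditions must hold simultaneously.

    `min_streak` is a hypothetical knob for what counts as *persistent*
    fallback pressure; the specification leaves the exact value open.
    """
    return bool(needs_new_slot and at_slot_capacity
                and fallback_streak >= min_streak and cooldown_ok)
```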

9.3 Layer growth

A new layer SHOULD be created when neuron-level pressure has become structurally meaningful within a region. This is expected to reflect repeated local saturation, not a single novelty event.

9.4 Region growth

A new region SHOULD be created or connected when one region has accumulated too many layers, or when a distinct input / functional domain justifies a new organizational boundary.

A newly created region SHOULD begin with a minimal scaffold, not a full mature architecture.

9.5 Pre-creation

Slots, neurons, layers, or regions MAY be pre-created if an experiment benefits from scaffolding. Pre-creation is a practical option, not a rejection of GrowNet's self-organizing philosophy.

10. Feedback loops and control

A major open design theme in GrowNet is how novelty-driven growth turns into goal-directed control.

10.1 Control intuition

Balancing a stick is the canonical example. A stable system must:

  • sense current state,
  • act on the environment,
  • observe whether stability improved or worsened,
  • repeat rapidly.

Humans balance through nested feedback loops. GrowNet is expected to form such loops rather than having them entirely hand-designed.

10.2 Automatic loop formation

Feedback loops are expected to emerge when three ingredients exist:

  1. Perception of state
  2. Ability to act
  3. Detection of stability or instability

Once those exist, local sensor -> action -> result circuits can become specialized microcircuits.

10.3 Role of neuron types in control

  • Excitatory neurons drive action paths.
  • Inhibitory neurons damp oscillation and stabilize loops.
  • Modulatory neurons adjust pressure, learning, and regulatory state.

10.4 First control benchmark

An inverted pendulum or similar balancing problem SHOULD be treated as a primary early control benchmark for GrowNet.

11. Regions, specialization, and plasticity

A region represents something qualitatively different from just another layer stack.

11.1 Why regions exist

Regions may emerge for two reasons:

  • Different input type or modality
  • Functional specialization over time

So regions are both organizational and functional.

11.2 Region structure

A region SHOULD be treated as a compute domain with dense internal organization and more selective external routing.

11.3 Plasticity analogy

The observation that other brain areas can sometimes take over after damage is conceptually important for GrowNet. This suggests that functions can migrate or be re-established elsewhere when the architecture preserves reusable latent structure.

GrowNet SHOULD therefore value reorganization and takeover over immediate deletion.

12. Pruning, dormancy, reuse, and late death

12.1 Pruning rule

Connections that remain unused for a long time SHOULD be pruned.

Use it or lose it.

Pruning applies first to connections, not directly to neurons.

12.2 Neuron lifecycle

| State | Meaning |
|---|---|
| Active | Participating in live circuits |
| Dormant | Neuron still exists but currently lacks useful active connections |
| Reused | Dormant neuron is reconnected and participates again |
| Long-idle | Dormant for a very long time, never reclaimed |
| Late death | Optional later-stage removal after extreme inactivity |

12.3 Reuse rule

A dormant neuron MAY be reused if conditions are met. Importantly, when a neuron is reused, it keeps its previous internal state rather than being wiped clean.

This is a defining design choice.
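The reuse rule can be encoded directly: dormancy and reuse change the lifecycle state, never the learned substrate. The class below is an illustrative sketch, not a contract type.

```python
class LifecycleNeuron:
    """Minimal lifecycle sketch: active -> dormant -> reused."""

    def __init__(self):
        self.state = "active"
        self.slots = {}  # learned internal state; preserved across dormancy

    def go_dormant(self) -> None:
        self.state = "dormant"  # connections degrade; substrate is kept

    def reuse(self) -> None:
        if self.state != "dormant":
            raise ValueError("only dormant neurons can be reused")
        self.state = "reused"   # previous slots remain intact, not wiped
```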

12.4 Late neuron death

Neuron death is not the default outcome of pruning. Late death MAY occur only much later if a neuron remains dormant, never reconnects, and continued retention no longer makes structural or energy sense.

13. Memory and access paths

One guiding intuition behind GrowNet is that memory is not just storage, but also routing.

A system may fail to retrieve something not because the underlying memory vanished, but because the path to it degraded.

This is used here as a systems intuition rather than a clinical claim.

GrowNet's dormant-neuron model aligns with this intuition by preserving latent substrate whenever possible. This may allow reactivation, faster relearning, or takeover by other circuits.

14. Knowledge Units and Bad Knowledge Units

The repository introduces Knowledge Units (KU) and Bad Knowledge Units (BKU) as evaluation concepts.

14.1 Knowledge Units

A Knowledge Unit measures how much correct, generalizable structure the model extracts from a training example.

  • Approximately 1.0 KU per sample corresponds to literal memorization and little else.
  • Greater than 1.0 KU indicates that the model extracted correct implications beyond the literal sample.

14.2 Bad Knowledge Units

BKU measures how much incorrect or harmful structure the model extracts from a sample, such as hallucinated facts or biased stereotypes.

14.3 Desired profile

GrowNet should aim for:

  • high KU
  • low BKU

A useful derived measure is knowledge precision:

Knowledge Precision ≈ KU / (KU + BKU)
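The derived measure computes directly; the zero-denominator convention below is an assumption for the degenerate case where no knowledge was extracted at all.

```python
def knowledge_precision(ku: float, bku: float) -> float:
    """KU / (KU + BKU); defined here as 0.0 when both inputs are zero."""
    total = ku + bku
    return ku / total if total > 0 else 0.0
```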

14.4 Why KU / BKU matter for GrowNet

Because GrowNet literally allocates new structure when novelty appears, evaluation should not only ask whether it performs well after training. It should also ask:

  • how much good knowledge was gained per sample,
  • how much bad knowledge was created per sample,
  • and how efficiently structural growth translated into reusable understanding.

15. Prototype roadmap

The most realistic first successful GrowNet prototypes are expected to be in simulated 3D environments.

15.1 First proving ground

A simple 3D object moving around a 3D space, possibly in Blender, is the preferred early prototype. The environment may provide:

  • camera or image input,
  • distance / collision information,
  • velocity and orientation,
  • action outputs such as movement, turning, force, or torque.

15.2 Target behaviors

Early experiments SHOULD focus on:

  • exploration,
  • spatial memory,
  • obstacle avoidance,
  • balancing / stabilization,
  • eventually manipulation and robotics-oriented control.

15.3 Why robotics matters

Robotics and world models are expected to benefit the most from GrowNet because they stress:

  • ongoing time,
  • novelty,
  • adaptation,
  • control,
  • and structural development under open-ended conditions.

16. Long-term direction

GrowNet is aimed at more than passive intelligence.

16.1 Active intelligence

Active intelligence means the system remains aware of time, carries state, maintains a world model, and does not simply wake up for a single query and then disappear.

16.2 Emotion-like regulation

In GrowNet, emotions are best viewed not as human-style subjective feelings, but as repeated activation of regulatory circuits or regions that bias behavior and learning.

This is powerful and potentially dangerous. It suggests a future class of systems that are more agent-like than today's tools.

16.3 Sleep and consolidation

Future GrowNet systems MAY include sleep-like or consolidation phases during which circuits are stabilized, reorganized, or replayed.

16.4 Ecosystem view

The long-term future is not expected to be one giant AI, but many different AIs living among us with different strengths. GrowNet is one possible path toward more active, structured, and adaptive forms of artificial intelligence.

17. Non-goals

The following are explicitly not current requirements:

  • GrowNet is not required to replace all deep learning.
  • GrowNet is not required to beat transformers at every language task.
  • GrowNet does not currently claim true consciousness.
  • GrowNet does not require that every biological mechanism be copied literally.
  • GrowNet is not yet a finalized architecture.

18. Open questions

  1. How exactly should novelty-driven development transition into goal optimization?
  2. What is the best stability or homeostasis signal for balancing-type control?
  3. How exploratory should first-time connection formation be?
  4. When should reuse win over fresh creation?
  5. How should long-range cross-region routing evolve without destabilizing the system?
  6. What should sleep / consolidation look like operationally?
  7. Under what exact conditions should late neuron death happen?
  8. What is the right benchmark suite to show GrowNet's advantages clearly and fairly?
  9. How should modulatory neurons be formalized for emotion-like regulatory loops without making the system unsafe?
  10. What focus policy should dominate when strong saliency, novelty, and familiarity all compete?
  11. How should covert focus and overt focus interact once GrowNet is embodied?

19. Concise formal definition

GrowNet, in one paragraph
GrowNet is a novelty-driven, growth-based neural architecture in which local structure is the primary adaptive medium. It begins with a minimal scaffold, allocates new capacity only when local saturation and persistent novelty justify it, organizes computation through slots, neurons, layers, and regions, interprets incoming structure through serial focus and anchoring, regulates behavior through excitatory, inhibitory, and modulatory dynamics, prunes unused connections while preserving dormant reusable substrate, and aims toward active, continuously adapting intelligence for agents, world models, and robotics.

Appendix A. Minimal invariants

The following are the current minimal invariants for the architecture:

  • Learning SHOULD be local-first.
  • Novelty SHOULD trigger growth before global error does.
  • Growth MUST prefer smaller structural changes before larger ones.
  • Region growth SHOULD be rate-limited.
  • Proximity SHOULD be the default connection bias.
  • Active focus SHOULD be serial even when multiple candidate points are remembered.
  • Covert / field focus SHOULD be supported before overt / mechanical focus is required.
  • Pruning SHOULD remove unused connections before removing neurons.
  • Neuron reuse SHOULD preserve prior internal state.
  • Region creation SHOULD begin from a minimal scaffold.
  • KU and BKU SHOULD be part of future evaluation.
  • Active intelligence is a long-term direction, not a present claim.

Appendix B. Relationship to the journal

The journal remains the reflective source document. This formal specification is the tighter architectural companion to it.

  • The journal captures origin, intuition, philosophical motive, and exploratory thought.
  • The formal spec captures structure, invariants, lifecycle, and implementation-guiding language.

Both documents should evolve together.