Not code that simulates a neural network. An actual neural network that IS the infrastructure.

Status: RESEARCH — Architecture design
The Shift
The initial signal primitive design implemented neural behaviour in imperative Rust code: iterate over synapses, multiply weight by signal, check threshold, propagate. This is a simulation of a neural network using conventional programming.

The revised architecture uses an actual neural network as the routing engine. The synapse topology is a neural network model. Signal routing is neural network inference. Learning is model training. The model can run on hardware neural engines. This is the difference between writing code that acts like a neural network and deploying an actual neural network as infrastructure.

Architecture
The Routing Model
Each NTL node contains a small neural network model — the routing model. It is:
- Small: Hundreds of parameters, not billions. Fits in cache.
- Fast: Nanosecond inference on NPU, microseconds on CPU.
- Learnable: Weights update based on traffic patterns.
- Portable: ONNX format, runs on any platform with ONNX Runtime.
- Hardware-acceleratable: Runs on Apple Neural Engine, Qualcomm Hexagon, Samsung NPU, Huawei Da Vinci.
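As a concrete sketch of the scale involved, the routing model can be pictured as a tiny two-layer MLP that maps a feature vector to one score per outgoing synapse. Everything below is illustrative: layer sizes, activations, and names are assumptions, and the real model would be an ONNX graph executed by ONNX Runtime rather than hand-written Rust.

```rust
// Illustrative routing model: a tiny MLP, dozens of parameters, scoring
// each outgoing synapse for a given signal's feature vector.
const N_FEATURES: usize = 4;
const N_HIDDEN: usize = 8;
const N_SYNAPSES: usize = 3;

struct RoutingModel {
    w1: [[f32; N_FEATURES]; N_HIDDEN],
    b1: [f32; N_HIDDEN],
    w2: [[f32; N_HIDDEN]; N_SYNAPSES],
    b2: [f32; N_SYNAPSES],
}

impl RoutingModel {
    /// One inference pass: features in, one score per synapse out.
    fn score(&self, features: &[f32; N_FEATURES]) -> [f32; N_SYNAPSES] {
        let mut hidden = [0.0f32; N_HIDDEN];
        for (h, (row, b)) in hidden.iter_mut().zip(self.w1.iter().zip(self.b1)) {
            let z: f32 = row.iter().zip(features).map(|(w, x)| w * x).sum::<f32>() + b;
            *h = z.max(0.0); // ReLU
        }
        let mut scores = [0.0f32; N_SYNAPSES];
        for (s, (row, b)) in scores.iter_mut().zip(self.w2.iter().zip(self.b2)) {
            let z: f32 = row.iter().zip(&hidden).map(|(w, h)| w * h).sum::<f32>() + b;
            *s = 1.0 / (1.0 + (-z).exp()); // sigmoid: score in (0, 1)
        }
        scores
    }

    /// Synapses whose score clears the activation threshold propagate.
    fn propagate(&self, features: &[f32; N_FEATURES], threshold: f32) -> Vec<usize> {
        self.score(features)
            .iter()
            .enumerate()
            .filter(|(_, s)| **s >= threshold)
            .map(|(i, _)| i)
            .collect()
    }
}
```

A model of this shape has well under a thousand parameters, which is what makes cache residency and nanosecond-scale NPU inference plausible.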
Input Features
The routing model doesn’t just compute signal_weight * synapse_weight. It considers multiple features simultaneously: signal type, source, time, device state, receiver activity, and synapse history. Evaluating all of these at once lets the model make richer decisions. “This engagement signal should go to analytics now because the analytics pipeline is active and has bandwidth, but should be queued for the content owner because their device is on battery saver” — this kind of context-aware routing is natural for a neural network and impossible for a simple weight multiplication.
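One way to picture the input side is a fixed-size feature vector assembled from the routing context. Every field and encoding below is a hypothetical example, not a fixed NTL schema.

```rust
// Illustrative encoding of routing context into the model's input vector.
// Field names, ranges, and the 5-feature layout are assumptions.
struct RoutingContext {
    signal_weight: f32,    // 0.0..=1.0
    synapse_weight: f32,   // 0.0..=1.0
    hour_of_day: u8,       // 0..24, for temporal patterns
    battery_saver: bool,   // receiver device state
    receiver_active: bool, // is the receiving pipeline currently processing?
}

/// Normalise the context into a fixed-size f32 vector for the model.
fn encode(ctx: &RoutingContext) -> [f32; 5] {
    [
        ctx.signal_weight,
        ctx.synapse_weight,
        ctx.hour_of_day as f32 / 24.0, // scale to 0..1
        if ctx.battery_saver { 1.0 } else { 0.0 },
        if ctx.receiver_active { 1.0 } else { 0.0 },
    ]
}
```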
Output
The model outputs a score for each outgoing synapse: how strongly the signal should propagate through it. Synapses with scores above the activation threshold propagate. Others don’t.

Execution Backends
Learning Mechanisms
Hebbian (Unsupervised)
“Neurons that fire together wire together.” Synapses that carry successful signals strengthen. This is the baseline learning rule — simple, fast, no gradient computation required.

Gradient-Based (Supervised)
When the sync protocol reports delivery outcomes (success, conflict, failure), these become training labels. The routing model can be trained with gradient descent to optimise for delivery success, latency, or any other differentiable objective. This is more powerful than Hebbian because it can learn complex routing patterns that Hebbian cannot. But it requires more computation (backpropagation through the routing model), so it runs less frequently — perhaps once per minute instead of on every signal.

Spike-Timing-Dependent (Temporal)
Synapses that carry signals arriving just before the receiver needs them strengthen faster than synapses carrying late signals. This captures temporal patterns — “the analytics pipeline processes batches every 5 minutes, so signals arriving just before the batch window are more valuable than signals arriving just after.”

Structural Plasticity
New synapses form when traffic patterns suggest them. Two nodes that frequently exchange signals through a long indirect path form a direct synapse. Dormant synapses (weight below threshold, no traffic for a configured period) are pruned.

Transfer Learning
A routing model trained on one deployment can initialise another.

Model Training Pipeline
- Inference (routing each signal) runs on NPU — every signal, nanoseconds
- Training (updating weights) runs on CPU — periodically, milliseconds
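The per-signal Hebbian update and the pruning rule from structural plasticity can be sketched together. The learning rate, prune threshold, and struct layout below are illustrative assumptions; the periodic gradient-based pass described above would run separately on the CPU.

```rust
// Illustrative Hebbian update plus structural-plasticity pruning.
// Constants are assumptions, not NTL-specified values.
const LEARNING_RATE: f32 = 0.05;
const PRUNE_THRESHOLD: f32 = 0.01;

struct Synapse {
    weight: f32,
    signals_since_prune: u32,
}

/// Hebbian rule: a synapse that carried a successfully delivered signal
/// strengthens; a failed delivery weakens it. Weights stay in [0, 1].
fn hebbian_update(s: &mut Synapse, delivered: bool) {
    let delta = if delivered { LEARNING_RATE } else { -LEARNING_RATE };
    s.weight = (s.weight + delta).clamp(0.0, 1.0);
    s.signals_since_prune += 1;
}

/// Structural plasticity: drop synapses that are both weak and idle,
/// then reset the idle counters for the next period.
fn prune(synapses: &mut Vec<Synapse>) {
    synapses.retain(|s| s.weight >= PRUNE_THRESHOLD || s.signals_since_prune > 0);
    for s in synapses.iter_mut() {
        s.signals_since_prune = 0;
    }
}
```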
What This Enables That Simulation Cannot
1. Hardware acceleration
Simulation runs on CPU. The neural model runs on NPU — faster and lower power.

2. Gradient-based learning
Simulation implements hand-coded learning rules (Hebbian). The neural model can use any differentiable learning algorithm — SGD, Adam, reinforcement learning.

3. Context-aware routing
Simulation evaluates signal_weight * synapse_weight. The neural model evaluates signal type, source, time, device state, receiver activity, and synapse history simultaneously.
4. Transfer learning
Simulation learns from scratch on every deployment. The neural model can start from pre-trained weights.

5. Composable with ML ecosystem
The routing model is an ONNX model. It can be analysed, visualised, and optimised with existing ML tools (TensorBoard, Weights & Biases, ONNX visualisers). The ML ecosystem that engineers already know works directly on NTL’s routing infrastructure.

Feasibility
Is the engineering path clear? Yes.
- ONNX Runtime has Rust bindings (the ort crate)
- ONNX models can target Core ML (iOS), NNAPI (Android), and other NPU backends
- The routing model is tiny — hundreds of parameters, well within NPU capability
- Training pipeline uses standard ML techniques, no novel algorithms required
- Hebbian + gradient hybrid learning is well-studied in computational neuroscience
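For orientation only, a dependency declaration for the ort crate with hardware execution providers might look like the fragment below. The version number and feature names are assumptions and must be checked against the ort crate’s current documentation.

```toml
# Hypothetical Cargo.toml fragment — verify feature names against ort's docs.
[dependencies]
ort = { version = "2", features = ["coreml", "nnapi"] }
```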
What’s novel?
- Applying neural network inference to data routing (no prior work in production systems)
- Using hardware neural engines for infrastructure routing (NPUs have only been used for ML inference, not for infrastructure decisions)
- The training loop being the sync protocol itself (the protocol that moves data also trains the routing)
- Twelve neural principles instead of five (going beyond what PyTorch implements)
What risks exist?
- NPU access APIs vary across platforms (Core ML vs NNAPI vs proprietary)
- Training on-device must be lightweight (can’t drain battery for weight updates)
- The routing model must be robust to adversarial traffic patterns
- Privacy: routing features (device state, activity patterns) must not leak through the model
Implementation Phases
| Phase | What | Backend |
|---|---|---|
| 1 | Signal primitive with weight-based routing | Pure Rust (fallback) |
| 2 | ONNX routing model with CPU inference | ONNX Runtime |
| 3 | NPU acceleration on iOS and Android | Core ML + NNAPI |
| 4 | Gradient-based training pipeline | CPU training + NPU inference |
| 5 | Transfer learning across deployments | Model export/import |
| 6 | Advanced principles (inhibition, recurrence, etc) | Full stack |
Neural Network as Base Layer — April 2026 — The Bundu Foundation

“The infrastructure IS the model. The traffic IS the training data. The routing IS the inference.”