A new concurrency primitive. Not a simulation of a neural network — it IS a neural network.

Proposed crate:
ntl-signal
Licence: Apache 2.0
Abstract
The signal primitive is a new concurrency paradigm for neural-style computing. Unlike channels (CSP, 1978), actors (Hewitt, 1973), or streams (reactive programming), the signal primitive provides unaddressed emission, topological routing, transformation at junctions, activation thresholds, and connection learning — all executable by hardware neural engines on modern devices. The signal primitive is not code that simulates neural behaviour. It IS a neural network, representable as a model that hardware NPUs can execute natively.

1. Why a New Primitive
Existing Primitives
| Primitive | Model | Routing | Transform | Learning | Hardware |
|---|---|---|---|---|---|
| Channel | CSP | Addressed | None | None | CPU only |
| Actor | Actor model | Addressed | App logic | None | CPU only |
| Stream | Reactive | Pipeline | Operators | None | CPU only |
| Signal | Neural | Topological | At synapse | Hebbian + gradient | CPU + NPU |
2. Core Types
Node
Signal
Synapse
Learning Rules
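The spec lists only the type names, so the following is a hedged sketch of what they might look like in Rust. Every struct name, field, and the `hebbian_update` function are assumptions for illustration, not the crate's actual API.

```rust
// Hypothetical sketch of the core types; fields and names are
// assumptions, since the spec lists only the type names.

/// An unaddressed unit of emission: a payload plus an activation weight.
struct Signal {
    payload: Vec<u8>,
    weight: f32,
}

/// A weighted connection between two nodes; the weight is adjusted
/// by the learning rule.
struct Synapse {
    target: usize, // index of the downstream node
    weight: f32,
}

/// A node fires only when accumulated input crosses its threshold.
struct Node {
    threshold: f32,
    accumulated: f32,
    synapses: Vec<Synapse>,
}

/// A minimal Hebbian learning rule: strengthen a synapse in
/// proportion to the pre- and post-synaptic activity it just carried.
fn hebbian_update(synapse: &mut Synapse, pre: f32, post: f32, rate: f32) {
    synapse.weight += rate * pre * post;
}
```

The Hebbian rule shown is the textbook form ("neurons that fire together, wire together"); the spec's mention of "Hebbian + gradient" suggests a gradient-based rule would sit alongside it.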
3. Core Operations
Emission (no destination)
Activation (threshold-triggered, not pulled)
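The two operations above can be sketched as follows, again under assumed names (the crate API is not yet specified): emission names no destination, and activation is pushed by crossing a threshold rather than pulled by a receiver.

```rust
// Hypothetical sketch: emission is unaddressed (the sender broadcasts
// along all outgoing synapses), and activation is threshold-triggered
// (a node fires only when accumulated input crosses its threshold).

struct Node {
    threshold: f32,
    accumulated: f32,
}

impl Node {
    /// Receive an incoming weighted signal; return true if this
    /// pushes the node over its activation threshold.
    fn receive(&mut self, weight: f32) -> bool {
        self.accumulated += weight;
        if self.accumulated >= self.threshold {
            self.accumulated = 0.0; // reset after firing
            true
        } else {
            false
        }
    }
}

/// Emission: the sender names no destination; the signal is offered
/// to every downstream node, scaled by each synapse weight.
/// Returns the indices of nodes that fired.
fn emit(weight: f32, synapse_weights: &[f32], nodes: &mut [Node]) -> Vec<usize> {
    let mut fired = Vec::new();
    for (i, (w, node)) in synapse_weights.iter().zip(nodes.iter_mut()).enumerate() {
        if node.receive(weight * w) {
            fired.push(i);
        }
    }
    fired
}
```

Note how this differs from a channel send: the emitter does not know, and cannot address, which nodes will activate.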
4. The Neural Network IS the Routing Engine
Not Simulation — The Real Thing
The routing model inside each node is an actual neural network model.

What the Routing Model Considers
The routing model’s input is not just the signal weight. A naive router would score each outgoing synapse with a single product, signal_weight * synapse_weight. The neural routing model instead considers multiple features simultaneously and makes richer routing decisions.
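The contrast can be sketched as follows. The feature set (recency, target load) and both scoring functions are illustrative assumptions; the spec does not enumerate the model's inputs, and a real routing model would be a trained network rather than a fixed linear combination.

```rust
// Hypothetical sketch of the two scoring styles. Feature names are
// assumptions for illustration only.

/// Naive routing score: a single multiplicative factor.
fn naive_score(signal_weight: f32, synapse_weight: f32) -> f32 {
    signal_weight * synapse_weight
}

/// Feature-based routing score: a stand-in for the neural routing
/// model, here reduced to a linear combination over a feature vector
/// [signal_weight, synapse_weight, recency, target_load].
fn model_score(features: &[f32; 4], weights: &[f32; 4]) -> f32 {
    features.iter().zip(weights).map(|(f, w)| f * w).sum()
}
```

The point of the sketch is the shape of the input, not the arithmetic: the naive router sees one number per synapse, while the model sees a vector and can, for example, penalise a heavily loaded target even when its synapse weight is high.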
Hardware Acceleration
On devices with NPUs:

| Device | NPU | Performance |
|---|---|---|
| iPhone (A14+) | Apple Neural Engine | 15.8 TOPS, nanosecond inference |
| Android (Snapdragon 8 Gen 2+) | Qualcomm Hexagon | 12.4 TOPS |
| Android (Exynos 2400+) | Samsung NPU | 14.7 TOPS |
| Huawei (Kirin 9000+) | Da Vinci NPU | 8 TOPS |
5. Transport Layer Hierarchy
6. The Twelve Neural Principles
NTL draws from twelve principles, not just PyTorch’s five.

The Original Five (PyTorch implements these)
- Weighted graph — Nodes connected by weighted edges
- Forward propagation — Signals flow through the network
- Junction transformation — Data changes at each connection
- Learning — Weights adjust based on outcomes
- Improvement over time — Network gets smarter with experience
The Additional Seven (NTL implements these too)
- Inhibition — Signals suppress other signals. High-priority traffic dampens low-priority.
- Recurrence — Feedback loops. Context circulates through the network, staying alive.
- Neuromodulation — Meta-signals change network-wide behaviour. “High load” reduces sensitivity globally.
- Rich plasticity — Spike-timing-dependent learning. New connections form where traffic patterns suggest them. Dormant connections die.
- Hierarchical processing — Multiple abstraction levels simultaneously. Raw data, patterns, recommendations processed in parallel.
- Sparse activation — Most nodes dormant. Minimal power when idle. Only active paths consume resources.
- Multi-scale temporality — Millisecond propagation, second adaptation, hour learning, week topology evolution.
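The rich-plasticity principle ("new connections form where traffic patterns suggest them; dormant connections die") can be sketched as a periodic maintenance pass. All constants and names below are illustrative assumptions, not part of the spec.

```rust
// Hypothetical sketch of rich plasticity: every synapse weight decays
// each tick, synapses that carried traffic are reinforced, and
// synapses whose weight falls below a floor are pruned
// ("dormant connections die").

/// One plasticity tick over a node's synapse weights. `active`
/// marks synapses that carried traffic this tick.
fn plasticity_tick(weights: &mut Vec<f32>, active: &[bool], decay: f32, boost: f32, floor: f32) {
    for (w, &a) in weights.iter_mut().zip(active) {
        *w *= 1.0 - decay; // all connections decay toward dormancy
        if a {
            *w += boost; // traffic reinforces the connection
        }
    }
    weights.retain(|&w| w >= floor); // dormant connections die
}
```

Running this at a slow cadence relative to signal propagation also illustrates multi-scale temporality: propagation in milliseconds, topology change over much longer windows.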
7. Integration with SiafuDB
Current: GSPN Adapter
SiafuDB’s GSPN adapter emits signals through a local NTL node. NTL routes, transforms, and delivers. The sync protocol provides training feedback.

Future: SiafuDB Internal Signals
SiafuDB’s internal components become signal nodes. Mutations propagate as continuous signals from the storage engine through the change log and out through NTL, with no paradigm boundary between database processing and network communication.

8. Implementation Roadmap
| Phase | Deliverable | Hardware |
|---|---|---|
| 1 | ntl-signal crate: Node, Signal, Synapse, in-process transport | CPU only |
| 2 | Routing model: ONNX-based neural routing | CPU (ONNX Runtime) |
| 3 | Hardware acceleration: NPU integration | Apple Neural Engine, Qualcomm Hexagon |
| 4 | Local + network transport layers | CPU + NPU |
| 5 | Fabric layer with IPv6 | CPU + NPU |
| 6 | Advanced principles: inhibition, recurrence, neuromodulation | CPU + NPU |
Signal Primitive Design — April 2026 — The Bundu Foundation “Not a simulation of a neural network. It IS a neural network.”