
PyTorch taught computers to think. NTL teaches networks to learn.
Status: RESEARCH — Foundational insight

The Core Argument

PyTorch and TensorFlow gave the world a programming model for neural networks: define nodes, connect with weighted edges, propagate signals, learn from outcomes. This model revolutionised computation. NTL applies the same model to data transfer. The nodes are infrastructure. The connections are synapses. The signals are data. The learning is real. The hardware acceleration is the same silicon. This is not an analogy. NTL’s routing model is an actual neural network, executable by the same hardware neural engines that run PyTorch models on your phone.
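
To make the claim concrete, here is a minimal sketch of a routing decision as a literal PyTorch module. The dimensions, class name, and feature encoding are assumptions for illustration, not NTL's actual model:

```python
import torch
import torch.nn as nn

class RoutingModel(nn.Module):
    """Toy NTL-style routing model: scores candidate next hops for a signal.

    Hypothetical sketch; NTL's real feature set and topology encoding
    are not specified here.
    """
    def __init__(self, signal_dim: int = 8, num_nodes: int = 4):
        super().__init__()
        self.score = nn.Linear(signal_dim, num_nodes)  # one weight row per node

    def forward(self, signal: torch.Tensor) -> torch.Tensor:
        # Softmax over candidate nodes: "which synapse fires hardest"
        return torch.softmax(self.score(signal), dim=-1)

model = RoutingModel()
signal = torch.randn(8)   # a data signal encoded as a feature vector
print(model(signal))      # routing distribution over 4 candidate nodes
```

Anything that can execute this module, including a phone's neural engine, can execute the routing decision.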

The Twelve Principles

PyTorch implements five neural principles. NTL implements twelve.

Principles 1-5: What PyTorch Does

| # | Principle | PyTorch | NTL |
|---|-----------|---------|-----|
| 1 | Weighted graph | Neurons + weighted edges | Nodes + weighted synapses |
| 2 | Forward propagation | Input → layers → output | Emit → synapses → activation |
| 3 | Junction transformation | Layer functions (ReLU, matmul) | Synapse functions (PII strip, anonymise) |
| 4 | Learning | Backpropagation + gradient descent | Hebbian + gradient + spike-timing |
| 5 | Improvement over time | Training makes model better | Traffic makes routing smarter |

Principles 6-12: What NTL Adds

| # | Principle | What It Means for NTL |
|---|-----------|------------------------|
| 6 | Inhibition | High-priority signals suppress low-priority. Financial transactions dampen analytics noise. |
| 7 | Recurrence | Feedback loops keep context alive. Updates circulate rather than fire-and-forget. |
| 8 | Neuromodulation | Meta-signals change network-wide behaviour. “High load” reduces global sensitivity. |
| 9 | Rich plasticity | Timing matters for learning. New connections form where useful. Dead connections pruned. |
| 10 | Hierarchical processing | Raw data, patterns, and recommendations processed at multiple levels simultaneously. |
| 11 | Sparse activation | Most nodes dormant. Near-zero power when idle. Only active paths consume resources. |
| 12 | Multi-scale temporality | Millisecond routing, minute adaptation, hour learning, week topology evolution. |

PyTorch pared the repertoire down to five because it optimised for GPU computation. NTL runs on infrastructure with direct access to neural hardware and can implement all twelve.
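
As a hedged illustration of two of the added principles, inhibition (6) and sparse activation (11), here is a toy sketch in plain Python; the threshold and priority values are invented:

```python
# Hypothetical sketch: inhibition and sparse activation at an NTL-style node.
ACTIVATION_THRESHOLD = 0.5   # invented value; principle 11: dormant below this

def activate(signals: dict[str, float], priorities: dict[str, float]) -> dict[str, float]:
    """Principle 6: high-priority signals suppress low-priority ones.
    Principle 11: signals that stay under the threshold never fire."""
    top = max(priorities.values(), default=0.0)
    out = {}
    for name, strength in signals.items():
        # Inhibition: dampen each signal in proportion to how far its
        # priority falls below the strongest priority currently present.
        dampened = strength * (priorities[name] / top) if top else strength
        if dampened >= ACTIVATION_THRESHOLD:   # sparse activation
            out[name] = dampened
    return out

signals = {"payments": 0.9, "analytics": 0.8}
prio = {"payments": 1.0, "analytics": 0.3}
print(activate(signals, prio))   # analytics is inhibited below the threshold
```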

The Complete Mapping

| Concept | PyTorch / TensorFlow | NTL |
|---------|----------------------|-----|
| Node | Neuron (math function) | Infrastructure node (device, DB, service) |
| Connection | Weighted edge | Synapse (weighted, transforming, learning) |
| Signal | Activation tensor | Data signal (mutation, event, context) |
| Weight | Learned parameter | Synapse weight |
| Forward pass | Input → layers → output | Emit → synapses → activation |
| Transformation | Layer function | Synapse function |
| Learning rule | Backprop + SGD | Hebbian + gradient + spike-timing |
| Loss function | Error vs desired output | Delivery success vs intent |
| Batch | Training samples | Signals in time window |
| Epoch | Pass through training data | Learning cycle across synapses |
| Inference | Trained model on new input | Mature network routing new signals |
| Training | Weight adjustment | Traffic adjusts synapse weights |
| Overfitting | Memorises training data | Over-specialises to current traffic |
| Regularisation | Dropout, weight decay | Synapse decay, min weight thresholds |
| Transfer learning | Pre-trained weights for new task | Pre-trained topology for new deployment |
| Hardware | GPU (CUDA, ROCm) | NPU (Neural Engine, Hexagon, Da Vinci) |
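
The learning-rule row is the least familiar mapping, so here is a minimal sketch of a Hebbian update with synapse decay and a minimum-weight floor; the rates and floor are invented values, not NTL's:

```python
# Hypothetical sketch of the learning and regularisation rows: a synapse
# strengthens when it carries successfully delivered signals, decays every
# cycle, and never drops below a minimum-weight floor.
HEBBIAN_RATE = 0.1
DECAY_RATE = 0.01
MIN_WEIGHT = 0.05   # regularisation row: minimum weight threshold

def update_synapse(weight: float, fired: bool, delivered: bool) -> float:
    if fired and delivered:
        weight += HEBBIAN_RATE * (1.0 - weight)   # strengthen, saturating at 1
    weight -= DECAY_RATE * weight                 # decay every learning cycle
    return max(weight, MIN_WEIGHT)                # floor instead of pruning

w = 0.5
for _ in range(10):
    w = update_synapse(w, fired=True, delivered=True)
print(round(w, 3))   # the weight climbs toward 1.0 under successful traffic
```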

Hardware Neural Engines

NTL’s routing model runs on the same hardware that runs PyTorch models:

| Device | NPU | TOPS | NTL Routing Inference |
|--------|-----|------|-----------------------|
| iPhone (A14+) | Apple Neural Engine | 15.8 | Nanoseconds |
| Snapdragon 8 Gen 2+ | Qualcomm Hexagon | 12.4 | Nanoseconds |
| Exynos 2400+ | Samsung NPU | 14.7 | Nanoseconds |
| Kirin 9000+ | Huawei Da Vinci | 8.0 | Nanoseconds |

NTL’s routing model is tiny (hundreds of parameters). These NPUs handle billion-parameter models. Routing inference is essentially free in both time and power. Every modern phone has a neural engine sitting idle. NTL gives it a job: routing your data intelligently, learning from traffic patterns, and doing it faster and cheaper than any CPU-based routing could.
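
To put "hundreds of parameters" in perspective, here is a quick count on the toy routing model from the earlier sketch (the dimensions are assumptions):

```python
import torch.nn as nn

# Toy dimensions from the earlier sketch: 8 signal features, 4 candidate nodes.
model = nn.Linear(8, 4)
n_params = sum(p.numel() for p in model.parameters())
print(n_params)   # 36: 8*4 weights + 4 biases
# At ~15.8 TOPS, a few dozen multiply-accumulates is a vanishing fraction of
# one second of NPU throughput, which is why routing inference is
# "essentially free" in both time and power.
```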

The Graph Sync Protocol as Training Loop

The Graph Sync Protocol (in SiafuDB) provides the training feedback that NTL learns from:
1. SiafuDB produces mutation
2. Sync protocol emits signal into NTL
3. NTL routing model (neural network) decides path
4. Signal propagates through synapses (transforming)
5. Receiver applies mutation
6. Sync protocol reports: success / conflict / failure
7. NTL updates routing model weights
8. Next signal routes more efficiently

Every sync cycle = one training step
This means the sync protocol is not just moving data. It is training the network. Every mutation that flows through the system makes the routing slightly smarter. The protocol and the learning are inseparable.
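
A hedged sketch of one cycle as one training step, reusing the toy routing model; the feedback in step 6 is faked with a known successful path, since in the real protocol that signal would come from the receiver:

```python
import torch
import torch.nn as nn

# Hypothetical sketch: one Graph Sync Protocol cycle as one training step.
# Steps 1-2 and 4-5 happen in the real network; here we pretend the sync
# protocol reported which path delivered the mutation successfully.
routing_model = nn.Linear(8, 4)   # toy model: 8 signal features, 4 nodes
optimiser = torch.optim.SGD(routing_model.parameters(), lr=0.01)

def sync_cycle(signal_features: torch.Tensor, delivered_via: int) -> float:
    scores = routing_model(signal_features)          # step 3: decide the path
    # Step 6: success/conflict/failure reported back as a loss signal.
    loss = nn.functional.cross_entropy(
        scores.unsqueeze(0), torch.tensor([delivered_via]))
    optimiser.zero_grad()
    loss.backward()                                  # step 7: update weights
    optimiser.step()                                 # step 8: routing improves
    return loss.item()

for _ in range(5):                                   # five sync cycles
    print(sync_cycle(torch.randn(8), delivered_via=2))
```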

Engineering Implications

1. ML Analysis Tools Apply

NTL’s topology is a neural network, so standard ML analysis tools work on it (see the sketch after this list):
  • Weight distribution visualisation (strong vs weak synapses)
  • Activation maps (which nodes are active)
  • Dead neuron detection (unreachable nodes)
  • Training metrics (delivery success over time)
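
A toy sketch of the first and third analyses above, run over an invented synapse table:

```python
# Hypothetical sketch: weight-distribution and dead-node analysis on an
# invented NTL topology. The synapse table and thresholds are made up.
synapses = {("phone", "db"): 0.92, ("db", "api"): 0.40, ("api", "cache"): 0.02}

# Weight distribution: strong vs weak synapses.
strong = {k: w for k, w in synapses.items() if w >= 0.5}
weak = {k: w for k, w in synapses.items() if w < 0.1}
print("strong:", strong)
print("weak (prune candidates):", weak)

# Dead neuron detection: destination nodes with no usable incoming synapse.
dests = {dst for (_, dst) in synapses}
reachable = {dst for (_, dst), w in synapses.items() if w >= 0.1}
print("dead nodes:", dests - reachable)   # {'cache'}
```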

2. Hyperparameters Need Tuning

| ML Hyperparameter | NTL Equivalent |
|-------------------|----------------|
| Learning rate | Hebbian rate on synapses |
| Weight decay | Synapse decay half-life |
| Batch size | Signal aggregation window |
| Dropout | Random synapse deactivation |
| Network depth | Maximum hop count |
| Activation threshold | Node activation threshold |

Different deployments need different hyperparameters: a high-traffic Mukoko deployment is not tuned like a low-traffic IoT network.
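
One way to make that concrete is a per-deployment config object; all values below are invented for illustration:

```python
# Hypothetical sketch: the table above as a per-deployment config.
from dataclasses import dataclass

@dataclass
class NtlHyperparams:
    hebbian_rate: float          # learning rate
    decay_half_life_s: float     # weight decay
    batch_window_ms: int         # signal aggregation window
    dropout: float               # random synapse deactivation
    max_hops: int                # network depth
    activation_threshold: float  # node activation threshold

high_traffic = NtlHyperparams(0.05, 86_400, 50, 0.1, 6, 0.3)   # Mukoko-like
iot = NtlHyperparams(0.2, 604_800, 5_000, 0.0, 3, 0.6)         # low-traffic IoT
print(high_traffic)
```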

3. Learned Transformations

Today, synapse transformations are configured by hand (“strip PII”). In future, a synapse learns what its receiver needs and strips what it doesn’t. This is attention applied to data transfer.
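
A hedged sketch of what such a learned transformation could look like, with an invented four-field schema; the gating mechanism here is an assumption, not NTL's design:

```python
import torch
import torch.nn as nn

# Hypothetical sketch: the synapse holds one learnable logit per field and
# transmits only the fields whose gate stays open.
FIELDS = ["user_id", "email", "amount", "timestamp"]   # invented schema

class LearnedStrip(nn.Module):
    def __init__(self, n_fields: int):
        super().__init__()
        self.gate = nn.Parameter(torch.ones(n_fields))  # start with all fields kept

    def forward(self, record: dict) -> dict:
        keep = torch.sigmoid(self.gate) > 0.5           # learned keep/strip mask
        return {f: record[f] for f, k in zip(FIELDS, keep) if k}

strip = LearnedStrip(len(FIELDS))
print(strip({"user_id": 7, "email": "a@b.cz", "amount": 12.5, "timestamp": 0}))
# Training (not shown) would push the "email" logit negative once the
# receiver is observed never to read it, stripping the PII automatically.
```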

4. Transfer Learning

A pre-trained routing model from a Harare deployment can be fine-tuned for Lusaka. Routing intelligence is portable.
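
A minimal sketch of the round trip, assuming the toy model dimensions from earlier; the file name is invented:

```python
import torch
import torch.nn as nn

# Hypothetical sketch of portable routing weights: export in Harare,
# import in Lusaka, then fine-tune at a gentle learning rate.
harare = nn.Linear(8, 4)                                # mature Harare model
torch.save(harare.state_dict(), "harare_routing.pt")    # ship the weights

lusaka = nn.Linear(8, 4)
lusaka.load_state_dict(torch.load("harare_routing.pt")) # start from Harare
optimiser = torch.optim.SGD(lusaka.parameters(), lr=0.001)  # gentle fine-tune
# ...then run sync cycles against Lusaka traffic, as in the training loop above.
```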

5. Distributed Training

Multiple NTL nodes coordinate learning, analogous to PyTorch’s DistributedDataParallel. Network-wide routing improves through coordinated weight updates.
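
A hedged sketch of the coordination step only, averaging weights by hand rather than using DistributedDataParallel itself; how NTL nodes would actually exchange weights is not specified here:

```python
import torch
import torch.nn as nn

# Hypothetical sketch: three nodes hold local routing models and adopt the
# element-wise mean of their weights, a manual stand-in for an all-reduce.
nodes = [nn.Linear(8, 4) for _ in range(3)]   # three local routing models

mean_state = {k: torch.stack([n.state_dict()[k] for n in nodes]).mean(dim=0)
              for k in nodes[0].state_dict()}
for n in nodes:
    n.load_state_dict(mean_state)   # every node now shares the averaged model
```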

What This Changes

NTL is not a messaging protocol with ML features. NTL IS machine learning infrastructure. The routing IS a neural network. The learning IS training. The hardware IS neural silicon. This breaks conventional thinking because:
  • Protocols don’t learn. HTTP doesn’t get better at routing over time. TCP doesn’t strengthen paths that carry successful traffic. NTL does.
  • Transfer layers don’t use neural hardware. No existing transfer protocol runs on NPUs. NTL’s routing model does, because it’s a neural network.
  • Infrastructure and ML are separate fields. NTL unifies them. The infrastructure IS the model. The traffic IS the training data. The routing IS the inference.
If this is achievable — and the engineering path is clear — it represents a genuine break from how data infrastructure has worked for fifty years.
Machine Learning at the Transfer Layer · April 2026 · The Bundu Foundation