High-Definition Neural Network Simulator
Why another neural network simulator? We designed HDNNSim from the ground up around three properties that other simulators lack: bitstream signalling, snapshots, and plasticity bridges.
Together, these properties make HDNNSim suitable for one specific, ambitious application: artificial consciousness.
What happens if you feed high-fidelity audio through a traditional spiking neuron? Sound quality degrades considerably -- beyond recognition.
HDNNSim was built to improve on this. It accomplishes what nature was arguably trying to achieve (and perhaps even did): a one-bit computational system that is actually useful.
HDNN bitstreams convey high-definition graded-response signals as well as saturated binary signals. Hence a neural network can operate in the "analog" domain, leveraging a vast theoretical basis, in the "digital" domain, governed by Boolean logic, and even in any conceivable hybrid.
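To illustrate how a one-bit stream can carry a high-definition graded signal, here is a minimal sketch of first-order delta-sigma modulation in plain Python. This is not HDNNSim code -- the function names and the moving-average decoder are illustrative assumptions, chosen only to show the principle that oversampled ±1 bits can preserve an "analog" waveform:

```python
import math

def delta_sigma_encode(signal):
    """First-order delta-sigma modulator: graded samples in [-1, 1] -> ±1 bits."""
    bits, integrator, prev = [], 0.0, 0.0
    for x in signal:
        integrator += x - prev          # accumulate the quantization error
        prev = 1.0 if integrator >= 0 else -1.0
        bits.append(prev)
    return bits

def decode(bits, window=64):
    """Recover the graded signal with a moving-average low-pass filter."""
    out, acc = [], 0.0
    for i, b in enumerate(bits):
        acc += b
        if i >= window:
            acc -= bits[i - window]     # keep a trailing window of `window` bits
        out.append(acc / min(i + 1, window))
    return out

# A slow sine, oversampled at 48 kHz, survives the one-bit round trip.
t = [i / 48_000 for i in range(48_000)]
x = [0.5 * math.sin(2 * math.pi * 5 * ti) for ti in t]
y = decode(delta_sigma_encode(x))
```

The same stream saturates to plain binary when the input pins to ±1, which is the hybrid analog/digital behaviour described above.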
Snapshots can be used for checkpointing, but also as a means of distributing a neural network as an application, a "mindware app".
Note that HDNNSim does not offer supervised learning. You can, however, train a model in TensorFlow and import it after converting it to an HDNN snapshot.
On the other hand, HDNNSim does offer unsupervised learning at runtime, which makes it all the more important that the full simulation state -- including filter states, internal buffers, and so on -- can be saved and restored.
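HDNNSim's actual snapshot format is not shown here, but the principle can be sketched: a snapshot must capture the full runtime state, not just the weights, or a restored network would not resume where it left off. In this sketch the state layout, field names, and pickle serialization are all hypothetical stand-ins:

```python
import pickle
from dataclasses import dataclass

@dataclass
class SimState:
    """Hypothetical full simulation state; weights alone are not enough."""
    weights: dict        # (src, dst) -> connection weight
    delays: dict         # (src, dst) -> delay in ticks
    filter_states: list  # per-neuron filter memory
    buffers: list        # in-flight bits on delayed connections
    tick: int = 0

def save_snapshot(state: SimState) -> bytes:
    return pickle.dumps(state)

def restore_snapshot(blob: bytes) -> SimState:
    return pickle.loads(blob)

# Round-trip: a restored simulation is indistinguishable from the original.
state = SimState({(0, 1): 0.5}, {(0, 1): 3}, [0.1, -0.2], [[1, 0, 1]], tick=42)
clone = restore_snapshot(save_snapshot(state))
```

Because the snapshot is a single self-contained blob, it can double as a distribution format -- the "mindware app" mentioned above.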
The human brain continuously produces new neurons, in the striatum and other brain areas -- a process called neurogenesis. This, of course, serves a purpose: long-term memory. To accommodate this, HDNNSim features "plasticity bridges".
Plasticity bridges are highly configurable processes that bridge between signal space and configuration space. They enable signals in the network to steer the configuration of the network: connection weights and delays, but also the creation and destruction of network nodes, and even changes to their type.
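A toy sketch of the idea, with entirely hypothetical names and thresholds (HDNNSim's real bridges are configurable processes, not a single function): network activity in signal space drives both weight updates and structural grow/prune decisions in configuration space.

```python
def plasticity_bridge(weights, activity, lr=0.01, grow_at=0.9, prune_at=0.05):
    """Hebbian-style weight update plus activity-driven grow/prune decisions.

    weights:  (src, dst) -> weight; activity: node id -> activity level.
    All rules and constants here are illustrative, not HDNNSim's API.
    """
    next_id = max((dst for _, dst in weights), default=0) + 1
    # Signal space steers configuration space: correlated activity
    # strengthens a connection.
    for (i, j), w in list(weights.items()):
        weights[(i, j)] = w + lr * activity[i] * activity[j]
    # Structural plasticity: very active nodes sprout a new neighbour...
    for node, a in list(activity.items()):
        if a > grow_at:
            weights[(node, next_id)] = 0.1   # create a new node + connection
            activity[next_id] = 0.0
            next_id += 1
    # ...while near-silent connections are destroyed.
    for edge, w in list(weights.items()):
        if abs(w) < prune_at:
            del weights[edge]
    return weights, activity

# Example: correlated activity strengthens (0, 1) and sprouts new nodes.
w, act = plasticity_bridge({(0, 1): 0.5}, {0: 1.0, 1: 1.0})
```

The same bridge pattern could, in principle, be driven by any signal in the network, which is what lets unsupervised learning happen at runtime.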