Principles of NNSP
Neural network synchronization is a special case of an online learning process. Two neural networks initialize their weights randomly and then, in each step, receive a common input vector, compute their outputs and exchange them with each other. The synaptic weights are modified according to a learning rule.
Neural networks with a particular structure, called Tree Parity Machines (TPM), show a new phenomenon: synchronization is faster than classic learning. This phenomenon addresses a basic cryptographic problem in which two active participants of the communication (A and B) need to exchange a secret message over an insecure communication channel. Participant A has to use a secret key to encrypt the message, and participant B needs to know that key to decrypt it. However, there is also an opponent E who wants to intercept the message and try to decrypt it. This situation is shown in the following figure.
Such a communication protocol can be built on the synchronization of neural networks, and TPM networks are suitable candidates because of their higher security against passive listeners on the communication channel. Participants A and B are able to synchronize their weights faster than the opponent E can train his own neural network using the public information.
A TPM network contains K hidden neurons, and each of them has N input units and one output unit. The TPM structure is drawn in the following figure.
All input values x have binary form {-1, +1} and the weights w are discrete integers in the range from -L to +L. The state h of each neuron in the hidden layer is computed as the sum of the products of the input values and the corresponding weights. The output delta of a neuron is defined as sign(h), so delta also has binary form {-1, +1} according to the sign of h. The whole output t of the TPM is computed as the product (parity) of the deltas of all hidden neurons. The output t thus indicates whether the number of neurons with delta = -1 is even (t = +1) or odd (t = -1).
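To make this computation concrete, the following Python sketch evaluates one TPM forward pass. The sizes K, N, L, the numpy representation and the tie-breaking of sign(0) are illustrative assumptions, not values prescribed by the text.

import numpy as np

# Minimal sketch of one TPM forward pass. K, N and L are illustrative example
# sizes; mapping h == 0 to +1 is just one possible sign convention.
K, N, L = 3, 100, 3

rng = np.random.default_rng()
w = rng.integers(-L, L + 1, size=(K, N))   # discrete weights in {-L, ..., +L}
x = rng.choice([-1, 1], size=(K, N))       # binary inputs in {-1, +1}

h = np.sum(w * x, axis=1)                  # state of each hidden neuron (weighted sum)
delta = np.where(h >= 0, 1, -1)            # delta = sign(h)
t = int(np.prod(delta))                    # output t: parity (product) of all deltas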
At the beginning of the synchronization stage the weights are generated randomly, so they are uncorrelated. In each step an input vector x is generated and the output bits t are computed; the networks A and B then exchange their output bits. If the output bits differ, the weights are not modified; otherwise the Hebbian learning rule is applied to the neuron weights. Moreover, only the neurons whose delta equals t are modified by the learning rule.
The learning process is repeated until the weights in networks A and B are identical (A and B are synchronized). Further learning does not break the synchronization, because the modification of the weights depends only on the inputs and the weights themselves, and at that point these are identical in both neural networks.
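The whole procedure can be illustrated by the following toy loop (a sketch, not a reference implementation): both parties see the same public input vector, exchange output bits, and apply the Hebbian rule only when the bits agree and only in the hidden neurons whose delta equals the common output. The sizes and the sign convention are again illustrative assumptions.

import numpy as np

# Toy synchronization loop for two TPMs A and B (sketch; K, N, L are example sizes).
K, N, L = 3, 100, 3
rng = np.random.default_rng()

def outputs(w, x):
    # Hidden-neuron outputs delta and the overall parity bit t of one TPM.
    delta = np.where(np.sum(w * x, axis=1) >= 0, 1, -1)
    return delta, int(np.prod(delta))

def hebbian_update(w, x, delta, t_own, t_other):
    # Learning rule from the text: update only when both output bits agree,
    # and only the neurons whose delta equals the common output bit.
    if t_own != t_other:
        return w
    mask = (delta == t_own)[:, None]
    return np.clip(w + mask * x * t_own, -L, L)   # keep weights in {-L, ..., +L}

w_A = rng.integers(-L, L + 1, size=(K, N))        # random, uncorrelated start
w_B = rng.integers(-L, L + 1, size=(K, N))

steps = 0
while not np.array_equal(w_A, w_B):
    x = rng.choice([-1, 1], size=(K, N))          # common public input vector
    d_A, t_A = outputs(w_A, x)
    d_B, t_B = outputs(w_B, x)
    w_A = hebbian_update(w_A, x, d_A, t_A, t_B)
    w_B = hebbian_update(w_B, x, d_B, t_B, t_A)
    steps += 1
# After the loop, w_A and w_B are identical and further steps keep them identical.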
The principles of neural network synchronization are described in more detail in the dissertation Neural Synchronization and Cryptography by Andreas Ruttor.
Once the neural networks are synchronized, the corresponding neurons on both sides have the same weight values. A secret key can now be derived from those values on both sides (A and B). The derivation is done by serializing all weights into one array of bytes and applying the SHA-256 hash algorithm to it. The resulting hash represents a 256-bit symmetric secret key, which is used to secure the communication with the Rijndael (AES) encryption scheme.
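A minimal sketch of this derivation, assuming the weights are held in a numpy array and serialized as signed bytes (the exact byte representation is not specified in the text; both sides only have to agree on the same one):

import hashlib
import numpy as np

def derive_key(w):
    # Serialize all synchronized weights into one byte array and hash it with
    # SHA-256; the int8 serialization is an assumption (it holds for small L).
    serialized = w.astype(np.int8).tobytes()
    return hashlib.sha256(serialized).digest()    # 256-bit symmetric key, e.g. for AES-256

# key_A = derive_key(w_A)    # equal to derive_key(w_B) once A and B are synchronized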