# Principles of NNSP

Neural network synchronization is a special case of an online learning process. Two neural networks initialize their weights randomly; in each step they receive a common input vector, compute their outputs and exchange them with each other. The synaptic weights are then modified according to a learning rule.

Complex neural networks called Tree Parity Machines (TPMs) show a new phenomenon: synchronization is faster than classic learning. This phenomenon solves a basic cryptographic problem in which two active participants in a communication (A and B) need to exchange a secret message over an insecure communication channel. Participant A has to use a secret key to encrypt the message, and participant B then needs to know that secret key to decrypt it. However, there is also an opponent E who wants to capture the message and try to decrypt it. This situation is shown in the following figure.

Such a communication protocol can be built on the synchronization of neural networks, and TPM networks are suitable candidates because of their higher security against passive listeners on the communication channel. Participants A and B are able to synchronize their weights faster than the opponent E can train his own neural network from the public information.

TPM networks contain *K* hidden neurons and each neuron has *N* input units and one output unit. The TPM structure is drawn in the following figure.

All input values *x* have binary form *{-1, +1}* and the weights *w* take discrete (integer) values from *-L* to *+L*. The state *h* of each neuron in the hidden layer is computed as the sum of the products of the input values and the corresponding weights. The neuron output *δ* is defined as *sign(h)*, so *δ* has binary form *{-1, +1}* according to the sign of *h*. The whole output *t* of the TPM is computed as the multiplication (parity) of the *δ* values of all neurons. The output *t* indicates whether the number of neurons with *δ = -1* is even (*t = +1*) or odd (*t = -1*).

At the beginning of the synchronization stage the weights are initialized randomly; therefore they are uncorrelated. In each step an input vector *x* is generated and the output bits *t* are computed. The networks A and B then exchange their output bits. If the output bits differ, the weights are left unchanged; otherwise the Hebbian learning rule is applied to the neuron weights. Moreover, only the neurons whose *δ* equals *t* are modified by the learning rule.

The learning process has to be repeated until the weights of networks A and B are identical (A and B are synchronized). Further learning will not break the synchronization, because the modification of the weights depends only on the inputs and the weights, and both are identical in the two networks at that point.
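The synchronization procedure described above can be sketched in code. This is a minimal illustration, not a hardened implementation: the network sizes `K`, `N`, `L`, the `sign(0) = +1` convention, and all function names are assumptions made for the example.

```python
import random

K, N, L = 3, 4, 3  # example sizes: K hidden neurons, N inputs each, weights in [-L, +L]

def random_weights():
    """Uncorrelated random initial weights, one row of N integers per hidden neuron."""
    return [[random.randint(-L, L) for _ in range(N)] for _ in range(K)]

def sign(x):
    return 1 if x >= 0 else -1  # convention assumed here: sign(0) maps to +1

def output(weights, inputs):
    """Return (deltas, t): per-neuron outputs sign(h) and the overall parity t."""
    deltas = [sign(sum(w * x for w, x in zip(weights[i], inputs[i]))) for i in range(K)]
    t = 1
    for d in deltas:
        t *= d
    return deltas, t

def hebbian_update(weights, inputs, deltas, t):
    """Apply the Hebbian rule, but only to neurons whose delta equals t."""
    for i in range(K):
        if deltas[i] == t:
            for j in range(N):
                w = weights[i][j] + inputs[i][j] * t
                weights[i][j] = max(-L, min(L, w))  # keep weights bounded in [-L, +L]

def synchronize(max_steps=100000):
    """Run the protocol until the two weight sets are identical."""
    a, b = random_weights(), random_weights()
    for step in range(max_steps):
        if a == b:
            return a, b, step
        # both parties see the same public random input vector
        inputs = [[random.choice((-1, 1)) for _ in range(N)] for _ in range(K)]
        da, ta = output(a, inputs)
        db, tb = output(b, inputs)
        if ta == tb:  # weights are modified only when the exchanged output bits agree
            hebbian_update(a, inputs, da, ta)
            hebbian_update(b, inputs, db, tb)
    return a, b, max_steps
```

For such small networks the two weight sets typically become identical after a modest number of steps; larger `N` and `L` increase both the synchronization time and the effort required by an attacker.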

The principles of neural network synchronization are described in more detail in the dissertation **Neural Synchronization and Cryptography** by Andreas Ruttor.

Once the neural networks are synchronized, the corresponding neurons on both sides have the same weight values. A secret key can then be derived from those values on both sides (A and B). The derivation is done by serializing all weights into one byte array and applying the SHA-256 hash algorithm to it. The resulting hash represents a 256-bit symmetric secret key, which is used to secure the communication with the Rijndael (AES) encryption scheme.
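The key-derivation step can be sketched as follows. The exact serialization format is an assumption (here each weight is packed as one signed byte); any format works as long as A and B use the same one, since their weights are identical after synchronization.

```python
import hashlib
import struct

def derive_key(weights):
    """Serialize all weights into one byte array and hash it with SHA-256.

    `weights` is a list of K rows of N small integers in [-L, +L].
    The per-weight signed-byte encoding is an illustrative assumption.
    """
    flat = b"".join(struct.pack("b", w) for row in weights for w in row)
    return hashlib.sha256(flat).digest()  # 32 bytes = 256-bit symmetric key
```

The 32-byte digest can then be used directly as an AES-256 key with any standard cryptographic library; both sides compute it independently from their own (identical) weights, so the key itself never crosses the channel.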