The logic of thinking. Part 11: Dynamic Neural Networks. Associativity





This series of articles describes a wave model of the brain that differs significantly from traditional models. Readers who have just joined are strongly advised to start with the first part of the series.

The easiest neural networks to understand and simulate are those in which information propagates sequentially from layer to layer. Given an input signal, the state of each layer can be computed in turn. These states can be interpreted as a set of descriptions of the input signal. As long as the input signal does not change, its description remains unchanged as well.

A more complicated situation arises if we introduce feedback into the network. A single pass is no longer enough to compute the network's state. As soon as the network changes its state in response to the input signal, the feedback adjusts the input picture, which requires recomputing the state of the entire network, and so on.

The ideology of a recurrent network depends on how the feedback delay relates to the interval between input images. If the delay is much shorter than the interval between changes, we are most likely interested only in the final equilibrium state, and the intermediate iterations can be treated as a purely computational procedure. If the two are comparable, the dynamics of the network come to the fore.



F. Rosenblatt described multilayer perceptrons with cross connections and showed that they can be used to model selective attention and the reproduction of sequences of reactions (Rosenblatt, Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms, 1962).

In 1982 John Hopfield proposed a design of a neural network with feedback whose state can be determined by finding the minimum of an energy functional (Hopfield, 1982). It turned out that Hopfield networks have a remarkable property: they can be used as content-addressable memory.



Hopfield network

Signals in a Hopfield network take spin values {-1, 1}. Training the network amounts to storing a set of images. Memorization is achieved by choosing the weights so that each stored image $x^k$ is a stable state of the network:

$$x^k = \mathrm{sign}\left(W x^k\right)$$

This leads to a simple non-iterative algorithm for determining the network parameters:

$$w_{ij} = \sum_{k=1}^{m} x_i^k x_j^k, \qquad w_{ii} = 0$$

where $W$ is the matrix of neuron weights and $m$ is the number of stored images. The diagonal of the matrix is set to zero, which means that neurons have no influence on themselves. Weights defined in this way make the network states corresponding to the memorized vectors stable.

The activation function of the neurons has the form:

$$x_i(t+1) = \mathrm{sign}\left(\sum_{j} w_{ij}\, x_j(t)\right)$$
The input image is fed to the network as an initial approximation. Then an iterative procedure begins which, with luck, converges to a stable state. This stable state is most likely one of the stored images; moreover, it is the stored image most similar to the input signal. In other words, the image associatively linked to it.
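To make this concrete, here is a minimal sketch of Hopfield training and recall in Python. The code and all names in it are my own illustration, not from the article:

```python
import numpy as np

def train(images):
    """Hebbian one-shot learning: sum of outer products, zero diagonal."""
    n = images.shape[1]
    W = np.zeros((n, n))
    for x in images:              # each image is a vector of -1/+1 values
        W += np.outer(x, x)
    np.fill_diagonal(W, 0)        # neurons do not influence themselves
    return W

def recall(W, x, max_steps=100):
    """Iterate the sign activation until a stable state is reached.
    (Synchronous update for brevity; the classic model updates
    neurons one at a time.)"""
    x = x.copy()
    for _ in range(max_steps):
        x_new = np.where(W @ x >= 0, 1, -1)
        if np.array_equal(x_new, x):
            return x              # stable state: an attractor of the network
        x = x_new
    return x

# Store two images and recall from a corrupted version of the first.
images = np.array([[1, -1, 1, -1, 1, -1],
                   [1, 1, 1, -1, -1, -1]])
W = train(images)
noisy = np.array([1, -1, 1, -1, 1, 1])    # last bit flipped
print(recall(W, noisy))                   # -> [ 1 -1  1 -1  1 -1]
```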

One can introduce the energy of the network as:

$$E = -\frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N} w_{ij}\, x_i x_j$$

where $N$ is the number of neurons. Each stored image then corresponds to a local minimum of this energy.
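Continuing the sketch above, the energy is a one-liner, and one can check that, in this example, recall lowers it:

```python
def energy(W, x):
    """E = -1/2 * x^T W x for the sketch above (threshold terms omitted)."""
    return -0.5 * x @ W @ x

print(energy(W, noisy))              # -2.0  for the corrupted input
print(energy(W, recall(W, noisy)))   # -14.0 at the stored image, a local minimum
```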

Although Hopfield networks are very simple, they illustrate three fundamental properties of the brain. The first is the existence of dynamics: any external or internal perturbation forces the brain to leave the current local energy minimum and move on in search of a new one. The second is the ability to settle into quasi-stable states determined by previous memory. The third is the associativity of transitions: as the descriptive states change, a certain generalized proximity can constantly be traced between them.

Like a Hopfield network, our model is dynamic from the outset. It cannot exist in a static state. A wave exists only in motion. Evoked activity of neurons triggers waves of identifiers. These waves activate patterns that recognize familiar wave combinations. The patterns launch new waves. And so on without stopping.

But the dynamics of our networks are fundamentally different from the dynamics of Hopfield networks. In traditional networks the dynamics reduce to an iterative procedure by which the network state converges to a stable state. In our case the dynamics rather resemble the processes that occur in algorithmic computers.

Wave propagation of information is a data-transfer mechanism. The dynamics of this process are not the dynamics of iterative convergence but the spatial dynamics of an identifier wave spreading across the cortex. Each cycle of wave propagation changes the picture of evoked neuron activity. With every transition of the cortex to a new state, the current description is replaced and the picture of our thoughts changes accordingly. The dynamics of these changes, again, are not an iterative procedure; they generate the sequence of images of our perception and thinking.

Our wave model is much more complex than simple dynamic networks. Later we will describe many non-trivial mechanisms that regulate its operation. Not surprisingly, the notion of associativity in our case is also more complicated than the associativity of Hopfield networks and is closer to the concept used in computer data models.

In programming, associativity is understood as the ability of a system to find data similar to a sample. For example, when we query a database with certain criteria, we essentially want to retrieve all the elements related to the sample described by that query.

On a computer, associative search can be solved by brute force, checking every data item against the search criteria. But exhaustive search is time-consuming; the search is much faster if the data is prepared in advance so that samples meeting given criteria can be generated quickly. For example, a search in an address book can be greatly accelerated by creating a table of contents keyed by the first letter.

Memory prepared in this way is called associative memory. Practical implementations of associativity can use index tables, hash addressing, binary trees, and similar tools.
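A toy illustration of the "table of contents by first letter" idea (my own example, not from the article):

```python
from collections import defaultdict

names = ["Anna", "Alex", "Boris", "Bella", "Clara"]

# Build the table of contents once: first letter -> all matching entries.
index = defaultdict(list)
for name in names:
    index[name[0]].append(name)

# The associative lookup now reads one bucket instead of scanning everything.
print(index["B"])   # ['Boris', 'Bella']
```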

Regardless of the implementation, associative search requires specifying criteria of similarity or proximity. As we said earlier, the main types of proximity are proximity of description, proximity of joint manifestation, and proximity through a common phenomenon. The first speaks of similarity in form, the others of similarity in some respect. This resembles product search in an online store. If you enter a name, you get a list of products whose names are close to the query. But in addition you will most likely get another list, titled something like "those who looked for this item often pay attention to...". The items in this second list will not resemble the original query in their descriptions.

Let us see how these arguments about associativity apply to the terms of our model. A concept, in our model, is a set of patterns of detector neurons tuned to the same image. Apparently, one pattern corresponds to a single cortical minicolumn. An image is some combination of signals on the receptive fields of neurons. In the sensory areas of the cortex, image signals can be created by topographic projection. At higher levels, an image is the pattern of activity that arises around a neuron as a wave of identifiers passes by.

For a neuron to pass into the state of evoked activity, the image must coincide accurately enough with the picture of weights on its synapses. So if a neuron is tuned to detect a particular combination of concepts, the identifier wave must contain a substantial portion of the identifiers of each of them.

The firing of a neuron when a large portion of the concepts making up its characteristic stimulus recurs is called recognition. Recognition and the evoked activity associated with it do not by themselves provide generalized associativity. In this mode the neuron does not respond to concepts that are merely temporally close, nor to concepts that, although included in the description of the neuron's characteristic stimulus, occur separately from the rest of that description.
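As a hypothetical illustration of this recognition mode (the identifier sets and the 0.75 threshold are my assumptions, not values from the article):

```python
def recognizes(characteristic: set, wave: set, threshold: float = 0.75) -> bool:
    """Evoked activity occurs only if a large enough share of the
    characteristic identifiers arrives in the wave simultaneously."""
    share = len(characteristic & wave) / len(characteristic)
    return share >= threshold

detector = {"round", "red", "stem", "glossy"}  # identifiers of the neuron's stimulus
print(recognizes(detector, {"round", "red", "stem", "leaf"}))  # True: 3/4 present
print(recognizes(detector, {"red", "wheel", "metal"}))         # False: only 1/4
```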

The neurons we use in our model, and which we compare with real neurons of the brain, are capable of storing information not only through changes in synaptic weights but also through changes in metabotropic receptive clusters. Thanks to the latter, neurons can remember fragments of the surrounding pictures of activity. Repetition of such a stored picture does not lead to evoked activity consisting of a series of pulses, but produces a single spike. Whether or not a neuron memorizes something is determined by several factors, one of which is a change in the neuron's membrane potential. An increase in membrane potential causes the ends of the metabotropic receptors to move apart, which gives them the sensitivity necessary for memorization.

When a neuron recognizes its characteristic image and passes into the state of evoked activity, its membrane potential rises significantly, which gives it the opportunity to store, on the extrasynaptic part of its membrane, the picture of activity surrounding it.

We can assume that at the moment of evoked activity a neuron stores all the wave patterns around it. This operation has a definite meaning. Suppose we have a complete description of what is happening, composed of a large number of concepts. Some of these concepts allow the neuron to recognize the concept whose detector its pattern (minicolumn) is. Recognition of this concept ensures its participation in the overall description. At the same time, the neurons responsible for this concept record which other concepts are included in the full descriptive picture.

This fixation happens to all the neurons that make up the detection pattern (minicolumn). Regular repetition of this procedure leads to the accumulation, on the surface of the detector neurons, of receptive clusters sensitive to specific combinations of identifiers.

It can be assumed that the more often a concept occurs together with the activity of a detection pattern, the higher the probability that its independent appearance will be detected by extrasynaptic receptors and cause a single spike. That is, if concepts are consistent in their manifestation, the neurons of one concept will respond with a single spike whenever the wave description contains another concept associatively related to it.
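This accumulation can be caricatured in a few lines of Python (entirely my reading of the passage: co-occurrence counts stand in for receptive clusters, and a fixed threshold stands in for the probability of forming one):

```python
from collections import Counter

class DetectorNeuron:
    def __init__(self, cluster_threshold=3):
        self.counts = Counter()                  # co-occurrence tallies
        self.cluster_threshold = cluster_threshold

    def evoked_activity(self, surrounding_wave):
        # High membrane potential: memorize the surrounding wave picture.
        self.counts.update(surrounding_wave)

    def single_spike(self, wave):
        # A single spike if the wave carries an identifier that has
        # co-occurred with this detector often enough to form a "cluster".
        return any(self.counts[i] >= self.cluster_threshold for i in wave)

apple = DetectorNeuron()
for _ in range(3):
    apple.evoked_activity({"pie", "tree"})   # "apple" seen with these repeatedly
print(apple.single_spike({"pie"}))           # True: associatively linked
print(apple.single_spike({"wheel"}))         # False: never co-occurred
```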

A simultaneous single spike of the neurons of a detection pattern is the basis for launching the corresponding identifier wave. It turns out that by specifying one concept and propagating its identifier wave, we can obtain in the wave description a whole set of concepts associatively connected with it. This can be compared to how a database, in response to a query, lists all the records that fall under the query's criteria. After receiving the list from the database, we apply additional algorithms to continue working with it. The brain can be imagined in the same way. Gaining access to associations is just one stage of its work. Additional restrictions can be imposed on the activity of neurons, allowing only a portion of the associations into the wave description. The essence of these additional restrictions largely determines the information procedures inherent in thinking.
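Continuing the previous sketch, the "query" stage might then look like this: launch one concept's wave and collect every detector that answers with a single spike (again only an illustration of my reading):

```python
def associations(detectors, wave):
    """Return the concepts whose detectors answer the wave with a single
    spike; this set is the associative 'query result'."""
    return [name for name, neuron in detectors.items()
            if neuron.single_spike(wave)]

detectors = {"apple": apple}                 # detector patterns built earlier
print(associations(detectors, {"pie"}))      # ['apple']
```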


Continued

Previous parts:
Part 1. Neuron
Part 2. Factors
Part 3: Perceptron, convolutional network
Part 4. Background activity
Part 5. The brain waves
Part 6. System projections
Part 7: Human-Computer Interface
Part 8: Allocation of factors in the wave networks
Part 9. Patterns of neuron detectors. Back projection
Part 10: Spatial self-organization

Alex Redozubov (2014)

Source: habrahabr.ru/post/215701/
