The Logic of Thinking. Part 8. Factor extraction in wave networks
In the previous parts we described a model of a neural network that we call a wave network. Our model differs significantly from traditional wave models. Those are usually based on the premise that each neuron has its own characteristic oscillations; in the classical models, the joint behavior of such rhythmically pulsing neurons leads to a general synchronization and the emergence of global rhythms. We put an entirely different meaning into the wave activity of the cortex. We showed that neurons are able to store information not only by changing the sensitivity of their synapses, but also through changes in the membrane receptors located outside the synapses. As a result, a neuron acquires the ability to respond to a large set of specific patterns of activity of the surrounding neurons. We showed that the activation of several neurons forming a certain pattern always launches a wave propagating across the cortex. Such a wave is not simply a disturbance passed from neuron to neuron; as it moves, it creates a specific pattern of neuronal activity, an identifier unique to the pattern that emitted it. This means that at any place in the cortex, from the activity brought by the wave, one can determine which pattern originally launched it. We also showed that over small fiber bundles wave signals can be projected onto other areas of the cortex. Now let us talk about how the synaptic learning of neurons can occur in our wave networks.
Extraction of wave factors
Take an arbitrary cortical neuron (figure below). It has a receptive field, within which it maintains a dense network of synaptic connections. These connections reach both the surrounding neurons and the axons entering the cortex that carry signals from other parts of the brain. Through them the neuron can monitor the activity of a small surrounding area. If the area of cortex to which it belongs receives a topographic projection, the neuron picks up signals from those axons of the projection that fall within its receptive field. If patterns of evoked activity are active on the cortex, the neuron sees fragments of their identifier waves as they pass by. The same holds for the waves that emerge from the wave tunnels carrying wave patterns from one part of the brain to another.
Information sources for factor extraction. 1 - cortical neuron, 2 - receptive field, 3 - topographic projection, 4 - evoked activity patterns, 5 - wave tunnel
In the activity observed by the neuron within its receptive field, regardless of its origin, one main principle holds: each unique phenomenon corresponds to a unique pattern of activity, specific to that phenomenon. A repeated phenomenon means a repeated pattern of activity visible to the neuron.
If an event contains several phenomena, several patterns are superimposed on each other. The superimposed patterns may fail to coincide in time, that is, their wave fronts may miss each other. To account for this, we choose an observation interval equal to the period of one wave cycle and accumulate, for each input of the neuron, the synaptic activity arriving during this time. That is, we simply sum the spikes that come to each particular input. As a result we obtain an input vector describing the integral picture of synaptic activity over the cycle. With such an input vector we can apply to the neuron all the previously described learning methods. For example, we can turn the neuron into a Hebbian filter and make it extract the first principal component contained in the input data stream. In essence, it will identify those inputs on which signals tend to appear together. Applied to wave identifiers this means that the neuron determines which waves regularly occur together and tunes its weights to recognize this combination. Extraction of such a factor will then show up as evoked activity of the neuron whenever it recognizes a familiar combination of identifiers.
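As a toy illustration of the accumulation step (the function name and the numbers are hypothetical), summing the spikes that hit each input during one wave cycle might look like this:

```python
def accumulate_cycle(spikes, n_inputs, cycle_start, cycle_len):
    """Sum the spikes arriving at each synaptic input during one wave cycle.

    spikes: iterable of (input_index, time) pairs.
    Returns a list where element i counts the spikes on input i that fell
    inside the interval [cycle_start, cycle_start + cycle_len).
    """
    x = [0] * n_inputs
    for i, t in spikes:
        if cycle_start <= t < cycle_start + cycle_len:
            x[i] += 1
    return x

# Toy example: three inputs, spikes scattered around one 100 ms cycle.
spikes = [(0, 5), (0, 30), (2, 40), (1, 120), (2, 95)]
x = accumulate_cycle(spikes, n_inputs=3, cycle_start=0, cycle_len=100)
# x is the input vector: the integral picture of the cycle's activity.
```

The resulting vector can then be fed to any of the learning rules discussed earlier.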
Thus the neuron acquires the properties of a detector neuron tuned to a certain phenomenon reflected in its inputs. Moreover, the neuron will fire not merely as a presence sensor (phenomenon present or absent); the level of its activity will signal how strongly the factor it was trained for is expressed. Interestingly, the nature of the synaptic signals is not essential here: the neuron can equally well be tuned to wave patterns, to patterns of a topographic projection, or to their joint activity.
It should be noted that Hebbian learning extracting the first principal component is given here purely for illustration, to show that the local receptive field of every cortical neuron contains all the information needed to train it as a universal detector. The real collective learning algorithms of neurons, which produce a wide variety of factors, are organized somewhat more intricately.
Stability versus plasticity
Hebbian learning is very illustrative; it is useful for conveying the essence of iterative learning. If we speak only of excitatory connections, then as the neuron is trained its weights become tuned to a certain image. For a linear combiner the activity is defined as y = Σ w_i·x_i, where x_i are the input signals and w_i the synaptic weights.
When the signal coincides with the image encoded in the synaptic weights, it causes a strong response of the neuron; a mismatch causes a weak one. Training in the Hebbian manner, we strengthen the weights of the synapses that receive a signal at the moments when the neuron itself is active, and weaken the weights of those synapses on which there is no signal at that time.
To avoid unbounded growth of the weights, a normalization procedure is introduced that keeps their total within certain limits. Such logic leads, for example, to Oja's rule: Δw_i = η·y·(x_i - y·w_i), where η is the learning rate and y the output of the neuron.
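A minimal sketch of Oja's rule in Python (the synthetic data and all parameters here are assumptions): on zero-mean data whose main variance lies along the direction (1, 1), the weight vector drifts toward that first principal component, and the subtractive term keeps its norm near 1 without any explicit renormalization.

```python
import random

def oja_step(w, x, lr):
    """One update of Oja's rule: w += lr * y * (x - y * w), where y = w·x.

    The subtractive term -lr * y**2 * w bounds the weight norm, so w
    converges toward the unit first principal component of the inputs."""
    y = sum(wi * xi for wi, xi in zip(w, x))
    return [wi + lr * y * (xi - y * wi) for wi, xi in zip(w, x)]

random.seed(0)
w = [0.5, 0.5]
for _ in range(2000):
    s = random.gauss(0.0, 1.0)    # strong component shared by both inputs
    n = random.gauss(0.0, 0.1)    # weak independent component
    x = [s + n, s - n]            # main variance lies along (1, 1)
    w = oja_step(w, x, lr=0.01)

norm = sum(wi * wi for wi in w) ** 0.5
direction = [wi / norm for wi in w]
# direction is close to (1/sqrt(2), 1/sqrt(2)), the first principal
# component, and norm stays close to 1 without explicit rescaling.
```

Note that the learning rate lr is held constant here, which is exactly the situation described next: the neuron will happily retrain if the statistics of x change.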
The most annoying thing about standard Hebbian learning is the need to introduce a learning-rate coefficient that has to be decreased as the neuron learns. If this is not done, then the neuron, once trained in a certain way, will retrain whenever the nature of the input signals changes, extracting new factors specific to the altered data stream. Decreasing the learning rate, first, obviously slows down the learning process and, second, requires non-obvious methods of controlling this decrease. Clumsy handling of the learning rate can lead to "ossification" of the whole network and immunity to new data.
All this is known as the stability-plasticity dilemma. The desire to respond to new experience with change threatens the balance of previously trained neurons; stabilization, in turn, leads to new experience ceasing to affect the network and simply being ignored. We have to choose either stability or plasticity. To understand what mechanisms can help to solve this problem, let us go back to biological neurons and examine in a little more detail the mechanisms of synaptic plasticity, that is, the mechanisms through which the synaptic learning of real neurons occurs.
The essence of synaptic plasticity is that the efficiency of synaptic transmission is not constant and can change depending on the current pattern of activity. The duration of these changes can vary greatly, and different mechanisms cause them. There are several forms of plasticity (see figure below).
Dynamics of synaptic sensitivity. (A) - facilitation, (B) - augmentation and depression, (C) - post-tetanic potentiation, (D) - long-term potentiation and long-term depression (Nicholls J., Martin R., Wallace B., Fuchs P., 2003)
A short burst of spikes can cause facilitation of transmitter release from the corresponding presynaptic terminal. Facilitation appears instantly, persists during the burst, and remains noticeable for about 100 milliseconds after stimulation ends. The same short stimulation can also lead to depression, a suppression of transmitter release that lasts a few seconds. Facilitation can pass into a second phase, augmentation, similar in duration to depression.
A continuous high-frequency train of impulses is commonly called tetanus; the name comes from the tetanic muscle contraction that such a series produces. Arrival of a tetanus at a synapse can cause post-tetanic potentiation of transmitter release, observable for several minutes.
Repeated activity can cause long-term changes in synapses. One of the causes of these changes is a rise in the calcium concentration in the postsynaptic cell. A strong rise in concentration triggers a cascade of second messengers, leading to the formation of additional receptors in the postsynaptic membrane and an overall increase in receptor sensitivity. A weaker rise in concentration has the opposite effect: the number of receptors decreases and their sensitivity falls. The first state is called long-term potentiation, the second long-term depression. The duration of these changes ranges from several hours to several days (Nicholls J., Martin R., Wallace B., Fuchs P., 2003).
How the sensitivity of a particular synapse changes in response to incoming impulses, whether potentiation or depression occurs, depends on many processes. We may assume that it largely depends on how the overall picture of the neuron's excitation develops and on what stage of training the neuron is at.
The described behavior of synaptic sensitivity allows us to further assume that a neuron is capable of the following operations:
- quickly tuning itself in a certain way (facilitation);
- resetting this tuning after an interval of about 100 milliseconds, or transferring it to a longer retention (augmentation and depression);
- resetting the state of augmentation and depression, or converting it into long-term potentiation or long-term depression.
Adaptive resonance networks (ART)
A practical embodiment of this kind of logic is the ART network. At first an ART network knows nothing. The first image presented to it creates a new class, and the image itself is copied as the prototype of that class. Subsequent images are compared with the existing classes. If an image is close enough to an already created class, that is, causes resonance, then a corrective retraining of the class prototype by that image takes place. If the image is unique and does not resemble any of the prototypes, a new class is created, and the new image becomes its prototype.
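The procedure just described can be sketched in a few lines of Python. This is a toy illustration, not canonical ART-1: the use of cosine similarity and the vigilance and lr parameters are assumptions made for clarity.

```python
def similarity(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

class SimpleART:
    """Toy ART-style classifier (a sketch, not canonical ART-1).

    vigilance: similarity threshold at which an input "resonates".
    lr: how strongly a resonating prototype is pulled toward the input.
    """
    def __init__(self, vigilance=0.9, lr=0.5):
        self.vigilance = vigilance
        self.lr = lr
        self.prototypes = []

    def present(self, x):
        # Find the best-matching existing class.
        best, best_sim = None, -1.0
        for k, p in enumerate(self.prototypes):
            s = similarity(x, p)
            if s > best_sim:
                best, best_sim = k, s
        if best is not None and best_sim >= self.vigilance:
            # Resonance: corrective retraining of the matching prototype.
            p = self.prototypes[best]
            self.prototypes[best] = [
                (1 - self.lr) * pi + self.lr * xi for pi, xi in zip(p, x)
            ]
            return best
        # No resonance: the input founds a new class.
        self.prototypes.append(list(x))
        return len(self.prototypes) - 1

net = SimpleART(vigilance=0.9)
a = net.present([1.0, 0.0, 0.0])   # first image creates class 0
b = net.present([0.9, 0.1, 0.0])   # similar image resonates with class 0
c = net.present([0.0, 1.0, 0.0])   # distinct image creates class 1
```

The vigilance parameter directly embodies the stability-plasticity trade-off: raise it and the network creates many narrow classes; lower it and old prototypes absorb almost everything.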
If we assume that the formation of detectors in cortical neurons occurs in a similar manner, then the phases of synaptic plasticity can be given the following meanings:
- a neuron that has not yet acquired a specialization as a detector, upon becoming active due to a wave of activation, quickly changes the weights of its synapses, tuning itself to the current picture of activity in its receptive field; these changes have the nature of facilitation and last about one beat of wave activity;
- if it turns out that there are already enough detector neurons in the immediate vicinity tuned to this stimulus, the neuron is reset; otherwise its synapses carry the image forward to a longer retention;
- if certain conditions are met during the augmentation stage, the neuron's synapses carry the image forward to long-term storage, and the neuron becomes a detector of the corresponding stimulus.
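Putting these three phases together, a hypothetical state machine for a would-be detector neuron might look like this. All names and transition conditions here are assumptions made for illustration; the phase names follow the text.

```python
class DetectorCandidate:
    """Sketch of the proposed phase progression for a would-be detector.

    FREE -> FACILITATION        : weights quickly tuned to the activity picture
    FACILITATION -> FREE        : nearby detectors already cover the stimulus
    FACILITATION -> AUGMENTATION: otherwise, hold the image longer
    AUGMENTATION -> LONG_TERM   : consolidation conditions are met
    """
    def __init__(self):
        self.phase = "FREE"
        self.image = None

    def see_pattern(self, image):
        if self.phase == "FREE":
            self.image = list(image)   # fast tuning, lasts ~ one wave beat
            self.phase = "FACILITATION"

    def end_of_beat(self, covered_by_neighbors):
        if self.phase == "FACILITATION":
            if covered_by_neighbors:   # stimulus already has enough detectors
                self.phase, self.image = "FREE", None
            else:
                self.phase = "AUGMENTATION"   # longer retention of the image

    def consolidate(self, conditions_met):
        if self.phase == "AUGMENTATION":
            if conditions_met:
                self.phase = "LONG_TERM"      # the neuron becomes a detector
            else:
                self.phase, self.image = "FREE", None

n = DetectorCandidate()
n.see_pattern([1, 0, 1])
n.end_of_beat(covered_by_neighbors=False)
n.consolidate(conditions_met=True)
# n.phase is now "LONG_TERM": the neuron stores the image as a detector.
```

The reset transitions are what resolves the stability-plasticity dilemma in this picture: a neuron only commits to long-term storage when the stimulus is both repeated and not yet covered.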
Describing events through a set of factors, we want two things: that these factors allow what is happening to be described fully and adequately enough; and that such a description allows us to isolate the basic regularities inherent in the current events.
A well-known approach is based on optimal data compression. For example, using factor analysis we can obtain the principal components that account for the major share of variability. Keeping the values of the first few components and discarding the rest, we can significantly shorten the description. In addition, the values of the factors tell us how strongly the phenomena corresponding to these factors are expressed in the described event. But such compression has a downside. For real events the first principal factors together usually explain only a small percentage of the total variance. Although each of the minor factors is many times inferior to the first ones, it is the sum of these minor factors that carries the bulk of the information.
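To make this concrete, here is a small sketch, assuming NumPy is available and using invented synthetic "ratings" data: it computes the share of variance per principal component and shows how the variance splits between the first few components and the discarded tail.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "ratings" matrix: 500 users x 40 films, built from 3 strong
# shared factors (standing in for genre preferences) plus independent
# noise standing in for the multitude of small factors.
n_users, n_films, n_factors = 500, 40, 3
scores = rng.normal(size=(n_users, n_factors))     # users' factor values
loadings = rng.normal(size=(n_factors, n_films))   # films' factor loadings
data = scores @ loadings + rng.normal(size=(n_users, n_films))

# Principal components via SVD of the centered data matrix.
centered = data - data.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)      # variance share of each component

top = explained[:n_factors].sum()    # kept if we compress to 3 factors
tail = explained[n_factors:].sum()   # lost with the discarded components
# The first components dominate individually, yet the long tail of small
# components still carries a non-trivial share of the total variance.
```

In real rating data the tail's share is far larger than in this toy setup, which is exactly the point of the paragraph above.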
For example, if you take a few thousand films and collect the ratings given to them by hundreds of thousands of users, factor analysis can be performed on such data. The most significant will be the first four or five factors. They correspond to the basic genre directions of cinema: adventure, comedy, romance, detective fiction. For Russian users, in addition, a strong factor describing our old Soviet cinema stands out. The extracted factors have a simple interpretation. If we describe a film in the space of these factors, the description will consist of coefficients saying how strongly each factor is expressed in the film. Each user has certain genre preferences that affect his ratings, and factor analysis isolates the main directions of this influence and turns them into factors. But it turns out that the first significant factors explain only about 25% of the variance of the ratings; everything else falls to thousands of other small factors. That is, if we try to compress a film down to its portrait in the principal factors, we lose the bulk of the information.
Moreover, low explanatory power does not imply unimportance. If we take several films by one director, their ratings will most likely correlate closely with each other. The corresponding factor will explain a significant percentage of the variance of the ratings of these films, but only of these. Since this factor does not show up in other films, the share of variance it explains across the entire data volume will be negligible. Yet for precisely these films it is far more important than the first principal components. And so it is for almost all the small factors.
The arguments given for factor analysis can be carried over to other methods of encoding information. David Field, in the 1994 article "What is the goal of sensory coding?" (Field, 1994), considered these questions with respect to the mechanisms inherent in the brain. He came to the conclusion that the brain does not compress data into a compact form; rather, it tends toward an economical distributed representation: although a great variety of signs is available for description, at any one time it uses only a small fraction of them (see figure below).
Compact coding (A) and economical distributed coding (B) (Field, 1994)
Factor analysis, like many other methods of description, starts from the search for characteristic patterns and extracts the corresponding factors or class features. But there are often data sets for which this approach is quite useless. For example, if we take the position of a clock hand, it turns out that it has no preferred direction: it moves uniformly around the dial, counting off the hours and minutes.
References
Field D. (1994). What is the goal of sensory coding?
Nicholls J., Martin R., Wallace B., Fuchs P. (2003).
Previous parts:
Part 1. Neuron
Part 2. Factors
Part 3. Perceptron, convolutional networks
Part 6. The system of projections
Part 7. Human-computer interface
Next part: The Logic of Thinking. Part 9. Pattern neuron detectors. Back projections