Attractor networks are a special category of recurrent dynamical networks in which the dynamics tend to stabilize over time.
The attractor itself can be a fixed point (a steady state), a cycle (a repeating sequence of states), chaotic (locally unstable but globally bounded), or stochastic (random). Attractor networks are widely used in biologically inspired machine learning and computational neuroscience to model neural functions, including associative memory and motor behaviour.
In an attractor network, the system of nodes evolves towards a closed subset of states A called the attractor, at which the global dynamics stabilize. When the network settles into a cyclic attractor, it repeatedly traverses a fixed set of states along a limit cycle. Bounded attractors that are traversed continuously without ever repeating are called chaotic attractors.
The collection of all possible node states constitutes the network's state space, and the set of states on an attractor is called the attractor space. The attractor network is initialized with an input pattern; the network nodes can have a different dimensionality than that pattern. The trajectory is the sequence of states the network traverses as it evolves towards the attractor. In the language of dynamical systems, the set of states that evolve towards a given attractor is known as its basin of attraction.
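To make these terms concrete, here is a minimal sketch (the tanh map is an illustrative choice, not a model from this article) of a one-dimensional system with two fixed-point attractors; the sign of the initial state determines which basin of attraction the trajectory lies in:

```python
import numpy as np

# A one-dimensional "network" with update rule x[t+1] = tanh(2 * x[t]).
# It has two stable fixed points (about +/-0.9575) and an unstable one
# at 0, so the sign of the initial state decides which basin of
# attraction the trajectory lies in.

def trajectory(x0, steps=20):
    """Return the sequence of states visited starting from x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(np.tanh(2 * xs[-1]))
    return xs

for x0 in (0.1, -0.4):
    print(f"start {x0:+.2f} -> settles at {trajectory(x0)[-1]:+.4f}")
# start +0.10 -> settles at +0.9575  (basin of the positive fixed point)
# start -0.40 -> settles at -0.9575  (basin of the negative fixed point)
```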
Attractors come in many flavours and can capture a variety of network dynamics; however, the typical network implements fixed-point attractors.
The fixed-point attractor is exemplified by the Hopfield network. In the standard implementation, the fixed points represent stored memories; these models have been used to explain associative learning and memory, categorization, and pattern completion. Hopfield networks have an inherent energy function that decreases with every update, which causes them to converge asymptotically to a stationary state. In one type of point attractor network, the network is initialized with an input pattern, and it settles into a stable state after the input is removed. In another type, predetermined weights are probed with an input: if the stable state differs before and after the input, the network serves as a model of associative memory; if there is no difference between the states before and after the input, the network can be used for pattern completion.
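As an illustration, here is a minimal Hopfield-style sketch (the patterns, network size, and update budget are arbitrary choices for this example): two bipolar patterns are stored with the Hebbian outer-product rule, and asynchronous updates complete a corrupted probe back towards the stored memory.

```python
import numpy as np

rng = np.random.default_rng(0)

# Store bipolar (+1/-1) patterns with the Hebbian outer-product rule.
patterns = np.array([
    [ 1, -1,  1, -1,  1, -1,  1, -1],
    [ 1,  1,  1,  1, -1, -1, -1, -1],
])
n = patterns.shape[1]
W = (patterns.T @ patterns) / n        # weight matrix
np.fill_diagonal(W, 0)                 # no self-connections

def recall(state, steps=50):
    """Asynchronous updates; the energy never increases, so the network
    settles into a fixed-point attractor (a stored memory or,
    possibly, a spurious one)."""
    state = state.copy()
    for _ in range(steps):
        i = rng.integers(n)            # pick one unit at random
        state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Pattern completion: corrupt two bits of the first memory, then recall.
probe = patterns[0].copy()
probe[:2] *= -1
print(recall(probe))                   # typically recovers patterns[0]
```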
Oculomotor control research makes use of both line and plane attractors. Line attractors, also known as neural integrators, describe how eye position is held steady in response to eye-movement inputs. Ring attractors have been used to model the head-direction systems of rats and mice.
Cyclic attractors can be used to model central pattern generators, the circuits that drive animals' oscillatory activities such as chewing, walking, and breathing.
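One simple way to see a cyclic attractor numerically is the van der Pol oscillator, used here purely as a generic stand-in for a rhythm-generating circuit (the equation and parameters are not from this article): trajectories from different initial states converge onto the same limit cycle.

```python
# The van der Pol oscillator x'' = mu*(1 - x^2)*x' - x, integrated
# with a simple Euler step.  Trajectories started from different
# states all converge onto the same limit cycle (amplitude close to 2
# for mu = 1), the defining property of a cyclic attractor.
mu, dt = 1.0, 0.01

def step(x, v):
    return x + dt * v, v + dt * (mu * (1 - x * x) * v - x)

for x0, v0 in ((0.1, 0.0), (3.0, -2.0)):   # two different starts
    x, v, peak = x0, v0, 0.0
    for t in range(30000):
        x, v = step(x, v)
        if t > 20000:                      # measure after transients
            peak = max(peak, abs(x))
    print(f"start ({x0}, {v0}) -> cycle amplitude ~{peak:.2f}")
# Both runs report an amplitude of about 2: the same limit cycle.
```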
Chaotic attractors (also known as strange attractors) have been hypothesized to underlie patterns of odour recognition. Although chaotic attractors would have the advantage of converging on limit cycles more quickly, there is as yet no experimental evidence to support this view.
Continuous attractors (also known as continuous attractor neural networks) have stable states (fixed points) that correspond to neighbouring values of a continuous variable.
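The sketch below illustrates the idea with a toy ring attractor (threshold-linear rate units and a cosine weight profile; the parameters are illustrative, not taken from a specific published model): once cued, a bump of activity persists at the cued angle, and every position along the ring is an equally stable state.

```python
import numpy as np

# Toy continuous (ring) attractor: N threshold-linear rate units on a
# ring, coupled by a cosine weight profile.  A bump of activity
# persists at whatever angle it was cued to; the bump position is the
# continuous variable encoded by the network's stable states.
N, dt, tau = 100, 0.1, 1.0
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
W = (-2.0 + 4.0 * np.cos(theta[:, None] - theta[None, :])) / N

r = np.maximum(0.0, np.cos(theta - 1.0))    # cue a bump near angle 1.0
for _ in range(500):                        # run with uniform drive only
    r += dt / tau * (-r + np.maximum(0.0, W @ r + 0.5))

print("bump peak at angle:", round(theta[np.argmax(r)], 2))  # ~1.0
```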
The observed activity of grid cells can be satisfactorily explained by assuming the presence of ring attractors in the medial entorhinal cortex. Similar ring attractors have more recently been postulated to exist in the lateral entorhinal cortex, with a role extending to the encoding of new episodic memories.
Attractor networks have mostly been implemented as memory models with fixed-point attractors. However, they have proved largely impractical for computational applications because of difficulties in designing the attractor landscape and the network wiring, which lead to spurious attractors and poorly conditioned basins of attraction. In addition, training attractor networks is typically quite computationally intensive compared with other methods such as k-nearest-neighbour classifiers.
Locomotor function, memory, and decision-making are just a few examples of the many biological processes that can be better understood with the help of these networks.