Researchers at the Institute for Neural Computation at Ruhr University Bochum, Germany, have built a computer model that learns spatial information in a pattern similar to that observed in rodents.
In the process, individual sequences of nerve cell activity in the hippocampus are replayed repeatedly according to specific priorities. If an artificial intelligence follows the same pattern, it learns spatial information more quickly than if the sequences were replayed at random. Nicolas Diekmann and Professor Sen Cheng published their findings in the journal eLife on March 14, 2023.
The brain revisits routes while we sleep
The hippocampus is a brain region of great importance for memory formation. This has been illustrated by famous cases such as that of the patient H.M., who was unable to form new memories after large parts of his hippocampus had been removed. Studies on rodents have demonstrated the role of the hippocampus in spatial learning and navigation.
An important discovery in this context was cells that fire at specific locations, known as place cells. “They play a role in a fascinating phenomenon known as replay,” explains Nicolas Diekmann. “When an animal moves around, certain place cells fire one after the other along the animal’s route. Later, at rest or during sleep, the same place cells can be reactivated either in the same order as they were experienced or in reverse order.”
The sequences observed during replay don’t just reflect earlier behavior. Sequences can also be reassembled: they can adapt to structural changes in the environment or represent places the animal has seen but not yet visited.
“We were interested in how the hippocampus produces such a variety of replay types efficiently and what purpose they serve,” says Nicolas Diekmann. The researchers therefore built a computer model in which an artificial agent learns spatial information. Specifically, they measured how quickly the agent finds the exit from a given spatial environment: the better the agent knows the environment, the faster it escapes.
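To make that setup concrete, here is a minimal sketch, not the authors’ published model: a simple tabular Q-learning agent in a hypothetical 5×5 grid world, with learning progress measured as the number of steps needed to reach the exit in each episode. All names, sizes, and parameters are illustrative assumptions.

```python
# Minimal sketch (not the published model): a tabular Q-learning agent in a
# hypothetical 5x5 grid world; fewer steps to the exit indicate better learning.
import numpy as np

rng = np.random.default_rng(0)

GRID = 5                                       # assumed grid size
EXIT = (4, 4)                                  # assumed goal ("exit") location
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right

def step(state, action):
    """Move within the grid; reaching the exit yields a reward of 1."""
    r, c = state
    dr, dc = ACTIONS[action]
    nxt = (min(max(r + dr, 0), GRID - 1), min(max(c + dc, 0), GRID - 1))
    reward = 1.0 if nxt == EXIT else 0.0
    return nxt, reward, nxt == EXIT

Q = np.zeros((GRID, GRID, len(ACTIONS)))       # action values
alpha, gamma, eps = 0.1, 0.9, 0.1              # learning rate, discount, exploration

for episode in range(50):
    state, steps, done = (0, 0), 0, False
    while not done and steps < 1000:
        if rng.random() < eps:
            a = int(rng.integers(len(ACTIONS)))
        else:
            qs = Q[state]
            a = int(rng.choice(np.flatnonzero(qs == qs.max())))  # greedy, random tie-break
        nxt, reward, done = step(state, a)
        # one-step Q-learning update
        Q[state][a] += alpha * (reward + gamma * np.max(Q[nxt]) - Q[state][a])
        state, steps = nxt, steps + 1
    print(f"episode {episode:2d}: {steps} steps to the exit")
```

In such a sketch, the steps-per-episode curve falls as the agent learns the layout, which is the kind of escape-speed measure the article describes.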
Playback follows certain rules
The AI agent likewise learns by replaying neuronal sequences. However, they are not played back at random but are prioritized according to certain rules. “Sequences are played back stochastically according to their prioritization,” points out Diekmann.
Familiar sequences are prioritized. Positions associated with a reward are also played back more frequently. “Our model is biologically plausible, generates a manageable computational overhead and learns faster than agents where sequences are replayed at random,” says Nicolas Diekmann. “This gives us a little more detail on how the brain learns.”
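As an illustration of this idea, the following is a minimal sketch of stochastic, prioritized replay, not the published model: stored transitions are sampled with probabilities that favor frequently experienced (familiar) and rewarded transitions; the sampled transitions would then drive offline value updates. The class, method, and parameter names are hypothetical assumptions.

```python
# Minimal sketch (hypothetical names, not the published model): transitions are
# replayed stochastically, with priorities favoring familiarity and reward.
import numpy as np

rng = np.random.default_rng(1)

class PrioritizedReplayBuffer:
    def __init__(self):
        self.transitions = []   # (state, action, reward, next_state) tuples
        self.strength = []      # how often each transition has been experienced

    def add(self, transition):
        if transition in self.transitions:
            i = self.transitions.index(transition)
            self.strength[i] += 1.0                 # familiar transitions grow stronger
        else:
            self.transitions.append(transition)
            self.strength.append(1.0)

    def sample(self, n, reward_bonus=2.0):
        # priority = experience strength, boosted for rewarded transitions
        prio = np.array([s + reward_bonus * t[2]
                         for s, t in zip(self.strength, self.transitions)])
        p = prio / prio.sum()
        idx = rng.choice(len(self.transitions), size=n, replace=True, p=p)
        return [self.transitions[i] for i in idx]

# usage: the sampled transitions would feed offline Q-value updates during "rest"
buffer = PrioritizedReplayBuffer()
buffer.add(((0, 0), 1, 0.0, (1, 0)))
buffer.add(((1, 0), 1, 0.0, (2, 0)))
buffer.add(((2, 0), 3, 1.0, (2, 1)))   # rewarded transition: replayed more often
buffer.add(((0, 0), 1, 0.0, (1, 0)))   # repeated transition: becomes more familiar
for s, a, r, s_next in buffer.sample(5):
    print(s, a, r, s_next)
```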
More information:
Nicolas Diekmann et al., A model of hippocampal replay driven by experience and environmental structure facilitates spatial learning, eLife (2023). DOI: 10.7554/eLife.82301