365NEWSX

Artificial Neural Networks Learn Better When They Spend Time Not Learning at All - Neuroscience News
Nov 20, 2022

Summary: “Off-line” periods during AI training mitigated “catastrophic forgetting” in artificial neural networks, mimicking the learning benefits sleep provides in the human brain.

Artificial neural networks leverage the architecture of the human brain to improve numerous technologies and systems, from basic science and medicine to finance and social media.

In some ways, they have achieved superhuman performance, such as computational speed, but they fail in one key respect: when artificial neural networks learn sequentially, new information overwrites previous information, a phenomenon called catastrophic forgetting.
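The failure mode can be sketched with a toy model (purely illustrative; the setup and numbers below are not from the paper): a one-weight linear model trained to convergence on one task, then trained sequentially on a conflicting task, loses the first task entirely.

```python
# Illustrative sketch of catastrophic forgetting: a single-weight linear
# model y = w * x, trained by gradient descent on two conflicting tasks.

def train(w, data, lr=0.1, epochs=100):
    """Gradient descent on squared error for the model y = w * x."""
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x
            w -= lr * grad
    return w

def loss(w, data):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

task_a = [(1.0, 2.0), (2.0, 4.0)]    # consistent with w = 2
task_b = [(1.0, -1.0), (2.0, -2.0)]  # consistent with w = -1

w = train(0.0, task_a)
loss_a_before = loss(w, task_a)   # near zero: task A is learned

w = train(w, task_b)              # sequential training on task B
loss_a_after = loss(w, task_a)    # large: task A has been overwritten
```

Training on task B drives the weight away from the value task A required, so performance on task A collapses even though nothing about task A was explicitly "unlearned".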

Writing in the November 18, 2022 issue of PLOS Computational Biology, senior author Maxim Bazhenov and colleagues discuss how biological models may help mitigate catastrophic forgetting in artificial neural networks, boosting their utility across a spectrum of research interests.

They found that when spiking neural networks were trained on a new task, but with occasional off-line periods that mimicked sleep, catastrophic forgetting was mitigated.

Memories are represented in the human brain by patterns of synaptic weights, that is, the strength or amplitude of the connections between pairs of neurons.
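The idea of a memory stored as a weight pattern can be made concrete with a toy neuron (all names and values here are illustrative): the unit responds most strongly to the input pattern that matches its weights.

```python
# Sketch: a "memory" encoded as a pattern of synaptic weights.
# The neuron's response is the weighted sum of its inputs, so the
# stored pattern itself evokes the largest response.

stored_pattern = [1.0, -1.0, 1.0]   # weight pattern representing the memory

def response(weights, inputs):
    """Weighted sum of presynaptic inputs (a linear neuron)."""
    return sum(w * x for w, x in zip(weights, inputs))

matching = response(stored_pattern, [1.0, -1.0, 1.0])   # matches the weights
other = response(stored_pattern, [-1.0, 1.0, 1.0])      # a different pattern
```

Changing the weights changes which patterns the neuron responds to, which is why overwriting weights during new learning erases old memories.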

“Synaptic plasticity, the capacity to be altered or molded, is still in place during sleep and it can further enhance synaptic weight patterns that represent the memory, helping to prevent forgetting or to enable transfer of knowledge from old to new tasks.”

When Bazhenov and colleagues applied this approach to artificial neural networks, they found that it helped the networks avoid catastrophic forgetting.

“Sleep prevents catastrophic forgetting in spiking neural networks by forming a joint synaptic weight representation” by Maxim Bazhenov et al


Artificial neural networks overwrite previously learned tasks when trained sequentially, a phenomenon known as catastrophic forgetting.

Here we used a spiking network to study the mechanisms behind catastrophic forgetting and the role of sleep in preventing it.

Interleaving new-task training with periods of off-line reactivation, mimicking biological sleep, mitigated catastrophic forgetting by constraining the network's synaptic weight state to the previously learned manifold, while allowing the weight configuration to converge towards the intersection of the manifolds representing the old and new tasks.
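A minimal sketch of that idea (a stand-in for the paper's spiking model, using an illustrative two-weight linear model): purely sequential training on a new task overwrites the old one, while interleaving replayed old-task samples lets the weights settle at a configuration satisfying both tasks, the intersection of the two solution sets.

```python
# Illustrative sketch: sequential training vs. interleaved "replay" of
# old-task samples, on a two-weight linear model y = w[0]*x[0] + w[1]*x[1].

def step(w, x, y, lr=0.05):
    """One gradient step on squared error; returns the updated weights."""
    err = w[0] * x[0] + w[1] * x[1] - y
    return [w[0] - lr * 2 * err * x[0], w[1] - lr * 2 * err * x[1]]

def train_seq(w, data, epochs=200):
    for _ in range(epochs):
        for x, y in data:
            w = step(w, x, y)
    return w

def mse(w, data):
    return sum((w[0] * x[0] + w[1] * x[1] - y) ** 2 for x, y in data) / len(data)

task_a = [((1.0, 1.0), 2.0)]   # satisfied by any weights with w0 + w1 = 2
task_b = [((1.0, 0.0), 0.0)]   # satisfied by any weights with w0 = 0

w = train_seq([0.0, 0.0], task_a)            # converges near (1, 1)
w_seq = train_seq(w, task_b)                 # near (0, 1): task A forgotten
w_replay = train_seq(w, task_b + task_a)     # interleave replayed A samples
```

With replay, the weights drift toward (0, 2), which lies on both tasks' solution manifolds, so neither task is lost; this loosely mirrors how the paper's sleep-like reactivation steers weights toward the intersection of old- and new-task manifolds.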

The study reveals a possible strategy of synaptic weight dynamics that the brain applies during sleep to prevent forgetting and optimize learning.

Summarized by 365NEWSX ROBOTS
