Neural networks are composed of interconnected nodes that mimic the neurons in the human brain. When learning, the brain creates synapses, or connections between neurons, in the neocortex, the region of the brain responsible for higher-level cognition. Meanwhile, the hippocampus converts short-term memories into long-term ones and preserves knowledge.
While the field of neuroscience still has much to discover about the brain, we do know that the brain excels at reorganizing its own wiring. Neuroplasticity, or brain plasticity, refers to the brain’s ability to restructure itself for continual learning: synaptic connections that are used often grow stronger, while those used less frequently weaken and eventually disappear.
Plasticity is what allows people to regain lost abilities, such as speech or movement, after a traumatic brain injury. Without neural plasticity, humans would not be able to learn as they grow. The brains of babies and young children are far more plastic, which is why they learn languages so much more easily than typical adults.
Artificial neural networks work similarly in that they adjust their weights in response to new data, much as the brain forges new synaptic connections. The hidden layers between a network's input and output can shift over time. When a network prioritizes new data too heavily over prior knowledge, it can over-adjust its weights: rather than expanding its knowledge, the model effectively overwrites its previous knowledge with the new data, a failure mode known as catastrophic forgetting.
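To see this over-adjustment in miniature, here is a minimal sketch in NumPy (the two-task setup, variable names, and hyperparameters are all illustrative assumptions, not drawn from any particular system). A linear model is trained on task A, then fine-tuned only on a conflicting task B, and its error on task A is measured again; plain gradient descent pulls the weights toward whatever data arrived last.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: the same inputs must map to conflicting targets.
X = rng.normal(size=(100, 5))
w_task_a = rng.normal(size=5)   # assumed ground-truth weights for task A
w_task_b = -w_task_a            # task B directly conflicts with task A
y_a = X @ w_task_a
y_b = X @ w_task_b

def mse(w, X, y):
    """Mean squared error of a linear model with weights w."""
    return np.mean((X @ w - y) ** 2)

def train(w, X, y, lr=0.1, steps=200):
    """Plain gradient descent on MSE; nothing protects old knowledge."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(X)
        w = w - lr * grad
    return w

w = np.zeros(5)
w = train(w, X, y_a)                     # learn task A
print(f"after task A: loss_A = {mse(w, X, y_a):.4f}")

w = train(w, X, y_b)                     # fine-tune on task B only
print(f"after task B: loss_A = {mse(w, X, y_a):.4f}, "
      f"loss_B = {mse(w, X, y_b):.4f}")
```

Running the sketch prints a near-zero task-A loss after the first phase and a large one after the second: nothing in the update rule distinguishes weights that encode old knowledge from weights that are free to change, so fitting task B erases task A.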