Predictive coding is a leading neuroscience theory of how the brain works. Can it be useful for AI?
Current AI systems rely on deep neural networks trained by backpropagation. However, this method has limitations, such as inefficiency and a lack of robustness, and it is unlikely to resemble how the brain learns. An alternative approach called predictive coding, inspired by neuroscience theories, is a candidate to overcome these limitations.
Predictive coding posits that the brain has an internal model of the world. Top-down connections transmit predictions to lower levels, while bottom-up connections transmit prediction errors. By minimizing these errors across a hierarchy, the brain performs inference and learning.
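The loop described above can be sketched in a few lines of code. This is a minimal, illustrative toy model (not code from the paper): a single latent layer generates a prediction of the observation through a weight matrix, inference settles the latent state by descending the prediction error, and learning updates the weights with a local, Hebbian-like rule driven by that same error. All names and hyperparameters here are assumptions chosen for the sketch.

```python
import numpy as np

# Toy predictive coding model (illustrative sketch, not the paper's code).
# A latent state x generates a prediction W @ x of the observation y.
# Inference: gradient descent on x minimizes the prediction error plus a
# Gaussian prior. Learning: a local error-driven update of W -- no
# backpropagation through a deep stack is required.

rng = np.random.default_rng(0)
D_OBS, D_LAT = 8, 4

# Ground-truth generative process, used only to synthesize data.
W_true = rng.normal(size=(D_OBS, D_LAT))

def infer(y, W, steps=100, lr=0.05):
    """Settle the latent state x by descending the free energy
    F = 0.5*||y - W x||^2 + 0.5*||x||^2 (unit-variance Gaussians)."""
    x = np.zeros(W.shape[1])
    for _ in range(steps):
        e = y - W @ x            # bottom-up prediction error
        x += lr * (W.T @ e - x)  # error feedback plus top-down prior
    return x

def train(data, epochs=50, lr_w=0.05):
    W = rng.normal(scale=0.1, size=(D_OBS, D_LAT))
    errors = []
    for _ in range(epochs):
        total = 0.0
        for y in data:
            x = infer(y, W)
            e = y - W @ x
            W += lr_w * np.outer(e, x)  # local, Hebbian-like update
            total += float(e @ e)
        errors.append(total / len(data))
    return W, errors

data = [W_true @ rng.normal(size=D_LAT) for _ in range(20)]
W, errors = train(data)
print(f"mean squared error: {errors[0]:.3f} -> {errors[-1]:.3f}")
```

Note that both updates use only locally available quantities (the error at a layer and the activity of its neighbor), which is what makes the scheme attractive for parallel and neuromorphic hardware.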
Predictive coding as a free-energy minimization schema
A new paper by researchers from several labs, led primarily by the VERSES AI Research Lab and including renowned neuroscientist Karl Friston, provides a comprehensive overview of predictive coding, covering its theoretical foundations, prominent computational accounts, diverse applications to machine learning problems, plausibility in neuroscience, and hardware considerations.
They see key advantages of predictive coding in its robustness and its potential for highly parallel and neuromorphic implementations. It is also consistent with findings on cortical structure and function, making it biologically plausible. The authors also explore how “the perception of what PC is has changed through the years”, settling on the definition “Predictive coding is an evidence maximization (or free-energy minimization) schema for hierarchical Gaussian generative models”, following Friston’s work on the free energy principle.
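To unpack that definition, the objective for a hierarchical Gaussian generative model can be written out explicitly. The following is a standard textbook form (a sketch consistent with the free-energy-principle literature, not an equation quoted from the paper), where each layer's activity $x_\ell$ is predicted by the layer above through weights $W_{\ell+1}$ and a nonlinearity $f$:

```latex
% Free energy as a sum of precision-weighted squared prediction errors
% across the hierarchy (Gaussian assumptions at every layer):
F \;=\; \sum_{\ell=0}^{L-1}
  \frac{1}{2\sigma_\ell^{2}}
  \bigl\| \underbrace{x_\ell - W_{\ell+1}\, f(x_{\ell+1})}_{\text{prediction error } e_\ell} \bigr\|^{2}
% Inference minimizes F with respect to the activities x_ell;
% learning minimizes F with respect to the weights W_ell.
% Maximizing model evidence corresponds to minimizing F.
```

Both inference and learning descend the same objective, which is why a single quantity, the prediction error at each level, drives the whole system.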
Predictive Coding could become a powerful alternative paradigm in AI
While predictive coding has not yet reached the scale of backpropagation-based methods, recent progress shows great promise for advancing brain-inspired artificial intelligence, they write. Predictive coding has shown success in tasks such as discriminative learning, natural language processing, computer vision, temporal modeling, lifelong learning, and robotics, with some examples matching the performance of backpropagation on MNIST or CIFAR-10.
However, while there is potential for predictive coding, the field is still “far away from large-scale applications that might call for significant investment in PC research,” the team says. The main goal of the survey, they say, is to encourage researchers to build on prior results and tackle the open challenges of predictive coding.
“Community effort will be needed to advance predictive coding from both the software and the hardware standpoint; particularly, to develop computational schemes that exploit the advantages on offer, such as its parallelism and sparse, local, and potentially energy-efficient computations. Although research progress in predictive coding has been consistent over the past decades, we may be only starting to realize the benefits afforded to artificial intelligence by reverse engineering the cortex and other biological structures.”
From the paper.
With further research, they suggest, it could become a powerful alternative paradigm in AI.