
Unlocking the Brain’s Flexibility: A Revolutionary Algorithm for Continuous Learning

October 10, 2024

Researchers at Caltech have developed a groundbreaking algorithm inspired by the human brain’s remarkable ability to learn and adapt. The new approach, called the Functionally Invariant Path (FIP) algorithm, enables neural networks to be updated continuously with new data without losing previously encoded information. The breakthrough has wide-ranging applications, from improving recommendations in online stores to fine-tuning self-driving cars. The algorithm’s development was influenced by neuroscience research on how animals and humans rewire their brains to relearn skills after injury, and it could pave the way for more flexible and resilient artificial intelligence systems.

New algorithm enables neural networks to learn continuously
Differential geometric framework for constructing FIPs in weight space. a, Left: conventional training on a task finds a single trained network (wt) solution. Right: the FIP strategy discovers a submanifold of isoperformance networks (w1, w2…wN) for a task of interest, enabling the efficient search for networks endowed with adversarial robustness (w2), sparse networks with high task performance (w3) and for learning multiple tasks without forgetting (w4). b, Top: a trained CNN with weight configuration (wt), represented by lines connecting different layers of the network, accepts an input image x and produces a ten-element output vector, f(x, wt). Bottom: perturbation of network weights by dw results in a new network with weight configuration wt + dw with an altered output vector, f(x, wt + dw), for the same input, x. c, The FIP algorithm identifies weight perturbations θ* that minimize the distance moved in output space and maximize alignment with the gradient of a secondary objective function (∇wL). The light-blue arrow indicates an ϵ-norm weight perturbation that minimizes distance moved in output space and the dark-blue arrow indicates an ϵ-norm weight perturbation that maximizes alignment with the gradient of the objective function, L(x, w). The secondary objective function L(x, w) is varied to solve distinct machine learning challenges. d, Path sampling algorithm defines FIPs, γ(t), through the iterative identification of ϵ-norm perturbations (θ*(t)) in the weight space. Credit: Nature Machine Intelligence (2024). DOI: 10.1038/s42256-024-00902-x
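
Read literally, the caption describes a constrained optimization performed at each step along the path. One plausible way to write it down (our paraphrase of the caption, with an assumed trade-off weight β; the paper’s exact formulation may differ) is:

```latex
% One FIP step, paraphrasing the figure caption; beta is an assumed trade-off weight.
\theta^{*}(t) = \arg\min_{\lVert\theta\rVert=\epsilon}
  \Big[\, \lVert f(x, w(t)+\theta) - f(x, w(t)) \rVert^{2}
        \;-\; \beta \,\langle \theta,\, \nabla_{w} L(x, w(t)) \rangle \,\Big],
\qquad w(t+1) = w(t) + \theta^{*}(t)
```

The first term penalizes movement in output space (keeping task performance invariant), the second rewards alignment with the secondary objective, and the FIP γ(t) is the path traced by the successive weight configurations w(t).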

Crossing the Frontier in Neural Networks

Neural networks have proven very good at learning tasks such as identifying handwritten digits. Yet these models have long struggled with a problem called catastrophic forgetting: when trained on a new task, they can learn the new assignment successfully, but they “forget” how to perform the original one. For example, the artificial neural network guiding a self-driving car can be trained to recognize an apple on the road and take a specific action to avoid it, but teaching that same system a new trick typically requires reprogramming it from scratch.
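
Catastrophic forgetting is easy to reproduce. Below is a minimal, self-contained sketch (synthetic data and a tiny linear classifier in plain NumPy; an illustration of the failure mode, not anything from the Caltech work): the model is trained on task A, then naively retrained on a conflicting task B, and its accuracy on task A collapses.

```python
# Minimal catastrophic-forgetting demo on synthetic data (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def make_task(offset):
    # Two-dimensional points; the label boundary depends on `offset`,
    # so different offsets give conflicting tasks.
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + offset * X[:, 1] > 0).astype(float)
    return X, y

def train(w, X, y, lr=0.5, steps=300):
    # Plain gradient descent on the logistic loss.
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))     # sigmoid predictions
        w = w - lr * X.T @ (p - y) / len(y)    # logistic-loss gradient step
    return w

def accuracy(w, X, y):
    return (((X @ w) > 0).astype(float) == y).mean()

XA, yA = make_task(offset=1.0)    # task A
XB, yB = make_task(offset=-1.0)   # task B: a conflicting boundary

w = train(np.zeros(2), XA, yA)
print("accuracy on A after training on A:", accuracy(w, XA, yA))  # near 1.0
w = train(w, XB, yB)              # naive sequential training on task B
print("accuracy on A after training on B:", accuracy(w, XA, yA))  # drops sharply
```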

Living brains, by contrast, are far more flexible. No one is born walking, talking, or playing games; these are all acquired skills, and an individual can quickly pick up a new game without having to relearn how to walk. Taking a cue from this innate plasticity, researchers at Caltech have developed an algorithm that helps neural networks continue to learn from new datasets, eliminating the need to train them from scratch.

The FIP Algorithm: The Brain’s Hidden Power

The researchers designed the new algorithm, called the Functionally Invariant Path (FIP) algorithm, in the lab of Matt Thomson, assistant professor of computational biology and Heritage Medical Research Institute (HMRI) Investigator. The work was inspired by neuroscience research at Caltech, in particular studies of birds that can rewire their brains to relearn how to sing after a brain injury, such as the work of Carlos Lois, Research Professor of Biology at Caltech.

The FIP algorithm was developed using the mathematical framework of differential geometry. This framework allows the weights of a neural network to be rewritten without erasing previously encoded information, which in turn solves catastrophic forgetting. The project, Thomson says, was a years-long effort that began with an inquiry into the basic science of how brains learn so flexibly, and into how that same capability could be given to artificial neural networks.
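
To make the geometric idea concrete, here is a deliberately simplified sketch: a linear model in NumPy in which an exact null-space projector stands in for the paper’s differential-geometric machinery. It illustrates the principle of moving through weight space along directions that leave old outputs unchanged; it is not the authors’ implementation, and all names and sizes here are made up for the example.

```python
# Toy "functionally invariant" updates for a linear model (illustrative only).
import numpy as np

rng = np.random.default_rng(1)

# A handful of old-task inputs the model must not forget. With fewer
# samples (4) than weights (12), an exactly invariant subspace exists.
X_old = rng.normal(size=(4, 12))
w = 0.1 * rng.normal(size=12)
y_old = X_old @ w                  # treat current old-task outputs as "correct"

X_new = rng.normal(size=(40, 12))  # a new task to learn
y_new = rng.normal(size=40)

# For the linear model f(x, w) = x @ w, the Jacobian of the old-task
# outputs with respect to the weights is simply X_old.
J = X_old
# Projector onto the null space of J: weight moves theta with J @ theta = 0,
# i.e. moves that leave every old-task output exactly unchanged.
P = np.eye(12) - np.linalg.pinv(J) @ J

eps = 0.05
for _ in range(300):
    grad = X_new.T @ (X_new @ w - y_new) / len(y_new)  # new-task loss gradient
    theta = P @ (-grad)            # invariant descent direction
    norm = np.linalg.norm(theta)
    if norm < 1e-10:
        break                      # nothing left to gain inside the subspace
    w = w + eps * theta / norm     # small fixed-norm step, echoing the ϵ-norm perturbations

print("max drift on old outputs:", np.abs(X_old @ w - y_old).max())  # ~0
print("new-task mse:", ((X_new @ w - y_new) ** 2).mean())            # reduced
```

In a deep network the invariant directions can only be found approximately and locally, which is why the FIP algorithm builds the path iteratively out of small ϵ-norm steps rather than solving for it in one shot.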

Real-World Visions for Flexible AI

Guru Raghavan and Matt Thomson, the researchers behind the FIP algorithm, are working with Julie Schoenfeld at Caltech to build a business venture around it; through her prodding, the startup Yurts has officially launched. The FIP algorithm can be applied across an array of use cases, from making better recommendations in online stores to fine-tuning self-driving cars.

The breakthrough has consequences beyond these immediate applications. The FIP algorithm is an important step toward more flexible, adaptable artificial intelligence systems modeled on the unusual plasticity of the human brain. It is also a fine example of how advances in AI lay the groundwork for machine learning that evolves; a future in which artificial intelligence learns and adapts throughout its lifetime, as do the spectacular biological brains that gave rise to it.

