September 9, 2024 By Aili McConnon 4 min read

While large language models are becoming exceptionally good at learning from vast amounts of data, a new technique that does the opposite has tech companies abuzz: machine unlearning.

This relatively new approach teaches LLMs to forget or “unlearn” sensitive, untrusted or copyrighted data. It is faster than retraining models from scratch and retroactively removes specific unwanted data or behavior.

No surprise then that tech giants like IBM, Google and Microsoft are hustling to get machine unlearning ready for prime time. The growing focus on unlearning, however, also highlights some hiccups with this technique: models that forget too much and a lack of industry-wide tools to evaluate the effectiveness of the unlearning.

From learning to unlearning

Trained on terabytes of data, LLMs “learn” to make decisions and predictions without being explicitly programmed to do so. This branch of AI, known as machine learning, has soared in popularity as algorithms imitate the way humans learn, gradually improving the accuracy of the content they generate.

But more data also means more problems. Or as IBM Senior Research Scientist Nathalie Baracaldo puts it, “Whatever data is learned—the good and the bad—it will stick.”

And so ever larger models can also generate more toxic, hateful language and contain sensitive data that defies cybersecurity standards. Why? Because these models are trained on unstructured, untrusted data from the internet. Even with rigorous data filtering, alignment that defines which questions a model should decline and which answers it should give, and other guardrails that inspect a model’s output, unwanted behavior, malware, and toxic and copyrighted material still creep through.

Retraining these models to remove the undesirable data takes months and costs millions of dollars. Furthermore, when models are open sourced, any vulnerabilities in the base model carry over into the many other models and applications built on top of it.

Unlearning approaches aim to alleviate these problems. By identifying unlearning targets (specific data points such as harmful, unethical or copyrighted content, or unwanted text prompts), unlearning algorithms efficiently remove the effect of the targeted content from the model.
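
Many unlearning methods boil down to a simple idea: nudge the model’s weights so that its loss rises on a “forget set” of targeted examples while staying low on a “retain set” of everything else. The sketch below illustrates that idea in generic PyTorch; the function and variable names are placeholders for illustration, not any vendor’s actual API.

```python
# Minimal sketch of one common family of unlearning methods: gradient ascent on a
# "forget set" combined with ordinary training on a "retain set". Names such as
# `model`, `forget_loader` and `retain_loader` are placeholders.
import torch

def unlearn_epoch(model, forget_loader, retain_loader, optimizer, alpha=1.0):
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for (x_f, y_f), (x_r, y_r) in zip(forget_loader, retain_loader):
        optimizer.zero_grad()
        # Push the loss up on targeted (unwanted) examples...
        forget_loss = -loss_fn(model(x_f), y_f)
        # ...while keeping it low on data the model should still handle well.
        retain_loss = loss_fn(model(x_r), y_r)
        (alpha * forget_loss + retain_loss).backward()
        optimizer.step()
```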

Forgetting Harry Potter

A team of researchers from Microsoft used this unlearning approach to see if they could make Meta’s Llama2-7b model forget the copyrighted Harry Potter material it had absorbed from its internet training data. Before unlearning, when the researchers entered a prompt such as “Who is Harry Potter?” the model responded: “Harry Potter is the main protagonist in J.K. Rowling’s series of fantasy novels.”

After the researchers fine-tuned the model to “unlearn” the copyrighted material, it responded to the same prompt with: “Harry Potter is a British actor, writer, and director…”.

“In essence, every time the model encounters a context related to the target data, it ‘forgets’ the original content,” explained the researchers Ronen Eldan and Mark Russinovich in a blog post. The team shared their model on Hugging Face so the AI community could explore unlearning and tinker with it as well.
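
For readers who want to experiment, the unlearned model can be queried much like any other Hugging Face checkpoint. The snippet below is a minimal sketch using the transformers library; the repository ID and generation settings are assumptions to verify against the team’s model card, not details confirmed in their post.

```python
# Hedged sketch: querying the unlearned model with the same prompt the researchers
# used. The repository ID is assumed; check the model card on Hugging Face.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Llama2-7b-WhoIsHarryPotter"  # assumed repo ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Who is Harry Potter?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```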

In addition to removing copyrighted material, removing sensitive material to protect individuals’ privacy is another high-stakes use case. A team led by Radu Marculescu at the University of Texas at Austin, collaborating with AI specialists at JPMorgan Chase, is working on machine unlearning for image-to-image generative models. In a recent paper, they showed that they were able to eliminate unwanted elements of images (the “forget set”) without degrading the model’s performance on the rest of the images.

This technique could be helpful in scenarios such as drone surveys of real estate properties, for instance, said Professor Marculescu. “If there were faces of children clearly visible, you could blot those out to protect their privacy.”


Google is also busy tackling unlearning within the broader open-source developer community. In June 2023, Google launched its first machine unlearning challenge. The competition featured an age predictor that had been trained on face images. After the training, a certain subset of the training images had to be forgotten to protect the privacy or rights of the individuals concerned.
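
The challenge’s framing translates naturally into code: an unlearning method receives the original model plus a retain/forget split of the training data, and its output is ultimately judged against a model retrained from scratch without the forget set. The sketch below is purely illustrative and is not the competition’s official starter kit; every name in it is a placeholder.

```python
# Illustrative sketch of an unlearning-challenge setup (not the official kit).
# An unlearning method gets the original model plus a retain/forget split and
# must produce a model that behaves as if it had never seen the forget set.
def unlearning_setup(train_fn, unlearn_fn, full_data, forget_idx):
    forget_idx = set(forget_idx)
    retain_data = [ex for i, ex in enumerate(full_data) if i not in forget_idx]
    forget_data = [full_data[i] for i in sorted(forget_idx)]

    original_model = train_fn(full_data)                # trained on everything
    unlearned_model = unlearn_fn(original_model, retain_data, forget_data)
    reference_model = train_fn(retain_data)             # retrained without the forget set

    # Evaluation compares unlearned_model against reference_model, for example
    # with membership-inference-style tests on the forget set plus accuracy checks.
    return unlearned_model, reference_model
```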

While it’s not perfect, the early results from various teams are promising. Using machine unlearning on a Llama model, for instance, Baracaldo’s team at IBM was able to reduce the model’s toxicity score from 15.4% to 4.8% without affecting its accuracy on the other tasks the LLM performed. And instead of the months, and millions of dollars, it would take to retrain the model, unlearning took all of 224 seconds.

Speed bumps

So why isn’t machine unlearning widely used?

“Methods to unlearn are still in their infancy and they don’t yet scale well,” explains Baracaldo.

The first challenge that looms large is “catastrophic forgetting”: the model forgets more than the researchers intended, to the point that it no longer performs the key tasks it was designed for.

The IBM team has developed a new framework to improve how models behave after training. Using an approach they describe as split-unlearn-then-merge, or SPUNGE, they were able to unlearn undesirable behavior such as toxicity, as well as hazardous knowledge such as biosecurity or cybersecurity risks, while preserving the general capabilities of the models.
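
Conceptually, split-unlearn-then-merge partitions the unwanted data, unlearns each partition in a separate copy of the model, and then merges the resulting weights back into a single model. The sketch below illustrates that flow with plain weight averaging as the merge step; the actual SPUNGE recipe and its merging method may differ, and `unlearn` is a placeholder for any single-behavior unlearning routine.

```python
# Hedged sketch of the split-unlearn-then-merge idea. `unlearn` stands for any
# single-behavior unlearning routine; simple weight averaging stands in for the
# merging step, which the IBM team may implement differently.
import copy
import torch

def split_unlearn_merge(base_model, unwanted_subsets, unlearn):
    unlearned_models = []
    for subset in unwanted_subsets:            # e.g. toxicity, biosecurity, cybersecurity
        m = copy.deepcopy(base_model)
        unlearn(m, subset)                     # unlearn one behavior at a time
        unlearned_models.append(m)

    # Merge: average the parameters of the individually unlearned models.
    merged = copy.deepcopy(base_model)
    with torch.no_grad():
        for name, param in merged.named_parameters():
            param.copy_(torch.stack(
                [dict(m.named_parameters())[name] for m in unlearned_models]
            ).mean(dim=0))
    return merged
```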

Developing comprehensive and reliable evaluation tools to measure the effectiveness of unlearning efforts also remains an issue to resolve, say researchers across the board.

The future of machine unlearning

While unlearning may still be finding its feet, researchers are doubling down since there is such a broad array of potential applications, industries and geographies where it could prove useful.

In Europe, for instance, the EU’s General Data Protection Regulation protects individuals’ “right to be forgotten.” If an individual asks for their data to be removed, machine unlearning could help companies comply with the legislation and delete the relevant data. Beyond security and privacy, machine unlearning could also be useful in any situation where data needs to be added or removed, for instance when licenses expire or when clients leave a large financial institution or hospital consortium.

“What I love about unlearning,” says Baracaldo, “is that we can keep using all our other lines of defense like filtering data. But we can also ‘patch’ or modify the model whenever we see something goes wrong to remove everything that’s unwanted.”
