When AI remembers everything

17 April 2025

Author

Sascha Brodsky

Tech Reporter, Editorial Lead

IBM

ChatGPT can now remember who you are, and that simple upgrade may change how we relate to artificial intelligence.

The rollout of memory features by OpenAI marks the first time a widely used AI assistant can persistently recall information across interactions. While the feature is opt-in, it reflects a broader shift: AI systems are being designed to retain what they learn about users over time. The goal is to make interactions smoother, more relevant and more efficient. But building memory into AI also raises deeper questions about privacy, transparency and how much control users retain.

“Memory is a critical step toward making AI more adaptive, useful and human-like,” Payel Das, Principal Researcher at IBM Research, tells IBM Think in an interview. “AI memory can provide better accuracy and adaptivity, especially when paired with mechanisms like persistent and episodic memory modules.”

Unlike human memory, which is often subjective and selective, AI memory is a technical architecture—a structured store of information within neural networks or external databases. Persistent memory retains long-term facts, such as a user’s job title, while episodic memory stores recent interactions or contextual information.
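To make that split concrete, here is a minimal Python sketch with hypothetical class and method names rather than any vendor's actual API: a persistent store holds durable facts about the user, an episodic buffer keeps a bounded window of recent interactions, and both are folded back into the model's context.

```python
from collections import deque
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class MemoryStore:
    """Toy sketch of assistant memory: durable facts plus a bounded
    window of recent (episodic) interactions."""
    persistent: dict = field(default_factory=dict)                      # long-term facts, e.g. a job title
    episodic: deque = field(default_factory=lambda: deque(maxlen=20))   # recent turns only

    def remember_fact(self, key: str, value: str) -> None:
        """Store a durable fact that should survive across sessions."""
        self.persistent[key] = value

    def log_interaction(self, text: str) -> None:
        """Append a recent exchange; the oldest entries fall off the window."""
        self.episodic.append((datetime.now(timezone.utc), text))

    def build_context(self) -> str:
        """Assemble both memory types into a prompt prefix for the model."""
        facts = "; ".join(f"{k}: {v}" for k, v in self.persistent.items())
        recent = " | ".join(text for _, text in self.episodic)
        return f"Known facts: {facts}\nRecent context: {recent}"


memory = MemoryStore()
memory.remember_fact("job_title", "pediatric nurse")
memory.log_interaction("Asked for shift-scheduling tips.")
print(memory.build_context())
```

In a real system the persistent store would typically live in an external database and the episodic buffer in the conversation context, but the separation of the two is the architectural point.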

OpenAI’s memory implementation gives users some agency, allowing them to review and delete what’s stored. Other companies, including Anthropic and Google DeepMind, are pursuing similar capabilities. Despite differences in execution, the direction is shared: memory is becoming a foundational feature of next-generation AI.

Supporters argue that this functionality is critical to moving AI beyond static, one-off responses. A memory-enabled assistant can continue conversations over time, follow up on unresolved tasks and tailor responses to individual preferences. In real-world use cases, such as customer support, tutoring or healthcare, this continuity could drive significant gains in effectiveness.

IBM is exploring these possibilities through an enterprise lens. “We are exploring long-term memory in ways that align with enterprise safety standards,” Das says. “Our work on persistent and episodic memory focuses on giving users clarity and oversight on what is retained and how it's used.”

Digital memory, real concerns

Still, not everyone is convinced this path is risk-free. Vasant Dhar, a professor at NYU's Stern School of Business and a longtime expert in data governance, sees the trend as part of a broader pattern.

“It’s the Wild West—companies are hoovering up data without rules, and users have little real control,” Dhar tells IBM Think in an interview. Memory features, he warns, deepen existing risks tied to surveillance and consent. “If you can predict better, the model becomes more valuable. So, in a nutshell, that’s what’s going on.”

Dhar draws a connection to past waves of personalization on platforms like Facebook and Google, which relied on behavior tracking to refine content and advertising. But with AI, the user input is more nuanced. Conversations may reveal more than clicks—and may persist longer.

“Sure, people should be concerned,” Dhar says. “But what are they going to do? Turn it off? And even then, how do you know it's really off?”

The implications go beyond user control. Dhar warns that memory may also shape the models themselves. In some architectures, user interactions aren’t just remembered—they’re used to retrain or adapt the underlying model.

“In some ways, the LLM itself acts like a memory system—its weights encode accumulated knowledge, including potential patterns from user interactions,” Dhar says. “Its long-term memory isn’t just about facts; it could include information about you, and what you’ve said to it.”

This raises thorny questions: What qualifies as training data? Can personal memory stay siloed, or might it influence broader model behavior?
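One way to picture that question is a governance flag on every stored record marking whether it may ever leave the user's silo and reach a training pipeline. The schema below is a purely hypothetical sketch, not a description of how any vendor actually separates memory from training data.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class MemoryRecord:
    """Hypothetical governance metadata attached to one stored memory."""
    content: str
    created_at: datetime
    user_visible: bool = True             # can the user review and delete this entry?
    eligible_for_training: bool = False   # may it ever leave the user's silo?


def training_candidates(records: list) -> list:
    """Only records explicitly opted in would ever reach a training pipeline."""
    return [r for r in records if r.eligible_for_training]


records = [
    MemoryRecord("Job title: pediatric nurse", datetime.now(timezone.utc)),
    MemoryRecord("Anonymized feedback on answer quality", datetime.now(timezone.utc),
                 eligible_for_training=True),
]
print(len(training_candidates(records)))  # only the opted-in record qualifies
```

Whether such a boundary can hold once interactions begin shaping model weights is exactly the question Dhar raises.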


Balancing innovation with oversight

Das emphasizes that IBM treats this distinction with care. “Enterprises need confidence that proprietary or personal information won’t inadvertently leak into public systems,” she says. “Our memory design reflects that priority.”

Some researchers are exploring interfaces where memory is organized into readable, editable chunks—like digital notes, Das says. Others are developing more implicit systems, where memory is guided by importance or frequency of reference.
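A rough sketch of both approaches, using invented names: explicit note-like entries the user can read, edit or delete, alongside an importance score that an implicit retention policy could use to decide what survives.

```python
from dataclasses import dataclass


@dataclass
class MemoryNote:
    """One readable, editable chunk of memory, like a digital note."""
    text: str
    importance: float = 0.5   # could rise each time the note is referenced


class Notebook:
    """Explicit interface: list, edit and delete notes.
    Implicit policy: low-importance notes get pruned over time."""

    def __init__(self) -> None:
        self.notes: list = []

    def add(self, text: str) -> MemoryNote:
        note = MemoryNote(text)
        self.notes.append(note)
        return note

    def edit(self, index: int, new_text: str) -> None:
        self.notes[index].text = new_text

    def delete(self, index: int) -> None:
        del self.notes[index]

    def prune(self, min_importance: float = 0.3) -> None:
        self.notes = [n for n in self.notes if n.importance >= min_importance]


nb = Notebook()
nb.add("Prefers metric units")
nb.add("Mentioned an upcoming trip to Lisbon").importance = 0.2
nb.prune()                               # the low-importance note is dropped
print([n.text for n in nb.notes])        # ['Prefers metric units']
```

The explicit methods map to the transparency Das describes; the pruning threshold is where the cognitive-load trade-off shows up.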

“There are trade-offs between transparency and cognitive load,” says Das. “Too much visibility into memory may overwhelm users. But too little undermines trust.”

Navigating a world where AI remembers you

Memory could also influence how AI reasons. With persistent memory, systems might operate across longer-term workflows or adapt more fluidly to evolving user needs. That could open doors in fields like education, therapy or chronic care.

But experts caution against assuming memory inherently improves accuracy or fairness. If a system misremembers—or if biased interactions are retained—those issues can compound.

Regulators are starting to respond. The EU’s AI Act includes provisions for transparency and user rights related to data storage and memory. In the US, the FTC has expressed concern over how companies handle personal data in AI contexts.

Observers say many users may not realize what's happening. “We’re entering a phase where people assume AI is working for them, when really, it’s collecting from them,” Dhar says.

Despite these concerns, many in the field see opportunity. “This is an exciting direction,” Das asserts. “The key is to make memory accountable, explainable and aligned with human values. That’s the challenge ahead.”
