Innovators from Ferrari, Meta, Lumen and other major players took the stage at IBM Think 2025 this week to share how they’re using AI to solve real-world problems—from deploying purpose-built AI to extracting real value from enterprise data.
What emerging tech breakthroughs could shape your industry next? Find out with the sharpest takes to come out of Think 2025.
10:30 a.m. EDT | May 6, 2025
Have you ever seen a fuel-injected data processing machine that can travel at 230 mph? If you’re on-site at IBM Think, head to the demo floor to see the Scuderia Ferrari HP Formula 1 race car. With sensors that detect heat, pressure, friction and fuel, this machine generates more than one million data points per second. It also explains why IBM’s Chairman and CEO Arvind Krishna invited Scuderia Ferrari HP's team principal, Frédéric Vasseur, onstage this morning during his keynote kicking off Think 2025.
“AI is the productivity engine,” Krishna explained. “AI unlocks the value in your data.” In this case, IBM watsonx is using generative AI to transform the torrent of track data that Scuderia Ferrari HP collects into a new, more personalized and interactive mobile experience for passionate global Ferrari fans. This, in turn, helps Ferrari (and its partners) increase engagement with its core customers.
Those core customers or fans are shifting gears, too. “It’s not just the historic fans anymore,” Vasseur said. New fans are “more focused on the name of the driver’s girlfriend than lap times.” That core fan base has tripled in the past five years and now counts more than 390 million followers around the world. That’s 10 times the population of Canada. Understanding what those newcomers want is becoming just as important as winning on the track.
Moving from the racetrack to the factory floor, where millions of sensors generate a mountain of manufacturing data, Krishna also spoke with Lumen Technologies President and CEO Kate Johnson about bringing the AI brainpower of watsonx to Lumen’s fiber network (the largest in the US) and its edge data centers. In industry, as in racing, “every millisecond matters,” said Johnson. “Not quite like a Ferrari, but the same concept.”
Why is Lumen partnering with IBM? “To help companies get the most from their AI,” she said. Judging by Krishna’s keynote, this will be a common refrain all week, especially since 99% of all enterprise data has been untouched by AI to date. “If you can unlock the value from that data, that's a huge opportunity for the enterprise,” said Krishna.
“But not all AI is built the same, and not all AI is built for the enterprise,” Krishna added. “If you need to go unlock the value from that 99%, you will need an approach to AI that is tailored for the enterprise.” This can mean using smaller models, which are much more cost-effective and can be more accurate than larger models.
11:30 a.m. EDT | May 6, 2025
AI is booming, and enterprise infrastructure is racing to keep up.
On Tuesday at IBM’s Think conference, a panel of enterprise experts led by Rob Thomas, IBM’s Senior Vice President of Software and Chief Commercial Officer, delivered a clear message: generative AI is moving fast, but most infrastructure is not ready to support it. The keynote session, What’s the Rush?, spotlighted the growing mismatch between AI ambition and operational reality, and what enterprises can do to close it.
Cloud environments today are often fragmented, and the demands of AI at scale only put more pressure on them. HashiCorp Co-Founder and CTO Armon Dadgar described a situation that thousands of enterprises face: disconnected app teams, scattered systems and growing inefficiency. “What we see day in and day out is gaps,” he said. But closing them is possible, he insisted, by standardizing workflows and building unified platforms that support the whole application and infrastructure lifecycle.
Deutsche Telekom is already seeing measurable results from such automation. The company cut patching time by 80 percent, allowing teams to manage growing security threats without overwhelming their operations. “It’s maybe not the sexy stuff that you read about,” said CIO Peter Leukert, “but it’s the core of our duties that we need to do well.”
IBM’s Madhu Kochar, Vice President of Automation, emphasized that most tech stacks were not designed to scale for the future AI is creating. “Hybrid environments are more complex than ever, and gen AI is adding to it,” she said. To tame the complexity of a business’s APIs, apps, integration endpoints, data and events, IBM offers its webMethods Hybrid Integration platform, giving organizations unified control and real-time visibility across environments.
Kochar stressed that businesses need to shift toward agentic automation and AI-driven orchestration, adding that “rigid, static integrations won't work.” By adopting these methods, enterprises can reduce months of work to hours.
Cloud adoption may be widespread, but results remain uneven. “ROI is a little less certain, 20% or so,” Thomas said. The missing ingredient, he said, is hybrid infrastructure—an architecture that connects public cloud, private systems and on-premise environments into a cohesive, orchestrated whole.
Looking ahead, the experts emphasized that the landscape is poised for rapid growth. “Over the next three years, 1 billion applications will be built on the basis of generative AI,” Thomas said. “They will need to be able to work with each other seamlessly.”
5 p.m. EDT | May 6, 2025
With agents come opportunities. Take Pepsi: the company is already working on more than 1,500 bots, assistants and agents—stats shared by Magesh Bagavathi, an SVP and Global Head of Data, Analytics and AI at PepsiCo, during a keynote at Think 2025.
“These bots are all starting to expand at scale, and it’s all coming together because we’ve got a very platform-centric approach,” he continued. That means with agentic AI, PepsiCo can save thousands of hours across its value chain.
By deploying agents, interacting with enterprise systems will no longer require specialized skills or knowledge. “The next big step change is systems of intelligence powered by AI agents,” said Ritika Gunnar, a General Manager of Data and AI at IBM. “They're doers. They don’t just report insights. They act autonomously, orchestrating workflows across your enterprise. This is putting AI to work for business.”
This comes with a caveat, she cautioned: “Now, the explosion of AI agents across the enterprise holds immense promise. But let’s be real for a minute: it also creates significant complexity.”
A major culprit? Businesses are deploying agents in silos, and they’re not always well-integrated into existing tools and applications. This was the catalyst behind the update to IBM watsonx Orchestrate, which offers agent orchestration to handle the multi-agent, multi-tool coordination needed to tackle complex projects across vendors. “Building powerful agents is just the start,” Gunnar said. “The real magic happens when they connect.”
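The internals of watsonx Orchestrate aren’t described in the keynote, but the pattern Gunnar points to—an orchestrator routing sub-tasks to specialized agents instead of leaving each agent in its own silo—can be sketched in a few lines. All agent names, skills and APIs below are hypothetical illustrations, not IBM’s implementation:

```python
# Minimal sketch of multi-agent orchestration: a router dispatches each
# step of a plan to the agent registered for that skill, so agents built
# separately can still cooperate on one workflow.
from typing import Callable, Dict, List, Tuple


class Orchestrator:
    def __init__(self) -> None:
        self._agents: Dict[str, Callable[[str], str]] = {}

    def register(self, skill: str, agent: Callable[[str], str]) -> None:
        """Make an agent discoverable by the skill it provides."""
        self._agents[skill] = agent

    def run(self, plan: List[Tuple[str, str]]) -> List[str]:
        """Execute a plan of (skill, task) steps, routing by skill."""
        results = []
        for skill, task in plan:
            agent = self._agents[skill]  # look up the right specialist
            results.append(agent(task))
        return results


# Two toy "agents" standing in for, e.g., an HR agent and a procurement agent.
def hr_agent(task: str) -> str:
    return f"HR handled: {task}"


def procurement_agent(task: str) -> str:
    return f"Procurement handled: {task}"


orchestrator = Orchestrator()
orchestrator.register("hr", hr_agent)
orchestrator.register("procurement", procurement_agent)

results = orchestrator.run([("hr", "onboard new hire"),
                            ("procurement", "order laptop")])
print(results)
```

The point of the sketch is the registry: once every agent is reachable through a shared router, a multi-step request can span vendors and tools without any agent knowing about the others.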
Peter Doolan, Chief Customer Officer at Slack, believes agentic platforms will unlock unprecedented levels of growth for companies. He cites the example of one of Slack’s customers, Adecco, a human resources provider and temporary staffing firm that receives 300 million applicants every year.
“There's no way that Adecco is able to have enough humans to be able to read and parse through all those 300 million applications,” Doolan said. “And so this is a perfect example of where digital labor has the ability to supercharge a company while [helping] people refocus on what their superpower is, which is understanding the perfect job for you and the perfect time to have that job.”
And that’s a reason to be optimistic, said Doolan. “There’s an extraordinary optimism about the future, about how we can take this untapped value and release it across all of our workforces.”
9:30 a.m. EDT | May 7, 2025
Enterprise data has taken center stage at Think 2025—particularly the fact that less than 1% of it currently fuels the massive language models that dominate AI headlines.
“Data is your true differentiator,” said Matt Hicks, CEO of Red Hat, in an energetic keynote conversation this morning with Ric Lewis, IBM’s SVP of Infrastructure. Inside that data is everything that makes a business unique: its customer relationships, operational history and institutional know-how.
But that data is typically scattered—across on-prem systems, public and private clouds, and edge environments—creating a hybrid setup almost by default. “To make AI effective, we need to bring AI to all those locations where you have data, where you have applications, where you have customers,” said Hillery Hunter, CTO and GM of Innovation for IBM Infrastructure. “Including [at] the heart of your mission-critical enterprise.”
One approach gaining traction is hybrid by design: a strategy that integrates data and systems across environments from the outset, rather than layering them together after the fact. IBM and its subsidiary Red Hat are leaning on this model as a central pillar of enterprise readiness for AI. With this strategy, enterprises are managing and monitoring the full lifecycle of AI models—both predictive and generative—across environments ranging from single servers to distributed systems.
While designing such systems from the get-go is the optimal approach, connecting legacy architecture to get better access to scattered, siloed data is entirely possible—and essential to unlock the value of a company's proprietary data. In fact, IBM works with many companies to do just that, including BNP Paribas.
BNP Paribas Group CTO Jean-Michel Garcia joined Hunter on stage to describe how the bank has applied such a hybrid-by-design model, working with both IBM and Red Hat. The approach has supported the company’s efforts to extract value from its data across diverse environments while meeting regulatory and security requirements.
This strategy is the result of a partnership that has been in place for seven years, but a newly signed 10-year extension will propel the bank into a new phase—building what Garcia calls an “AI Factory.” This will be a purpose-built infrastructure that includes hardware, software and workflows for developing, training and deploying AI at scale. And it’s possible, he continued, “as a result of having a strong hybridization, plus a strong containerization platform, plus the right data management model.”
As companies move from the “promise of AI to the profit of AI” as Hunter put it, “doing AI fast but carefully” will need to be top of mind, said Garcia. “I like to call it AI with seatbelts,” Hunter added.
12:30 p.m. EDT | May 7, 2025
When Meta released its first open Llama model two years ago, many questioned the wisdom of spending billions to develop models that would be shared for free. Some doubted that it was even possible for the company to develop high-performing open-source models for the broader community while also developing models for its own products. Still, Meta bet that going open would speed up innovation.
That bet appears to be paying off. In a keynote conversation with Dinesh Nirmal, SVP for Software Products at IBM, Ash Jhaveri, VP of Reality Labs and AI Partnerships at Meta, shared that the company has now seen 1.2 billion downloads of its Llama model series. “The beauty of open is that the community becomes a force multiplier,” said Jhaveri. When developers asked for a framework, Meta created the Llama Stack, “a modular toolbox that you can plug in things like agents, post training, evals, and they all snap together like Lego pieces,” he explained.
That modular design makes Meta’s Llama Stack a natural fit for IBM, which shares the core belief that the AI community works better when it works together. Now, watsonx is integrated as an API provider within the Llama Stack, enabling enterprises to deploy generative AI at scale, with openness at the core.
But not every industry can go fully open. As Nirmal noted, sectors like defense, security and healthcare face different constraints, including the need for strict compliance, regulatory oversight and risk thresholds. Enter Richard Vitek, Vice President of Data, Analytics and AI Enablement at Lockheed Martin. “We want to be able to take advantage of all those open tools [and] get the benefit of all that knowledge,” he said, “but not expose any security risk.”
That push-pull dynamic—open innovation versus closed-system safety—is one IBM is helping enterprises navigate. It starts by baking governance and security into the earliest stages of preparing enterprise data for AI tools, said Nirmal. Using the watsonx.data platform, for example, companies can ensure that proprietary data is governed as it’s ingested and stored, making it secure from the start. The best part, he added, is “the watsonx.data platform can now prepare AI-ready data for enterprises in under five minutes.”
While the keynote was focused on data, it also underscored something broader: the strength of the industry ecosystem. “To me, the future isn't just about open versus closed,” said Jhaveri, “It's really about who you're building with.”
2:30 p.m. EDT | May 7, 2025
Today’s enterprise leaders are no strangers to reconciling short-term demands with long-term innovation—and the stakes are high. “CEOs are facing the competing pressures of funding existing operations and investing in innovation,” said Mohamad Ali, Senior Vice President at IBM Consulting, during his keynote at Think 2025.
Citing a new study from the IBM Institute for Business Value (IBV), which surveyed 2,000 leaders, Ali noted that CEOs are not only embracing AI—they’re actively adopting AI agents and preparing to implement them at scale.
Where leadership is tested most, Ali said, is balancing the pressure to show quarterly results, reduce costs and keep core operations running smoothly with the need to invest in next-generation tools, data architecture and workforce upskilling.
Striking that balance between efficiency and innovation takes more than vision. And companies like Heineken and Cencora are building the technical foundations required to turn AI ambition into results.
To advance its digital transformation, Heineken, one of the most iconic beer brands in the world, is overhauling its data stack to accelerate AI-driven value delivery. Ronald den Elzen, Chief Digital and Technology Officer at The Heineken Company, acknowledged from the stage that the company still has a ”massive amount of work to do [in order to] build a secure and resilient digital backbone.”
Pawan Verma, Executive Vice President and Chief Data and Information Officer at the pharmaceutical company Cencora, underscored that idea. “You need reliable data—in particular, data that can propagate,” he said. “You combine that with a culture that looks at [the data] every day, because in our industry, what is most critical is reliability and accuracy.”
For both companies, scalable and trustworthy AI starts with strong, enterprise-wide data infrastructure. But infrastructure alone isn’t enough. AI needs to be integrated into workflows in ways that support human talent, not replace it.
IBM has been its own “client zero” in embedding AI into real operations. In HR, for example, the company has automated one million tasks, and now resolves 95% of employee requests—across 11.5 million interactions—without escalation.
“It’s an astonishing number,” said Neil Dhar, a Global Managing Partner at IBM Consulting. “What does that do? It creates a world-class HR function that’s able to flex across the entire organization. We can empower our people, allow them to do what they need to do.”
Meanwhile, agentic AI is now becoming an “intelligent layer” across vast systems, helping teams resolve issues quickly and integrate data seamlessly. That’s the idea behind IBM’s partnership with Oracle, which brings watsonx into the workflow.
“This truly is an innovative time in our lives, where there is really an opportunity where everyone needs to think about AI agents in the workflow,” said Laura-Elizabeth Ware, a Senior VP of North America HCM Cloud Applications at Oracle. “But even before you get started—in the area of HR—you really need to look at skills, and defining your skills strategy.”
That workforce focus is timely. The IBV CEO Study found that more than a third of leaders expect to retrain and reskill their teams, and half anticipate hiring for roles that didn’t exist a year ago. That means that a clear skills inventory is critical, and understanding where agents fit into real workflows and skillsets is emerging as a new frontier of enterprise readiness.
5:30 p.m. EDT | May 7, 2025
Is IBM trying to kill the prompt?
It sure looked that way as the company introduced a new concept it calls “generative computing” during its final keynote of Think 2025. Generative computing replaces prompt engineering with structured programming for large language models (LLMs)—and is part of a broader push to fuse artificial intelligence, classical high-performance computing and quantum systems into a unified model for enterprise IT. The focus is on immediate, real-world utility, rather than distant speculation.
“These are not dreams anymore,” said Zaira Nazario, Director of Science and Technology at IBM. “We're already connecting HPC and quantum systems to enable quantum-centric supercomputing.”
IBM experts say prompting has become an untenable method for building complex AI systems. Sriram Raghavan, a VP of IBM Research for AI, argued that prompt-based interactions are too brittle and model-specific to scale.
“Very soon, we're going to have books and books of prompts,” he said, calling much of the current practice “security by prayer.”
Generative computing, by contrast, allows developers to build AI applications through a runtime that enforces clear rules, constraints and behaviors. In testing, IBM’s new framework achieved comparable accuracy to 70-billion-parameter models using systems with only one billion parameters.
IBM’s generative computing approach brings software engineering principles to AI by letting developers define behavior through modular components, instead of natural language. The runtime supports features like multi-step generation, strict formatting rules and model-specific routing—all handled programmatically. This means developers can build applications without rewriting code every time the underlying model changes. The result, Raghavan said, is faster development, better security and more reliable performance across models of different sizes and capabilities.
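IBM has not published the framework’s API, but the idea Raghavan describes—declaring behavior as typed, composable program elements that a runtime enforces, rather than as hand-written prompt strings—can be illustrated with a rough sketch. Every name and signature below is hypothetical:

```python
# Hypothetical sketch of "generative computing": constraints and routing
# are declared programmatically and enforced by a runtime, so the same
# application works unchanged when the underlying model is swapped.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Constraint:
    """A rule the runtime checks on every model output."""
    check: Callable[[str], bool]
    description: str


def runtime_generate(model: Callable[[str], str], task: str,
                     constraints: List[Constraint]) -> str:
    """Run a model on a task and verify every declared constraint."""
    output = model(task)
    for c in constraints:
        if not c.check(output):  # enforced by the runtime, not by prayer
            raise ValueError(f"constraint violated: {c.description}")
    return output


# A stand-in "model"; a real runtime would route to an LLM sized for the
# task (a 1B-parameter model where it suffices, a larger one where not).
def toy_model(task: str) -> str:
    return task.upper()


result = runtime_generate(
    toy_model,
    "summarize the quarterly report",
    [Constraint(lambda s: len(s) < 100, "output under 100 chars")],
)
print(result)
```

Because the constraints and routing live in code rather than in a prompt, swapping `toy_model` for a different backend requires no rewriting—the property the keynote highlights.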
Alongside its work in AI, IBM has advanced the frontier of quantum computing, reaching what it calls a new phase of technical capability. Jay Gambetta, IBM Fellow and Vice President of IBM Quantum, said the company has reached quantum utility, meaning its quantum computers can now solve problems that classical systems cannot simulate.
He described experiments pairing a 45-qubit processor with Japan’s Fugaku supercomputer to simulate complex molecules, with results that match the best classical methods. A 77-qubit follow-up showed similar promise. Gambetta said that the company expects to demonstrate quantum advantage by 2026, a milestone where quantum systems outperform classical computers on meaningful, real-world problems with verified accuracy.
Marking the 100-year anniversary of quantum mechanics, Gambetta noted that IBM’s foundational work, from pioneering quantum hardware to launching the first cloud-accessible quantum computer, positioned it to lead this new era.
“We're not interested in demos,” Gambetta said. “We're building systems that actually work.”
The future of application resource management with IBM Turbonomic
IBM is unveiling groundbreaking advancements within IBM Turbonomic, designed to help organizations enhance their application performance and efficiency posture. IBM Turbonomic now integrates with GitHub and HashiCorp Terraform, enabling cloud-based application workloads defined as code to be safely optimized. This ensures that organizations can maintain robust application performance and efficiency at scale, while leveraging the governance and agility that Infrastructure as Code (IaC) brings.
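The announcement doesn’t detail how the optimization is applied, but the general Infrastructure-as-Code pattern is that the optimizer proposes an edit to the resource definition—which can then flow through normal review as a pull request—rather than resizing the live resource directly. A toy sketch of that idea, with an invented rule and data shape (not the Turbonomic API):

```python
# Illustrative sketch of optimizing a workload defined as code: emit a
# changed copy of the IaC definition instead of mutating live infra, so
# the change stays governable (reviewable, versioned, revertible).

def recommend_rightsizing(iac_resource: dict, observed_cpu_util: float) -> dict:
    """Return an updated copy of the resource if it is over-provisioned."""
    updated = dict(iac_resource)
    # Invented rule for illustration: halve vCPUs when sustained
    # utilization is below 30% and more than one vCPU is allocated.
    if observed_cpu_util < 0.30 and iac_resource["vcpus"] > 1:
        updated["vcpus"] = iac_resource["vcpus"] // 2
    return updated


# A workload as it might look after parsing a Terraform-style definition.
workload = {"name": "checkout-service", "vcpus": 8, "memory_gb": 32}
patched = recommend_rightsizing(workload, observed_cpu_util=0.12)
print(patched)  # the diff between workload and patched becomes the PR
```

The original definition is left untouched; only the proposed copy changes, which is what makes the recommendation safe to route through GitHub review.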
Introducing the upcoming IBM watsonx Code Assistant for i
IBM is introducing the upcoming IBM watsonx Code Assistant for i, a coding assistant that is purpose-built to accelerate the modernization of IBM i applications. It is expected to empower developers by accelerating RPG modernization tasks with AI-powered capabilities made available directly in the integrated development environment (IDE).
IBM Granite 4.0 Tiny Preview: A sneak peek at the next generation of Granite models
We’re excited to present IBM Granite 4.0 Tiny Preview, a preliminary version of the smallest model in the upcoming Granite 4.0 family of language models, to the open source community.
Ushering in the agentic enterprise: Putting AI to work across your entire technology estate
IBM is providing a comprehensive suite of enterprise-ready agent capabilities in watsonx Orchestrate to help businesses put them into action. The portfolio includes: build-your-own-agent in under five minutes, pre-built domain agents, integration with 80+ leading enterprise applications, and agent orchestration and visibility.
Unlocking unstructured data for generative AI: watsonx.data's evolution
Unstructured data is one of the most valuable but underutilized resources in the enterprise. IBM is evolving watsonx.data to help organizations activate this data to drive more accurate, effective AI. The new watsonx.data will bring together an open data lakehouse with data fabric capabilities—like data lineage tracking and governance—to help clients unify, govern and activate data across silos, formats and clouds. Enterprises will be able to connect their AI apps and agents with their unstructured data using watsonx.data, which tests show can lead to 40% more accurate AI than conventional RAG.
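The retrieval step behind RAG over unstructured data can be sketched in miniature: documents are chunked, indexed, and the best-matching chunks are handed to the model as grounding context. Production systems (watsonx.data included) use vector embeddings and governance metadata; this toy version scores chunks by word overlap purely for illustration:

```python
# Toy sketch of RAG retrieval: split unstructured text into chunks and
# return the chunks most relevant to a query as grounding context.

def chunk(text: str, size: int = 8) -> list:
    """Split text into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]


def retrieve(query: str, chunks: list, k: int = 1) -> list:
    """Rank chunks by word overlap with the query; return the top k.
    Real systems use embedding similarity instead of word overlap."""
    q = set(query.lower().split())
    scored = sorted(chunks,
                    key=lambda c: len(q & set(c.lower().split())),
                    reverse=True)
    return scored[:k]


doc = ("Quarterly revenue grew nine percent driven by cloud services. "
       "Headcount in the Berlin office remained flat year over year.")
chunks = chunk(doc)
context = retrieve("how did revenue grow", chunks)
print(context)
```

The quality of this retrieval step is exactly where better-prepared, governed data pays off: the more accurately relevant chunks are surfaced, the more grounded the model’s answer.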
IBM webMethods Hybrid Integration: Integration for the agentic AI era
IBM is introducing webMethods Hybrid Integration, a next-generation solution that replaces rigid workflows with intelligent and agent-driven automation. It will help users manage the sprawl of integrations across apps, APIs, B2B partners, events, gateways, and file transfers in hybrid cloud environments.
Lumen and IBM collaborate to unlock scalable AI for businesses
Lumen Technologies and IBM announced a new collaboration to develop enterprise-grade AI solutions at the edge—integrating watsonx, IBM's portfolio of AI products, with Lumen's Edge Cloud infrastructure and network.
IBM and Oracle expand partnership to advance agentic AI and hybrid cloud
IBM is working with Oracle to bring the power of watsonx to Oracle Cloud Infrastructure (OCI). Leveraging OCI’s native AI services, the latest milestone in IBM’s technology partnership with Oracle is designed to fuel a new era of multi-agentic, AI-driven productivity and efficiency across the enterprise.
IBM and Salesforce team up to power agentic AI with enterprise data
As a catalyst for agentic AI, IBM and Salesforce are providing customers access to their business data in IBM Z mainframes and Db2 databases so it can power AI use cases on the Salesforce Agentforce platform, the digital labor platform for augmenting teams with autonomous AI agents. IBM is also introducing new agents built with IBM watsonx Orchestrate that work with Salesforce technologies and draw on IBM Granite models.
IBM study: CEOs double down on AI while navigating enterprise hurdles
A new global study by the IBM Institute for Business Value found that surveyed CEOs are committed to advancing AI solutions across their organization, even as they face challenges from accelerating technology adoption.