Two years ago, OpenAI, a San Francisco startup founded as a nonprofit in 2015 by Sam Altman, Greg Brockman, Elon Musk, Ilya Sutskever, Wojciech Zaremba and John Schulman, opened its AI chatbot for testing. The release sparked enthusiasm, questions and even some experiments (remember this episode of The Daily?). Fast forward to November 2024, and ChatGPT now has more than 200 million weekly active users, while OpenAI continues to expand through partnerships with tech giants like Microsoft and Apple.
So, how did ChatGPT’s release change the AI landscape? We spoke to IBM experts about how AI has evolved in the last two years and continues to transform.
Artificial intelligence dates back decades. In the fifties, for example, IBM was already training some of the earliest neural networks, using checkers and backgammon to teach machines to beat grandmasters. But the release of ChatGPT, with its simple and free-to-use natural language interface, marked a major turning point.
“What was different about ChatGPT is it really gave consumers a great experience and allowed non-technical people to use AI for incredible things,” says IBM Distinguished Engineer Chris Hay.
The effect has been to put a powerful technology, previously accessible mostly to the wealthiest businesses, in the hands of anyone with a computer and an internet connection. Meanwhile, OpenAI’s move pushed other companies to release their own powerful tools, fueling the rise of a new class of programmers and entrepreneurs around the world focused on AI innovation.
“The global economic [landscape] fundamentally changes when AI becomes as accessible as electricity,” says Shobhit Varshney, a VP and Senior Partner at IBM Consulting. One sign of this shift, Varshney says, is the unprecedented number of programmers joining GitHub from around the globe, with Python, the dominant programming language for AI and machine learning, becoming the most popular language on the platform.
“Now people can leverage these tools without needing millions of dollars of investment,” he says. “We will soon see one-person unicorns.”
Chris Hay says his “aha” moment with ChatGPT came just a few weeks after the chatbot’s release, when he was working on his own programming language. He quickly found himself asking ChatGPT for feedback.
“That’s when I thought, ‘Holy crap! I'm having a proper conversation here about designing my programming language. I've got a collaborator!’”
Two years later, 92% of business leaders surveyed by the IBM Institute for Business Value say they will integrate AI into their workflows by 2025. There has already been a push to embed AI in consumer-facing services and products, such as Apple Intelligence, Meta AI and Adobe’s creative tools.
“Now, everything that we touch is gen AI,” says Hay.
LLMs started with text. But the industry has since begun moving toward multimodal models, which can integrate and process multiple data types, including text, code, images, audio and video.

Models can now process several types of data beyond the written word, opening up an entirely new world of applications for AI, from drug discovery to climate tracking.
“It does not have to be only text or code, so there's been a huge explosion of the different modalities that AI models can understand, represent and then create,” says Varshney.
Humans combine all their senses to make a decision—and that is where AI is headed, he says.
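To make that concrete, here is a minimal sketch of what a multimodal request can look like from a developer’s side, using the OpenAI-style chat format in which a single prompt mixes text and an image. The endpoint, model name, image URL and environment variable are placeholders for illustration, not a recommendation of any particular provider.

```python
# Minimal sketch of a multimodal (text + image) request using the
# OpenAI-style chat format. Endpoint, model and URLs are placeholders.
import os
import requests

API_URL = "https://api.openai.com/v1/chat/completions"  # assumed endpoint

payload = {
    "model": "gpt-4o",  # any vision-capable chat model
    "messages": [
        {
            "role": "user",
            "content": [
                # One message, two modalities: a text question and an image.
                {"type": "text",
                 "text": "What does this chart suggest about sea-level trends?"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/sea_level_chart.png"}},
            ],
        }
    ],
}

headers = {"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"}
response = requests.post(API_URL, json=payload, headers=headers, timeout=60)
# Assumes a successful response; real code would check response.status_code.
print(response.json()["choices"][0]["message"]["content"])
```

The point is not the specific API but that a single prompt can now carry several modalities at once.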
One major development this year has been the emergence of so-called “reasoning models.” These models, such as OpenAI’s o1 series, apply chain-of-thought reasoning: they work through intermediate steps and logical deductions before settling on an answer to a complex problem.
“What we're seeing now is that models are beginning to think more and come to an answer,” says Hay. “It's not human reasoning yet, and ‘reasoning’ is an overused term, but at least it's able to reflect on what it's doing and come up with a higher-quality answer by investing more time.”
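To illustrate the underlying idea (not OpenAI’s actual implementation, which is not public), here is a rough Python sketch of chain-of-thought prompting: the same question framed once for a direct answer and once with an instruction to reason through intermediate steps first, which is the behavior reasoning models build in and spend extra compute on. The `ask`-style wrapper is left out; these functions only build the prompts.

```python
# Rough illustration of chain-of-thought prompting: the same question,
# asked directly and with an instruction to show intermediate steps.
# Reasoning models bake this behavior in; here it is made explicit.

def direct_prompt(question: str) -> str:
    return f"Answer concisely: {question}"

def chain_of_thought_prompt(question: str) -> str:
    return (
        f"{question}\n\n"
        "Work through the problem step by step, checking each intermediate "
        "result, and only then state your final answer on the last line."
    )

question = "A train leaves at 9:40 and arrives at 13:05. How long is the trip?"
print(direct_prompt(question))
print()
print(chain_of_thought_prompt(question))
```

The trade-off Hay describes is visible even at this level: the second prompt invites the model to spend more tokens, and more time, before committing to an answer.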
The original GPT-3 boasted 175 billion parameters. Today, smaller models—some with as few as 2 billion parameters—are outperforming the best models from 18 months ago.
“Models are getting smaller all the time, and they are getting more capable because they're being trained better,” says Hay. “That is the path that I want to see: the larger a model is, the slower it is. I want models that run on my own device.”
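A back-of-the-envelope calculation shows why size matters for on-device use. Counting weights only, and ignoring activations and runtime overhead, a 175-billion-parameter model stored in a common 16-bit format needs roughly 350 GB of memory, while a 2-billion-parameter model quantized to 4 bits (a common on-device technique) fits in about 1 GB. The precisions here are illustrative assumptions, not figures from the article.

```python
# Back-of-the-envelope memory footprint for model weights only
# (activations, KV cache and runtime overhead are ignored).

def weight_memory_gb(num_params: float, bits_per_param: int) -> float:
    return num_params * bits_per_param / 8 / 1e9  # bits -> bytes -> GB

configs = [
    ("GPT-3-class, 175B params, 16-bit", 175e9, 16),
    ("Small model, 2B params, 16-bit", 2e9, 16),
    ("Small model, 2B params, 4-bit quantized", 2e9, 4),
]
for name, params, bits in configs:
    print(f"{name}: ~{weight_memory_gb(params, bits):.1f} GB")

# GPT-3-class, 175B params, 16-bit: ~350.0 GB
# Small model, 2B params, 16-bit: ~4.0 GB
# Small model, 2B params, 4-bit quantized: ~1.0 GB
```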
Is AI reaching its limit? The question is always looming, and a recent Reuters story quoting an OpenAI co-founder on the diminishing returns of pre-training and scaling sparked a lot of public commentary about delays in training new language models and about techniques that let models “think” in more human-like ways.
But Varshney believes that evaluating AI in terms of human abilities is misguided.
“AI shouldn’t be judged on human intelligence levels, because humans are good at some stuff, and terrible at others,” he says. “We don’t want AI to replicate humans; we want AI to be really good at what it does.”
As an example, he points to the history of a common household appliance. “We did not expect a dishwasher to stand up and start washing the dishes the way a human does,” he says. He believes we need better ways of evaluating what AI is actually good at.
Far from hitting a wall, Hay believes that AI still has a lot of room to grow, thanks in large part to synthetic data. “We’re going to need less data to train models, and models are going to be focused on different kinds of architectural changes,” he says. “There will be breakthroughs, and the hardware is going to get better. So I don’t think there’s going to be a wall.”
OpenAI is still seen as a leader in the race for AI innovation and deployment. Two years after the release of ChatGPT, Varshney believes it has no serious contender. Hay agrees, largely because he feels OpenAI continues to offer the best user experience for consumers.
“I still use ChatGPT, because it's a better consumer experience,” he says. Still, he acknowledges that the field is moving rapidly. Already, several former OpenAI leaders and scientists have gone on to found new companies, such as Anthropic and Safe Superintelligence Inc., and the market continues to be flooded with new features and functionalities.
“Who will win the race? I hope nobody,” says Hay. “In a world where [only] one company wins the race, we all lose, whereas [when] there's healthy competition, we all get access to great AI innovations.”