This article was featured in the Think newsletter.
Few people have spent more time thinking about how AI will reshape work than Ruchir Puri. For years, Puri, Chief Scientist at IBM, has been building the systems that some workers now fear will replace them. He has watched the technology cross thresholds that once seemed decades away. And yet even he has been caught off guard by the speed of change in AI.
“I think it’s fair to say everybody who was working in AI has been surprised, and in some cases shocked, with how fast this technology has moved,” he said in an interview with IBM Think. “The speed of innovation and speed of progress has been nothing short of awe-inspiring.”
The numbers tell a story of transformation at scale. According to the World Economic Forum’s Future of Jobs Report 2025, which surveyed more than 1,000 employers representing 14 million workers across 55 economies, by 2030, AI will create 170 million new roles while displacing 92 million jobs—a net gain of 78 million positions. But that net positive obscures considerable churn: 22% of all jobs will be disrupted in the next five years. Employers expect 39% of skills required in the job market to change by 2030, and 41% of employers say they plan to reduce their workforce where AI can automate tasks. Meanwhile, Goldman Sachs estimates that if current AI applications were expanded across the economy, 2.5% of American employment would face displacement.
Every major technological shift has sparked predictions of mass unemployment. The car would eliminate the horse industry and everyone connected to it. The computer would render clerks obsolete. The internet would hollow out retail. And each time, the economy created more jobs than it destroyed. Approximately 60 percent of American workers today hold jobs that did not exist in 1940, which means more than 85 percent of employment growth over the past eight decades came from technology-driven job creation. The question is whether this time is different.
The vast majority of job titles will persist. That is the view of Robert Seamans, a Professor of Management and Organizations at NYU Stern School of Business and Director of the Center for the Future of Management. “Only very few jobs will disappear,” he said in an interview. But that reassurance came with a warning: “Most jobs will change, however, and workers that don’t lean into those changes will find themselves with an antiquated set of skills that don’t have value on the job market.”
What makes a job vulnerable? According to Seamans, the positions most at risk are those with a limited number of discrete tasks and minimal interaction with other roles. Consider a telemarketer. The job involves a circumscribed set of activities: call, respond to queries and log results. There is no complex group coordination, no ambiguous human situations to navigate. “The telemarketer doesn’t interact with others in a complex group task setting,” he said. “It can be systematized and, therefore, replicated.”
A similar framework emerged from the conversation with Puri. The jobs at risk, he said, are those that involve “mechanical motion” in language and knowledge domains. “Somebody calls, you go look up a manual or a record book, get the answer, say the answer,” he explained. “That kind of job can be eliminated.” He was careful about his word choice. “I’m deliberately not using the word ‘menial,’ actually. Mechanical in nature.”
Software development offers a preview of what is coming elsewhere. In 2011, Marc Andreessen declared that software was eating the world. A decade later, Puri said, the meal was finished. “Software has eaten the world. And it was clear in 2020 that AI is eating software.”
The job of a software developer, Puri said, is evolving at an unprecedented pace. “I’m not in the camp that those jobs will be eliminated. I’m more in the camp of so much more is to be done, so their jobs will evolve.”
Not every job that looks automatable actually is, Puri said. AI now reads millions of medical scans, but a radiologist does far more than interpret images. The work involves connecting patient history, conducting examinations and synthesizing complex information. “It’s just much more complicated than just reading an imaging report,” he said. The same principle applies across professions: the more a job involves weaving together disparate threads of judgment and context, the harder it is to automate.
For workers in vulnerable roles, the takeaway is clear: “If you are in that job, you’d better start figuring out what value add you are bringing, because that kind of job can be automated,” Puri said. “But these tools give you the means to reimagine your job, too, rather than making you so nervous,” he added, noting that workers can experiment with the technology on their phones and propose new roles for themselves.
“Go to your manager and say, ‘This is possible,’” he said. “That gives you a say in your job and what you do in the future, rather than having that future be controlled by someone else.”
The question of which jobs survive may have less to do with the work itself than where it sits in the hierarchy. Gillian Lester, former dean of Columbia Law School, sees a potential “hollowing out of the middle” in the workforce. In her telling, the jobs most likely to persist are at the extremes: highly specialized professionals whose judgment cannot be replicated, and low-wage service workers performing physical tasks that can’t be easily automated. “The professionals, the surgeon and the custodial worker are going to be okay,” she told IBM Think in an interview. But in between lies a vast white-collar middle that may prove vulnerable. “People working in the insurance industry, where they’re sort of checking data sets to figure out whether somebody is eligible for recovery,” Lester said, “that’s just not going to require humans anymore going forward.”
What about middle management and executive functions? Information technology has already reshaped these roles, Puri said, putting “more powerful tools at their disposal” and enabling them to manage larger teams with “better intelligence, better decision-making capabilities at your fingertips.”
But he pushed back against the notion that management itself will be automated away. “I don’t think we are going to get into a scenario where we, en masse, are like, ‘We don’t need middle management,’” he said. “That would be almost equal to saying we don’t need decision making—we’re just going to offload everything to these machines that can make decisions for us as well.”
That scenario is either “nirvana or a dystopian view, depending on whose view you take,” he said. But in either case, it is distant. “Until we get to trust these machines with literally our lives and everything else, there is a long transition period between when the jobs are being reshaped rather than being eliminated.”
What happens next may be less like a wave of layoffs than a sorting. Nearly every knowledge-based job will soon split in two, according to Gabe Goodhart, Chief Architect of AI Open Innovation at IBM. “The AI-enabled version of that job will be in extremely high demand,” he said in an interview with IBM Think. Those who master the tools will command a premium; those who do not will find themselves competing in a shrinking market for traditional work, he noted.
The gap between these two camps, Goodhart warned, will be filled by workers who “do not have the skillset to use AI effectively, but try to use it anyway.” These are workers who may produce more but understand less, who trust the output without grasping the process, who cannot tell when the machine has erred.
But the same tools that threaten some workers are empowering others. Consider the emergence of “vibe coding,” in which AI agents translate design concepts directly into working software. Traditionally, designers lacked the programming skills to build their own prototypes. Now they can. “This is a cross-over skill where the key inputs—design—can now translate directly to the target output of working software,” Goodhart explained. The barriers between disciplines are dissolving.
This democratization may be one of the most profound changes underway. “The current generative AI technology has made coding skills and data science skills accessible to non-coders in an almost English language way,” Puri said. He pointed to predictions of single-person billion-dollar valuation companies. “What used to take probably 20 people before, that can be done by one or two people now. And by the way, those people do not have, quote unquote, ‘software skills.’”
The result is counterintuitive: non-technical workers may actually be facing expanded opportunities. “They are the ones who are brought in now,” Puri said. “They were kind of left out before, actually. Now they are brought in.”
If technology is advancing at a pace that even its creators find surprising, the transformation of the economy is proceeding more cautiously. The year 2025 marked “the start of that massive on-ramp,” as Puri put it, a moment when enterprises begin integrating AI into real operations, restructuring workflows and cautiously granting systems more autonomy.
“Enterprises are still worried about giving too much agency to agents,” he said. “You don’t have control over them, they can wreak havoc.”
A related challenge is knowing whether AI systems are doing what they are supposed to do. “Benchmarking will, or at least should, become a first-class competency across businesses,” David Cox, Vice President for AI Foundations at IBM Research, said in an interview with IBM Think. “How do I probe a system and construct tests and benchmarks to know if the system is doing what it is intended to do? This is a glaring gap that I see today.”
The tools for evaluating AI, Cox argues, will require entirely new disciplines, different from traditional software testing because these systems are “much less deterministic than the software we’re used to.”
As Puri sees it, it may take as long as a decade for AI to fully “percolate through the fabric of our society—with the next three to five years being a period of hyper acceleration,” followed by a longer settling.
What will the jobs of the future look like? The vision that emerged from these conversations is one of humans managing “armies of agents” and overseeing AI systems, ensuring they behave as intended and correcting them when they veer off course. “It’s like a job of a manager,” Puri said. “Your value-add moves to the next layer.”
Subject matter experts will work directly with technologists to create AI agents, drawing on their domain knowledge to define system behavior and guardrails. They will also be responsible for maintaining those agents, applying human judgment to evaluate consequences, spot risks and navigate edge cases. “AI technologists … normally know AI cannot create AI agents alone,” Puri said. “There has to be an intersection of AI and subject matter expertise.”
Goodhart put the point bluntly: “Human judgment is the key skill needed to leverage AI in the workplace safely.” That means that workers who understand what AI is and what it is not, who can deploy it effectively while recognizing its limitations, will be essential. “People without this knowledge who try to use AI will fall into that gulf of relying on AI without understanding the risks and end up making mistakes that could have big consequences.”
Three capabilities will matter most in the coming years, according to Seamans: “The ability to upskill oneself, without having to be asked to do it; a hunger for knowledge; and a desire to help others, especially those on your team.” None of these is a technical competency. Instead, they all require a particular disposition toward work and colleagues, Seamans said.
The emphasis on soft skills came up again and again in the interviews. “Problem-solving skills, communication skills—get them,” Puri advised. “They are probably more important than ever, because these machines can do a lot of potentially mechanical tasks.” What remains distinctly human, he argued, is emotional intelligence. “The emotional quotient is still very much a human trait. Empathy is a human trait.”
Drawing on her experience in legal education, Lester stressed the importance of critical thinking and creativity, the capacity to approach problems from unexpected angles, to synthesize information in novel ways and to exercise intuition alongside analysis.
“A huge part of what we’re teaching is what AI can give you,” she said, referring to factual knowledge. “AI can tell you how many planets are orbiting the sun. But a huge part of what we’re doing is teaching people how to solve problems. We’re teaching people how to think creatively, maybe by following different pathways than usual—critical thinking skills. We’re also teaching where intuition and gut might be as helpful as a formula.”
These skills have always underpinned education, but they are becoming increasingly important as the informational functions of learning can be offloaded to AI. The challenge now is to make the non-transferable parts—judgment, creativity and interpersonal fluency—the centerpiece of professional development. “Figuring out how to teach people those skills and continuing to teach those skills is, I’d say, one of the number one challenges for higher education,” Lester said.
Something changes when human oversight is replaced by algorithmic monitoring. AI surveillance in the workplace can measure keystrokes, analyze facial expressions, track tone of voice, and follow physical movements with a granularity no human manager could sustain. “We’ve always had bosses,” Lester said. “Sometimes micromanaging, sometimes brutal foremen on a shop floor, sometimes a white-collar supervisor.” What’s new, she said, is that AI can “amplify and systematize those harms,” turning ordinary supervision into something more abstract, more pervasive and harder to push back against.
More troublingly, AI supervision can remove the human element from workplace relationships—relationships that can be oppressive, yes, but also nurturing. “There’s an anonymized quality to it,” Lester said. “For better or for worse, we’ve had to work with those people who are our supervisors or our mentors, and some of those relationships are very positive relationships.”
While much of the conversation around AI and work centers on efficiency and control, Lester was more interested in its effect on the human atmosphere of the workplace. “I think AI could end up having profound effects on interpersonal relationships within the workplace,” she said, “including those that involve growth, professional growth, learning, advancement through interpersonal learning, advising, mentoring, as well as bossing. And I think that’ll be really interesting. We’re going to have to figure out how to balance the value and the usefulness of these tools against interpersonal socialization.”
One consequence of the shift underway is that learning can no longer be treated as a one-time event. “Learning and education are not the process when you get your degree, and your learning or education is done,” Puri said. “Things are changing all the time.”
He advocates for universal “AI savvy,” a basic literacy that allows workers in any field to engage with the technology reshaping their industries. The tools themselves make this easier than ever, he argued, because employees can communicate with AI in natural language.
Nonetheless, a generation of students now finds itself in “a holding pattern,” as Lester put it. Young people who followed the rules and cultivated the expected skills are finding that AI systems can reproduce those skills in seconds—even in journalism, Lester acknowledged. “People can just get their news at the click of a button, and AI gives you a summary of what just happened. People want to act. They’re in the moment of their life. They want to make decisions, and they want to acquire the skills they need to acquire, and sometimes they don’t even know what the right answer is.”
A generational divide has emerged in attitudes toward AI, though it is not the one you might expect. Younger people simply use it all the time. “I was born in 1964, and I don’t reach for it automatically,” Lester said, “whereas my kids, who are college age—they just see it as a workaday tool that they use all the time.”
Lester’s advice to her own son, who works at IBM, is to combine technological fluency with interpersonal investment. “You can’t put your head in the sand,” she said. “You need to understand the basic toolkit.” But she also counsels him to observe mentors, to learn how teams are built, to develop judgment that extends beyond “the computational wizardry of the AI.” She tells her son to focus on “why they’re good at what they do, why they’re good at building teams and thinking tactically or strategically about building projects.”
Puri’s advice to a hypothetical 20-year-old entering the workforce is similar. “Never fear the technology; embrace it,” he said. “I know that these are unprecedented times. It may appear very uncomfortable. But having been in the field for very long, I can assure you, people who embrace the technology are the ones who will be defining the future.”
There is a question lurking beneath all of this about whether some forms of work should remain human-led by design. When the question was posed to Lester, she paused a long time before answering.
“Children need to be cared for by humans,” she finally said. “They need the physical interaction. They need eye contact, they need touch, they need compassionate discipline that understands the human.” The same principle might apply to teaching, to certain forms of healthcare, to fields that involve negotiation between parties to prevent conflict. These are areas where efficiency gains may not be worth the cost in human connection—where, as Lester put it, “we need to be intentional as we go forward, in carving out and protecting some areas of human endeavor.”
The specter of a universal basic income substituting for meaningful work troubles Puri. “People work because it gives them a purpose,” he said. “It’s not just about money. I get it—money is critically important, but purpose in life is equally important.” Lester described a scenario where displaced workers receive “USD 20,000 a year, and then they can start their own business.”
But she remained skeptical that guaranteed income or subsidies alone could replace the meaning that comes from productive activity. “I’m skeptical that that’s enough to give people the meaning, the purpose, the interpersonal interactions that can’t really be priced out,” she said.
And yet, optimism persists—at least among those building these systems. The new technology will create more opportunities than it destroys, Puri believes. It will solve problems “we never imagined could be solved.” In the eyes of experts like Puri, a future with AI is one that is hopeful. “We potentially will find cures for diseases that we never thought we would find,” he said. “We will discover sustainable materials. And this will enable really unprecedented innovation in society.”
But he is also clear-eyed about what cannot be automated. “A lot of work that we do with our hands, it cannot be reimagined,” he said. “There’s a leak in my house. I need someone who knows how to fix that leak. My machine is broken. I need someone who’s going to fix that, as well. I need someone to reimagine how these things are designed, how they are built, how they are maintained, how they are fixed.”
The question is whether workers will fear the future or choose to embrace it. The technology is already here. It is remaking software development, customer service, legal research and countless other fields, and it will likely reshape domains that feel untouchable today. But it remains a tool—an astonishingly powerful one, but a tool nonetheless.
“Whether you happen to be an English writer, whether you happen to be in the entertainment business, whether you happen to be a technologist, whether you happen to be a medical professional,” Puri said, “embrace the technology, because it is amazingly powerful.”