AI has captured the media zeitgeist in recent years, especially since the release of OpenAI’s ChatGPT in 2022. Countless stories probe the technology’s potential future, exploring the advancements of AI and the changes it might bring about. But what has been the real-world impact so far?
The potential of AI algorithms to effect positive change must be considered alongside the risks. Exploring AI’s impacts, both advantageous and disadvantageous, is essential to guiding the responsible use of AI in the years ahead.
As the use of AI models spreads, engineers, scientists, policymakers and business leaders explore its potential in their respective fields. AI technology providers issue grand claims about the future of AI, citing everything up to and including “fixing the climate, establishing a space colony and the discovery of all of physics.”1
But what of the measurable, quantifiable and provable benefits of AI? Sweeping promises aside, how have AI-related technological advancements verifiably benefited society today? To date, the demonstrable positive impacts of AI include:
Increased business performance
Weather forecasting and disaster prediction
More efficient software development
New chip technology
Per- and polyfluoroalkyl substances (PFAS) harm mitigation
IT outage protection
Drug discovery
Nuclear fusion research
From AI chatbots to self-service interfaces and other intelligent systems, business leaders credit AI adoption with gains in revenues and profits. AI-powered business intelligence tools can reduce human error by enabling leaders to perform data-driven decision-making. Meanwhile, AI apps and workflow enhancements streamline operations for increased efficiency. Overall, generative AI adoption might lead to global GDP growth as high as 7% over 10 years.2
IBM’s AI in Action 2024 report found that 67% of surveyed leaders reported revenue increases of 25% or more due to including AI in business operations. The report found similar sentiment for profit boosts: 66% of polled leaders credited AI systems and tools for profit margin increases of at least 25%.
How are these leaders gaining business benefits from AI? Communication and planning are essential: 85% of leaders claimed to follow an AI roadmap, and 72% achieved alignment between the C-suite and IT leadership. AI programs can also help business leaders make more informed decisions. Across the four industries studied in the report—finance, telecoms, retail and manufacturing—the leading business use cases for AI are:
Finance: Virtual assistants for external applications and AI-enabled search engines
Telecoms: IT operations and automation, virtual assistants for internal applications
Retail: Improved customer experience
Manufacturing: IT operations and automation
A closer look at the manufacturing sector reveals further details for AI in operations management and other business areas. The median respondent in an IBM survey reported a 30% improvement in forecast accuracy, a 25% reduction in product defects, a 20% reduction in excess inventory and similar gains in other metrics.
In 2023, IBM and NASA collaborated on a foundation model that helps scientists analyze data on the effects of previous floods and wildfires. With NASA’s training data, the publicly available model was also used to assess reforestation efforts in Kenya and heat islands—concentrated urban areas of higher temperatures—in the United Arab Emirates.
Based on this model, IBM and NASA released a new open source model in September 2024 designed to make climate applications faster and more accessible. Use cases include flood warnings, hurricane predictions and gravity wave estimations. Getting ahead of these natural events can potentially mitigate the resulting loss of life and property damage.
The open source AI tool is customizable for specialized use, such as in a collaboration with Environment and Climate Change Canada, to make accurate rainfall “nowcasts” several hours out. And it’s light enough to run on a single desktop computer.
Announced in July 2024, Google’s NeuralGCM model3 combines two approaches to deep learning and weather predictions. It first applies traditional modeling to assess atmospheric conditions, then brings in AI to keep predictions on track.
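The hybrid pattern can be illustrated with a toy sketch: a hand-written "physics" step advances the state toward equilibrium, then a stand-in for a learned model nudges each value. Both the dynamics and the correction below are invented for illustration and bear no relation to NeuralGCM's actual equations.

```python
import math

def physics_step(state, dt=1.0):
    """Toy 'traditional' dynamics: relax each value toward the field mean."""
    mean = sum(state) / len(state)
    return [x + dt * (-0.1 * (x - mean)) for x in state]

def learned_correction(x, weight):
    """Stand-in for a neural network that nudges the forecast back on track."""
    return weight * (math.tanh(x) - x) * 0.01

def hybrid_forecast(state, weights, steps=24):
    """Alternate a physics step with an ML correction, one pair per step."""
    for _ in range(steps):
        state = physics_step(state)
        state = [x + learned_correction(x, w) for x, w in zip(state, weights)]
    return state

# Four grid cells of a toy temperature field, forecast 24 steps ahead
initial = [10.0, 12.0, 8.0, 11.0]
forecast = hybrid_forecast(initial, weights=[1.0] * 4)
```

In NeuralGCM the correction is a trained neural network and the physics step is a differentiable dynamical core; the sketch preserves only the pattern of interleaving the two.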
A commonly cited benefit of AI is that it automates repetitive tasks for workflow optimization, freeing up workers to focus on more demanding priorities. AI-powered agents take things one step further by autonomously determining and pursuing a course of action to achieve high-level tasks.
In the field of computer science, software engineering (SWE) agents can autonomously resolve GitHub tickets to streamline workflows. For example, large language model (LLM) agents can locate bugs on behalf of developers and propose fixes. Developers can review and approve the proposal, giving the agent the go-ahead to update the code.
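A minimal sketch of that propose-and-verify loop, with placeholder functions standing in for the LLM call and the project's test suite (these are invented names, not a real agent framework's API):

```python
def propose_fix(source: str) -> str:
    """Placeholder for an LLM call that returns patched source code."""
    return source.replace("retrun", "return")

def run_tests(source: str) -> bool:
    """Placeholder for the project's test suite; here, just a syntax check."""
    try:
        compile(source, "<patch>", "exec")
        return True
    except SyntaxError:
        return False

def resolve_ticket(source: str, max_attempts: int = 3):
    """Agent loop: propose a patch, verify it and retry until tests pass."""
    for attempt in range(1, max_attempts + 1):
        patched = propose_fix(source)
        if run_tests(patched):
            return patched, attempt  # hand off to a human for review
        source = patched
    return None, max_attempts

buggy = "def add(a, b):\n    retrun a + b\n"
fixed, attempts = resolve_ticket(buggy)
```

The human review described above sits outside the loop: the agent returns a candidate patch, and a developer approves it before it lands.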
Generative AI models are compute-hungry and run on high-powered graphics processing units (GPUs) made by manufacturers such as Nvidia and AMD. GPUs have historically been the most powerful chips capable of handling the advanced computations required by machine learning algorithms.
GPU shortages have persisted since the COVID-19 pandemic disrupted global supply chains, but the need for better chip performance has incentivized the development of more efficient alternatives. IBM's artificial intelligence unit (AIU) NorthPole inference chip, unveiled in 2023, demonstrated 46.9 times faster inference while consuming 72.7 times less energy than an Nvidia H100 GPU. In the US, the Bipartisan Senate AI Working Group has pledged to support the research and development of new AI chips.4
For sustainable applications of AI, new chip technology is paramount. Developments such as the AIU NorthPole point to a future in which LLMs can continue to deliver benefits with less energy use and, by extension, less climate impact.
Per- and polyfluoroalkyl substances (PFAS) are a group of chemicals that are used in nonstick cookware, cosmetics, food packaging and mobile phone screens. But PFAS take thousands of years to break down in soil and also accumulate in the blood and liver of humans and animals. PFAS cannot be metabolized and have been linked to cancers and other diseases.
As part of PFACTS, a USD 5 million PFAS replacement program initiated by the US National Science Foundation, researchers are using generative AI to discover potential PFAS substitutes. AI applications generate complex molecular structures projected to deliver similar functionality to PFAS with lower toxicity. The model has produced at least 6,000 potential substitutes and is being expanded to cover additional considerations.
When an IT outage occurs, response teams must diagnose the problem, create a remedy and update the buggy software as soon as possible. Problem-solving AI solutions can make these processes faster.
AI-enabled IT management platforms monitor client environments and detect potential threats. When such an event is found, the AI system summarizes it, identifies potential causes and guides response teams through a solution. Real-time AI assistance streamlines decision-making and helps teams respond faster to mitigate the repercussions of IT outages.
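The detection step can be as simple as flagging a metric that deviates sharply from its trailing baseline. Here is a minimal sketch using an invented response-time series and a basic z-score rule, not any specific vendor's method:

```python
from statistics import mean, stdev

def detect_anomalies(values, window=5, z=3.0):
    """Flag points more than z standard deviations from the mean of the
    preceding window of observations."""
    alerts = []
    for i in range(window, len(values)):
        baseline = values[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma and abs(values[i] - mu) > z * sigma:
            alerts.append((i, values[i]))
    return alerts

# Simulated service response times in ms, with an outage spike at index 8
series = [100, 102, 99, 101, 100, 103, 98, 101, 450, 102]
alerts = detect_anomalies(series)
```

A production AIOps platform layers far more on top of this (seasonality handling, multivariate correlation, LLM-generated incident summaries), but the trigger is the same idea: a statistically surprising observation opens an incident.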
In response to the COVID-19 pandemic, pharmaceutical and healthcare companies raced to research, test and deploy life-saving vaccines. Drug research is complicated and requires an intimate understanding of proteins and how they fold in three-dimensional space.
In late 2020, almost a year after COVID-19 spread worldwide, a research team at Google DeepMind announced an AI protein structure prediction tool named AlphaFold2.5 The tool could predict, with over 90% accuracy, the three-dimensional shape of a protein from its one-dimensional amino acid sequence.
Simultaneously, an IBM research team created a foundation model and used it to generate four COVID-19 antivirals. Because viruses mutate over time, making known vaccines less effective, breakthroughs in AI-assisted antiviral discovery can offer new solutions to counter these threats.
The benefits of neural networks in healthcare are not limited to the search for new drugs. The United Nations Development Program (UNDP) advocates for the application of AI in assisting people living with disabilities.6 In 2024, researchers at the National Institutes of Health achieved success in applying AI to medical diagnoses, though the AI struggled to explain how it reached its conclusions.7
Natural gas, oil and electricity prices skyrocketed as a result of pandemic-related shortages and increased global demand. With AI-driven hyperscale data centers devouring power by the megawatt,8 the need for new technologies in the energy sector is clear.
In nuclear fusion, superheated plasma must be contained in a magnetized vessel, one type of which is known as a tokamak. If the tokamak’s magnetic field falters, the plasma can escape containment in a “tearing mode instability.” In 2024, a team of researchers at Princeton University developed an AI model that can predict and avoid tearing mode instabilities in tokamaks.9
The data science team successfully reduced tearing incidents with an AI tokamak controller trained by reinforcement learning.10 Soon, nuclear fusion as an energy source might not be limited to works of science fiction. AI-enabled tokamaks equipped with real-time adaptability can pave a promising path to a future powered by sustainable nuclear fusion.
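The control pattern can be sketched with tabular Q-learning on a toy problem: keep a discretized "instability index" inside a safe band by choosing an actuator setting each step. The dynamics and rewards below are invented and have nothing to do with real plasma physics or the Princeton team's model; they only show the shape of the reinforcement learning loop.

```python
import random

random.seed(0)

ACTIONS = [-1, 0, 1]   # lower, hold or raise the actuator
STATES = range(5)      # discretized instability index; 0 and 4 are unsafe

def step(state, action):
    """Toy environment: the action plus random drift moves the index."""
    nxt = max(0, min(4, state + action + random.choice([-1, 0, 1])))
    reward = -10 if nxt in (0, 4) else 1   # penalize leaving the safe band
    return nxt, reward

q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

for episode in range(500):
    s = 2  # start in the middle of the safe band
    for _ in range(20):
        # Epsilon-greedy action selection
        if random.random() < 0.1:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: q[(s, x)])
        nxt, r = step(s, a)
        best_next = max(q[(nxt, x)] for x in ACTIONS)
        q[(s, a)] += 0.1 * (r + 0.9 * best_next - q[(s, a)])
        s = nxt

# Greedy policy: the preferred actuator setting in each state
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
```

The learned policy should push the index back toward the middle of the band, which is the same qualitative behavior the tokamak controller needs: act before the instability develops rather than after.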
The benefits of AI do not exist in a vacuum. While revenue increases can stem from productivity boosts, profit growth might come with AI-related layoffs. New chips and energy sources show promise, but the current AI landscape rests heavily on power-hungry GPUs, fossil fuels and water. The generative models that yield new drugs can violate copyrights and create deceptive deepfakes.
While AI shows promise in many fields, it also raises ethical considerations—especially without appropriate guardrails in place. The potential AI risks include:
Job displacement
Energy and resource overconsumption
Privacy concerns
Copyright infringement
Misinformation, scams and the loss of public trust
Fears and reports of job loss have long accompanied the rise of generative AI.11 And these fears are not entirely unfounded. Consider this example from China, where artists reported losing work only to be hired back for less money as AI art retouchers.12
A December 2023 poll of 750 leaders saw 44% affirming AI-related layoffs in 2024.13 And a poll of 2,000 C-suite executives from 2024 found that 41% of them expected to reduce their workforces in the next five years as a result of AI implementation. However, many also believed new jobs would arise to support AI initiatives.14
A 2023 McKinsey report estimates that nearly 30% of hours worked by humans in the US might be automated by 2030, 8 percentage points higher than estimates that exclude generative AI. Not all workers are equally vulnerable: customer service, office support and food service roles will likely face AI-related job loss, while STEM, creative and other knowledge workers are more likely to see their workflows adjusted than to lose their jobs.15
To minimize layoffs, address workplace inequalities and avoid trying to replace human intelligence with AI, companies must adopt clear AI strategies. Implementing generative AI requires leaders to enable and encourage their teams to upskill. Effective upskilling takes investment and commitment, but the rewards are well worth the effort. An intentional upskilling and reskilling strategy that operates at every career level helps employers retain talent and the institutional knowledge embedded in it.
Generative AI requires enormous amounts of water and electricity—water to cool the hyperscale data centers that house the servers hosting the AI, and electricity to power them. Sometimes, these data centers are built in areas where access to water and electricity is already scarce or becomes so after local communities are made to compete with the new developments.16
Northern Virginia is one of the most popular locations in the US for data centers. In 2023, residents protested the pending construction of what would be one of the world’s largest data centers at the time, citing electricity demand among other concerns. However, county supervisors voted to approve construction of the facility.17
Some developments indicate that AI might not continue to draw as many resources away from local communities. An off-the-grid hyperscale data center slated for construction near Houston, Texas, will run on hydrogen power.18 Microsoft has moved to recommission one of the nuclear reactors at Three Mile Island, pledging to purchase all the electricity it produces under a 20-year agreement.19
If high-performance, energy-efficient chips such as the AIU NorthPole can run on sustainable energy sources, perhaps AI capabilities can continue to evolve without worsening energy and resource scarcity.
On the bright side, we can offset some of the risks of AI through effective application of the technology. AI can help organizations become more climate-resilient and reduce their environmental impact. AI is crucial to the future of sustainable business practices. According to IBM’s latest State of Sustainability Readiness report, nine out of 10 business leaders surveyed agreed that AI will help achieve their sustainability goals.
“Uphold your privacy and confidentiality commitments,” warned the US Federal Trade Commission (FTC) to AI companies in a January 2024 statement.20 The FTC expressed concern over the conflict of interest between AI providers’ obligations to protect personally identifiable information (PII) and other user data, and the ever-growing need to expand model training datasets.
The US lacks federal regulation for AI-related data protection, both at work and in everyday life. Only at the state level do some Americans enjoy relatively broad data privacy protection, such as under the California Consumer Privacy Act (CCPA).21 Shortly after commencing his second term, President Trump revoked a Biden-era executive order aimed at protecting personal data and addressing other AI ethical concerns. Trump issued his own executive order on AI that promised to deregulate the industry in the name of promoting innovation and “enhancing America’s leadership in AI.”22
In contrast, policymakers in the European Union have put such protections into law, passing the EU Artificial Intelligence Act in 2024 to regulate AI development, implementation and use in the region.23 For example, the act bans the scraping of facial images from the internet to protect against facial recognition threats. The sweeping AI Act takes full effect in 2026. In the meantime, the onus is on AI providers to cultivate responsible AI ethics practices and safeguards, and to advocate for others to do the same.
Sometimes, the data used to train LLMs includes copyrighted materials such as news articles and works of art. Some companies have openly acknowledged their use of copyrighted materials during training, citing this practice as falling under fair use.
Image generation has also endured its share of intellectual property controversies. Working artists have vocally opposed commercial image generation, notably in a sitewide protest across portfolio website ArtStation in 2022.24 The directors of the 2024 film Heretic included a disclaimer in the credits that assured viewers that generative AI played no role in the making of the movie.25
So far, government offices in the US appear to side with copyright holders. The US Copyright Office decided in 2023 that AI-generated images are ineligible for copyright protection.26 In a 2024 ruling, the US District Court for the Northern District of California held that both AI providers and users can be liable for copyright infringement stemming from image generation.27
Advocates have hailed generative AI as a powerful tool for “democratizing creativity.”28 But the same tools are just as easily applied to deceptive ends.
AI-created misinformation has proliferated at lightning speed since 2023, spanning images, video and screenshots with fraudulent text.29 AI-powered bots flood social media networks with misleading posts and comments.30 Bad actors can use AI to manipulate audio, video and images to create realistic deepfakes intended to deceive. Some are harmless fun, such as the viral image of Pope Francis in a fashionable white puffer jacket, but others have more insidious effects.
During the early weeks of Russia’s 2022 invasion of Ukraine, a video circulated online that appeared to show Ukrainian President Volodymyr Zelensky urging citizens to cease fighting against Russian soldiers.31 The following year, a deepfaked pro-China video campaign spread across Facebook and Twitter featuring AI-generated newscasters.32
In the lead-up to the 2024 American presidential election, some voters received automated calls with a deepfaked recording of President Biden urging them not to vote in the upcoming Democratic primary.33 President Trump shared several deepfaked images that appeared to depict an endorsement from music superstar Taylor Swift.34
Cybercriminals can use AI deepfakes to perpetrate voice scams and fool victims into transferring money to them.35 Cybersecurity advocates promote fraud detection techniques such as establishing verification protocols with family members. Learning to detect AI scams can help shore up vulnerabilities in at-risk populations.
The more convincing generative AI output becomes, the thinner the veil between actual and manufactured reality grows. While some researchers are exploring the potential benefits of deepfakes in counterterrorism campaigns,36 media consumers must learn to evaluate the images and videos they see through a critical lens. In the meantime, tech companies and governments must collaborate to mitigate harms and guide responsible and ethical AI use.
1. “The Intelligence Age,” Sam Altman, 23 September 2024.
2. “Generative AI could raise global GDP by 7%,” Goldman Sachs, 5 April 2023.
3. “Neural general circulation models for weather and climate,” Kochkov et al, Nature, 22 July 2024.
4. “Driving U.S. Innovation in Artificial Intelligence,” Schumer et al, The Bipartisan Senate AI Working Group, May 2024.
5. “How AI Revolutionized Protein Science, but Didn’t End It,” Yasemin Saplakoglu, Quanta Magazine, 26 June 2024.
6. “The AI Revolution: Is it a Game Changer for Disability Inclusion?,” Hudoykul Hafizov, UNDP Uzbekistan, 18 July 2024.
7. “NIH findings shed light on risks and benefits of integrating AI into medical decision-making,” Jin et al, National Institutes of Health, 23 July 2024.
8. “The Billion-Dollar AI Gamble: Data Centers As The New High-Stakes Game,” Emil Sayegh, Forbes, 30 September 2024.
9. “Engineers use AI to wrangle fusion power for the grid,” Colton Poore, Princeton Engineering, 21 February 2024.
10. “Avoiding fusion plasma tearing instability with deep reinforcement learning,” Seo et al, Nature, 21 February 2024.
11. “AI in Hiring and Evaluating Workers: What Americans Think,” Rainie et al, Pew Research Center, 20 April 2023.
12. “AI is already taking video game illustrators’ jobs in China,” Viola Zhou, Rest of World, 11 April 2023.
13. “Recent data shows AI job losses are rising, but the numbers don't tell the full story,” Rachel Curry, CNBC, 16 December 2023.
14. “AI will shrink workforces within five years, say company execs,” Anna Cooban, CNN, 5 April 2024.
15. “Generative AI and the future of work in America,” Ellingrud et al, McKinsey Global Institute, 26 July 2023.
16. “Amid explosive demand, America is running out of power,” Evan Halper, The Washington Post, 7 March 2024.
17. “Virginia county approves data center project after 27-hour public hearing,” Matthew Barakat, AP, 13 December 2023.
18. “ECL says it will build a 1GW hydrogen-powered AI data center in Texas, with Lambda as its first tenant,” Sebastian Moss, Data Center Dynamics, 25 September 2024.
19. “Why Microsoft made a deal to help restart Three Mile Island,” Casey Crownhart, MIT Technology Review, 26 September 2024.
20. “AI Companies: Uphold Your Privacy and Confidentiality Commitments,” Staff in the Office of Technology, Federal Trade Commission, 9 January 2024.
21. “The privacy paradox with AI,” Gai Sher and Ariela Benchlouch, Reuters, 31 October 2023.
22. “Fact sheet: President Donald J. Trump takes action to enhance America's AI leadership,” The White House, 23 January 2025.
23. “EU Artificial Intelligence Act,” 2 February 2025.
24. “Artists stage mass protest against AI-generated artwork on ArtStation,” Benj Edwards, Ars Technica, 15 December 2022.
25. “‘Heretic’ Directors Used End Credits to Warn Hollywood About AI: ‘Let’s Bury It Underground With Nuclear Warheads, Cause It Might Kill Us All’,” William Earl, Variety, 4 November 2024.
26. “Artificial Intelligence and Copyright,” The Copyright Office, Library of Congress, Federal Register, 30 August 2023.
27. “Andersen v. Stability AI Ltd., 2024 U.S.P.Q.2d 1470 (N.D. Cal. 2024), Court Opinion,” William H. Orrick, Bloomberg Law, 12 August 2024.
28. “Democratized Creativity: The Evolution And Impact Of AI,” Sachin Dev Duggal, Forbes, 9 August 2024.
29. “AI image misinformation has surged, Google researchers find,” Angela Yang, NBC News, 29 May 2024.
30. “Social media platforms aren’t doing enough to stop harmful AI bots, research finds,” Brandi Wampler, Notre Dame News, 14 October 2024.
31. “Deepfakes and fake news pose a growing threat to democracy, experts warn,” Jackson Cote, Northeastern Global News, 1 April 2022.
32. “The People Onscreen Are Fake. The Disinformation Is Real.,” Adam Satariano and Paul Mozur, The New York Times, 7 February 2023.
33. “AI fakes raise election risks as lawmakers and tech companies scramble to catch up,” Shannon Bond, NPR, 8 February 2024.
34. “How did Donald Trump end up posting Taylor Swift deepfakes?,” Nick Robins-Early, The Guardian, 26 August 2024.
35. “AI voice scams are on the rise. Here's how to protect yourself.,” Megan Cerullo, CBS News, 17 December 2024.
36. “The Rise of Artificial Intelligence and Deepfakes,” Buffet Institute for Global Affairs, Northwestern University, July 2023.