Seven Bets

The trend: Tech-led disruptions are accelerating, driven by generative AI

Leaders must understand the now, the new, and the next of technology disruption, and embrace the opportunities while protecting against the risks.

Generative AI, made popular by the consumer application ChatGPT, has democratized AI and accelerated the largest commercial opportunity in today’s economy, estimated at $15.7 trillion in GDP impact by 2030. While this next-generation AI represents a significant inflection point, it is only one in a string of technology-led disruptions facing business and society:

  • $10.5 trillion in annual damages from cyber attacks by 2025
  • $700 billion in value from quantum computing by the next decade

Our insights
  • For years we have known that AI would transform business in most industries, but adoption—while accelerating—was slow and expensive. Foundation models have changed that: pre-trained models can be used almost “out of the box,” automating and improving tasks with minimal additional training. Generative AI further expands the scope of what can be automated, especially in administrative, marketing, and service fields. And user-friendly interfaces, such as chat or voice, have lowered or eliminated the friction of adoption (a brief sketch of this out-of-the-box usage follows this list).

    It’s clear that AI will transform how we work. CEOs and boards of directors must understand how to seize the opportunities and, just as importantly, mitigate the heightened risks AI presents to business. In the last few months, AI has been used to create voice-cloning applications that break banking contact center security, deepfakes of humans used for nefarious purposes, and generative artwork based on copyrighted works by human artists, which has resulted in major intellectual property infringement lawsuits.

    This is one reason that organizations’ spend on AI ethics doubled between 2018 and 2021, rising from 3% to 6% of overall AI spend. And they expect to invest 40% more over the next three years, as AI ethics laws are passed and regulatory oversight increases.
  • It takes 277 days and between $5 million and $10 million to contain a data breach today. Executives who implement a zero trust security strategy, which requires all users to be continuously validated, can reduce that expense by $1.5 million. Those who invested in extended detection and response (XDR) technologies reduced breach life cycles by 29 days.

    Plus, recent research from the IBM Institute for Business Value found that, over a five-year span, organizations with more mature security capabilities have shown a 43% higher rate of revenue growth than their less mature peers. Additionally, two out of three executives now view cybersecurity as a revenue enabler, rather than a cost center.
  • In 2022, the US White House issued a national security memorandum warning that existing systems could be vulnerable to future quantum computers. Yet, only 18% of executives are actively investing in quantum-safe capabilities.

    By 2030, quantum computers may be capable of cracking some commonly used approaches to data encryption. Organizations will need to roll out quantum-safe encryption as soon as it is available to reduce the future fallout of quantum hacks.
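
To make the “out of the box” point concrete, here is a minimal sketch that uses an open-source pre-trained model for zero-shot classification of a customer complaint, with no task-specific training. It assumes the Hugging Face transformers library is installed; the complaint text and category labels are illustrative only.

```python
# Minimal sketch: a pre-trained foundation model used "out of the box".
# Assumes the Hugging Face transformers library; no fine-tuning is performed.
from transformers import pipeline

classifier = pipeline("zero-shot-classification")  # downloads a default pre-trained model

complaint = "My card was charged twice for the same purchase and support never called back."
labels = ["billing error", "fraud", "customer service", "product defect"]  # hypothetical categories

result = classifier(complaint, candidate_labels=labels)
print(result["labels"][0], round(result["scores"][0], 2))  # top category and its score
```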


    The priorities

    Companies have long expected that AI would change everything one day—and that day has finally arrived. Organizations are now racing to incorporate all forms of AI, looking for ways to boost productivity faster and more effectively than the competition. However, productivity, security, privacy, and intellectual property rights must remain top of mind.

    • Generative AI significantly expands the scope of tasks that can effectively be automated by technology. Foundation models lower the cost and time to implement AI. Together, this shift in cost, effort, and scope can generate significant productivity improvements in the enterprise.

      Many design, composition, and summarization tasks were not addressed by the prior automation wave, and in some cases even areas already automated with AI peaked at lower levels of performance than are possible today. It is imperative for enterprises to iterate on their existing productivity programs with new expectations and expanded possibilities.
    • As part of a zero trust security strategy, organizations need to develop a culture of modern security practices and automated controls. In the event of a breach, this type of security posture helps organizations contain risks, limiting the likelihood of a material loss. For example, research from the IBM Institute for Business Value revealed that 55% of zero trust leaders were able to prevent malware propagation, compared to just 35% of others.
    • New legislation on the ethical use of AI includes regulations around data privacy and governance. The EU AI Act, for example, would require AI incidents to be managed like data security incidents. The act would also create regulatory oversight for high-risk AI applications, including hiring software and medical devices.

      ChatGPT has recently illustrated the multiple categories of privacy and intellectual property risks introduced to businesses by foundation models such as OpenAI’s GPT-4. The intellectual property used to train the models to generate derivative works is not protected; the privacy and confidentiality of new prompts and training data are not assured; and the generated work (text, code, images) can’t be copyrighted.

      In this environment, three out of four executives say it’s important for their companies to address data privacy and AI ethics. However, building trustworthy AI requires significant commitments across product engineering, IT, and governance. Tools that detect bias (one simple check is sketched after this list), diverse and inclusive teams, and guidelines for AI design can help companies develop AI that will create positive change—and an AI risk Center of Excellence can help ensure no important steps are skipped, including establishing policies for AI ethics.

      IBM’s principles for Trust and Transparency outline a framework for the development and use of ethical AI and can be a place to start:
      • The purpose of AI is to augment human intelligence.
      • Data and insights belong to their creator.
      • New technology, including AI systems, must be transparent and explainable.
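
      As one illustration of the bias-detection tooling mentioned above, the sketch below computes a simple demographic parity gap: the largest difference in positive-outcome rates across groups in a set of automated decisions. It uses only the Python standard library, and the field names and sample records are hypothetical.

```python
# Minimal sketch of one bias check: the demographic parity gap, i.e. the largest
# difference in positive-outcome rates across groups. Field names are hypothetical.
from collections import defaultdict

def demographic_parity_gap(records, group_key="group", outcome_key="approved"):
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        positives[r[group_key]] += int(bool(r[outcome_key]))
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

decisions = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "B", "approved": 1}, {"group": "B", "approved": 0},
]
gap, rates = demographic_parity_gap(decisions)
print(rates, gap)  # {'A': 1.0, 'B': 0.5} 0.5 -- a large gap would flag the model for review
```
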
    The bet
    Implement secure, AI-first intelligent workflows to run the enterprise
    Actions to take
    • Change the enterprise mindset from “adding AI” to “starting with AI,” reinventing processes, tasks, workflows, and jobs to deliver productivity improvements.

      Reevaluate prior automation scope based on the new generative AI capabilities.

      Redefine jobs and skills based on the higher-value-added tasks where AI is less useful.
    • Make sure that use cases are easily explainable, that AI-generated artifacts are clearly identified, and that AI training is transparent and open to continual critique.

      To manage risk, document—with fact sheets—every instance of AI use in the organization and the current governance around it. Ensure AI-generated assets can be traced back to the foundation model, dataset, prompt (or other inputs), and seed in digital asset management (DAM) and other systems (a minimal example of such a provenance record is sketched after this list). Be prepared to make adjustments based on regulation changes.

      Re-skill the employee base to understand AI and the proper and improper use of it. Build AI ethics and bias identification training programs for employees and partners to comply with AI ethics regulations.
    • Implement AI-enabled security intelligence and ensure clear incident escalation policies are documented at every level, including the board of directors.

      Establish role-based controls for access to data. Implement multifactor authentication (MFA) for critical apps and data assets.

      Start a Quantum Center of Competency with initial focus on quantum-safe.
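
      As a concrete starting point for the fact sheets and asset traceability described above, the sketch below defines a simple provenance record that links an AI-generated asset back to its foundation model, dataset, prompt, and seed. The schema and field names are hypothetical, not an IBM fact sheet specification.

```python
# Minimal sketch of a provenance record for an AI-generated asset, intended to be
# stored alongside the asset in a DAM system. Field names are hypothetical.
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone
import json

@dataclass
class GeneratedAssetRecord:
    asset_id: str            # identifier of the asset in the DAM
    foundation_model: str    # model name and version that produced the asset
    dataset: str             # reference to training / fine-tuning data
    prompt: str              # prompt or other inputs that produced the asset
    seed: int                # random seed, for reproducibility where supported
    created_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = GeneratedAssetRecord(
    asset_id="dam-000123",
    foundation_model="example-llm-v1",
    dataset="public-banking-complaints",
    prompt="Summarize this complaint in one sentence.",
    seed=42,
)
print(json.dumps(asdict(record), indent=2))  # attach this metadata to the stored asset
```
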
    See the bet in action
    A global payments company uses AI to improve productivity and customer experience

    There’s little room for error in the competitive financial services sector. When customers complain, companies need to act fast—and not just to resolve individual issues. They need to understand where systemic problems are creating a bad customer experience and make necessary changes across the board.

    • But when millions of customer complaints come in each year, it can be tough to separate isolated issues from systemic problems. This is where the transformative power of AI comes into play. Rather than manually categorizing and analyzing complaints, which took weeks, the company wanted to leverage AI foundation models to gain immediate, actionable insights.

      IBM Consulting trained a large language model (LLM) on public banking datasets, and then fine-tuned the model to align with the specific business context (a simplified sketch of this kind of fine-tuning appears after the quote below). The resulting AI model delivered near real-time insights and 91% accuracy for granular classification of complaints. As a result, it now takes the company less than 15 minutes to identify emerging issues—compared to three weeks before.

      “I am confident we have a fantastic product that will make my team’s workload shift from slow manual processes to focusing on protecting our brand, stopping customer harm sooner, and building better products,” said a senior manager of global commercial services for the payment company.
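
      A simplified sketch of this kind of fine-tuning workflow follows. It assumes the Hugging Face transformers and datasets libraries; the base model, the tiny in-memory sample, and the labels are placeholders, not the engagement’s actual model or data.

```python
# Minimal sketch: fine-tune a pre-trained language model to classify complaints.
# Assumes Hugging Face transformers and datasets; model, data, and labels are placeholders.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "distilbert-base-uncased"  # placeholder base model, not the engagement's LLM

# Tiny in-memory stand-in for a labeled complaints dataset.
train = Dataset.from_dict({
    "text": ["Charged twice for one purchase", "The app crashes when I open statements"],
    "label": [0, 1],  # 0 = billing, 1 = digital experience (hypothetical categories)
})

tokenizer = AutoTokenizer.from_pretrained(model_name)
train = train.map(lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length"),
                  batched=True)

model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="complaint-classifier", num_train_epochs=1),
    train_dataset=train,
)
trainer.train()  # in practice: a large labeled complaint set, an evaluation split, and metrics
```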

    Meet the authors

    John Granger, Senior Vice President, IBM Consulting

    Jesus Mantas, Senior Managing Partner, IBM Consulting

    Salima Lin, Vice President and Senior Partner, Strategy, Transformation, and Thought Leadership, IBM

    Originally published 08 May 2023