
What is shadow AI?

25 October 2024

Authors

Alexandra Jonker

Editorial Content Lead

Amanda McGrath

Writer

IBM

Shadow AI is the unsanctioned use of any artificial intelligence (AI) tool or application by employees or end users without the formal approval or oversight of the information technology (IT) department.

A common example of shadow AI is the unauthorized use of generative AI (gen AI) applications such as OpenAI’s ChatGPT to automate tasks like text editing and data analysis. Employees often turn to these tools to enhance productivity and expedite processes. However, since IT teams are unaware of these apps being used, employees can unknowingly expose the organization to significant risks concerning data security, compliance and the company’s reputation.

For CIOs and CISOs, developing a robust AI strategy that incorporates AI governance and security initiatives is key to effective AI risk management. By committing to AI policies that emphasize the importance of compliance and cybersecurity, leaders can manage the risks of shadow AI while embracing the benefits of AI technologies.

Shadow IT versus shadow AI

To understand the implications of shadow AI, it's helpful to distinguish it from shadow IT.

Shadow IT

Shadow IT refers to the deployment of any software, hardware or information technology on an enterprise network without an IT department or CIO’s approval, knowledge or oversight. Employees might turn to unsanctioned technology when they find existing solutions insufficient or believe that the approved options are too slow. Common examples include using personal cloud storage services or unapproved project management tools.

Shadow AI

While shadow IT focuses on any unauthorized application or service, shadow AI zeros in on AI-specific tools, platforms and use cases. For instance, an employee might use a large language model (LLM) to quickly generate a report without realizing the security risks. The key difference lies in the nature of the tools being used: Shadow AI is about the unauthorized use of artificial intelligence, which introduces unique concerns related to data management, model outputs and decision-making.

What are the risks of shadow AI?

From 2023 to 2024, the adoption of generative AI applications by enterprise employees grew from 74% to 96% as organizations embraced AI technologies.1 Alongside this growth came a rise in shadow AI. Today, over one-third (38%) of employees acknowledge sharing sensitive work information with AI tools without their employers' permission.2

Shadow AI can expose companies to several risks including data leakage, fines for noncompliance and severe reputational damage:

Data breaches and security vulnerabilities

One of the foremost risks associated with shadow AI is the potential for data breaches. When there is a lack of oversight regarding AI usage, employees can inadvertently expose sensitive information, leading to data privacy concerns. According to a recent poll of CISOs, 1 in 5 UK companies experienced data leakage because of employees using gen AI.3 The heightened risk of data leakage might explain why three-quarters of respondents also stated that insiders pose a greater risk to the organization than external threats.4
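One common mitigation is to screen or redact sensitive data before it ever reaches an external gen AI tool. The sketch below is a minimal, illustrative pattern-based filter in Python; the patterns and placeholder labels are assumptions made for this example, and real data loss prevention (DLP) products use far more sophisticated detection.

```python
import re

# Hypothetical patterns a basic DLP-style filter might flag before text
# is sent to an external gen AI service. Illustrative only.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarize the account for jane.doe@example.com, card 4111 1111 1111 1111."
print(redact(prompt))
# → Summarize the account for [EMAIL], card [CARD].
```

In practice, a filter like this would sit in a proxy or browser plug-in rather than rely on individual employees running it themselves.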

Noncompliance with regulations

In many industries, regulatory compliance is non-negotiable. Using shadow AI can lead to compliance issues, especially regarding data protection and privacy. Organizations may be required to adhere to regulations like the General Data Protection Regulation (GDPR). Fines for noncompliance with the GDPR can be substantial: major infringements (such as processing data for unlawful purposes) can cost companies up to EUR 20 million or 4% of the organization’s worldwide revenue from the previous year—whichever is higher.
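The "whichever is higher" rule means the EUR 20 million figure acts as a floor on the maximum possible fine. A quick illustration (the revenue figures below are hypothetical):

```python
def gdpr_max_fine(annual_worldwide_revenue_eur: float) -> float:
    """Upper bound for a major GDPR infringement: EUR 20 million or
    4% of the previous year's worldwide revenue, whichever is higher."""
    return max(20_000_000, 0.04 * annual_worldwide_revenue_eur)

# Hypothetical company with EUR 300 million in revenue:
# 4% is EUR 12 million, so the EUR 20 million figure applies.
small_co = gdpr_max_fine(300_000_000)

# Hypothetical company with EUR 2 billion in revenue:
# 4% is EUR 80 million, which exceeds EUR 20 million.
large_co = gdpr_max_fine(2_000_000_000)
```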

Reputational damage

Relying on unauthorized AI models can impact decision-making quality. Without proper governance, the outputs generated by these models might not align with the organization’s objectives or ethical standards. Biased data, overfitting and model drift are a few examples of AI risks that can lead to poor strategic choices and harm a company’s reputation.

Unauthorized AI use might also contradict a company’s quality standards and undermine consumer trust. Consider the backlash when Sports Illustrated was exposed for publishing articles written by AI-generated authors or when Uber Eats was called out for using AI-generated food images.

Causes of shadow AI

Despite the risks, shadow AI is becoming more commonplace for several reasons. Organizations are embracing digital transformation and, by extension, the integration of AI technologies to reimagine workflows and decision-making.

The proliferation of user-friendly AI tools means employees can easily access advanced AI solutions to enhance their capabilities. Many AI applications are available as software as a service (SaaS) products, allowing individuals to quickly adopt these tools without necessarily involving IT or security teams. Through the democratization of AI, employees are finding new ways to:

  • Enhance productivity: Employees often use shadow AI tools to increase their productivity and circumvent operational inefficiencies. By using gen AI apps, individuals can automate repetitive tasks, generate content quickly and streamline processes that would otherwise take much longer.
  • Accelerate innovation: Shadow AI can foster a culture of innovation, enabling teams to experiment with new AI tools without waiting for official approval. This agility can lead to creative solutions and improved workflows, giving organizations a competitive edge in rapidly changing markets.
  • Streamline solutions: Often, shadow AI allows teams to address challenges in real-time. Employees can find ad hoc solutions quickly using available AI tools, rather than relying on traditional, slower methods. This responsiveness can enhance customer service and improve operational efficiency.

Examples of shadow AI

Shadow AI manifests in various ways across organizations, often driven by the need for efficiency and innovation. Common examples of shadow AI include AI-powered chatbots, ML models for data analysis, marketing automation tools and data visualization tools.

AI-powered chatbots

In customer service, teams might turn to unauthorized AI chatbots to generate answers for inquiries. For instance, a customer service representative might answer a customer’s question by asking a chatbot instead of consulting the company’s approved materials. This can result in inconsistent or false messaging, potential miscommunication with customers and security risks if the representative’s question contains sensitive company data.

ML models for data analysis

Employees might use external machine learning models to analyze and find patterns within company data. While these tools can yield valuable insights, the unauthorized use of AI services can create security vulnerabilities. For example, an analyst might feed a proprietary dataset into a predictive model to better understand customer behavior, unknowingly exposing sensitive information in the process.

Marketing automation tools

Marketing teams might seek to optimize campaigns using shadow AI tools, which can automate email marketing efforts or analyze social media engagement data. Use of these tools can lead to improved marketing outcomes. However, the absence of governance might result in noncompliance with data protection standards, particularly if customer data is mishandled.

Data visualization tools

Many organizations use AI-powered data visualization tools to quickly create heat maps, line charts, bar graphs and more. These tools can help bolster business intelligence by displaying complex data relationships and insights in a way that is easy to understand. However, inputting company data without IT approval can lead to inaccuracies in reporting and potential data security issues.

How to manage the risks of shadow AI

To manage the risks of shadow AI, organizations might consider several approaches that encourage responsible AI usage while recognizing the need for flexibility and innovation:

Emphasize collaboration

Open dialog between IT departments, security teams and business units can facilitate a better understanding of AI capabilities and limitations. A culture of collaboration can help organizations identify which AI tools are beneficial while also helping ensure compliance with data protection protocols.

Develop a flexible governance framework

Governance frameworks can accommodate the fast-paced nature of AI adoption while maintaining security measures. These frameworks can include clear guidelines on which types of AI systems can be used, how sensitive information should be handled and what training employees need regarding AI ethics and compliance.

Implement guardrails

Guardrails around the use of AI can provide a safety net, helping ensure that employees only use approved tools within defined parameters. Guardrails can include policies regarding external AI use, sandbox environments for testing AI applications or firewalls to block unauthorized external platforms.
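A simple allowlist check illustrates the guardrail idea. In practice such policies usually live in a proxy or firewall rather than application code, and the domain names below are placeholders invented for this sketch:

```python
from urllib.parse import urlparse

# Hypothetical allowlist of AI services approved by IT.
APPROVED_AI_DOMAINS = {"approved-ai.example.com", "internal-llm.example.com"}

def is_request_allowed(url: str) -> bool:
    """Allow a request only if its host is on the approved-domain list."""
    host = urlparse(url).hostname or ""
    return host in APPROVED_AI_DOMAINS

print(is_request_allowed("https://internal-llm.example.com/v1/chat"))  # True
print(is_request_allowed("https://chat.unsanctioned-ai.example/api"))  # False
```

A default-deny rule like this is stricter than a blocklist: new, unknown AI services are blocked until IT explicitly approves them.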

Monitor AI usage

It might not be feasible to eliminate all instances of shadow AI. Therefore, organizations can implement network monitoring tools to track application usage and establish access controls to limit nonapproved software. Regular audits and active monitoring of communication channels can also help identify if, and how, unauthorized apps are being used.
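As a sketch of what such monitoring might look like, the snippet below scans simplified proxy-log entries for traffic to known gen AI domains and counts hits per user. The log format (user, then domain) and the watched-domain list are invented for illustration; real monitoring tools parse far richer log formats.

```python
from collections import Counter

# Hypothetical gen AI domains an IT team might watch for in proxy logs.
WATCHED_DOMAINS = {"chatgpt.example", "imagegen.example"}

log_lines = [
    "alice chatgpt.example",
    "bob intranet.example",
    "alice imagegen.example",
    "carol chatgpt.example",
]

def shadow_ai_hits(lines):
    """Count proxy-log hits to watched AI domains, per user."""
    hits = Counter()
    for line in lines:
        user, domain = line.split()
        if domain in WATCHED_DOMAINS:
            hits[user] += 1
    return hits

print(shadow_ai_hits(log_lines))  # alice: 2 hits, carol: 1 hit
```

A report like this can start a conversation with frequent users about approved alternatives, rather than serve purely as enforcement.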

Reiterate the risks

The landscape of shadow AI is constantly evolving, presenting new challenges for organizations. Companies can establish regular communications, such as newsletters or quarterly updates, to inform employees about shadow AI and the associated risks.

By enhancing awareness of the implications of using unauthorized AI tools, organizations can foster a culture of responsible AI usage. This understanding might encourage employees to seek out approved alternatives or consult with IT before deploying new applications.

Footnotes

All links reside outside ibm.com

1 Sensitive Data Sharing Risks Heightened as GenAI Surges, Infosecurity Magazine, 17 July 2024.

2 Over a Third of Employees Secretly Sharing Work Info with AI, Infosecurity Magazine, 26 September 2024.

3 Fifth of CISOs Admit Staff Leaked Data Via GenAI, Infosecurity Magazine, 24 April 2024.

4 Article 99: Penalties, EU Artificial Intelligence Act.

Related solutions

IBM® watsonx.governance™

Govern generative AI models from anywhere and deploy on cloud or on premises with IBM watsonx.governance.

Discover watsonx.governance
AI governance solutions

See how AI governance can help increase your employees’ confidence in AI, accelerate adoption and innovation, and improve customer trust.

Discover AI governance solutions
AI governance consulting services

Prepare for the EU AI Act and establish a responsible AI governance approach with the help of IBM Consulting®.

Explore AI governance services
Take the next step

Direct, manage and monitor your AI with a single portfolio to speed responsible, transparent and explainable AI.

Explore watsonx.governance
Book a live demo