Democratizing AI: What does it mean and how does it work?
5 November 2024
Authors
Alice Gomstyn, IBM Content Contributor
Alexandra Jonker, Editorial Content Lead

Blink and you might miss the latest group of people using artificial intelligence (AI).

One day, it’s taco lovers ordering from friendly bots at their local drive-through.1 The next, it’s perfumers using AI tools to design sustainable fragrances.2 Or organic vegetable farmers deploying robotic weeders.3 Or bifocal wearers getting AI-powered eye exams.4

It doesn’t take 20/20 vision to get the picture: As the use cases and benefits of AI grow at a dizzying speed, so does the sheer number of humans who employ it. Once esoteric, advanced AI technology empowers consumers and business users alike. And this level of ubiquity is arguably an indicator of AI's democratization.

A closer look at current AI practices, however, suggests that there’s still room for improvement in democratizing AI. To understand why, it’s important to consider what democratization entails, how it impacts individuals and businesses today, and how it might affect them in the future.

What is AI democratization?

The definition of democratizing AI has varied over the years. Broadly speaking, it can be considered the more equitable spread of AI applications and capabilities across society. On a more granular level, researchers typically agree on at least 3 core aspects of AI democratization:5

  • Democratizing AI use
  • Democratizing AI development
  • Democratizing AI governance

Democratizing AI use

Democratizing AI use refers to providing AI access to a wider range of users beyond machine learning (ML) experts. Common means of improving access include reducing AI costs and incorporating AI into tools and platforms people are already using.

It’s a notion that’s been years in the making, long before AI entered the public discourse. In 2016, for instance, Microsoft declared that it would democratize AI with an approach to “take it from the ivory towers and make it accessible for all.”6 The implication of democratized AI use is that more people will benefit from AI capabilities, both in their personal and working lives.

The release and rapid adoption of consumer-facing generative AI (gen AI) applications suggest that the democratization of AI use among consumers is underway. A 2023 global consumer sentiment survey found that 75% of respondents used AI-driven tools.7 The most popular consumer-facing large language model (LLM) application, OpenAI’s ChatGPT, claims more than 200 million active weekly users.

However, in business, the use of AI varies by company size and industry. For example, research commissioned by IBM found that 42% of enterprise-level organizations—those with more than 1,000 employees—actively use AI systems, while another 40% are exploring the technology. By contrast, a survey that included smaller businesses (with an average employee headcount below 48) determined that less than 4% of companies use AI to produce goods and services.

In that survey, which was conducted by the US Census Bureau, adoption rates also varied by industry, with food services and construction companies reporting the lowest use. Tech companies, not surprisingly, boasted the highest usage rates.8

Democratizing AI development

Democratizing the development of AI refers to including more people in the creation of AI solutions. But exactly who those people are depends on your interpretation of the concept. Often, it’s about providing developers, researchers and data scientists with the kind of free or low-cost computing resources and technical tools that are already available to those employed by large tech firms.

In other cases, democratizing development entails including nontechnical users in AI solution and model development. It means looking beyond rarefied expert circles to people who don’t necessarily possess a deep understanding of AI algorithms, data sets and computer science.

This can be achieved by providing tools that help users without tech know-how still build and adapt AI-powered applications. This concept bears some similarity to data democratization within enterprises—the process of creating systems and adopting tools that allow any employee, regardless of their technical background, to incorporate data science into their decision-making processes.

In both cases, the democratization of AI development is considered to be a good thing for the future of AI innovation. Such innovation could optimize AI models to more effectively serve a broader array of stakeholders and users than it does currently. For example, smaller businesses that previously couldn’t afford to create bespoke AI applications might find such endeavors more feasible due to more affordable tools and services.

Meanwhile, consumers from underrepresented groups might also benefit because democratized development could help prevent AI bias, which is when the biases of society are inadvertently embedded in algorithmic design, AI training data and other aspects of AI development. AI bias can produce outcomes that are unhelpful or even harmful to people from underrepresented groups, hindering their ability to participate in the economy and society.

Part of the bias problem stems from the fact that, as researchers from the Centre for the Governance of AI note, leading AI companies typically employ “a narrow demographic” of developers. The researchers concluded that including more people in AI development could result in applications that serve more diverse interests.9

For now, the bulk of AI development and innovation remains concentrated in certain countries and in the for-profit sector. According to one 2024 study, developers in the United States produced 5 times as many AI foundation models in a single year as their counterparts in China, which had the second-highest level of development. Meanwhile, developers in the tech industry created nearly 4 times as many models as those in academia.10

Democratizing AI governance

AI governance refers to the processes, standards and guardrails that help ensure that AI systems and tools are safe and ethical. Accordingly, democratizing AI governance is the idea that more people and organizations, beyond developers and tech companies, have influence over the safe and ethical deployment of AI technology.

Such democratization, governance advocates say, can help minimize harms related to AI deployment, such as discrimination or privacy violations. It might also help encourage greater AI explainability, interpretability, transparency and other characteristics that improve trust in AI systems.

However, as with the democratization of AI development, the specifics of who, exactly, should participate in governance democratization can vary. Some argue that it should pointedly include those who are impacted by AI deployment.11 Others suggest that, in some fashion, all members of society should be involved in AI governance.12

Governance democratization measures can be undertaken at an enterprise level, with companies gathering input on the governance of their AI systems from employees or customers. On a wider scale, democratization efforts are taking place through government actions—namely, voluntary frameworks and mandatory regulations—and collaborative initiatives in the private and public sectors.

AI democratization tools and technologies

Different tools and technologies support AI democratization by enabling more individuals and organizations to develop their own AI applications.

Open source software

Open source software is software developed and updated collectively by a community of users. It’s also made available for anyone to use, alter and redistribute at no cost. Concerning AI, open source model libraries, such as those offered by IBM partner Hugging Face, include foundation models that businesses can adapt for specific use cases.
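To illustrate how low the barrier to entry can be, here is a minimal sketch of loading an openly licensed model from the Hugging Face Hub with the transformers library. The model name is a small placeholder chosen for illustration; in practice, a business would typically pick a larger, instruction-tuned foundation model suited to its use case.

```python
# A minimal sketch of using an open source model from the Hugging Face Hub.
# Assumes the transformers library is installed; the model name below is a
# small placeholder, not a recommendation for production use.
from transformers import pipeline

# Download and cache an openly licensed model, then wrap it in a simple
# text-generation interface.
generator = pipeline("text-generation", model="distilgpt2")

# Prompt the model with a business-specific task.
result = generator(
    "Write a one-sentence product description for a reusable water bottle:",
    max_new_tokens=60,
)
print(result[0]["generated_text"])
```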

Additional open source tools can help users make the most of available models. For example, InstructLab, an open source project by IBM Research® and Red Hat®, generates synthetic data, helping to accelerate LLM training. Synthetic data can be tailored to specific goals, values and use cases, while collecting real-world data that meets similar specifications can be arduous and prohibitively expensive.
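The general pattern behind such tools can be sketched in a few lines. The following is a generic illustration of LLM-driven synthetic data generation, not InstructLab’s actual interface: a small “teacher” model expands seed examples into new training pairs. The model name, prompt and seed data are placeholders chosen for illustration.

```python
# A generic sketch of LLM-driven synthetic data generation (not InstructLab's
# actual API). A seed example is expanded into several new training samples.
from transformers import pipeline

# Placeholder teacher model; a real pipeline would use a much larger model.
teacher = pipeline("text-generation", model="distilgpt2")

seed_examples = [
    "Q: How do I reset my password? A: Use the 'Forgot password' link on the login page.",
]

synthetic_examples = []
for seed in seed_examples:
    prompt = f"Write a new question-and-answer pair similar to this one:\n{seed}\n"
    outputs = teacher(
        prompt,
        max_new_tokens=60,
        do_sample=True,          # sampling yields varied candidates
        num_return_sequences=3,  # several candidates per seed
    )
    # Keep only the newly generated text, dropping the prompt prefix.
    synthetic_examples.extend(o["generated_text"][len(prompt):].strip() for o in outputs)

print(f"Generated {len(synthetic_examples)} synthetic examples from {len(seed_examples)} seed(s).")
```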

Software as a service (SaaS)

The infrastructure required to successfully tailor and deploy AI systems can be a major hurdle for organizations seeking to adopt AI solutions. Such infrastructure includes data storage solutions, compute resources, machine learning frameworks and machine learning operations (MLOps) platforms.

Fortunately, software as a service models empower businesses to accelerate AI adoption without major infrastructure investments. A collaboration between IBM and Amazon might make it easier for businesses to access AI-focused SaaS, with IBM now offering key data storage and AI governance solutions through Amazon’s AWS Marketplace.

No-code tools

Thanks to no-code tools and platforms, those with limited or no coding skills can create some AI applications. No-code solutions such as Amazon SageMaker Canvas offer the automation of AI development workflows and feature drag-and-drop interfaces for a visualization-centered approach.

AI democratization initiatives

In recent years, several initiatives have emerged in the private and public sectors to advance all 3 forms of AI democratization: use, development and governance. These initiatives include:

AI Alliance

The AI Alliance is an international community of AI developers, researchers and adopters collaborating to advance open, safe and responsible AI. The group, launched in 2023 by IBM and Meta, includes leaders from universities, industries and governments. The group’s goals include developing benchmarks and evaluation tools that enable the responsible development and use of AI, advancing the development of open source foundation models and creating educational content to inform the public and policymakers.

AI Governance Alliance

The AI Governance Alliance (AIGA) was launched by the World Economic Forum in 2023, following the forum’s Responsible AI Leadership Summit. The AIGA promotes inclusion, ethics and sustainability in the development and deployment of AI. Its steering committee, which is charged with advising on the alliance’s outputs, includes academic leaders, government officials and executives from technology organizations such as Google, IBM, Meta and OpenAI.

National AI Research Resource Pilot

The US National Science Foundation’s National AI Research Resource (NAIRR) pilot is an effort to connect researchers across the United States with AI infrastructure resources. The pilot is a partnership with 12 other federal agencies and 26 organizations, including Amazon Web Services, Google, Hugging Face, IBM, Intel, Meta, Microsoft and OpenAI.

Trustworthy AI frameworks

Different governments and intergovernmental organizations have developed trustworthy AI frameworks to promote greater fairness and transparency, among other key qualities, in the development and deployment of AI systems. Such frameworks include the Organisation for Economic Co-operation and Development’s AI Principles and the National Institute of Standards and Technology’s AI Risk Management Framework. The principles of at least one framework, the European Union’s Ethics Guidelines for Trustworthy Artificial Intelligence, were later incorporated into legislation: the EU AI Act.

AI upskilling programs

AI upskilling equips employees with the skills and education necessary to use AI at work. Disciplines where AI upskilling has proven especially helpful for workers include customer service, financial services, healthcare, human resources and web development. While workers or their employers can invest in paid AI training programs, several tech companies and universities offer AI courses for free, including Amazon, IBM, Harvard University and the University of Pennsylvania.13

Footnotes

1 “Taco Bell is rolling out AI ordering in hundreds of drive-thrus. Here's how it works.” ZDNET. 1 August 2024.

2 “Is the Future of Fragrance In the Hands of AI?” Fashion. 2 January 2024.

3 “Carbon Robotics raises $70M to scale up AI-powered robotic farming solutions.” Silicon Angle. 21 October 2024.

4 “Meet the 'Eyebot': An AI-Powered, 90-Second Vision Test.” CNET. 17 October 2024.

5 “‘Democratizing AI’ and the Concern of Algorithmic Injustice.” Philosophy & Technology. 14 August 2024.

6, 12 “Democratizing AI.” Microsoft. 26 September 2016.

7 “Consumers Know More About AI Than Business Leaders Think.” BCG. 24 April 2024.

8 “How Many U.S. Businesses Use Artificial Intelligence?” United States Census Bureau. 28 November 2023.

9, 11 “Democratising AI: Multiple Meanings, Goals, and Methods.” Association for Computing Machinery Digital Library. 29 August 2023.

10 “Artificial Intelligence Index Report 2024.” Stanford University Human-Centered Artificial Intelligence. Accessed 28 October 2024.

13 “Here are 7 free AI classes you can take online from top tech firms, universities.” Fortune. 5 September 2024.
