By 2026, over 80% of enterprises will deploy AI APIs or generative AI applications. AI models and the data on which they’re trained and fine-tuned can elevate applications from generic to impactful, offering tangible value to customers and businesses.

For example, the Masters Tournament generative AI-driven golf fan experience uses real-time and historical data to provide insights and commentary for over 20,000 video clips. The quality and quantity of data can make or break AI success, and organizations that effectively harness and manage their data will reap the most benefits. But it’s not so simple. Data is exploding, both in volume and in variety.

According to International Data Corporation (IDC), stored data will grow 250% by 2025 across on-premises and cloud platforms. With growth comes complexity. Multiple data applications and formats make it harder for organizations to access, govern, manage and use all their data effectively for AI. Leaders must rethink prohibitive on-premises approaches and monolithic data ecosystems while reducing costs, ensuring proper data governance and enabling self-service access to more data across disparate data sources.

Scaling data and AI with technology, people and processes

Enabling data as a differentiator for AI requires a balance of technology, people and processes. To scale AI use cases, you first need to understand your strategic objectives for your data, which have likely changed because of generative AI. Align your data strategy to a go-forward architecture, with considerations for existing technology investments, governance and autonomous management built in. Look to AI to help automate tasks such as data onboarding, data classification, organization and tagging. This will require you to evolve your data management processes and update learning paths.   

Building an open and trusted data foundation

Organizations must focus on building an open and trusted data foundation to access trusted data for AI. Open means creating a foundation for storing, managing, integrating and accessing data that is built on open and interoperable capabilities spanning hybrid cloud deployments, data storage, data formats, query engines, governance and metadata. This allows for easier integration with your existing technology investments while eliminating data silos and accelerating data-driven transformation.

Trusted means enabling high-quality, reliable, secure and governed data and metadata management, so that data can be delivered for analytics and AI applications while meeting data privacy and regulatory compliance requirements. The following four components help build an open and trusted data foundation.

1. Modernizing your data infrastructure to hybrid cloud for applications, analytics and gen AI

Adopting multicloud and hybrid strategies is becoming mandatory, requiring databases that support flexible deployments across the hybrid cloud. Gartner predicts that 95% of new digital initiatives will be developed on cloud-native platforms, essential for AI technologies requiring massive data storage and scalability.

2. Powering data-driven applications, analytics and AI with the right databases and open data lakehouse strategy

For storing and analyzing data, you must use the right database for the right workload, data types and price performance. This ensures you have a data foundation that grows with your data needs, wherever your data resides. Your data strategy should incorporate databases designed with open and integrated components, allowing for seamless unification and access to data for advanced analytics and AI applications within a data platform. This enables your organization to extract valuable insights and drive informed decision-making.

For example, organizations require high-performance, secure, resilient transactional databases to manage their most critical operational data. With hybrid cloud availability, organizations can use their databases to modernize legacy apps, build new cloud-native apps and power AI assistants and enterprise applications.

As data types and applications evolve, you might need specialized NoSQL databases to handle diverse data structures and specific application requirements. These include time series, document, messaging, key-value, full-text search and in-memory databases, which meet various needs, such as IoT, content management and geospatial applications.
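To make these access patterns concrete, here is a minimal sketch (with invented sensor data and store shapes, not a real database API) of how the same IoT reading might be shaped for three of these NoSQL models:

```python
from datetime import datetime, timezone

# Hypothetical sensor example: one IoT reading shaped for three common
# NoSQL access patterns. All names and fields are illustrative.

# Key-value: an opaque value behind a single lookup key (fast point reads).
kv_store = {"sensor:42:latest": '{"temp_c": 21.5}'}

# Document: self-describing, nested records queried by field.
doc_store = [
    {"_id": "42", "site": "plant-a", "readings": {"temp_c": 21.5, "humidity": 0.4}},
]

# Time series: append-only (timestamp, value) points keyed by series.
ts_store = {
    "sensor:42:temp_c": [(datetime(2024, 5, 1, tzinfo=timezone.utc), 21.5)],
}

def latest_temp(sensor_id: str) -> float:
    """Point lookup against the time-series store: newest timestamp wins."""
    series = ts_store[f"sensor:{sensor_id}:temp_c"]
    return max(series)[1]

print(latest_temp("42"))  # 21.5
```

Each shape favors a different workload: point reads, rich field queries or ordered range scans, which is why one database rarely fits all of them.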

To power AI and analytics workloads across your transactional and purpose-built databases, you must ensure they can seamlessly integrate with an open data lakehouse architecture without duplication or additional extract, transform, load (ETL) processes. With an open data lakehouse, you can access a single copy of data wherever your data resides.

An open data lakehouse handles multiple open formats (such as Apache Iceberg over cloud object storage) and combines data from various sources and existing repositories across the hybrid cloud. The most price-performant data lakehouse also enables the separation of storage and compute with multiple open source query engines and integration with other analytics engines to optimize workloads for superior price performance.
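The "single copy, many engines" idea can be sketched in a few lines. This is a conceptual stand-in, not a real Iceberg or query-engine implementation: the object store layout and engine names are invented, and the point is only that both workloads read the same copy with no ETL duplication.

```python
# One copy of the data, in an open columnar-style layout on "object storage".
object_store = {
    "sales/orders": {
        "region": ["emea", "emea", "apac"],
        "amount": [120.0, 80.0, 200.0],
    }
}

def warehouse_engine_total(table: str, region: str) -> float:
    """BI-style aggregate reading the shared copy (no ETL, no duplicate)."""
    t = object_store[table]
    return sum(a for r, a in zip(t["region"], t["amount"]) if r == region)

def ml_engine_features(table: str) -> list[float]:
    """Feature extraction for model training, reading the same copy."""
    t = object_store[table]
    mean = sum(t["amount"]) / len(t["amount"])
    return [a - mean for a in t["amount"]]

print(warehouse_engine_total("sales/orders", "emea"))  # 200.0
print(len(ml_engine_features("sales/orders")))         # 3
```

In a real lakehouse, the shared copy would live in an open table format such as Apache Iceberg on object storage, and the two functions would be separate query engines scanning the same tables.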

This includes integration with your data warehouse engines, which now must balance real-time data processing and decision-making with cost-effective object storage, open source technologies and a shared metadata layer to share data seamlessly with your data lakehouse. With an open data lakehouse architecture, you can now optimize your data warehouse workloads for price performance and modernize traditional data lakes with better performance and governance for AI.

Enterprises might also have petabytes, if not exabytes, of valuable proprietary data stored in their mainframe that needs to be unlocked for new insights and ML/AI models. With an open data lakehouse that supports data synchronization between the mainframe and open formats such as Iceberg, organizations can better identify fraud, understand constituent behavior and build predictive AI models to understand, anticipate and influence advanced business outcomes.

Before building trusted generative AI for your business, you need the right data architecture to prepare and transform this disparate data into quality data. For generative AI, the right data foundation might include several knowledge stores: NoSQL databases for conversations, transactional databases for contextual data, a data lakehouse architecture to access and prepare your data for AI and analytics, and vector-embedding capabilities for storing and retrieving embeddings for retrieval-augmented generation (RAG). A shared metadata layer, governance to catalog your data and data lineage enable trusted AI outputs.

3. Establishing a foundation of trust: Data quality and governance for enterprise AI

As organizations increasingly rely on artificial intelligence (AI) to drive critical decision-making, the importance of data quality and governance cannot be overstated. According to Gartner, 30% of generative AI projects are expected to be abandoned by 2025 due to poor data quality, inadequate risk controls, escalating costs or unclear business value. The consequences of using poor-quality data are far-reaching, including erosion of customer trust, regulatory noncompliance and financial and reputational damage.

Effective data quality management is crucial to mitigating these risks. A well-designed data architecture strategy is essential to achieving this goal. A data fabric provides a robust framework for data leaders to profile data, design and apply data quality rules, discover data quality violations, cleanse data and augment data. This approach ensures that data quality initiatives deliver on accuracy, accessibility, timeliness and relevance.
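The rule-based workflow described above — profile records, apply quality rules, surface violations — can be sketched as follows. The sample records and rule names are hypothetical; a data fabric would manage such rules centrally rather than in application code.

```python
import re

# Invented sample records, including deliberately bad values.
records = [
    {"id": 1, "email": "ana@example.com", "age": 34},
    {"id": 2, "email": "not-an-email", "age": 29},
    {"id": 3, "email": "li@example.com", "age": -5},
]

# Declarative quality rules: each maps a rule name to a pass/fail check.
rules = {
    "email_format": lambda r: re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", r["email"]) is not None,
    "age_in_range": lambda r: 0 <= r["age"] <= 120,
}

def run_quality_checks(rows):
    """Return (record id, failed rule) pairs for every violation found."""
    return [
        (r["id"], name)
        for r in rows
        for name, check in rules.items()
        if not check(r)
    ]

print(run_quality_checks(records))  # [(2, 'email_format'), (3, 'age_in_range')]
```

Running such checks continuously, rather than once at load time, is essentially what the data observability capabilities described below provide: violations are caught as data flows, before they reach a model or a dashboard.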

Moreover, a data fabric enables continuous monitoring of data quality levels through data observability capabilities, allowing organizations to identify data issues before they escalate into larger problems. This transparency into data flows also enables data and AI leaders to identify potential issues, ensuring that the right data is used for decision-making.

By prioritizing data quality and governance, organizations can build trust in their AI systems, minimize risks and maximize the value of their data. It is crucial to recognize that data quality is not just a technical issue, but a critical business imperative that requires attention and investment. By embracing the right data architecture strategy, organizations can unlock the full potential of their AI initiatives and drive business success.

4. Managing and delivering data for AI

Data is fundamental to AI, from building AI models with the right data sets to tuning AI models with industry-specific enterprise data to using vectorized embeddings to build RAG AI applications (including chatbots, personalized recommendation systems and image similarity search applications).

Trusted, governed data is essential for ensuring the accuracy, relevance and precision of AI. To unlock the full value of data for AI, enterprises must be able to navigate their complex IT landscapes to break down data silos, unify their data and prepare and deliver trusted, governed data for their AI models and applications.

With an open data lakehouse architecture powered by open formats to connect to and access critical data from your existing data estate (including data warehouses, data lakes and mainframe environments), you can use a single copy of your enterprise data to build and tune AI models and applications.

With a semantic layer, you can generate data enrichments that let users find and understand previously cryptic structured data across your data estate. Natural-language semantic search accelerates data discovery and unlocks data insights faster, no SQL required.
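A toy sketch makes the idea concrete: cryptic column names are enriched with business-friendly descriptions, and a simple keyword match over those descriptions stands in for semantic search. The catalog entries and scoring logic are illustrative only; a real semantic layer would use embeddings rather than word overlap.

```python
# Hypothetical enrichments mapping cryptic columns to plain descriptions.
semantic_catalog = {
    "CUST_LTV_AMT": "customer lifetime value in US dollars",
    "CHRN_PROB_90D": "probability the customer churns within 90 days",
    "ORD_CNT_YTD": "count of orders placed year to date",
}

def semantic_search(question: str) -> list[str]:
    """Rank columns by how many question words appear in their description."""
    words = set(question.lower().split())
    scored = [
        (len(words & set(desc.split())), col)
        for col, desc in semantic_catalog.items()
    ]
    return [col for score, col in sorted(scored, reverse=True) if score > 0]

print(semantic_search("probability the customer churns"))
# ['CHRN_PROB_90D', 'CUST_LTV_AMT']
```

A user asking about churn never needs to know the column is called `CHRN_PROB_90D`, which is the discovery shortcut the semantic layer provides.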

Using a vector database embedded directly within your lakehouse, you can seamlessly store and query your data as vectorized embeddings for RAG use cases, improving the relevance and precision of your AI outputs.
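The retrieval step of RAG can be shown with a minimal in-memory stand-in for such a vector store. The document chunks and their embeddings are tiny hand-written vectors here; a real system would generate embeddings with a model and store them alongside the lakehouse data.

```python
import math

# Invented (chunk, embedding) pairs standing in for a vector store.
vector_store = [
    ("Refund policy: refunds within 30 days.", [0.9, 0.1, 0.0]),
    ("Shipping: orders ship in 2 business days.", [0.1, 0.9, 0.0]),
    ("Warranty: 1-year limited warranty.", [0.0, 0.2, 0.9]),
]

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_embedding, k=1):
    """Return the k chunks most similar to the query, to ground a RAG prompt."""
    ranked = sorted(
        vector_store,
        key=lambda item: cosine(query_embedding, item[1]),
        reverse=True,
    )
    return [text for text, _ in ranked[:k]]

# A query embedding close to the "refund" chunk retrieves that chunk first.
print(retrieve([0.8, 0.2, 0.1]))  # ['Refund policy: refunds within 30 days.']
```

The retrieved chunk is then prepended to the model's prompt, which is how RAG grounds generated answers in governed enterprise data rather than the model's training set.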

Build and create value with data products, AI assistants, AI applications and business intelligence

With an open and trusted data foundation in place, you can unlock the full potential of your data and create value from it. This can be achieved by building data products, AI assistants, AI applications and business intelligence solutions powered by an AI and data platform that uses your trusted data.

Data products, for instance, are reusable, packaged data assets that can be used to drive business value, such as predictive models, data visualizations or data APIs. AI assistants, applications and AI-powered business intelligence can help users make better decisions by providing insights, recommendations and predictions. With the right data, you can create a data-driven organization that drives business value and innovation.
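One way to picture a data product is as a packaged asset with metadata, an owner and a stable access contract. The field names and example product below are purely illustrative:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class DataProduct:
    """A reusable, packaged data asset with metadata and an access contract."""
    name: str
    owner: str
    description: str
    fetch: Callable[[], list[dict]]  # the contract consumers call
    tags: list[str] = field(default_factory=list)

churn_scores = DataProduct(
    name="customer_churn_scores",
    owner="growth-analytics",
    description="Daily churn probability per customer for retention campaigns.",
    fetch=lambda: [{"customer_id": "42", "churn_prob": 0.17}],
    tags=["ml", "customer"],
)

# Consumers use the product through its contract, not the raw tables.
rows = churn_scores.fetch()
print(rows[0]["churn_prob"])  # 0.17
```

Packaging data this way lets the underlying tables or models change without breaking the consumers who depend on the product.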

To start building your data foundation for AI, explore our data management solutions with IBM® databases and data fabric, and scale AI with trusted data.

Explore our solutions and learn how to design and build out your ideal data estate.

