In 2023, organizational departments such as human resources, IT and customer care focused on generative artificial intelligence (AI) use cases such as summarization, code generation and question-answering to reduce costs and boost productivity. A Gartner executive poll indicates that 55% of organizations are already piloting or implementing generative AI.
The major challenge facing enterprise decision-makers is achieving the right balance between operationalizing generative AI faster and mitigating foundational model-related risks, while staying on top of a rapidly evolving technology landscape.
Generative AI is proving its value as a driver of growth and innovation. According to a recent IBM Institute for Business Value (IBV) study, 75% of CEOs believe that gaining a competitive advantage hinges on possessing the most advanced generative AI, with 50% currently integrating it into their products and services. These are just some of the top generative AI use cases for enterprises that IBM has highlighted.
This is just the beginning. Multimodal, multilingual foundation models and automation agents are expanding the range of generative AI applications and their adoption across business workflows. Vertical and domain-specific foundation models with fewer parameters, which match the performance of larger models at a lower inference cost, are also gaining market traction. Meanwhile, techniques for training foundation models continue to evolve, unlocking advanced capabilities and efficiencies and making generative AI even more appealing.
In 2024, enterprise business and technology executives will receive a clear mandate from their leadership and boards to transform their business models, offerings and operations with generative AI. An IBM study on responsible AI and ethics reveals that CEOs feel over six times more pressure from their boards and investors to accelerate the adoption of generative AI than to slow it down.
According to “The CEO’s guide to generative AI” report from IBM Institute for Business Value (IBV), the experimentation phase for generative AI leaders is short and intense, with 74% of executives reporting that generative AI will be ready for general rollout in the next three years. As enterprise clients transition from exploration to investigation and production with generative AI, they require the right model choices for the right use cases, a robust platform to customize models and infuse AI into their applications, a hybrid cloud to deploy AI in the infrastructure of choice, and a reliable partner who can help scale and operationalize AI with minimal risks.
61% of CEOs identify concerns about data lineage and provenance as barriers to adopting generative AI, and 85% of executives anticipate direct interactions between generative AI and customers within the next two years. IBM prioritizes AI ethics, transparent data practices, governance capabilities and indemnification to instill client trust in AI models for the coming era of enterprise generative AI adoption.
While the technology offers ample promise, regulators are waking up to the potential risks and social harms and are drafting policies and laws to help ensure sustainable innovation and diffusion. The EU AI Act, the first-ever comprehensive legal framework for AI worldwide, passed by the European Parliament on March 13, 2024, and the US White House Executive Order on AI, announced in October 2023, both signal government commitment to oversight and scrutiny of AI.
Enterprise decision-makers leading generative AI initiatives within their organizations have much to consider. No other technology in the history of humankind has grown so large so soon, catching most business and technology executives by surprise.
According to another IBV study on responsible AI and ethics, 58% of business executives see major ethical risks with generative AI adoption, while 79% prioritize AI ethics in their enterprise-wide approach.
To succeed in their missions with generative AI, clients need enterprise-grade model options with an easy-to-use toolkit for customization, robust AI or model governance, and flexible deployment from a reliable technology partner.
Decision-makers evaluating foundation models for scaling generative AI and achieving a steady return on investment must take a strategic view that encompasses both enterprise-grade foundation models and a robust platform to operationalize them. In this context, understanding the nuances of AI model selection becomes paramount as decision-makers strive to align technological capabilities with business objectives.
Clients should be able to customize models for their use cases, company and industry domains through fine-tuning and prompt tuning with an easy-to-use toolkit. They also need specialized database capabilities to store, manage and retrieve the high-dimensional vectors that power generative AI applications.
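To make the vector-database pattern concrete, the following minimal sketch embeds a handful of documents and retrieves the passage most relevant to a query. It uses the open-source sentence-transformers and FAISS libraries purely as illustrative stand-ins for the specialized vector capabilities described above; the model name, documents and query are hypothetical examples, and an enterprise deployment would typically rely on a managed vector store instead.

```python
# Minimal retrieval sketch: embed documents as high-dimensional vectors,
# index them, and fetch the passage most relevant to a user query.
import faiss
from sentence_transformers import SentenceTransformer

documents = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support is available 24/7 through the customer portal.",
    "Enterprise plans include dedicated onboarding assistance.",
]

# Any sentence-embedding model works here; this small public model is an example.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = encoder.encode(documents, normalize_embeddings=True)

# Inner product on normalized vectors is equivalent to cosine similarity.
index = faiss.IndexFlatIP(embeddings.shape[1])
index.add(embeddings)

query_vector = encoder.encode(
    ["How long do I have to return a product?"], normalize_embeddings=True
)
scores, ids = index.search(query_vector, 1)
print(documents[ids[0][0]])  # -> the refund-policy passage
```

The retrieved passage can then be supplied to a foundation model as grounding context, which is the basic mechanism behind retrieval augmented generation.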
Depending on the use case, the data sent to the model at inference time and operational considerations, clients should have the flexibility to deploy models in the infrastructure of their choice. AI guardrails and continuous monitoring help ensure secure and reliable model deployments as organizations scale up generative AI applications.
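As a rough illustration of the guardrail idea, here is a minimal, hypothetical sketch of a post-generation check that screens model output for obvious problems (for example, blocked terms or leaked email addresses) before it reaches a user. Production guardrail and monitoring tooling is far more sophisticated; the function and rules below are illustrative assumptions only.

```python
import re

# Hypothetical output guardrail: screen generated text before returning it.
BLOCKED_TERMS = {"internal use only", "confidential"}
EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def apply_guardrails(generated_text: str) -> str:
    """Return the text if it passes basic checks, otherwise a safe fallback."""
    lowered = generated_text.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return "This response was withheld by policy. Please contact support."
    # Redact anything that looks like an email address before returning.
    return EMAIL_PATTERN.sub("[redacted]", generated_text)

print(apply_guardrails("Reach us at help@example.com for details."))
```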
Enterprise decision-makers seek a reliable technology partner that understands opportunities and risks in enterprise AI adoption, comprehends key model dimensions, and integrates AI ethics and regulatory preparedness across the generative AI lifecycle, starting with foundation model development.
As a leader in enterprise AI and hybrid cloud, IBM consistently provides trusted, performant and cost-effective generative AI products and solutions for our clients. Our approach to models includes:
IBM adopts an open ecosystem approach for its model strategy, integrating proprietary, third-party and open-source models into the watsonx platform, a unified data, AI and governance platform. IBM watsonx foundation models constitute a library of trusted, high-performing and cost-effective models accessible to clients directly from IBM® watsonx.ai™ or through IBM watsonx™ AI Assistants in digital labor, customer experience, application modernization and IT operations.
Employing a hybrid, multi-cloud approach, IBM offers clients the flexibility to deploy models on their preferred infrastructure, be it software as a service or on-premises. Clients can achieve superior price-to-performance ratios with industry- or domain-specific and quantized (infrastructure-optimized) models, alongside an easy-to-use toolkit for customization (fine-tuning and prompt tuning), specialized databases and flexible hybrid cloud deployment options on the watsonx platform.
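To illustrate why quantized models improve price-performance, the sketch below loads a generic causal language model in 4-bit precision, cutting its memory footprint to roughly a quarter of the 16-bit original. The Hugging Face libraries and the small public model used here are illustrative assumptions, not IBM's toolkit; running it requires a CUDA GPU with the bitsandbytes and accelerate packages installed.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Small public model used purely for illustration; substitute your own.
model_id = "gpt2"

# 4-bit quantization reduces GPU memory needs roughly 4x versus fp16,
# one reason quantized or smaller models lower the cost of inference.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",  # place layers on available hardware automatically
)

inputs = tokenizer("Generative AI helps enterprises", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```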
With its 2024 roadmap, IBM aims to empower clients with enterprise-grade multimodal (code, text, audio, image, geospatial) and multilingual model options tailored to their specific business needs, regional interests and risk profiles.
The development of IBM® Granite™ models adheres to its AI ethics code, prioritizing trust and transparency in both training data and the model training process. These models use data sets meeting strict criteria for governance, risk and compliance. We designed chat fine-tuning techniques to mitigate hallucinations and misalignments in model outputs.
Granite models target specific business domains such as finance and use cases such as retrieval augmented generation. They can match or exceed the performance of larger general-purpose models while delivering lower latency and requiring fewer infrastructure resources.
Also, Granite models consume only a fraction of graphics processing unit capacity and compute power, resulting in a reduced carbon footprint and total cost of ownership. IBM supports its models by providing clients with strong intellectual property indemnity protection, enabling them to focus on the business impact of AI rather than costly courtroom litigation.
Ready to get started? Take the interactive demo and sign up for a trial of watsonx.ai.
This is the first post in our series “Enterprise generative AI made simple.” Stay tuned for future posts.
Learn more about IBM's model point of view and offerings