After years of experimentation with general-purpose AI, many enterprises are turning to specialized models designed to deliver greater accuracy and efficiency for specific domains.
In a report published 8 December 2025, Gartner examined the emerging market for domain-specific language models (DSLMs) and named IBM "the company to beat" in DSLM enablement, citing IBM Granite models and the watsonx portfolio as key differentiators.
According to the report, “[t]he combination of IBM’s enterprise-grade Granite models and its watsonx platform capabilities addresses core challenges in DSLM creation and adoption, such as small model efficiency, openness for deployment flexibility, availability of quality AI-ready domain data and model governance. This establishes IBM currently as the frontrunner in the DSLM race for enabling trusted enterprise-grade DSLM.”
IBM believes that 2026 will mark a turning point for enterprise AI, as organizations increasingly prioritize models that are built, customized and governed for specific domains. Small language models (SLMs) such as IBM Granite provide a foundation for this approach, enabling enterprises to efficiently customize AI for a wide range of domain-specific use cases.
Gartner defines DSLMs as “generative AI (GenAI) models that have been created or optimized for the needs of specific industries, business functions or classes of problem.” These models are designed to improve accuracy and lessen the reliance on advanced prompt engineering for a narrower set of high-value use cases.
Gartner observes that, “[g]eneric one-size-fits-all LLMs, while versatile, don’t scale cost-efficiently and don’t address the accuracy, trustworthiness and control needed by enterprises.”
As AI initiatives mature, enterprises are moving away from adapting business processes to fit generic models. Instead, they are specializing models to fit their data, workflows and regulatory environments.
Frontier models are designed to be broadly capable across a wide range of tasks. Their massive parameter counts make them powerful, but expensive to operate, difficult to deploy and inefficient to customize.
IBM takes a different approach with the Granite family of small language models, designed to scale enterprise AI efficiently and responsibly. Rather than optimizing for maximum scale, Granite models are built to be small, efficient and open, making customization with data from any domain far easier and more cost-effective; the models themselves are not pre-built for any one domain.
The breadth of the Granite family allows organizations to select the right model for each use case. When customized, these models can match or exceed the performance of frontier models on enterprise tasks at a fraction of the cost.
The latest Granite 4.0 models extend this advantage further. Their hybrid architecture delivers high efficiency and scalability, reducing memory requirements by more than 70% in long-context, multi-session scenarios while remaining fast and responsive. Granite 4.0 models outperform many models in their weight class, and even much larger models, on enterprise-critical tasks such as instruction following, tool calling and retrieval-augmented generation (RAG).
Granite is also designed with industry-leading trust and transparency standards. It is the first open model family to achieve ISO 42001 certification, reinforcing IBM's commitment to responsible and governed AI. The family also recently received the highest score ever recorded in Stanford University's Foundation Model Transparency Index, with a transparency score of 95%, while the scores of most other model developers declined year over year.
DSLMs deliver value when they are grounded in enterprise data, deployed into real workflows and governed throughout their lifecycle.
Watsonx is IBM's AI portfolio that works across any cloud, application, model, agent or data type, enabling organizations to build and deploy DSLMs using their existing technology investments rather than replacing them.
DSLMs are only as good as the data that grounds them. With more than 90% of enterprise data unstructured, watsonx helps organizations access and prepare that data for AI, enabling them to customize models like Granite for specific tasks. Those customized models can then be operationalized, powering agents that execute work across the enterprise. Throughout this lifecycle, watsonx provides the governance needed to keep data, models and agents secure and compliant as they scale.
Together, Granite and watsonx provide an open, hybrid, integrated and responsible foundation for domain-specific AI, enabling enterprises to customize, deploy and govern DSLMs at scale.
IBM has worked with thousands of organizations across industries as they transform how they operate with data and AI. In many cases, that work has centered on specializing models for specific tasks and operationalizing them with watsonx.
In one example, a major telecommunications company needed to analyze hundreds of thousands of customer service call transcripts each day. The company initially relied on a massive frontier model, which drove high operational costs. By switching to a smaller Granite model customized with its proprietary data, the organization was able to significantly reduce costs while maintaining the performance required for large-scale transcript analysis.
Similarly, the UFC uses RAG pipelines grounded in domain-specific content with Granite and watsonx to power its Insights Engine, a platform that allows users to ask complex questions about fights and fighters through a unified conversational interface.
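The core RAG pattern behind systems like this can be sketched in a few lines: retrieve the domain documents most relevant to a question, then assemble a grounded prompt for a generative model such as Granite. The corpus, the term-overlap scoring and the prompt template below are illustrative assumptions for the sketch, not the UFC's or IBM's actual implementation.

```python
def score(question: str, document: str) -> float:
    """Score a document by the fraction of question terms it contains."""
    q_terms = set(question.lower().split())
    d_terms = set(document.lower().split())
    return len(q_terms & d_terms) / len(q_terms)

def retrieve(question: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the top-k documents ranked by term overlap with the question."""
    return sorted(corpus, key=lambda d: score(question, d), reverse=True)[:k]

def build_prompt(question: str, corpus: list[str]) -> str:
    """Assemble a grounded prompt: retrieved context first, then the question."""
    context = "\n".join(retrieve(question, corpus))
    return (
        "Use only the context below to answer.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

# Toy domain corpus standing in for fight- and fighter-related content.
corpus = [
    "Fighter A won the 2024 title bout by decision.",
    "The venue seats twenty thousand spectators.",
    "Fighter A trains at a gym in Florida.",
]
prompt = build_prompt("Where does Fighter A train?", corpus)
```

In production, the keyword scorer would be replaced by vector similarity over embeddings, and the resulting prompt would be sent to the generative model; the grounding step is what keeps answers tied to domain content rather than the model's general training data.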
In each case, the combination of customized Granite models and watsonx enabled organizations to move from generic AI capabilities to domain-specific systems designed for production.
Looking ahead, Gartner states that, “[t]he next phase of the model AI race will not hinge on model raw capability or scale, but on how effectively the vendor can blend specific domain knowledge and reasoning, strong governance and deployment flexibility in a coherent ecosystem through DSLMs.”
As we move into 2026, domain specialization will increasingly define which enterprise AI initiatives succeed. IBM believes that success will depend not on standalone models, but on systems that enable customization, flexible deployment across hybrid environments and governance throughout the AI lifecycle.
Small, efficient and open models, combined with enterprise-grade data, orchestration and governance, will be crucial to successful AI initiatives in 2026 and beyond.
Gartner, AI Vendor Race: IBM Is the Company to Beat in Domain-Specific Language Model Enablement, Roberta Cozza, Samantha Searle, 8 December 2025
GARTNER is a trademark of Gartner, Inc. and/or its affiliates. Gartner does not endorse any company, vendor, product or service depicted in its publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner publications consist of the opinions of Gartner’s business and technology insights organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this publication, including any warranties of merchantability or fitness for a particular purpose.