Artificial intelligence is disrupting many areas of business, and its potential is particularly apparent in customer service, talent and application modernization. According to the IBM Institute for Business Value (IBV), AI can contain contact center cases and enhance customer experience by 70%. AI can also increase productivity in HR by 40% and in application modernization by 30%, for example by automating IT ticket assistance to reduce labor burdens. Yet while these numbers indicate transformation opportunities for enterprises, scaling and operationalizing AI has historically been challenging for organizations.

Request a demo to see how watsonx can put AI to work

There’s no AI without IA

AI is only as good as the data that informs it, and the need for the right data foundation has never been greater. According to IDC, stored data is expected to grow by up to 250% over the next five years.

With data stored across clouds and on-premises environments, it becomes difficult to access it while managing governance and controlling costs. Further complicating matters, the uses of data have become more varied, and companies are faced with managing complex or poor-quality data.

A study conducted by Precisely found that within enterprises, data scientists spend 80% of their time cleaning, integrating and preparing data, dealing with many formats including documents, images and videos. This underscores the importance of establishing a trusted, integrated data platform for AI.

Trust, AI and effective knowledge management

With access to the right data, it is easier to democratize AI for all users by using the power of foundation models to support a wide range of tasks. However, it’s important to factor in the opportunities and risks of foundation models—in particular, the trustworthiness of models is central to deploying AI at scale.

A lack of trust is a leading factor preventing stakeholders from implementing AI. In fact, IBV found that 67% of executives are concerned about the potential liabilities of AI. Much existing responsible AI tooling lacks technical depth and is restricted to specific environments, meaning customers cannot use these tools to govern models on other platforms. This is alarming, considering that generative models often produce output containing toxic language—including hate, abuse and profanity (HAP)—or leak personally identifiable information (PII). Companies are increasingly receiving negative press for AI usage, damaging their reputations. Because data quality strongly impacts the quality and usefulness of content produced by an AI model, addressing data challenges is essential.

Increasing user productivity: Knowledge management use cases

An emerging generative AI application is knowledge management. With the power of AI, enterprises can use knowledge management tools to collect, create, access and share relevant data for organizational insights. Knowledge management software applications are often implemented into a centralized system, or knowledge base, to support business domains and tasks—including talent, customer service and application modernization.

HR, talent and AI

HR departments can put AI to work through tasks like content generation, retrieval augmented generation (RAG) and classification. Content generation can be utilized to quickly create the description for a role. RAG can help identify the skills needed for a role based on internal HR documents. Classification can help determine whether an applicant is a good fit for the enterprise given their application. Together, these tasks shorten the time between a person submitting an application and receiving a decision.
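The RAG task described above has two steps: retrieve the most relevant internal document, then pass it to a model as grounding context. The following is a minimal sketch of the retrieval step using bag-of-words cosine similarity; the document snippets and the query are hypothetical, and a production system would use embedding models and a vector store rather than word counts.

```python
import math
from collections import Counter

def tokenize(text):
    return [w.strip(".,?:").lower() for w in text.split()]

def score(query, doc):
    # Cosine similarity over bag-of-words term counts.
    q, d = Counter(tokenize(query)), Counter(tokenize(doc))
    overlap = sum(q[w] * d[w] for w in q)
    norm = math.sqrt(sum(v * v for v in q.values())) * math.sqrt(sum(v * v for v in d.values()))
    return overlap / norm if norm else 0.0

def retrieve(query, documents):
    # Return the internal document most relevant to the query.
    return max(documents, key=lambda doc: score(query, doc))

# Hypothetical internal HR documents.
hr_documents = [
    "Data engineer role: requires SQL, Python and experience with data pipelines.",
    "Customer support role: requires communication skills and CRM experience.",
]

query = "What skills are needed for the data engineer role?"
context = retrieve(query, hr_documents)
# The retrieved document grounds the prompt sent to a generative model.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

Grounding the prompt in retrieved internal documents is what keeps the model's answer tied to the organization's actual role requirements rather than its general training data.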

Customer service and AI

Customer service divisions can take advantage of AI by using RAG, summarization and classification. For example, enterprises can incorporate a customer service chatbot on their website that uses generative AI to be more conversational and context-specific. Retrieval augmented generation can be used to search internal documents of organizational knowledge to answer the customer’s inquiry and generate a tailored output. Summarization can help employees by providing them with a brief of the customer’s problem and previous interactions with the company. Text classification can be utilized to classify the customer’s sentiment. These tasks can reduce manual labor while improving customer support and, hopefully, customer satisfaction and retention.

Application modernization and AI

App modernization can also be supported by summarization and content generation tasks. With a summary of the company’s knowledge and business objectives, developers can spend less time learning this necessary information and more time coding. IT workers can also generate a summary of a support ticket to quickly address and prioritize the issues it raises. Another way developers can use generative AI is by communicating with large language models (LLMs) in human language and asking the model to generate code. This can help the developer translate between programming languages, solve bugs and reduce time spent coding, allowing for more creative ideation.

Powering a knowledge management system with a data lakehouse

To address the data challenges that come with deploying an AI-powered knowledge management system, organizations need a data lakehouse: a fit-for-purpose data store that combines the flexibility of a data lake with the performance of a data warehouse to help scale AI.

To prepare data for AI, data engineers need the ability to access any type of data across a vast number of sources and hybrid cloud environments from a single point of entry. A data lakehouse with multiple query engines and storage options can allow team members to share data in open formats. Additionally, engineers can cleanse, transform and standardize data for AI/ML modeling without duplicating it or building additional pipelines. Moreover, enterprises should consider lakehouse solutions that incorporate generative AI to help data engineers and non-technical users easily discover, augment and enrich data with natural language. Data lakehouses improve the efficiency of deploying AI and of generating data pipelines.

AI-powered knowledge management systems hold sensitive data, including HR email automations, marketing video translations and call center transcript analytics. When it comes to this sensitive information, securing access to data becomes increasingly important. Customers need a data lakehouse that offers built-in centralized governance and local automated policy enforcement, supported by data cataloging, access controls, security and transparency in data lineage.

Through the data foundations set by a data lakehouse solution, data scientists can use governed data to build, train, tune and deploy AI models with trust and confidence.

Ensure responsible, transparent and explainable knowledge management systems

As previously mentioned, chatbots are a popular form of generative AI-powered knowledge management system used for customer experience. This application can produce value for an enterprise, but it also poses risk.

For instance, a chatbot for a healthcare company can reduce nurse workloads and improve customer service by answering questions about treatments using known details from previous interactions. However, if data quality is poor, or if bias is injected into the model during fine-tuning or prompt tuning, the model is likely to be untrustworthy. As a result, the chatbot may offer a response to a patient that includes inappropriate language or leaks another patient’s PII.

To prevent this situation from happening, organizations need proactive detection and mitigation of bias and drift when deploying AI models. An automatic content filtering capability that detects HAP and PII leakage would reduce model validators’ burden of manually checking models for toxic content.
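A minimal sketch of such an output filter is shown below: it redacts common PII patterns and flags blocklisted terms before a response reaches the end user. The regexes and the blocklist here are illustrative assumptions; production HAP detection uses trained classifiers rather than word lists, and PII coverage is far broader than email and phone patterns.

```python
import re

# Hypothetical PII patterns for illustration only.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}
# Placeholder for a real HAP lexicon or classifier.
BLOCKLIST = {"idiot", "stupid"}

def filter_output(text):
    # Flag toxic terms, then redact PII before the response is shown.
    flagged = [w for w in re.findall(r"\w+", text.lower()) if w in BLOCKLIST]
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} redacted]", text)
    return text, flagged

safe, flags = filter_output("Contact Jane at jane.doe@example.com or 555-123-4567.")
```

A deployment would typically block or regenerate any response with a non-empty flag list, and log redactions for auditability.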

Automatic content filters in watsonx can help prevent toxic language from being presented to an end-user.

Turn possibility into reality with watsonx

As stated, a knowledge management strategy refers to the collection, creation and sharing of knowledge within an organization. It is often implemented in a knowledge sharing system through which stakeholders can learn from and leverage existing collective knowledge and organizational insights. For instance, a RAG task can help identify the skills needed for a job role based on internal HR documents, or support a customer service chatbot in searching internal documents to answer a customer’s inquiry and generate a tailored output.

When looking to deploy generative AI models, businesses should join forces with a trusted partner that has created or sourced quality models from quality data—one that allows customization with enterprise data and goals. 

To help our clients solve for knowledge management, we offer IBM watsonx.ai. Part of the IBM watsonx platform, which brings together new generative AI capabilities, watsonx.ai combines foundation models and traditional machine learning in a powerful studio spanning the AI lifecycle. With watsonx.ai, you can train, validate, tune and deploy generative AI, foundation models and machine learning capabilities with ease, and build AI applications in a fraction of the time with a fraction of the data.

Book a trial to see the value for your enterprise.
Learn more about IBM watsonx.ai