Artificial intelligence is disrupting many areas of business, and its potential is particularly apparent in customer service, talent, and application modernization. According to the IBM Institute for Business Value (IBV), AI can contain contact center cases, improving customer experience by 70%. AI can also increase productivity by 40% in HR and by 30% in application modernization, for example by reducing labor burdens through automated ticket assistance in IT operations. While these numbers indicate transformation opportunities for enterprises, scaling and operationalizing AI has historically been challenging for organizations.
With data stored across clouds and on-premises environments, it becomes difficult to access it while managing governance and controlling costs. Further complicating matters, the uses of data have become more varied, and companies are faced with managing complex or poor-quality data.
With access to the right data, it becomes easier to democratize AI for all users by using the power of foundation models to support a wide range of tasks. However, it's important to weigh the opportunities of foundation models against their risks, in particular whether the models are trustworthy enough to deploy AI at scale.
Trust is a leading factor preventing stakeholders from implementing AI. In fact, the IBV found that 67% of executives are concerned about potential liabilities of AI. Existing responsible AI tooling often lacks technical capability and is restricted to specific environments, meaning customers cannot use the tools to govern models on other platforms. This is alarming, considering that generative models can produce output containing toxic language, including hate, abuse and profanity (HAP), or leak personally identifiable information (PII). Companies are increasingly receiving negative press for AI usage, damaging their reputations. Data quality strongly affects the quality and usefulness of content produced by an AI model, underscoring the importance of addressing data challenges.
Increasing user productivity: Knowledge management use cases
An emerging generative AI application is knowledge management. With the power of AI, enterprises can use knowledge management tools to collect, create, access and share relevant data for organizational insights. Knowledge management software applications are often implemented into a centralized system, or knowledge base, to support business domains and tasks—including talent, customer service and application modernization.
HR, talent and AI
HR departments can put AI to work on tasks like content generation, retrieval-augmented generation (RAG) and classification. Content generation can quickly create the description for a role. RAG can help identify the skills needed for a role based on internal HR documents. Classification can help determine whether an applicant is a good fit for the enterprise based on their application. Together, these tasks shorten the time between when a person applies and when they receive a decision.
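The RAG pattern behind the skills-identification example can be illustrated with a minimal sketch. Everything here is hypothetical: the document store is an in-memory dictionary, retrieval uses simple word overlap instead of vector embeddings, and a template stands in for the LLM generation step that a real system would call.

```python
# Hypothetical RAG sketch: retrieve the most relevant internal HR document,
# then "generate" an answer grounded in it. Word-overlap retrieval and a
# template response stand in for embeddings and an LLM.

HR_DOCS = {
    "data-engineer": "Data engineer role requires Python, SQL and pipeline design skills.",
    "ux-designer": "UX designer role requires Figma, user research and prototyping skills.",
}

def retrieve(query: str, docs: dict) -> str:
    """Return the document sharing the most words with the query."""
    q = set(query.lower().split())
    return max(docs.values(), key=lambda d: len(q & set(d.lower().split())))

def generate(query: str, context: str) -> str:
    """Stand-in for an LLM call: ground the answer in retrieved context."""
    return f"Based on internal HR documents: {context}"

answer = generate("skills for data engineer",
                  retrieve("skills for data engineer", HR_DOCS))
```

The design point RAG makes is visible even in this toy: the model's answer is constrained to retrieved organizational knowledge rather than generated from the model's parameters alone.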
Customer service and AI
Customer service divisions can take advantage of AI through RAG, summarization and classification. For example, enterprises can incorporate a customer service chatbot on their website that uses generative AI to be more conversational and context specific. RAG can search internal documents of organizational knowledge to answer the customer's inquiry and generate a tailored response. Summarization can give employees a brief of the customer's problem and previous interactions with the company. Text classification can gauge the customer's sentiment. These tasks can reduce manual labor while improving customer support and, ideally, customer satisfaction and retention.
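The sentiment-classification step above can be sketched in a few lines. This is a deliberately simplified, hypothetical lexicon approach for illustrating the routing idea; a production system would use a fine-tuned classification model rather than keyword matching, and the word lists here are invented.

```python
# Toy sentiment classifier for triaging customer messages.
# Hypothetical lexicons; real systems would use a trained model.

NEGATIVE = {"angry", "broken", "refund", "terrible", "cancel"}
POSITIVE = {"great", "thanks", "love", "helpful", "resolved"}

def classify_sentiment(message: str) -> str:
    """Label a message positive, negative or neutral by keyword counts."""
    words = set(message.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

A contact center could use such a label to prioritize negative-sentiment tickets for human agents while letting the chatbot handle neutral inquiries.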
Application modernization and AI
Application modernization can also benefit from summarization and content generation tasks. With a summary of the company's knowledge and business objectives, developers can spend less time learning this necessary information and more time coding. IT teams can also generate summaries of support tickets to quickly address and prioritize issues. Another way developers can use generative AI is by communicating with large language models (LLMs) in natural language and asking the model to generate code. This can help developers translate between programming languages, solve bugs and reduce time spent coding, allowing for more creative ideation.
Powering a knowledge management system with a data lakehouse
To address the data challenges that come with deploying an AI-powered knowledge management system, organizations need a data lakehouse: a fit-for-purpose data store that combines the flexibility of a data lake with the performance of a data warehouse to help scale AI.
To prepare data for AI, data engineers need the ability to access any type of data across a vast number of sources and hybrid cloud environments from a single point of entry. A data lakehouse with multiple query engines and storage options allows team members to share data in open formats. Additionally, engineers can cleanse, transform and standardize data for AI/ML modeling without duplicating it or building additional pipelines. Moreover, enterprises should consider lakehouse solutions that incorporate generative AI to help data engineers and non-technical users easily discover, augment and enrich data with natural language. Data lakehouses improve the efficiency of deploying AI and of generating data pipelines.
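The cleanse-transform-standardize step can be made concrete with a small sketch. The field names and rules below are hypothetical, chosen only to show the shape of such a transform: trimming whitespace, normalizing casing, and dropping poor-quality rows before they reach AI/ML modeling.

```python
# Sketch of a cleansing/standardization pass a data engineer might run
# before modeling. Schema and rules are illustrative assumptions.

def standardize(record: dict):
    """Trim whitespace, normalize casing, and drop rows missing an ID."""
    if not record.get("customer_id") or not str(record["customer_id"]).strip():
        return None  # poor-quality row: no join key
    return {
        "customer_id": str(record["customer_id"]).strip(),
        "country": str(record.get("country", "")).strip().upper(),
        "email": str(record.get("email", "")).strip().lower(),
    }

raw = [
    {"customer_id": " 42 ", "country": "us", "email": "A@Example.COM "},
    {"customer_id": "", "country": "de", "email": "b@example.com"},
]
clean = [r for r in (standardize(x) for x in raw) if r is not None]
```

In a lakehouse, a transform like this would typically run against open-format tables in place, which is what avoids the duplicate copies and extra pipelines mentioned above.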
AI-powered knowledge management systems hold sensitive data, including HR email automations, marketing video translations and call center transcript analytics. When it comes to this sensitive information, secure access to data becomes increasingly important. Customers need a data lakehouse that offers built-in centralized governance and local automated policy enforcement, supported by data cataloging, access controls, security and transparency in data lineage.
With these data foundations set by a data lakehouse solution, data scientists can use governed data to build, train, tune and deploy AI models with trust and confidence.
Ensure responsible, transparent and explainable knowledge management systems
As previously mentioned, chatbots are a popular form of generative AI-powered knowledge management system used for customer experience. This application can produce value for an enterprise, but it also poses risk.
For instance, a chatbot for a healthcare company can reduce nurse workloads and improve customer service by answering questions about treatments using known details from previous interactions. However, if data quality is poor or if bias is injected into the model during fine-tuning or prompt tuning, the model is likely to be untrustworthy. As a result, the chatbot may offer a response to a patient that includes inappropriate language or leaks another patient's PII.
To prevent this situation, organizations need proactive detection and mitigation of bias and drift when deploying AI models. An automatic content-filtering capability that detects HAP and PII leakage would reduce model validators' burden of manually checking that models avoid toxic content.
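The content-filtering idea can be sketched as a last checkpoint before a chatbot response leaves the system. The patterns below are illustrative, not exhaustive: real HAP detection uses a trained classifier rather than a block list, and production PII detection covers far more formats than the two regexes shown.

```python
import re

# Sketch of output filtering for PII and blocked terms.
# Patterns and the block list are hypothetical placeholders.

PII_PATTERNS = [
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email addresses
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US SSN-style numbers
]
HAP_BLOCKLIST = {"hateword"}  # placeholder term

def filter_response(text: str) -> str:
    """Redact PII matches; withhold output containing blocked terms."""
    for pattern in PII_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    if any(term in text.lower() for term in HAP_BLOCKLIST):
        return "[RESPONSE WITHHELD: policy violation]"
    return text
```

Running every generated response through such a gate is what turns manual model validation into an automated, always-on control.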
Turn possibility into reality with watsonx
As noted earlier, a knowledge management strategy covers the collection, creation and sharing of knowledge within an organization. It is often implemented as a knowledge-sharing system through which stakeholders can learn from and leverage existing collective knowledge and organizational insights. For instance, a RAG task can help identify the skills needed for a job role based on internal HR documents, or support a customer service chatbot that searches internal documents to answer a customer's inquiry and generate a tailored response.
When looking to deploy generative AI models, businesses should join forces with a trusted partner that has created or sourced quality models from quality data—one that allows customization with enterprise data and goals.
To help our clients solve for knowledge management, we offer IBM watsonx.ai. Part of the IBM watsonx platform, which brings together new generative AI capabilities, watsonx.ai combines foundation models and traditional machine learning in a powerful studio spanning the AI lifecycle. With watsonx.ai, you can train, validate, tune and deploy generative AI, foundation models and machine learning capabilities with ease, and build AI applications in a fraction of the time with a fraction of the data.