Artificial intelligence (AI) is polarizing. It excites the futurist and engenders trepidation in the conservative. In my previous post, I described the different capabilities of both discriminative and generative AI, and sketched a world of opportunities where AI changes the way that insurers and the insured interact. This blog continues the discussion, investigating the risks of adopting AI and proposing measures for a safe and judicious adoption.

Risks and limitations of AI

The risks associated with the adoption of AI in insurance fall broadly into two categories: technological and usage.

Technological risk—data confidentiality

The chief technological risk is data confidentiality. AI development has enabled the collection, storage, and processing of information on an unprecedented scale, making it extremely easy to identify, analyze, and use personal data at low cost without the consent of the people it describes. The risk of privacy leakage from interaction with AI technologies is a major source of consumer concern and mistrust.

The advent of generative AI, where the AI manipulates your data to create new content, poses an additional risk to corporate data confidentiality. For example, feeding a generative AI system such as ChatGPT with corporate data to produce a summary of confidential corporate research would leave an indelible data footprint on the AI provider's external cloud servers, potentially accessible to queries from competitors.

Technological risk—security

An AI model is defined by parameters that are optimized against its training data; it is these parameters that give the AI its ability to generate insights. Should the parameters of an algorithm be leaked, a third party may be able to copy the model, causing economic and intellectual property loss to its owner. Additionally, should the parameters be modified illegally by a cyber attacker, the model's performance will deteriorate and lead to undesirable consequences.
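
As a simple illustration of the tampering risk, a model's serialized parameters can be checksummed at release time and verified before each load. The sketch below is a minimal Python example; the file name, digest value, and workflow are hypothetical.

```python
import hashlib
from pathlib import Path

# Digest recorded when the model file was approved for release (placeholder value).
EXPECTED_DIGEST = "replace-with-the-digest-recorded-at-release"

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a serialized model file."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def verify_model(path: str) -> None:
    """Refuse to use model parameters whose digest no longer matches the approved release."""
    actual = sha256_of(path)
    if actual != EXPECTED_DIGEST:
        raise RuntimeError(f"{path} failed integrity check (digest {actual})")

# Example usage with a hypothetical model file:
# verify_model("claims_model.pkl")
```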

Technological risk—transparency

The black-box nature of AI systems, especially generative AI, makes the decision process of AI algorithms hard to understand. Crucially, the insurance sector is a financially regulated industry where the transparency, explainability, and auditability of algorithms are of key importance to the regulator.

Usage risk—inaccuracy

The performance of an AI system heavily depends on the data from which it learns. If an AI system is trained on inaccurate, biased, or plagiarized data, it will provide undesirable results even if it is technically well-designed.
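
As a simple illustration, lightweight data-quality checks can be run before data ever reaches model training. The sketch below uses pandas to flag duplicate rows and missing values in a hypothetical claims extract; the column names and values are assumptions for illustration only.

```python
import pandas as pd

def data_quality_report(df: pd.DataFrame) -> dict:
    """Summarize simple data-quality signals before the data reaches model training."""
    return {
        "rows": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        "missing_by_column": df.isna().sum().to_dict(),
    }

# Hypothetical claims extract with gaps and a duplicate record.
claims = pd.DataFrame({
    "policy_id": ["P1", "P2", "P2", "P3"],
    "claim_amount": [1200.0, None, None, 850.0],
    "region": ["north", "south", "south", None],
})
print(data_quality_report(claims))
```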

Usage risk—abuse

Even when an AI system operates correctly in its analysis, decision-making, coordination, and other activities, it still carries the risk of abuse. The operator's purpose, method, or scope of use can deviate from what was intended and cause adverse effects. One example is facial recognition being used for the illegal tracking of people's movements.

Usage risk—over-reliance

Over-reliance on AI occurs when users start accepting incorrect AI recommendations, making errors of commission. Users have difficulty determining appropriate levels of trust because they lack awareness of what the AI can do, how well it can perform, or how it works. A corollary to this risk is the weakened skill development of the AI user: for instance, a claims adjuster whose ability to handle new situations, or to consider multiple perspectives, deteriorates or becomes restricted to only the cases the AI can also access.

Mitigating the AI risks

The risks posed by AI adoption highlight the need for a governance approach that mitigates both the technological and the usage risks.

Human-centric governance

To mitigate the usage risks, a three-pronged approach is proposed:

  1. Start with a mandatory training program to create awareness among staff involved in developing, selecting, or using AI tools and to ensure alignment with expectations.
  2. Then conduct a vendor assessment to gauge the robustness of vendor controls and to ensure appropriate transparency is codified in contracts.
  3. Finally, establish policy enforcement measures to set the norms, roles and accountabilities, approval processes, and maintenance guidelines across the AI development lifecycle.

Technology-centric governance

To mitigate the technological risks, IT governance should be expanded to cover the following:

  1. An expanded data and system taxonomy. This ensures that the AI model's data inputs and usage patterns, required validations and testing cycles, and expected outputs are all captured. Wherever possible, host the model on internal servers.
  2. A risk register, to quantify the magnitude of impact, level of vulnerability, and extent of monitoring protocols.
  3. An enlarged analytics and testing strategy that executes testing on a regular basis to monitor risk issues related to AI system inputs, outputs, and model components (a minimal sketch follows this list).
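
As a minimal sketch of points 2 and 3, the example below represents one risk register entry as a Python data class and pairs it with a simple recurring check on model outputs. The field names, thresholds, and the drift check itself are assumptions for illustration, not a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIRiskEntry:
    """One row of a hypothetical AI risk register (field names are illustrative)."""
    risk_id: str
    description: str
    impact: str            # e.g., "low", "medium", "high"
    vulnerability: str     # e.g., "low", "medium", "high"
    monitoring_protocol: str
    last_reviewed: date = field(default_factory=date.today)

def check_output_drift(current_approval_rate: float,
                       baseline_approval_rate: float,
                       tolerance: float = 0.05) -> bool:
    """Flag the model for review if its approval rate drifts beyond tolerance."""
    return abs(current_approval_rate - baseline_approval_rate) > tolerance

register = [
    AIRiskEntry(
        risk_id="AI-001",
        description="Claims triage model output drift",
        impact="high",
        vulnerability="medium",
        monitoring_protocol="Weekly comparison of approval rate against baseline",
    ),
]

# Example of a scheduled check feeding the register review process.
if check_output_drift(current_approval_rate=0.61, baseline_approval_rate=0.55):
    print(f"{register[0].risk_id}: drift detected, escalate per monitoring protocol")
```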

AI in insurance—Exacting and inevitable

AI's promise and potential in insurance lie in its ability to derive novel insights from ever larger and more complex actuarial and claims datasets. Combined with behavioral and ecological data, these datasets also create the potential for AI systems to draw erroneous inferences when querying databases, with real-world insurance consequences.

Efficient and accurate AI requires fastidious data science. It requires careful curation of knowledge representations in databases, decomposition of data matrices to reduce dimensionality, and pre-processing of datasets to mitigate the confounding effects of missing, redundant, and outlier data. Insurance AI users must be aware that input data quality limitations have insurance implications, potentially reducing actuarial analytic model accuracy.
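
As one illustration of this kind of pre-processing, the sketch below chains imputation of missing values, feature scaling, and dimensionality reduction using scikit-learn. The feature matrix is synthetic and the component count is an arbitrary assumption, not a recommendation.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Synthetic claims-feature matrix with missing entries (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))
X[rng.random(X.shape) < 0.05] = np.nan  # inject ~5% missing values

# Impute gaps, standardize features, then reduce dimensionality.
preprocess = Pipeline(steps=[
    ("impute", SimpleImputer(strategy="median")),
    ("scale", StandardScaler()),
    ("reduce", PCA(n_components=5)),
])

X_reduced = preprocess.fit_transform(X)
print(X_reduced.shape)  # (200, 5)
```

In practice, the same pipeline object can be fitted on training data and then reused unchanged on new data, which keeps pre-processing consistent across model retraining cycles.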

As AI technologies continue to mature and use cases expand, insurers should not shy away from the technology. Rather, they should contribute their insurance domain expertise to the development of AI technologies. Their ability to inform input data provenance and to ensure data quality will contribute towards a safe and controlled application of AI in the insurance industry.

As you embark on your journey to AI in insurance, explore and create insurance use cases. Above all, put a robust AI governance program in place.
