Artificial intelligence (AI) is polarizing. It excites the futurist and engenders trepidation in the conservative. In my previous post, I described the different capabilities of both discriminative and generative AI, and sketched a world of opportunities where AI changes the way that insurers and the insured interact. This blog continues the discussion, investigating the risks of adopting AI and proposing measures for a safe and judicious adoption.

Risks and limitations of AI

The risks associated with the adoption of AI in insurance can be broadly separated into two categories—technological and usage.

Technological risk—data confidentiality

The chief technological risk is data confidentiality. AI development has enabled the collection, storage, and processing of information on an unprecedented scale, making it extremely easy to identify, analyze, and use personal data at low cost without the consent of the people it describes. The risk of privacy leakage from interaction with AI technologies is a major source of consumer concern and mistrust.

The advent of generative AI, where the AI manipulates your data to create new content, poses an additional risk to corporate data confidentiality. For example, feeding a generative AI system such as ChatGPT with corporate data to produce a summary of confidential corporate research would leave an indelible data footprint on the AI provider’s external cloud servers, accessible to queries from competitors.

Technological risk—security

The parameters of an AI algorithm are optimized against training data, and it is these parameters that give the AI its ability to generate insights. Should the parameters of an algorithm be leaked, a third party may be able to copy the model, causing economic and intellectual property loss to the owner of the model. Additionally, should the parameters be illegally modified by a cyber attacker, the performance of the model will deteriorate, leading to undesirable consequences.
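To make the tampering risk concrete, one common safeguard is to verify the integrity of a model’s serialized parameters before they are loaded. The sketch below is a minimal illustration in Python, assuming a hypothetical parameter file and a digest recorded at deployment time; it is a sketch of the idea, not a complete model-security control.

```python
# Minimal sketch: detect tampering with serialized model parameters by
# comparing a SHA-256 digest against the value recorded at deployment time.
# The file path and expected digest are hypothetical placeholders.
import hashlib
from pathlib import Path

EXPECTED_SHA256 = "0" * 64  # placeholder for the digest recorded at deployment

def file_sha256(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def load_parameters_if_untampered(path: Path) -> bytes:
    """Refuse to load parameters whose digest no longer matches the record."""
    if file_sha256(path) != EXPECTED_SHA256:
        raise RuntimeError(f"Integrity check failed for model file {path}")
    return path.read_bytes()  # deserialize with your framework of choice
```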

Technological risk—transparency

The black-box characteristic of AI systems, especially generative AI, renders the decision process of AI algorithms hard to understand. Crucially, the insurance sector is a financially regulated industry where the transparency, explainability, and auditability of algorithms are of key importance to the regulator.

Usage risk—inaccuracy

The performance of an AI system depends heavily on the data from which it learns. If an AI system is trained on inaccurate, biased, or plagiarized data, it will produce undesirable results even if it is technically well-designed.
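As a hedged illustration of what guarding against this might look like in practice, the Python sketch below runs a few basic data-quality checks (missing values, duplicates, label imbalance) on a hypothetical claims dataset before training. The column names and thresholds are assumptions, not prescriptions.

```python
# Minimal sketch: basic pre-training data-quality checks on a hypothetical
# claims dataset held in a pandas DataFrame. Column names and thresholds
# are illustrative only.
import pandas as pd

def basic_data_checks(df: pd.DataFrame, label_col: str = "claim_approved") -> list[str]:
    """Return human-readable warnings about obvious data-quality issues."""
    warnings = []

    # Missing values can silently bias whatever the model learns.
    missing = df.isna().mean()
    for col, share in missing[missing > 0.05].items():
        warnings.append(f"Column '{col}' has {share:.0%} missing values")

    # Duplicated rows inflate the apparent evidence for some patterns.
    duplicates = int(df.duplicated().sum())
    if duplicates:
        warnings.append(f"{duplicates} duplicated rows found")

    # A heavily skewed label suggests the rare class may be poorly modeled.
    shares = df[label_col].value_counts(normalize=True)
    if shares.min() < 0.10:
        warnings.append(f"Label '{label_col}' is imbalanced: {shares.round(2).to_dict()}")

    return warnings
```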

Usage risk—abuse

Though an AI system may be operating correctly in its analysis, decision-making, coordination, and other activities, it still carries the risk of abuse. The operator’s purpose, method, and scope of use could be perverted or deviate from what was intended, causing adverse effects. One example of this is facial recognition being used for the illegal tracking of people’s movements.

Usage risk—over-reliance

Over-reliance on AI occurs when users start accepting incorrect AI recommendations—making errors of commission. Users have difficulty determining appropriate levels of trust because they lack awareness of what the AI can do, how well it can perform, or how it works. A corollary of this risk is the weakened skill development of the AI user: for instance, a claims adjuster whose ability to handle new situations, or to consider multiple perspectives, deteriorates or becomes restricted to only those cases to which the AI also has access.

Mitigating the AI risks

The risks posed by AI adoption highlight the need to develop a governance approach that mitigates the technological and usage risks that come with adopting AI.

Human-centric governance

To mitigate the usage risks, a three-pronged approach is proposed:

  1. Start with a mandatory training program to create awareness among staff involved in developing, selecting, or using AI tools and to ensure alignment with expectations.
  2. Then conduct a vendor assessment scheme to assess the robustness of vendor controls and to ensure that appropriate transparency is codified in contracts.
  3. Finally, establish policy enforcement measures to set the norms, roles and accountabilities, approval processes, and maintenance guidelines across the AI development lifecycle.

Technology-centric governance

To mitigate the technological risks, IT governance should be expanded to account for the following:

  1. An expanded data and system taxonomy. This is to ensure that the AI model’s data inputs and usage patterns, required validations and testing cycles, and expected outputs are captured. The model should be hosted on internal servers to avoid leaving data footprints on external ones.
  2. A risk register, to quantify the magnitude of impact, level of vulnerability, and extent of monitoring protocols.
  3. An enlarged analytics and testing strategy, to execute testing on a regular basis and monitor risk issues related to AI system inputs, outputs, and model components (a minimal sketch of such an input check follows this list).
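As an illustration of the kind of regular input testing the third item refers to, the sketch below compares production feature values against a training-time reference using a two-sample Kolmogorov-Smirnov test. The feature names, thresholds, and data are hypothetical, and the example assumes NumPy and SciPy are available.

```python
# Minimal sketch: flag numeric input features whose production distribution
# has drifted from the training-time reference. Feature names, thresholds,
# and data are illustrative only.
import numpy as np
from scipy.stats import ks_2samp

def input_drift_report(reference: dict[str, np.ndarray],
                       current: dict[str, np.ndarray],
                       p_threshold: float = 0.01) -> dict[str, bool]:
    """True for each feature whose current sample differs significantly
    from the reference sample under a two-sample KS test."""
    report = {}
    for feature, ref_values in reference.items():
        result = ks_2samp(ref_values, current[feature])
        report[feature] = result.pvalue < p_threshold
    return report

# Illustrative usage with synthetic data: a shifted mean simulates drift.
rng = np.random.default_rng(0)
print(input_drift_report(
    {"vehicle_age": rng.normal(8, 3, 5000)},
    {"vehicle_age": rng.normal(11, 3, 5000)},
))  # {'vehicle_age': True}
```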

AI in insurance—Exacting and inevitable

AI’s promise and potential in insurance lies in its ability to derive novel insights from ever larger and more complex actuarial and claims datasets. These datasets, combined with behavioral and ecological data, create the potential for AI systems that query them to draw erroneous inferences, with real-world insurance consequences.

Efficient and accurate AI requires fastidious data science. It requires careful curation of knowledge representations in databases, decomposition of data matrices to reduce dimensionality, and pre-processing of datasets to mitigate the confounding effects of missing, redundant, and outlier data. Insurance AI users must be aware that limitations in input data quality have insurance implications, potentially reducing the accuracy of actuarial analytic models.
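As an illustration only, the sketch below shows one way such pre-processing might look for a hypothetical numeric claims feature matrix, assuming pandas and scikit-learn are available: missing values are imputed, extreme outliers are clipped, and the feature matrix is decomposed to a lower dimension.

```python
# Minimal sketch: impute missing values, clip extreme outliers, and reduce
# dimensionality for a hypothetical numeric claims feature matrix.
# The quantile bounds and component count are illustrative only.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

def preprocess(features: pd.DataFrame, n_components: int = 10) -> pd.DataFrame:
    # Clip extreme values so a handful of outliers do not dominate the fit.
    clipped = features.clip(lower=features.quantile(0.01),
                            upper=features.quantile(0.99),
                            axis=1)

    pipeline = Pipeline([
        ("impute", SimpleImputer(strategy="median")),   # fill missing values
        ("scale", StandardScaler()),                    # put features on one scale
        ("reduce", PCA(n_components=min(n_components,   # drop redundant dimensions
                                        features.shape[1]))),
    ])
    reduced = pipeline.fit_transform(clipped)
    return pd.DataFrame(reduced, index=features.index)
```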

As AI technologies continue to mature and use cases expand, insurers should not shy away from the technology. But insurers should also contribute their insurance domain expertise to the development of AI technologies. Their ability to inform input data provenance and ensure data quality will contribute towards a safe and controlled application of AI in the insurance industry.

As you embark on your journey to AI in insurance, explore and create insurance use cases. Above all, put in place a robust AI governance program.
