As InsureTech Connect 2018 continues, some common themes keep being raised during discussions at this 3-day event. Points like: “disruption is a good thing,” “blockchain is a game changer,” and “big data is only getting bigger.” With that in mind, it was a pleasure to sit in on a session titled “Cutting Through the Hype on Big Data.” The panelists shared valuable insights on the value of big data in insurance, what it can and can’t do to help advance the industry, and the best way to leverage data while maintaining compliance and playing within the rules. Moderated by Ross Shanken, CEO and founder of Jornaya, the panel included Michael Consedine, CEO of the NAIC; Pranav Pasricha, CEO of Intellect SEEC; and Sofie Quidenus-Wahlforss, CEO and founder of omni:us.
Defining big data
The session began, somewhat surprisingly, with Mr. Shanken referring the audience to an older (but still relevant) study from IBM and the University of Oxford on the subject of big data. Titled “Analytics: The real-world use of big data,” it was published by the IBM Institute for Business Value about 6 years ago, but many of its findings are still accurate and pertinent, as Mr. Shanken pointed out. The study asked over 1,000 data scientists and practitioners from almost 100 countries to define big data, and though there were numerous answers, the three most popular were the most compelling as applied to insurance:
- Greater scope of information gathered: The amount of data that is generated and stored continues to grow exponentially every day.
- New kinds of data: Data that didn’t exist before.
- Real-time information: Timing and accessing of information can also be included under the “big data” umbrella.
Putting it to use, from a regulator’s point of view
The incredible opportunity presented by the amount of data being generated comes with its own particular challenges. As Mr. Consedine of the NAIC described from a regulator’s point of view, big data is nontraditional and has not historically been used for underwriting. But now, within the last 5 years or so, information is pouring in, and underwriters must make sense of it and use it to make smart, informed underwriting decisions. A concern for regulators is that this information be used properly: to identify and assist those who would otherwise be ineligible for insurance, and to extend coverage to them. He continued, “We recognized the pro-consumer potential of big data, particularly to potentially better serve underserved populations.”
This would simultaneously generate new revenue streams for insurers that did not exist before. But, as they say, “with great knowledge comes great responsibility”: the pitfalls of regulations like the ongoing GDPR must always be kept in mind, as described by Ms. Quidenus-Wahlforss. She added that to manage this “biggest shift in insurance in the last 100 years” and maintain compliance, transparency and clarity are key. From a regulator’s perspective, Mr. Consedine explained, “Regulators want to hear that you have accountability, mastery and transparency.”
Big data powers the heart of insurance
“What one single process is impacted by big data? The one thing is everything, and it’s for the good.” – Pranav Pasricha
As Mr. Pasricha describes, big data (and AI) has permeated the “heart of insurance,” or what he says are risk assessment and claims processing. By using the power of AI and big data for these two foundational pillars of insurance, everything else is impacted and falls into place. His vision for the future of insurance is based on two cognitive-driven principles:
- Incorporating AI into the world of commercial insurance.
- Every process and transaction of an insurance company should be based on machine learning, be self-learned and autonomous.
The foundation for implementing these principles is big data. Data collected and put to use, cognitively, allows insurers to “ask less, know more” about their clients and prospective clients. He also noted that as we move past the rise of robo-advisors in claims processing and continue to refine their abilities, we have established a “frictionless” claims process. Now, the focus is on improving the outcomes of these claims for customers, using the same insights and knowledge gained from this information. Moving forward, Mr. Pasricha recommends focusing on further augmenting automatic claims handling and policy expansion with AI. This, in turn, can offer more personalized insurance to customers.
At IBM, we give both the incumbents and the disruptors the tools and the toolbox to create cognitive, big data-powered insurance solutions for frictionless claims and beyond.
For more on leveraging data in the insurance industry, we recommend reading “Digital transformation in the insurance industry: Part 1 – Data is the new natural resource,” by Sandip Patel, IBM Global Managing Director, Insurance.
To learn more about IBM’s insurance solutions, please visit: