AI in underwriting poses significant risks that could have disastrous effects for both consumers and the insurance companies that underwrite them. AI can be prone to inaccuracies if the algorithms fail to properly interpret data points or mistakenly identify risks in consumers. This can potentially lead to incorrect and unfair pricing decisions, resulting in the rejection of legitimate claims or in some cases, the acceptance of fraudulent applications. Additionally, if implemented poorly, AI systems can worsen existing biases, since algorithms can maintain and even amplify existing biases without proper oversight. Finally, AI can lack transparency in its decision-making processes, making it difficult to challenge decisions and uncover systemic flaws in an insurer's underwriting process.
Surprise! That opening paragraph about the risks of AI underwriting was, ironically, written by AI. It was generated with GPT-3 through OpenAI's API; you can get similar results from the ChatGPT bot over at openai.com.
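If you're curious what that looks like, here's a rough sketch in Python using OpenAI's library (the older, pre-1.0 Completion interface that GPT-3 shipped with). The prompt and settings are illustrative guesses, not the exact ones used for the paragraph above.

```python
# Sketch of generating a paragraph with GPT-3 via OpenAI's API.
# Uses the legacy (pre-1.0) openai Python library; the prompt and
# parameters are illustrative, not the exact ones used above.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3 model available at the time
    prompt="Write a paragraph on the risks of using AI in insurance underwriting.",
    max_tokens=200,
    temperature=0.7,
)

print(response.choices[0].text.strip())
```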
It misses some key points, though. To understand the problems inherent to AI, it helps to understand how it works. The basic, vastly oversimplified explanation is that AI is a black box: you feed it inputs and it gives you an output. How it determines that output comes down to machine learning. To oversimplify machine learning in turn: you give the machine sample inputs and tell it what the output should have been. Do this often enough and the machine learns to produce the right output on its own.
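That loop is easy to see in miniature. Below is a toy sketch using scikit-learn; the submission features and accept/decline labels are entirely made up for illustration.

```python
# Toy illustration of the "show it inputs, tell it the right output" loop.
# Uses scikit-learn; all features and labels here are invented.
from sklearn.tree import DecisionTreeClassifier

# Sample inputs: [years in business, prior claims]
X = [[1, 3], [2, 4], [10, 0], [8, 1], [15, 0], [3, 5]]
# What the output *should have been* for each: 0 = decline, 1 = accept
y = [0, 0, 1, 1, 1, 0]

model = DecisionTreeClassifier()
model.fit(X, y)  # "do this often enough and the machine learns"

# The black box at work: feed it a new input, get an output.
print(model.predict([[12, 1]]))  # likely [1], accept
print(model.predict([[1, 6]]))   # likely [0], decline
```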
Hopefully, this has already tipped you off to some of the simple pitfalls of AI. It is only as good as the data it's trained on, as demonstrated by Microsoft's Tay chatbot on Twitter, which had to be taken down within hours of release after users trained it into producing offensive responses.
But some drawbacks are less obvious. AI makes different assumptions than humans do, and sometimes they're wrong. For example, one set of researchers got a car's AI to read a 35 mph speed-limit sign as 85 simply by using a piece of black tape to extend the middle arm of the 3.
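What happened there is called an adversarial input: a small, targeted change that flips the model's answer. The researchers attacked a real camera system; the contrived sketch below shows the same failure mode on an invented two-feature model, where a nudge that looks trivial to a human flips the prediction.

```python
# Contrived illustration of an adversarial input: a tiny, targeted
# change flips the model's answer, much as a strip of tape turned
# "35" into "85". The model and data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Two blobs of training data for classes 0 and 1.
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

model = LogisticRegression().fit(X, y)

x = np.array([[1.4, 1.4]])   # an input near the decision boundary
print(model.predict(x))      # e.g. [0]

# Nudge the input a small step along the model's weight vector:
# the "piece of tape" is tiny in the input but decisive to the model.
w = model.coef_[0]
x_adv = x + 0.3 * w / np.linalg.norm(w)
print(model.predict(x_adv))  # e.g. [1], the answer flips
```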
Humans have inherent biases, too. Sometimes an account gets declined because of how it's presented and explained; sometimes because of its timing relative to a major news story. People are even more likely to buy newspaper subscriptions on sunny days, for no logical reason at all. A machine trying to learn from these decisions could reach the wrong conclusion about why a risk is good or bad.
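Here's a contrived sketch of how that goes wrong: the training data includes a "sunny day" flag that happens to line up with past approvals, and the model learns the weather instead of the risk. Every number here is invented.

```python
# Invented example of a model latching onto a spurious signal: past
# human decisions happen to correlate with weather, so the model
# learns "sunny = good risk" instead of anything about the account.
from sklearn.linear_model import LogisticRegression

# Features for past submissions: [sunny_day, prior_claims]
X = [[1, 2], [1, 3], [1, 1], [0, 1], [0, 2], [0, 0]]
# Historical human decisions: mostly accepted on sunny days.
y = [1, 1, 1, 0, 0, 0]

model = LogisticRegression().fit(X, y)

# Same account, different weather; the decision should not change:
print(model.predict([[1, 2]]))  # likely [1], accept
print(model.predict([[0, 2]]))  # likely [0], decline
```

The account is identical in both queries; only the weather flag differs, yet the model's verdict swings with it.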
So why do we care? Well, AI-based syndicates already exist. Some are follow-only, meaning a human has little to no interaction with the decision and the underwriting is solely a matter of how good the AI is. This isn't science fiction: the technology is here today and is likely already causing issues for insureds.
Author: Mark Greenway, LIG Marine Managers