Unnatural bias: when AI is over-assumptive

The use of advanced analytics and machine learning models in insurance has increased significantly during the past few years. It’s an exciting time for actuaries – and an opportunity to innovate. Using these advances, leading insurers are generating deeper insights and greater predictive power, leading to improved performance.

However, every new technology comes with new risks. When it comes to artificial intelligence (AI), such risks could be material in terms of regulatory implications, litigation, public perception and reputation.

The ethical risks associated with data bias are not peculiar to AI, but data bias is more prevalent in AI models for several reasons. First, AI models make predictions based on patterns in data, without assuming any particular form of statistical distribution. Because they learn from historical data, they may perpetuate biases present in that data, leading to biased outcomes and the unfair treatment of certain groups or individuals.
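
To make the mechanism concrete, here is a minimal sketch in Python (assuming numpy and scikit-learn are available; the data, feature names and penalty size are entirely synthetic and hypothetical, chosen only for illustration). A classifier is trained on historical underwriting decisions that penalised one group beyond its true risk. Even though the protected attribute itself is excluded from training, the model recovers the bias through a correlated proxy feature.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=42)
n = 20_000

# Synthetic portfolio: one genuine risk signal, one protected group flag,
# and a postcode-style feature that acts as a proxy for group membership.
risk = rng.normal(size=n)
group = rng.integers(0, 2, size=n)
postcode = group + rng.normal(scale=0.3, size=n)

# Historical decisions were biased: group 1 was declined more often than
# its true risk justified (the extra 1.5 penalty below is hypothetical).
p_approve = 1 / (1 + np.exp(risk + 1.5 * group))
approved = rng.random(size=n) < p_approve

# Train on the biased history, deliberately *excluding* the group flag.
X = np.column_stack([risk, postcode])
model = LogisticRegression().fit(X, approved)

# Compare two otherwise identical applicants who differ only in the proxy:
# the model has rediscovered the group penalty through the postcode feature.
same_risk = np.array([[0.0, 0.0],   # typical group-0 postcode
                      [0.0, 1.0]])  # typical group-1 postcode
print(model.predict_proba(same_risk)[:, 1])
```

The two printed approval probabilities differ materially even though the applicants carry identical risk: the model has learned the historical prejudice rather than the underlying risk, which is precisely how biased training data translates into biased outcomes.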