Authors

Rick Swedloff

Document Type

Article

Disciplines

Insurance Law

Abstract

Insurers can no longer ignore the promise that the algorithms driving big data will offer greater predictive accuracy than traditional statistical analysis alone. Big data represents a natural evolutionary step for insurers trying to price their products to increase profits, mitigate moral hazard, and better combat adverse selection. But these big data promises are not free. Using big data could lead to inefficient social and private investments, undermine important risk-spreading goals of insurance, and invade policyholder privacy. These dangers are present in any change to risk classification. Using algorithms to classify risk by parsing new and complex data sets, however, raises two additional, unique problems.

First, this machine-driven classification may yield unexpected correlations with risk that unintentionally burden suspect or vulnerable groups with higher prices. The higher rates may not reinforce negative stereotypes or cause dignitary harms, because the algorithms obscure who is being charged more for coverage and for what reason. Nonetheless, there may still be reasons for concern about which groups bear the burden of paying more for coverage.

Second, big data raises novel privacy concerns. Insurers classifying risk with big data will harvest and use personal information indirectly, without asking policyholders for permission. This may cause privacy invasions unanticipated by current regulatory regimes. Further, the predictive power of big data may allow insurers to infer personally identifiable information about policyholders without asking them directly.
