
Risk Classification’s Big Data (R)Evolution

Swedloff, Rick, Risk Classification’s Big Data (R)Evolution (2014). Connecticut Insurance Law Journal, Vol. 21, 2014. Available for download at SSRN: http://ssrn.com/abstract=2566594

Insurers can no longer ignore the promise that the algorithms driving big data will offer greater predictive accuracy than traditional statistical analysis alone. Big data represents a natural evolutionary step for insurers seeking to price their products to increase profits, mitigate moral hazard, and better combat adverse selection. But these big data promises are not free. Using big data could lead to inefficient social and private investments, undermine important risk-spreading goals of insurance, and invade policyholder privacy. These dangers are present in any change to risk classification. Using algorithms to classify risk by parsing new and complex data sets raises two additional, unique problems. First, this machine-driven classification may yield unexpected correlations with risk that unintentionally burden suspect or vulnerable groups with higher prices. The higher rates may not reinforce negative stereotypes or cause dignitary harms, because the algorithms obscure who is being charged more for coverage and for what reason. Nonetheless, there may be reasons to be concerned about which groups are burdened by having to pay more for coverage. Second, big data raises novel privacy concerns. Insurers classifying risk with big data will harvest and use personal information indirectly, without asking policyholders for permission. This may cause privacy invasions unanticipated by current regulatory regimes. Further, the predictive power of big data may allow insurers to infer personally identifiable information about policyholders without asking them directly. Thus, while big data may be a natural next step in risk classification, it may require a revolutionary approach to regulation. Regulators will have to be more thoughtful about when price discrimination matters and what information can be kept private. The former, in particular, will require regulators to determine whether it is acceptable to charge risky groups more for coverage regardless of the social context in which those risks materialize. Further, for both price discrimination and privacy, regulators will have to increase their capacity to analyze the data inputs, algorithms, and outputs of these classification schemes.
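
The abstract's first concern, that machine-driven classification can burden vulnerable groups through facially neutral inputs, can be made concrete with a small simulation. The Python sketch below is illustrative only and is not drawn from the article; the variable names, the synthetic data, and the use of scikit-learn's logistic regression are assumptions for demonstration. A model that never sees the protected attribute still produces higher average premiums for one group, because a "neutral" feature (standing in for something like a ZIP-code-derived score) correlates with group membership.

# Hypothetical sketch: proxy discrimination in an algorithmic risk model.
# The protected attribute is never given to the model, yet premiums differ
# by group because a facially neutral feature correlates with it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected group membership (excluded from the model's inputs).
group = rng.integers(0, 2, size=n)

# "Neutral" feature that happens to correlate with group membership.
neutral = 0.8 * group + rng.normal(0.0, 0.5, size=n)

# Simulated claim outcomes depend only on the neutral feature.
claim_prob = 1.0 / (1.0 + np.exp(-(neutral - 0.5)))
claims = rng.binomial(1, claim_prob)

# Classify risk using the neutral feature alone.
X = neutral.reshape(-1, 1)
model = LogisticRegression().fit(X, claims)
risk = model.predict_proba(X)[:, 1]

# Price premiums in proportion to predicted risk.
premium = 100.0 * risk
print(f"Mean premium, group 0: {premium[group == 0].mean():.2f}")
print(f"Mean premium, group 1: {premium[group == 1].mean():.2f}")

Running this prints a noticeably higher mean premium for group 1, even though group membership never enters the model, which is the kind of obscured, unintended burden the abstract describes and which regulators would only detect by examining the data inputs, algorithms, and outputs themselves.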
