Big data and AI: are we acting responsibly?
10/16/2016

The benefits of big data could become even greater, thanks to the growing use of artificial intelligence. But are we forgetting our ethics?

By Paul Maher, Principal, Life Technology Solutions, Milliman.

The brave new world of big data and artificial intelligence (AI) seems so beguiling.

Take large sets of data, put them together, let the machines do the hard work and wait for the answers. Insurers will gain newfound insights into risk and pricing and identify market opportunities, all with much tighter control over their portfolios.

They might also be putting their reputation on the line.

What happens if insurers don’t ask the right questions? What if they haven’t got the right controls in place? What if the answers throw insurers into conflict with public policy?

These questions—and many similar ones—lie at the heart of the debate about the ethics of using big data. And this debate will only grow more heated as AI bolsters insurers’ ability to analyze data sets.

Unwanted by-products

The chances of something going wrong with insurers’ big data are increasing all the time.

The risk of data leaks, for example, grows as large data sets become more accessible and are moved around within insurers’ own highly secure server infrastructures, or even into the cloud. If appropriate controls and policies are not enforced and followed, insurers are left exposed and liable when something does go wrong.


And the more big data is used, the more it can be inadvertently misused. Taking once-discrete data sets, merging them and comparing them (often referred to as the ‘mosaic effect’) can produce personally identifiable data almost as a by-product. Often, those working on big data projects don’t understand these risks.
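To make the mosaic effect concrete, here is a minimal Python sketch; the data sets, column names and values are entirely hypothetical, but they show how two extracts that look harmless in isolation can, once joined on shared attributes, re-attach names to sensitive records.

```python
import pandas as pd

# Hypothetical "anonymized" claims extract: no names, only quasi-identifiers.
claims = pd.DataFrame({
    "zip_code":   ["60614", "60614", "60615"],
    "birth_year": [1956, 1987, 1987],
    "gender":     ["F", "M", "M"],
    "condition":  ["diabetes", "asthma", "hypertension"],
})

# Hypothetical marketing list acquired separately, with names attached.
marketing = pd.DataFrame({
    "name":       ["A. Jones", "B. Smith", "C. Lee"],
    "zip_code":   ["60614", "60614", "60615"],
    "birth_year": [1956, 1987, 1987],
    "gender":     ["F", "M", "M"],
})

# Joining on the shared quasi-identifiers re-attaches names to health conditions,
# producing personally identifiable data almost as a by-product of the merge.
reidentified = claims.merge(marketing, on=["zip_code", "birth_year", "gender"])
print(reidentified[["name", "condition"]])
```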

Policies and procedures for handling data are often cast in a bad light—blamed for slowing or obstructing innovation. But without these parameters—and without ownership of them at board level—the potential for catastrophic reputational damage is significant.

Treat with care

Boards must also start to grapple with the emerging challenges of AI. AI enables insurers to ask all sorts of questions of the big data they collect from customers. Sounds exciting, right?

But if insurers don’t know exactly what they will and won’t do with the answers produced, they may find themselves facing a public relations nightmare.


Some of us might remember, for example, the story of the father who complained to retailer Target that it was sending his high-school daughter vouchers for discounts on baby products. “Are you trying to encourage her to get pregnant?” asked the father.

It turned out Target’s statistical model had correctly inferred—by looking at purchasing data—that the father’s daughter was pregnant. The father didn’t know.

Inclusion and fairness

Using data analytics, insurers can easily build up a profile of an individual or a business, including deep insights into:

  • what drives behavior,
  • how that behavior affects risk, and crucially
  • what insurance policy terms they should offer and at what price.
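In code, that profile-to-quote step might look something like the toy sketch below; the behavioral features, weights and thresholds are invented purely for illustration and are not an actual rating method.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Profile:
    # Hypothetical behavioral signals derived from merged data sets.
    annual_mileage: int
    late_night_driving_share: float   # fraction of driving done late at night, 0.0-1.0
    prior_claims: int

BASE_PREMIUM = 600.0   # illustrative base rate, not a real tariff

def quote(profile: Profile) -> Optional[float]:
    """Return a premium, or None if the insurer declines to offer any terms."""
    risk_score = (
        profile.annual_mileage / 10_000
        + 2.0 * profile.late_night_driving_share
        + 1.5 * profile.prior_claims
    )
    if risk_score > 6.0:
        # Decline: no policy terms offered at all.
        return None
    return round(BASE_PREMIUM * (1 + 0.25 * risk_score), 2)

print(quote(Profile(annual_mileage=12_000, late_night_driving_share=0.1, prior_claims=0)))  # 810.0
print(quote(Profile(annual_mileage=30_000, late_night_driving_share=0.6, prior_claims=3)))  # None
```

The detail that matters is the final branch: a model like this can quietly become an automated refusal to offer terms, which is exactly where the ethical questions begin.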

Based on such insight, an insurer might decide not to offer any policy terms at all. But if that happens too often, to too many people and too many companies, the insurance industry will find itself facing profound ethical and public policy challenges.

Regulatory scrutiny is already afoot in the area of ‘big data discrimination’. In its recent report, Big Data: A Tool for Inclusion or Exclusion?, the Federal Trade Commission (FTC) calls on companies to check whether their ‘data model’ accounts for biases and whether their “reliance on big data raise[s] ethical or fairness concerns”.

The FTC says it encourages companies to “apply big data analytics in ways that provide benefits and opportunities to consumers, while avoiding pitfalls that may violate consumer protection or equal opportunity laws, or detract from core values of inclusion and fairness”.
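One simple screen in that spirit, assuming a hypothetical table of underwriting decisions with a protected attribute attached, is to compare acceptance rates across groups; the ‘four-fifths’ disparate-impact ratio used below is a common heuristic for flagging a model for review, not a legal test.

```python
import pandas as pd

# Hypothetical underwriting decisions; the column names and values are illustrative only.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],   # protected attribute
    "accepted": [1, 1, 1, 0, 1, 0, 0, 0],                   # 1 = terms offered
})

# Acceptance rate per group.
rates = decisions.groupby("group")["accepted"].mean()

# Disparate-impact ratio: worst-treated group's rate relative to the best-treated group's.
di_ratio = rates.min() / rates.max()
print(rates.to_dict())                        # {'A': 0.75, 'B': 0.25}
print(f"disparate-impact ratio: {di_ratio:.2f}")

# The 'four-fifths' heuristic flags ratios below 0.8 for human review.
if di_ratio < 0.8:
    print("Flag for review: acceptance rates differ materially across groups.")
```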

These challenges are not unique to insurers. Other disciplines—engineering, science and, above all, medicine—have already grown accustomed to dealing with them. The lesson they can offer insurers: big data is not just an opportunity—it’s also a big responsibility.

 
