The risks from better analytics
01/20/2017

Analytics capabilities are becoming more powerful. But at the same time, they risk becoming more difficult to use to good effect.

By Tom Kim, Consulting Actuary, Milliman.

Advanced analytics have caught the attention of insurance CEOs, who are keen to exploit their potential to improve operations and profitability in a digital world. Finance professionals, meanwhile, want more sophisticated multifactor analyses built on large volumes of data and computing power.

But these techniques come with challenges.

Business relevance is a critical issue. Analytical and predictive exercises can be fascinating, but they consume significant resources. There must be a strong business case in the first place. Will the insights gained be meaningful enough to add value? It’s important to think through projects, and their likely outputs, before embarking on time-consuming computational cycles.

Analytical models are often complex, and their output is vast. For the most part, senior insurance executives don’t have the time or inclination to dive into that sort of detail; they want to be able to grasp the essential points and arrive at actionable conclusions quickly. The challenge is to communicate the results in a clear and concise way.

Fortunately, there are tools at hand that do just that. Data visualization and business intelligence tools can turn complex analysis into easily and quickly assimilated information, using charts, graphs, heat maps, dashboards, and the like. Many of them are open source and readily available, and they are advancing rapidly.
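To make this concrete, here is a minimal sketch using two such open-source libraries, pandas and seaborn. The loss-ratio figures and the product/region breakdown are purely hypothetical, but they show how a single call can turn a results table into an executive-friendly graphic.

```python
# A minimal sketch: rendering a loss-ratio heat map with the open-source
# pandas and seaborn libraries. All figures below are hypothetical.
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Hypothetical summary output from an analytics run:
# loss ratios by product line and region.
results = pd.DataFrame(
    {
        "Northeast": [0.62, 0.71, 0.58],
        "Midwest":   [0.55, 0.68, 0.64],
        "South":     [0.70, 0.75, 0.61],
    },
    index=["Auto", "Home", "Umbrella"],
)

# One call turns the table into a color-coded grid that an executive
# can scan in seconds, with the exact figure annotated in each cell.
ax = sns.heatmap(results, annot=True, fmt=".2f", cmap="RdYlGn_r")
ax.set_title("Loss ratio by product line and region (hypothetical)")
plt.tight_layout()
plt.show()
```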

As well as conveying the key results, it is equally important to communicate the limitations of any such analysis. Slick graphs and creative illustrations should not mislead decision-makers into reading more into the outputs than is actually there.

Analytics are progressing fast, but the results still depend on the credibility of the information going into them. Suitable and sufficient data can be hard to come by. For instance, predicting policyholder behavior will work better if insurers use more than their own records—but competitors will naturally be reluctant to share their data. Offering to circulate the results of a study among participating companies could be an incentive, perhaps as a precursor to more robust (and lucrative) analyses down the road.

The demand for better analytics currently outstrips the supply of people capable of delivering them, so it is important to build a well-functioning team. The right mix of diverse talents is critical—moving, organizing, and securing large volumes of data; developing statistical models efficiently; interpreting and visualizing the results; and coordinating the many moving pieces are all key roles that need to be played. A poor fit in any of these areas can derail the process, so it is important to find individuals with the right skill sets. Perhaps more importantly, look for individuals with a passion for analyzing, understanding, and explaining.

There is also the tricky issue of incorrect assumptions and biases. They can haunt traditional analysis, but are particularly thorny when it comes to predictive exercises. And they are even more challenging to manage in an unsettled environment.

Objectivity is key here: allow the data to come up with the answer, and don’t let human bias steer the results. Avoid predefining what a “good” score is. For example, if you are studying employee productivity, be wary of a model that relies too heavily on subjective performance reviews.
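One simple check is to inspect how much weight each input actually carries in a fitted model. The sketch below does this for the employee-productivity example; the feature names and data are hypothetical, and the outcome is deliberately constructed so that the subjective input dominates, to illustrate the warning sign.

```python
# A minimal sketch of one bias check: after fitting a model, rank the
# standardized inputs by weight. Feature names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
features = ["tenure_years", "training_hours", "manager_review_score"]
X = rng.normal(size=(500, 3))
# Hypothetical productivity outcome, deliberately driven by the
# subjective review score to illustrate the warning sign.
y = 0.2 * X[:, 0] + 0.1 * X[:, 1] + 0.9 * X[:, 2] \
    + rng.normal(scale=0.3, size=500)

# Standardize so coefficient magnitudes are comparable across features.
model = LinearRegression().fit(StandardScaler().fit_transform(X), y)
for name, coef in sorted(zip(features, model.coef_),
                         key=lambda t: -abs(t[1])):
    print(f"{name:>22}: {coef:+.2f}")
# If a subjective input such as manager_review_score dominates, the model
# is largely restating human opinion rather than measuring productivity.
```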

Incorporating a multitude of factors into a model is also a challenge. The trick is to give each factor the right weight without destabilizing the existing factors and their interactions. In financial forecasting, for example, sophisticated economic scenario generators can be integrated with your own assumptions or factors to run holistic simulations.
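The sketch below illustrates the idea with a toy stand-in for a scenario generator (plain random draws, not a real ESG), layering a hypothetical company-specific lapse assumption on top of simulated equity returns and interest rates.

```python
# A minimal sketch of layering your own assumptions onto simulated
# economic scenarios. The "generator" here is a stand-in (random draws,
# not a real ESG); all parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(42)
n_scenarios, n_years = 10_000, 10

# Stand-in scenario generator: annual equity returns and interest rates.
equity_returns = rng.normal(0.06, 0.15, size=(n_scenarios, n_years))
interest_rates = rng.normal(0.03, 0.01, size=(n_scenarios, n_years))

# Company-specific assumption layered on top: lapses rise when rates
# spike (a hypothetical behavioral factor).
lapse_rate = 0.05 + 2.0 * np.maximum(interest_rates - 0.04, 0.0)

# Holistic simulation: project a block of business across all scenarios.
in_force = np.ones(n_scenarios)
account_value = np.full(n_scenarios, 100.0)
for year in range(n_years):
    account_value *= 1.0 + equity_returns[:, year]
    in_force *= 1.0 - lapse_rate[:, year]

terminal = account_value * in_force
print(f"Mean terminal value: {terminal.mean():.1f}; "
      f"5th percentile: {np.percentile(terminal, 5):.1f}")
```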

With the added statistical complexity of models comes a greater computational workload. Fortunately, highly scalable cloud computing solutions exist to handle even the most demanding models. Techniques such as cluster modeling and proxy modeling can further reduce the computational burden.
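As a rough illustration of proxy modeling: run the expensive model on a modest sample of points, fit a cheap surrogate by least squares, and let the surrogate sweep the full scenario set. The "expensive" model below is a trivial stand-in, and all figures are hypothetical.

```python
# A minimal sketch of proxy modeling: fit a cheap polynomial surrogate to
# a small sample of runs from an expensive model, then evaluate the proxy
# across the full scenario set. The "expensive" model is a stand-in.
import numpy as np

def expensive_model(rate_shock, equity_shock):
    # Stand-in for a full valuation run that might take minutes per point.
    return (100.0 - 800.0 * rate_shock + 50.0 * equity_shock
            + 3000.0 * rate_shock ** 2)

rng = np.random.default_rng(7)

# Run the heavy model on only a few hundred fitting points...
fit_pts = rng.uniform(-0.02, 0.02, size=(200, 2))
fit_vals = expensive_model(fit_pts[:, 0], fit_pts[:, 1])

# ...and fit a quadratic proxy by least squares.
def design(p):
    r, e = p[:, 0], p[:, 1]
    return np.column_stack([np.ones(len(p)), r, e, r**2, e**2, r * e])

coef, *_ = np.linalg.lstsq(design(fit_pts), fit_vals, rcond=None)

# The proxy then values a million scenarios almost instantly.
scenarios = rng.uniform(-0.02, 0.02, size=(1_000_000, 2))
proxy_vals = design(scenarios) @ coef
print(f"Proxy mean value across 1M scenarios: {proxy_vals.mean():.2f}")
```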

All of this highlights the importance of model validation. Did the model forecast what actually transpired? If not, were the deviations well understood in advance, or was the model missing key parameters? The more complex the model, the more important it is to validate it.
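A minimal back-test might look like the sketch below: compare the model's forecasts against what actually transpired, and flag any deviation that exceeds a tolerance agreed in advance. The figures are hypothetical.

```python
# A minimal sketch of one validation step: compare forecasts with actual
# outcomes and flag out-of-tolerance periods. Figures are hypothetical.
import numpy as np

forecast = np.array([102.0, 98.5, 105.2, 110.0, 95.4])  # model output
actual   = np.array([101.1, 99.8, 103.0, 118.5, 96.0])  # observed results

errors = actual - forecast
print(f"Mean error:      {errors.mean():+.2f}")
print(f"Mean abs. error: {np.abs(errors).mean():.2f}")

# Each flagged period should either match a limitation that was
# understood in advance or prompt a search for a missing parameter.
tolerance = 5.0
for i, e in enumerate(errors):
    if abs(e) > tolerance:
        print(f"Period {i + 1}: deviation {e:+.1f} exceeds tolerance")
```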
