What can insurance companies do to exploit all their unstructured information?
A typical big data scenario
Insurance companies collect huge volumes of text on a daily basis, through multiple channels: their agents, customer care centers, email, social networks, and the web in general. The information collected includes policies, expert and health reports, claims and complaints, survey results, and relevant interactions with customers and non-customers on social networks. Handling, classifying, interpreting or extracting the essential information from all that material manually is impossible.
The insurance industry is among those that can benefit most from applying technologies for the intelligent analysis of free text (known as text analytics, text mining or natural language processing).
Insurance companies must also cope with the challenge of combining the results of analyzing these textual contents with structured data (stored in conventional databases) to improve decision-making. In this sense, industry analysts consider it essential to combine multiple technologies based on artificial intelligence (intelligent systems), machine learning (data mining) and natural language processing (both statistical and symbolic or semantic).
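The idea of combining text-derived signals with structured data can be made concrete with a minimal sketch. Here, keyword spotting stands in crudely for full natural language processing, and its output is merged with structured fields (claim amount, prior claim count) into one risk score. All keywords, thresholds, weights and sample records are hypothetical illustrations, not any insurer's actual model.

```python
# Hypothetical phrases whose presence in free text raises suspicion.
SUSPICIOUS_TERMS = {"stolen", "lost receipt", "cash only", "no witnesses"}

def text_signal(description: str) -> int:
    """Count suspicious phrases appearing in the free-text claim description
    (a crude stand-in for a real NLP pipeline)."""
    text = description.lower()
    return sum(term in text for term in SUSPICIOUS_TERMS)

def risk_score(claim: dict) -> float:
    """Combine the unstructured-text signal with structured database fields.
    Weights are invented for illustration only."""
    score = 0.0
    score += 0.3 * text_signal(claim["description"])  # text-analytics side
    if claim["amount"] > 10_000:                      # structured-data side
        score += 0.5
    if claim["prior_claims"] >= 3:
        score += 0.2
    return min(score, 1.0)

claims = [
    {"description": "Rear bumper damage in parking lot, photos attached",
     "amount": 1_200, "prior_claims": 0},
    {"description": "Laptop stolen, lost receipt, no witnesses",
     "amount": 15_000, "prior_claims": 4},
]

for claim in claims:
    print(round(risk_score(claim), 2))  # prints 0.0 then 1.0
```

In practice the keyword counter would be replaced by a statistical or semantic text classifier, but the pattern is the same: the unstructured text is reduced to features that sit alongside the structured ones in a single scoring model.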
The most promising areas of text analytics in the insurance sector
According to a 2013 Accenture report, insurance companies in Europe lose between 8 and 12 billion euros per year to fraudulent claims, and the trend is rising. Additionally, the industry estimates that between 5% and 10% of the compensation paid out by companies in the previous year was fraudulent but went undetected for lack of predictive analytics tools.
According to the specialized publication “Health Data Management”, Medicare’s fraud prevention system in the United States, which is based on predictive algorithms that analyze patterns in providers’ billing, saved more than 200 million dollars in rejected payments in 2013.
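The general approach mentioned here, flagging providers whose billing deviates strongly from the norm, can be sketched with a toy statistical check. This is a simple z-score outlier test, not Medicare's actual predictive system, and all provider IDs and figures are invented for illustration.

```python
import statistics

def flag_outliers(billing: dict, threshold: float = 1.5) -> list:
    """Return provider ids whose billed amount lies more than `threshold`
    standard deviations above the mean across all providers."""
    amounts = list(billing.values())
    mean = statistics.mean(amounts)
    stdev = statistics.pstdev(amounts)  # population std. dev. of the cohort
    return [pid for pid, amt in billing.items()
            if stdev > 0 and (amt - mean) / stdev > threshold]

# Hypothetical annual billing totals per provider; P005 bills far above peers.
billing = {"P001": 10_500, "P002": 9_800, "P003": 11_200,
           "P004": 10_100, "P005": 58_000}

print(flag_outliers(billing))  # prints ['P005']
```

A production system would model many more variables (procedure mix, patient demographics, billing timing) and use learned rather than fixed thresholds, but the underlying logic is this kind of comparison of each provider's pattern against the population.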