Modernising claims processing with generative AI – A step-by-step guide

Categories: Industry, Generative AI
Date: April 22, 2024
Author: Jodie Rhodes

The insurance industry deals with thousands of claims per day. Claims handlers spend hours processing forms, handwritten documents, PDFs and images to extract the relevant information. Verifying coverage and initiating payments follow this initial data extraction. This repetitive, document-intensive process is precisely where Generative AI can add value by automating text-based processing.

Generative AI can unlock a multitude of benefits in claims automation including:

Save time spent on sorting unstructured data

80% of the data received by claims handlers arrives as unstructured emails, PDFs, forms and images. It takes significant effort to go through large volumes of documents and extract meaningful insights. Automating this step can save time, improve accuracy and reduce risk and error rates.

Speed to market and improved Customer Experience (CX)

Speed is king. In the digital age, customers are used to near real-time services from technology and apps, so slow claims processing frustrates them and increases churn. For insurers, slow claims responses mean low Transactional Net Promoter Scores and high customer churn, often to digital challenger brands.

Cost efficiency

By using automation, insurers can significantly reduce the time and cost spent on manually processing claims. This frees up claims handlers to focus on more complex claims where they can add more value.

Proactive Regulatory Compliance

Find and implement the latest legislative changes to update your policies, train staff and reduce risk. Regulator websites can be scanned automatically to detect new legislation and apply it to internal policies and claims handling.

Enhanced Claims Forecasting

Predictive modelling, a core strength of machine learning algorithms, further refines forecasting. By leveraging historical data alongside diverse influencing factors, models give insurers a better understanding of the variables impacting claims occurrences. This approach equips insurers with the tools to make informed decisions and optimise resources based on accurate forecasts.
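As an illustrative sketch only (the file name, column names and model choice are assumptions, not a description of any particular insurer's setup), a claims-volume forecast of this kind could be trained on historical data like so:

import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Illustrative only: forecast claim volumes from historical data.
# The file and column names ("region", "weather_index", "policy_count", "claims")
# are assumptions made for the sake of the example.
history = pd.read_csv("claims_history.csv")
features = pd.get_dummies(history[["region", "weather_index", "policy_count"]], columns=["region"])
target = history["claims"]

X_train, X_test, y_train, y_test = train_test_split(features, target, test_size=0.2, random_state=42)
model = GradientBoostingRegressor().fit(X_train, y_train)
print("Holdout R^2:", model.score(X_test, y_test))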

Maintaining Data Integrity

AI’s proficiency in fraud detection and anomaly identification enhances data protection, safeguards against financial losses and maintains the integrity of claims data.

Claims process automation lets claims handlers process a high volume of data and refocus on high-value tasks, enabling insurers to make more informed decisions with better accuracy and efficiency. Automation and AI can be embedded into every step of the claims process: reporting the claim, assessing it, validating it, making the decision and processing the payment. Ultimately, this reduces complexity, cost and risk.

No one claim is the same

Whether or not you’ve ever had to claim against your insurance, you can appreciate that, given the nature of life and all the variables involved, every claim is unique, even when claims are related (for example, storm damage across a wide area).

This makes the process of claim management very manual.

This is where Gen AI can really help speed up the process, and faster handling improves customer satisfaction, because when there’s a claim there are usually emotions attached too.

Understanding a claim

Generally, an insurer will want to create a record of the “first notice of loss”: a record indicating that the customer has signalled they want to claim for an incident. Once the information is extracted, the claims process typically follows these steps:

• Insurer assesses the claim
• Provides feedback to the customer, indicating whether the claim is covered
• Excesses and the claim amounts are discussed
• The claim is approved or denied

This is an oversimplification, but generally, that’s how the process works.
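To make the “first notice of loss” record concrete, here’s a minimal, illustrative sketch of the kind of fields it might capture (the field names are assumptions for this example, not a standard schema):

from dataclasses import dataclass, field
from datetime import date

# Illustrative only: a minimal first notice of loss (FNOL) record.
# Field names are assumptions for the sake of the example, not a standard schema.
@dataclass
class FirstNoticeOfLoss:
    claim_id: str
    policy_number: str
    incident_date: date
    incident_description: str
    attachments: list = field(default_factory=list)  # e.g. S3 keys for photo evidence
    status: str = "reported"  # reported -> assessed -> approved / denied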

Creating structure from unstructured data

Computer systems want structure. However, in variable businesses such as insurance, it is often very hard to produce data in a way that traditional systems and models can comprehend and turn into a definitive output.

At Firemind, we’ve been leveraging Generative AI to help our insurance clients bridge the gap between structured and unstructured data. This work has been instrumental in achieving the AWS Financial Services Competency earlier this year.

In this post, I’ll run you through some of the recipes we use with our FSI customers to wrangle unstructured data into a format that lets their teams save time on manual processes and give their customers an outcome faster.

VQA with visual evidence

With most claims, insurance companies will require that visual photographic evidence is submitted. We’ve been helping our customers get the best out of multimodal large language models for VQA (visual question answering), and with the release of Claude 3 Sonnet and Haiku on Bedrock, this process has become a lot easier.

While the technical run-through provided here covers some of the basic steps, it’s important to note that there are several other crucial elements involved when working with clients to build an application. This includes aspects such as consultancy to tailor the solution to specific needs, ensuring compliance with insurance regulations, implementing security measures such as rate limiting, authentication, and error handling, as well as addressing file validation and leveraging web workers for efficient processing.

Let’s dive right in and assume we want to quickly build a web application that can send an image to Bedrock for VQA with an insurance flavour:

Firstly, we’d start with our frontend logic: build a form input the user can use to select the files they want to reason over:

// File picker: accept JPG/PNG images and allow multiple files to be selected
<Form.Control
  accept='.jpg,.png'
  multiple={true}
  onChange={(e) => parseFiles(e.target.files).then(sendFiles)}
  type='file'
/>

Each selected file is then read so its contents can be passed into the prompt:

// Read each file and hand its contents to the prompt builder
const reader = new FileReader()
reader.onload = async (e) => loadPrompt(e.target.result)
reader.readAsArrayBuffer(file)

As we then build the prompt on the backend, we want to utilise Claude’s Messages API.

@app.post("/api/invoke-prompt")
async def invoke_prompt(req: Request):
    try:
        body = await req.json()
        # Forward the conversation to Claude 3 Haiku on Bedrock and return the response
        return {
            "response": await invoke_model_stream(
                {
                    "messages": body["messages"],
                    "max_tokens": 3100,
                    "temperature": 0.5,
                    "top_k": 150,
                    "top_p": 0.5,
                    "system": body.get(
                        "system",
                        "You are an expert focusing on the insurance claims industry.",
                    ),
                    "anthropic_version": "bedrock-2023-05-31",
                },
                "anthropic.claude-3-haiku-20240307-v1:0",
            )
        }
    except Exception as e:
        raise HTTPException(status_code=400, detail=f"Call to Bedrock failed: {str(e)}")
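The invoke_model_stream helper isn’t shown above; as a minimal sketch of what it might look like (assuming boto3’s bedrock-runtime client and the Anthropic Messages streaming format, and keeping the call synchronous for simplicity):

import json
import boto3

# Hypothetical helper assumed by the endpoint above: call Bedrock's streaming
# API and assemble the streamed text into a single string.
bedrock_runtime = boto3.client("bedrock-runtime")

async def invoke_model_stream(payload: dict, model_id: str) -> str:
    response = bedrock_runtime.invoke_model_with_response_stream(
        modelId=model_id,
        body=json.dumps(payload),
        contentType="application/json",
        accept="application/json",
    )
    text = []
    # Each event carries a JSON chunk; content_block_delta events hold the text
    for event in response["body"]:
        chunk = json.loads(event["chunk"]["bytes"])
        if chunk.get("type") == "content_block_delta":
            text.append(chunk["delta"].get("text", ""))
    return "".join(text)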

The power of a multimodal large language model here is that it will take both visual and text-based input and return structured output for us. We can specify our prompts in a certain way to ask it to return data as JSON answering specific questions:

{
  content: [
    {
      // The photographic evidence, supplied as base64-encoded image data
      source: {
        data: imageData,
        media_type: 'image/jpeg',
        type: 'base64'
      },
      type: 'image'
    },
    {
      // The instruction asking the model to answer as JSON
      text: `Return your response as the following JSON:
      {
        "description": "Please describe the objects in this image and if there's any damage"
      }`,
      type: 'text'
    }
  ],
  role: 'user'
}

We can now slot this into the computer systems already in place, using the power of Generative AI to transform this unstructured input into structured data points.
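Because the model returns its answer as text, a small parsing step is needed to turn that text into usable structured data. A minimal sketch (the helper name is ours, not part of the code above):

import json

# Illustrative only: pull the JSON the model was asked to return out of its
# text response so it can be stored as structured claim data.
def parse_structured_output(model_text: str) -> dict:
    # The prompt asks for a JSON object, so locate the outermost braces and parse them
    start, end = model_text.find("{"), model_text.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("No JSON object found in model output")
    return json.loads(model_text[start:end + 1])

# e.g. {"description": "A dented rear bumper on a silver hatchback, paint scraped ..."}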

What follows is a great example of orchestrating the event-driven, scalable and resilient services that AWS provides into a pipeline that can sit between existing systems as middleware.

• Systems send files into an Amazon S3 bucket.
• Each file triggers validation in an AWS Lambda function. If validation succeeds, a job is created and submitted to an Amazon Simple Queue Service (SQS) queue.
• An AWS Lambda function picks jobs up from the queue and runs inference using Amazon Bedrock (a sketch of this step follows the list). Amazon Bedrock can invoke further Agents for recursive and complex tasks, which can in turn trigger an AWS Lambda function to deal with the output.
• The inference output can be split out and sent to discrete storage: the data warehouse in Amazon Aurora and long-term storage in the Amazon S3 data lake.
• Finally, the organisation can access the data using tools such as Amazon Athena and Amazon QuickSight.
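As a minimal sketch of the queue-consumer step only (bucket names, the job format and the prompt are illustrative assumptions, not a production implementation):

import json
import boto3

# Illustrative SQS-triggered Lambda for the inference step of the pipeline:
# each job points at a file in S3; we fetch it, run inference on Amazon Bedrock
# and write the structured result to the data lake bucket.
s3 = boto3.client("s3")
bedrock_runtime = boto3.client("bedrock-runtime")

def handler(event, context):
    for record in event["Records"]:
        job = json.loads(record["body"])  # assumed shape: {"bucket": ..., "key": ...}
        obj = s3.get_object(Bucket=job["bucket"], Key=job["key"])
        document_text = obj["Body"].read().decode("utf-8")

        response = bedrock_runtime.invoke_model(
            modelId="anthropic.claude-3-haiku-20240307-v1:0",
            body=json.dumps({
                "anthropic_version": "bedrock-2023-05-31",
                "max_tokens": 1024,
                "messages": [{
                    "role": "user",
                    "content": f"Extract the claim details from this document as JSON:\n{document_text}",
                }],
            }),
        )
        result = json.loads(response["body"].read())

        # Persist the structured output for the data lake (assumed bucket name)
        s3.put_object(
            Bucket="claims-data-lake",
            Key=f"structured/{job['key']}.json",
            Body=json.dumps(result),
        )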

How should you get started?

Feeling overwhelmed by generative AI? Don’t panic: it’s easier than ever to get started with user-friendly tools on AWS by focusing on a small project or Proof of Concept (POC).

If you need expert guidance, we can help your organisation unlock the value of generative AI in weeks using our custom-built frameworks. Get in touch with our team, or come see us at our next event and let’s help your organisation unlock value through Generative AI.

