Scaling Email Data Extraction Using LLMs at Foxintelligence
Foxintelligence, an organisation rapidly advancing into AI/ML, sought a solution to automate and streamline their internal operations. Firemind proposed a proof-of-concept leveraging a retrieval-augmented generation (RAG) approach. The solution ingested and interpreted Foxintelligence's email data using large language models and semantic search, providing a prompt-based UI that allowed their team to easily extract key information and insights. (Note: This project remains a proof-of-concept and is not yet deployed in production.)
At a glance
Foxintelligence measures and improves the performance of digital brands and retailers by delivering innovative, privacy-first market and shopper insights worldwide. They are part of the NielsenIQ family of businesses.
Challenge
Foxintelligence struggled to process unstructured data, such as customer emails, efficiently; manual workflows meant the operation could not scale without adding staff.
Solution
Firemind proposed a proof-of-concept using AWS services and a serverless architecture to automate data processing for Foxintelligence.
Services Used
Amazon Bedrock
Amazon Kendra
AWS Lambda
Amazon API Gateway
Outcomes
Scalable LLM solution capable of replacing 6,000 parsers.
Amazon Bedrock LLMs enhance data accuracy.
Business challenges
Automating email data extraction for greater efficiency
Foxintelligence was facing challenges in efficiently processing and extracting insights from unstructured data, particularly customer email communications. Their manual, time-consuming processes meant the operation could not scale without increasing headcount and full-time employee (FTE) hours. They sought an automated solution to free up their team for higher-value work.
"Working with Firemind on our proof of concept to extract data from emails using generative AI was a great experience. The project went smoothly, and we were very pleased with their work. The team was highly reactive, promptly addressing our needs and providing valuable insights that helped us advance in our R&D process. Firemind’s innovative approach and proactive suggestions made a significant impact on the project’s success."
Régis Amichia, Head of Data Science — Foxintelligence
Solution
AI-powered insight extraction made easy
Foxintelligence engaged Firemind to address their challenges in efficiently processing and extracting insights from unstructured data, particularly customer emails. With an internal innovation department exploring AI and machine learning, Foxintelligence was interested in leveraging these technologies for operational improvements and a competitive edge.
Firemind developed a proof-of-concept solution leveraging various AWS services to create a scalable, serverless system. The first step established a robust data ingestion pipeline capable of handling arbitrary text documents, including unstructured data. Customer emails were ingested into Amazon S3 and securely stored as they were uploaded.
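An ingestion step like the one described above might be sketched as follows. This is a minimal illustration, not the project's actual code: the bucket name and key layout are assumptions introduced here for the example.

```python
from datetime import datetime

BUCKET = "foxintelligence-email-poc"  # hypothetical bucket name

def email_key(received: datetime, message_id: str) -> str:
    """Date-partitioned key so downstream jobs can list one day's emails cheaply."""
    return f"emails/{received:%Y/%m/%d}/{message_id}.eml"

def ingest_email(raw_bytes: bytes, received: datetime, message_id: str) -> str:
    """Upload one raw email to S3 with server-side encryption; returns the key."""
    import boto3  # deferred so the module imports without AWS dependencies
    key = email_key(received, message_id)
    boto3.client("s3").put_object(
        Bucket=BUCKET, Key=key, Body=raw_bytes, ServerSideEncryption="AES256")
    return key
```

Date-partitioned keys are a common convention for pipelines like this, since they let batch jobs scan a single day's uploads without listing the whole bucket.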
For semantic search capabilities, Firemind implemented Amazon Kendra, enabling the large language model to retrieve the most relevant documents from the data repository. This was foundational for the chain-of-thought reasoning process central to the solution.
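The retrieval step could look roughly like this, using Kendra's `Retrieve` API to fetch the most relevant passages and folding them into a grounded prompt. The index ID, passage count, and prompt wording are illustrative assumptions, not details from the project:

```python
def retrieve_passages(index_id: str, question: str, top_k: int = 5) -> list[str]:
    """Fetch the most relevant passages via Amazon Kendra's Retrieve API."""
    import boto3  # deferred so the module imports without AWS dependencies
    kendra = boto3.client("kendra")
    resp = kendra.retrieve(IndexId=index_id, QueryText=question, PageSize=top_k)
    return [item["Content"] for item in resp["ResultItems"]]

def build_prompt(passages: list[str], question: str) -> str:
    """Ground the LLM in retrieved context so answers stay tied to the data."""
    context = "\n\n".join(f"<passage>\n{p}\n</passage>" for p in passages)
    return (f"Answer the question using only the passages below.\n\n{context}\n\n"
            f"Question: {question}\nAnswer:")
```

Separating retrieval from prompt construction keeps the grounding logic testable without AWS credentials.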
The user interface was developed using React and hosted on Amazon S3, with content delivered through a CloudFront distribution. Secured with AWS WAF and integrated with Amazon Cognito, the UI provided a prompt-based experience, allowing Foxintelligence’s team to easily enter queries and retrieve responses from the LLM.
The Claude V2 model, hosted on Amazon Bedrock, was central to this solution. This LLM, combined with Kendra's search capabilities, allowed Foxintelligence to extract insights from unstructured data, automating manual tasks and freeing up the team for strategic initiatives. The system's serverless architecture, utilising AWS Lambda and Amazon API Gateway, ensured scalability, cost-efficiency, and ease of maintenance, in line with Firemind's AWS best practices.
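The serverless query path might be sketched as an API Gateway proxy event handled by Lambda, which invokes Claude V2 through the Bedrock Runtime API. This is a simplified assumption-laden sketch: the request body uses Claude V2's legacy Human/Assistant completion format, and error handling plus the Kendra retrieval step (which would be folded into the prompt) are omitted.

```python
import json

def claude_body(prompt: str, max_tokens: int = 512) -> str:
    """Claude V2 on Bedrock expects the legacy Human/Assistant prompt format."""
    return json.dumps({
        "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
        "max_tokens_to_sample": max_tokens,
    })

def lambda_handler(event, context):
    """API Gateway proxy handler: question in, answer out."""
    import boto3  # deferred so the module imports without AWS dependencies
    question = json.loads(event["body"])["question"]
    # In the full PoC, Kendra results would be merged into the prompt here.
    resp = boto3.client("bedrock-runtime").invoke_model(
        modelId="anthropic.claude-v2", body=claude_body(question))
    answer = json.loads(resp["body"].read())["completion"]
    return {"statusCode": 200, "body": json.dumps({"answer": answer})}
```

Keeping the Bedrock call inside a thin Lambda behind API Gateway matches the pay-per-request, no-servers-to-manage model the case study describes.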
This proof-of-concept demonstrated how Foxintelligence could leverage generative AI and AWS services to streamline operations and scale without additional FTEs.
Streamlined data processing
Firemind's solution enabled Foxintelligence to automate ingestion and interpretation of unstructured data, such as customer emails, allowing the team to focus on strategic initiatives. AWS services like Amazon Kendra and Amazon Bedrock provided a scalable, efficient approach to data extraction.
Increase in productivity
The prompt-based user interface allowed Foxintelligence's team to interact easily with the LLM, retrieving relevant information quickly and accurately. This increased productivity and reduced the need for additional headcount.
Get in touch
Want to learn more?
Seen a specific case study or insight and want to learn more? Or thinking about your next project? Drop us a message below!