Personalising retail experiences with generative AI: Beyond recommendations
- Categories: Industry, Generative AI
- Date: May 28, 2024
- Author: Jodie Rhodes
In today’s highly competitive retail landscape, delivering a truly personalised customer experience has become a critical differentiator. While traditional recommendation engines have long been the go-to solution for driving product discovery and increasing sales, their limitations in truly understanding customer preferences have become increasingly apparent. This is especially evident with the release of powerful Large Language Models (LLMs).
In this article, we explore how personalising retail experiences with generative AI can go beyond the limitations of traditional recommendation engines.
What is a “traditional recommendation engine”?
For years, retailers have relied on traditional recommendation engines to drive product discovery and increase sales. Tools like Amazon Personalize and Amazon Forecast, powered by machine learning models like XGBoost and Neural Collaborative Filtering, have become staples in the industry.
These solutions leverage historical customer data, purchase patterns, and demographic information to surface relevant product recommendations. By identifying similarities between users and items, they can make educated guesses about what a customer might be interested in next.
The key advantages of these traditional approaches are their fast, low-cost inference and high availability. Recommendation models can be deployed at scale, providing real-time suggestions to customers with minimal latency and resource requirements. This makes them a reliable and cost-effective solution for many retail use cases.
For example, it’s very quick and simple to train XGBoost using your own data:
import pandas as pd
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor
from sklearn.metrics import mean_squared_error

# Load the dataset
data = pd.read_csv('dataset.csv')

# Split the data into features and target
X = data.drop('target_column', axis=1)
y = data['target_column']

# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Create an XGBoost regressor
model = XGBRegressor(
    objective='reg:squarederror',
    n_estimators=100,
    max_depth=3,
    learning_rate=0.1,
    random_state=42
)

# Train the model
model.fit(X_train, y_train)

# Evaluate on the held-out test set
predictions = model.predict(X_test)
print(f'Test MSE: {mean_squared_error(y_test, predictions):.4f}')
And it can sometimes be even easier with Amazon Personalize:
...

# Import the prepared dataset from Amazon S3
try:
    create_dataset_import_job_response = personalize.create_dataset_import_job(
        jobName=import_job_name,
        datasetArn=dataset_arn,
        dataSource=s3_data_source,
        roleArn='arn:aws:iam::123456789012:role/my-personalize-role'
    )
    import_job_arn = create_dataset_import_job_response['datasetImportJobArn']
    print(f'Dataset import job ARN: {import_job_arn}')
except ClientError as error:
    print(f'Failed to create dataset import job: {error}')

# Train a model
solution_name = 'my-solution'
recipe_arn = 'arn:aws:personalize:::recipe/aws-user-personalization'

try:
    create_solution_response = personalize.create_solution(
        name=solution_name,
        datasetGroupArn=dataset_group_arn,
        recipeArn=recipe_arn
    )
    solution_arn = create_solution_response['solutionArn']
    print(f'Solution ARN: {solution_arn}')
except ClientError as error:
    print(f'Failed to create solution: {error}')

...
However, these traditional approaches have their limitations. They often struggle to truly understand the nuanced preferences, emotional drivers, and contextual factors that influence a customer’s purchasing decisions.
Recommendations can feel generic, impersonal, and disconnected from the individual’s unique needs and desires.
Adding Firemind’s PULSE into the mix
This is where the power of generative AI, as enabled by Firemind’s PULSE, comes into play. PULSE leverages advanced large language models (LLMs) to dynamically create personalised content, product descriptions, and shopping journeys that speak directly to each customer’s unique preferences and intent.
Rather than relying on historical data alone, PULSE can leverage real-time contextual cues, emotional signals, and even open-ended customer feedback to deliver a level of personalisation that was previously unattainable. The result is a seamless, engaging, and truly tailored retail experience that fosters deeper brand loyalty and drives sustainable growth.
PULSE’s LLM capabilities allow it to understand the nuanced preferences, emotional drivers, and contextual factors that influence a customer’s purchasing decisions, going beyond the limitations of traditional recommendation engines.
By tapping into the latest advancements in natural language processing and deep learning, PULSE can create personalised content and shopping journeys that resonate with each individual customer.
What could a solution look like?
Focusing specifically on a cost-effective, fault-tolerant, asynchronous batch pipeline, a typical approach could look like this:
- Data ingestion and preprocessing: Ingest and preprocess customer, product, and other relevant data from various sources, using techniques like data normalisation, feature engineering, and enrichment.
- Traditional ML-based recommendations: Leverage pre-built or custom machine learning models (e.g. Amazon Personalize, XGBoost, Neural Collaborative Filtering) to generate initial product recommendations based on historical customer behaviour and item similarities, leveraging the models’ speed, availability, and low latency.
- Generative AI-powered enhancements: Use large language models (LLMs) from Amazon Bedrock to generate personalised product descriptions, content, and shopping journeys that build upon the initial recommendations, leveraging the LLMs’ ability to understand context, sentiment, and customer intent (see the sketch after this list).
- Batch processing and optimisation: Run the asynchronous batch recommendation pipeline on a regular schedule to update the product recommendations and personalised content, continuously monitoring performance and refining the traditional ML models and LLM-powered enhancements.
- Real-time inference and delivery: Integrate the batch-processed recommendations and personalised LLM-generated content into the customer-facing retail platform, leveraging the real-time inference capabilities of the traditional ML models to provide immediate recommendations, seamlessly blending them with the pre-generated personalisation.
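To make the generative enhancement step more concrete, here is a minimal sketch of how recommendations produced by the traditional model could be enriched with LLM-generated copy via Amazon Bedrock. The model ID, prompt wording, customer profile fields, and helper function below are illustrative assumptions, not a prescribed implementation:

import json
import boto3

# Bedrock runtime client (region is an assumption for this sketch)
bedrock = boto3.client('bedrock-runtime', region_name='us-east-1')

def personalise_recommendation(profile, product):
    # Build a prompt from the pre-computed recommendation and the customer context
    prompt = (
        f"Write a short, friendly product description for {product['name']}, "
        f"tailored to a customer who recently browsed {profile['recent_category']} "
        f"and responds well to a {profile['tone']} tone."
    )
    response = bedrock.invoke_model(
        modelId='anthropic.claude-3-haiku-20240307-v1:0',  # assumed model choice
        body=json.dumps({
            'anthropic_version': 'bedrock-2023-05-31',
            'max_tokens': 300,
            'messages': [{'role': 'user', 'content': prompt}]
        })
    )
    payload = json.loads(response['body'].read())
    return payload['content'][0]['text']

# Enrich each item from the traditional recommendation step (placeholder data)
recommendations = [{'name': 'Trail Running Shoes'}]
profile = {'recent_category': 'outdoor footwear', 'tone': 'energetic'}
for product in recommendations:
    print(personalise_recommendation(profile, product))

In the batch pipeline above, this enrichment would run over the full set of pre-computed recommendations on a schedule, with the generated copy stored alongside the recommendations for the real-time layer to serve.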
Defining a clear output
The output from traditional recommendation engines often takes the form of a list or grid of product recommendations.
The recommendations are typically presented in a straightforward, data-driven manner. For example, a customer who has previously purchased a particular product may see a list of “Customers who bought this item also bought…” recommendations. Or a customer browsing a certain category may see a grid of “Recommended for you” products.
The recommendations are usually generic, focusing on similarities between products and customers. They may highlight complementary items, frequently purchased together products, or items that are popular among customers with similar profiles.
The language used to describe the recommendations is often functional and matter-of-fact, aiming to provide a practical suggestion rather than an emotionally resonant or personalised experience. The recommendations are typically presented as a list of product titles, images, and prices, without much additional context or personalisation.
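For instance, querying a deployed Amazon Personalize campaign returns little more than a ranked list of item IDs and scores. The campaign ARN and user ID below are placeholders, so treat this as an illustrative sketch:

import boto3

# Runtime client for querying a deployed Personalize campaign
personalize_runtime = boto3.client('personalize-runtime')

response = personalize_runtime.get_recommendations(
    campaignArn='arn:aws:personalize:eu-west-2:123456789012:campaign/my-campaign',  # placeholder
    userId='user-123',  # placeholder
    numResults=5
)

# The raw output is a ranked list of item IDs and relevance scores,
# with no copy, context, or personalised messaging attached
for item in response['itemList']:
    print(item['itemId'], item.get('score'))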
This still requires the business to take that raw output, apply context, and add further detail to produce highly personalised recommendation content such as emails; even then, the result can feel impersonal and disconnected from the individual’s unique preferences and needs.
This is where the power of generative AI, as enabled by tools like Firemind’s PULSE, comes into play. By leveraging advanced language models and deep learning, these solutions can create highly personalised, contextual, and emotionally engaging product recommendations and shopping experiences that go beyond the limitations of traditional approaches.
Getting started at speed
It’s very easy to fall into the rabbit hole of setting up a large project in order to experiment at speed with hypotheses about what good retail recommendations might look like. However, with Firemind’s PULSE tool you can harness the power of generative AI to create and test truly unique and tailored customer experiences, going beyond the limitations of traditional recommendation engines.
PULSE is Firemind’s generative AI tool that provides a safe and predictable starting point for enterprises to use generative AI services on AWS. It can be leveraged to quickly validate the potential of using generative AI in a pipeline without having to set up extensive infrastructure.
PULSE can dynamically generate personalised product descriptions, shopping journeys, and even custom content that speaks directly to each individual’s needs and preferences. Unlock the full potential of your customer data and foster deeper brand loyalty with Firemind’s AI-powered solutions:
- Explore PULSE on the AWS Marketplace: The first step is to discover PULSE on the AWS Marketplace. This allows you to easily deploy the tool within your existing AWS infrastructure, ensuring your data remains secure and accessible.
- Schedule a consultation: Reach out to the Firemind team to schedule a consultation. Our experts will work closely with you to understand your specific business requirements, use cases, and data landscape. This will help us guide you on the best way to leverage PULSE for your needs.
- Participate in a Gen AI Discovery workshop: Consider signing up for Firemind’s generative AI Discovery workshop, a 3–6-week service that allows you to explore the potential of generative AI in a structured, hands-on environment. During this workshop, you’ll have the opportunity to experiment with PULSE and validate its capabilities in your retail context.
- Integrate PULSE into your workflows: Once you’ve had a chance to evaluate PULSE, our team will help you seamlessly integrate the tool into your existing retail workflows. This may involve connecting PULSE to your product data, customer information, and other relevant sources to power personalised experiences.
- Continuously optimise and scale: PULSE is designed to be a scalable and adaptable solution. As you continue to use the tool, our team will work with you to fine-tune the configurations, experiment with new use cases, and scale the deployment to meet your growing needs.
To learn more about how Firemind’s PULSE can help you unlock the power of generative AI to enhance your retail experiences, reach out using the form below.