A Hands-On Guide to Building a Visual Similarity-Based Recommendation System using Python

This article was published as a part of the Data Science Blogathon.

Introduction

In today’s competitive technology landscape, it is crucial for a growing e-commerce platform to engage its customers and maintain a consistent brand experience. Instead of forcing users to perform search after search to find their desired items, proactively recommending relevant items is far more impressive and leaves customers more satisfied.

Product recommendations can address such challenges very effectively by analyzing the customer’s previous browsing patterns and current platform usage.

Product recommendations can help in:

  • Converting the shoppers to customers
  • Engaging the customers
  • Boosting sales and revenue
  • Delivering the most relevant content
  • Maintaining the brand experience

Broadly speaking, there are two kinds of recommendation approaches:

  1. Content-based recommendations
  2. Collaborative filtering

As the name suggests, the content-based method recommends items based on the additional content (metadata) available about the customers or products. For products, this content may be the product title, description, images, category/subcategory, specification, etc.

In other words, this approach recommends products by finding those most similar to a given product based on that content.

In this post, we will implement a content-based recommendation system by utilizing the product images. Basically, the goal is to recommend product images that are very similar to a recently bought/checked product image.

Therefore, this image-based recommendation will be helpful in recommending the most similar products to the customers based on their recent shopping behavior/platform usage.

Let’s start implementing this using the Fashion Product Images Dataset. The dataset contains 2906 product images across four different gender categories (men, women, boys, and girls). It also contains various product features like title, category, subcategory, color, gender, type, usage, etc.

Table of Contents

  1. Basic Data Analysis
    1. Importing the necessary libraries & loading the data
    2. Basic statistics – number of products, subcategories & gender
    3. Frequency of each gender
    4. Distribution of products gender-wise
  2. Data Preparation
  3. Feature extraction using ResNet
  4. Computing the Euclidean distance and recommending similar products
    1. Loading the extracted features
    2. Distance computation and recommendation
  5. Deploying the solution

1. Basic Data Analysis

1.1 Importing the necessary libraries & loading the data

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Dropout, Flatten, Dense
from keras import applications
from sklearn.metrics import pairwise_distances
import requests
from PIL import Image
import pickle
from datetime import datetime
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
import plotly.figure_factory as ff
import plotly.graph_objects as go
import plotly.express as px
import streamlit as st
#use the below import while displaying images in a Jupyter notebook;
#note that this Image (IPython's) shadows PIL's Image class imported above
from IPython.display import display, Image

fashion_df = pd.read_csv("./fashion.csv")

1.2 Basic statistics – Number of products, subcategories & gender

print("Total number of products : ", fashion_df.shape[0])
print("Total number of unique subcategories : ", fashion_df["SubCategory"].nunique())
print("Total number of unique gender types : ", fashion_df["Gender"].nunique())

As mentioned earlier, the dataset contains 2906 products of 9 different subcategories across 4 different gender types.

1.3 Frequency of each gender

In this dataset, most of the products belong to the Men category, followed by Women, and so on.

1.4 Distribution of products gender-wise

plot = sns.countplot(x="Gender", data=fashion_df)
plt.title("Distribution of articles gender-wise")
plt.xlabel("Gender type")
plt.ylabel("Number of products")
plt.show()

The bar chart confirms that the Men category has the highest number of products. Overall, though, the dataset is fairly balanced across the gender types.

2. Data Preparation

Since cross-category recommendations are not preferred (for example, recommending girls’ products to an adult man), let’s subset the data gender-wise into 4 different dataframes.

apparel_boys = fashion_df[fashion_df["Gender"]=="Boys"]
apparel_girls = fashion_df[fashion_df["Gender"]=="Girls"]
footwear_men = fashion_df[fashion_df["Gender"]=="Men"]
footwear_women = fashion_df[fashion_df["Gender"]=="Women"]

3. Feature Extraction using ResNet

Generally, the product image contains a unique pattern along with its color, shape, and edges.

Images with the same kind of such features are supposed to be similar. Therefore, extracting such features from the images will be very helpful in order to recommend the most similar products.

How to extract features from the images?

Computer vision techniques can be used to extract features from images. Here, since we have limitations on data size, compute resources, and time, let’s use a standard pre-trained model like ResNet to extract the features. Such pre-trained models have already been trained on a huge dataset (like ImageNet), so reusing their learned representations on our data is known as transfer learning.

ResNet

ResNet is an abbreviated form of Residual Networks, first proposed by Kaiming He et al. in 2015. Today it is regarded as a classical neural network architecture for many computer vision tasks. In the 2015 ImageNet Challenge, this model outperformed previous models like GoogLeNet, VGGNet, and AlexNet.

The residual (skip) connections in the architecture allow extremely deep networks, up to 152 layers, to be trained successfully. In our implementation, we will use ResNet50 (a 50-layer variant of the same architecture) to extract the features.

img_width, img_height = 224, 224
#top_model_weights_path = 'resnet50_weights_tf_dim_ordering_tf_kernels_notop.h5'
train_data_dir = "/home/vikas/fl/av/Footwear/Men/Images/"
nb_train_samples = 811
batch_size = 1

def extract_features():
    Productids = []
    datagen = ImageDataGenerator(rescale=1. / 255)
    model = applications.ResNet50(include_top=False, weights='imagenet')
    generator = datagen.flow_from_directory(
        train_data_dir,
        target_size=(img_width, img_height),
        batch_size=batch_size,
        class_mode=None,
        shuffle=False)
    #derive each product id from its file name (strip folder prefix and extension)
    for i in generator.filenames:
        Productids.append(i[(i.find("/") + 1):i.find(".")])
    extracted_features = model.predict_generator(generator, nb_train_samples // batch_size)
    #flatten each 7 x 7 x 2048 feature map into a 100352-d vector
    extracted_features = extracted_features.reshape((nb_train_samples, 100352))
    np.save(open('./Men_ResNet_features.npy', 'wb'), extracted_features)
    np.save(open('./Men_ResNet_feature_product_ids.npy', 'wb'), np.array(Productids))

a = datetime.now()
extract_features()
print("Time taken in feature extraction:", datetime.now() - a)

The extract_features() function extracts the features from the given images. As per the ResNet input standard, each image is first resized to 224 x 224 and normalized using the ImageDataGenerator available in Keras. Finally, each image is represented as a 100352-dimensional feature vector (the flattened 7 x 7 x 2048 output of ResNet50).
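To see the resizing and rescaling step in isolation, here is a minimal sketch using Pillow and NumPy; the in-memory image is a hypothetical stand-in for a real product photo from the dataset:

```python
import numpy as np
from PIL import Image

# hypothetical stand-in for a product photo; real code reads the dataset's files
img = Image.new("RGB", (60, 80), color=(200, 30, 30))

# resize to ResNet's expected 224 x 224 input and rescale pixel values to [0, 1],
# mirroring ImageDataGenerator(rescale=1. / 255)
img = img.resize((224, 224))
arr = np.asarray(img, dtype=np.float32) / 255.0

print(arr.shape)  # (224, 224, 3)
```

This is exactly what ImageDataGenerator does for us in batches behind the scenes.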

To avoid run-time feature extraction after deployment, the extracted features are persisted in NumPy arrays. We maintain two arrays here for product Ids and extracted features respectively.
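The save/load round trip can be sketched on dummy data. The 100352 figure is just the flattened 7 x 7 x 2048 ResNet50 output; the four feature vectors and product ids below are made up for illustration:

```python
import os
import tempfile
import numpy as np

# ResNet50 without its top layers maps a 224 x 224 image to a 7 x 7 x 2048
# feature map, i.e. a 100352-dimensional vector once flattened
n_dims = 7 * 7 * 2048  # = 100352

# dummy stand-ins for a small subset of extracted features and product ids
features = np.random.rand(4, n_dims).astype(np.float32)
product_ids = np.array(["13683", "13684", "13685", "13686"])

# persist both arrays, as extract_features() does, then reload them
tmp_dir = tempfile.mkdtemp()
np.save(os.path.join(tmp_dir, "features.npy"), features)
np.save(os.path.join(tmp_dir, "ids.npy"), product_ids)

loaded_features = np.load(os.path.join(tmp_dir, "features.npy"))
loaded_ids = np.load(os.path.join(tmp_dir, "ids.npy"))
print(loaded_features.shape, loaded_ids[0])  # (4, 100352) 13683
```

Keeping the ids and features in the same row order is what lets us map a distance ranking back to concrete products later.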

Similarly, this same feature extraction process is repeated for other product images gender-wise.

4. Computing the Euclidean distance and recommending similar products

Distance is the most preferred measure to assess similarity among items/records. The smaller the distance, the higher the similarity; the larger the distance, the lower the similarity.

There are various types of distances as per geometry like Euclidean distance, Cosine distance, Manhattan distance, etc. We will use Euclidean distance here to compute similarity.

Since we have already extracted the image features, the Euclidean distance can easily be computed using the pairwise_distances() function from sklearn.metrics.

Once this distance is computed, we can easily recommend the products as per the ascending order of distance. Let’s do this!
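On a toy feature matrix, the whole distance-then-sort pipeline looks like this; the 3-dimensional vectors below are made up (the real vectors are 100352-dimensional):

```python
import numpy as np
from sklearn.metrics import pairwise_distances

# toy feature matrix: 4 "products" with made-up 3-d feature vectors
features = np.array([
    [1.0, 0.0, 0.0],  # product 0: the query
    [0.9, 0.1, 0.0],  # nearly identical to the query
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
])

query_idx = 0
# Euclidean distance from the query to every product (including itself)
dists = pairwise_distances(features, features[query_idx].reshape(1, -1)).flatten()

# ascending distance = descending similarity; position 0 is the query itself
ranking = np.argsort(dists)
print(ranking)  # [0 1 2 3] - the near-duplicate ranks first after the query
```

Skipping index 0 of this ranking (the query itself) gives the recommendation order.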

4.1 Loading the extracted features

extracted_features = np.load('./Men_ResNet_features.npy')
Productids = np.load('./Men_ResNet_feature_product_ids.npy')
men = pd.read_csv('./footwear_men.csv')
df_Productids = list(men['ProductId'])
Productids = list(Productids)

4.2 Distance computation and Recommendation

def get_similar_products_cnn(product_id, num_results):
    doc_id = Productids.index(product_id)
    #Euclidean distance between the query image's features and all others
    pairwise_dist = pairwise_distances(extracted_features, extracted_features[doc_id].reshape(1, -1))
    indices = np.argsort(pairwise_dist.flatten())[0:num_results]
    pdists = np.sort(pairwise_dist.flatten())[0:num_results]
    print("=" * 20, "input product image", "=" * 20)
    ip_row = men[['ImageURL', 'ProductTitle']].loc[men['ProductId'] == int(Productids[indices[0]])]
    for indx, row in ip_row.iterrows():
        display(Image(url=row['ImageURL'], width=224, height=224, embed=True))
        print('Product Title: ', row['ProductTitle'])
    print("\n", "=" * 20, "Recommended products", "=" * 20)
    #index 0 is the query itself, so recommendations start from index 1
    for i in range(1, len(indices)):
        rows = men[['ImageURL', 'ProductTitle']].loc[men['ProductId'] == int(Productids[indices[i]])]
        for indx, row in rows.iterrows():
            display(Image(url=row['ImageURL'], width=224, height=224, embed=True))
            print('Product Title: ', row['ProductTitle'])
            print('Euclidean Distance from input image:', pdists[i])

get_similar_products_cnn('13683', 5)

The above get_similar_products_cnn() function recommends the 5 most similar products to the queried product based on the extracted features. It accepts two arguments: the product id of the recently bought/viewed item and the number of products to be recommended.

The top 5 recommended products against the product id 13683 are as shown below.

Tip: This complete code can be downloaded from here.

Likewise, we can recommend products against the products from other gender types also. Let’s see the final deployment using Streamlit.

5. Deploying the Solution

Streamlit is a framework for building interactive data apps and web applications and deploying machine learning workloads. Importantly, it requires no prior knowledge of web design or development; Python knowledge alone is sufficient to work with it.

#recom_deployment.py
#the imports from section 1.1 are reused here (with PIL's Image, not IPython's);
#additionally, urllib is needed to fetch the product images by URL
import urllib.request

st.set_option('deprecation.showfileUploaderEncoding', False)

fashion_df = pd.read_csv("./fashion.csv")
boys_extracted_features = np.load('./Boys_ResNet_features.npy')
boys_Productids = np.load('./Boys_ResNet_feature_product_ids.npy')
girls_extracted_features = np.load('./Girls_ResNet_features.npy')
girls_Productids = np.load('./Girls_ResNet_feature_product_ids.npy')
men_extracted_features = np.load('./Men_ResNet_features.npy')
men_Productids = np.load('./Men_ResNet_feature_product_ids.npy')
women_extracted_features = np.load('./Women_ResNet_features.npy')
women_Productids = np.load('./Women_ResNet_feature_product_ids.npy')
fashion_df["ProductId"] = fashion_df["ProductId"].astype(str)

def get_similar_products_cnn(product_id, num_results):
    #select the feature matrix and product id list matching the item's gender
    gender = fashion_df[fashion_df['ProductId'] == product_id]['Gender'].values[0]
    if gender == "Boys":
        extracted_features = boys_extracted_features
        Productids = boys_Productids
    elif gender == "Girls":
        extracted_features = girls_extracted_features
        Productids = girls_Productids
    elif gender == "Men":
        extracted_features = men_extracted_features
        Productids = men_Productids
    elif gender == "Women":
        extracted_features = women_extracted_features
        Productids = women_Productids
    Productids = list(Productids)
    doc_id = Productids.index(product_id)
    pairwise_dist = pairwise_distances(extracted_features, extracted_features[doc_id].reshape(1, -1))
    indices = np.argsort(pairwise_dist.flatten())[0:num_results]
    pdists = np.sort(pairwise_dist.flatten())[0:num_results]
    st.write("#### Input item details")
    ip_row = fashion_df[['ImageURL', 'ProductTitle']].loc[fashion_df['ProductId'] == Productids[indices[0]]]
    for indx, row in ip_row.iterrows():
        image = Image.open(urllib.request.urlopen(row['ImageURL']))
        image = image.resize((224, 224))
        st.image(image)
        st.write(f"Product Title: {row['ProductTitle']}")
    st.write(f"#### Top {num_results} Recommended items")
    for i in range(1, len(indices)):
        rows = fashion_df[['ImageURL', 'ProductTitle']].loc[fashion_df['ProductId'] == Productids[indices[i]]]
        for indx, row in rows.iterrows():
            image = Image.open(urllib.request.urlopen(row['ImageURL']))
            image = image.resize((224, 224))
            st.image(image)
            st.write(f"Product Title: {row['ProductTitle']}")
            st.write(f"Euclidean Distance from input image: {pdists[i]}")

st.write("## Visual Similarity based Recommendation")
user_input1 = st.text_input("Enter the item id")
user_input2 = st.text_input("Enter number of products to be recommended")
button = st.button('Generate recommendations')
if button:
    get_similar_products_cnn(str(user_input1), int(user_input2))

Here, the below built-in functions are used to make the deployment interactive:

  • st.text_input() – takes dynamic input from the user
  • st.write() – writes messages/arguments to the app
  • st.image() – displays an image or list of images
  • st.button() – displays a clickable button widget

Like earlier, here the get_similar_products_cnn() function recommends most similar products as per the arguments specified.

To execute this deployment script, type the following in the terminal:

streamlit run recom_deployment.py

Tip: This complete code can be downloaded from here.

End Notes

In this post, we discussed product recommendations and implemented a visual similarity-based recommendation system on top of available product images using ResNet.

If you have any questions or feedback on this article, feel free to reach out to me in the comments section below.
