Who knew!? I never knew machine learning and statistics were different until I attended this course!
“The major difference between machine learning and statistics is their purpose. Machine learning models are designed to make the most accurate predictions possible. Statistical models are designed for inference about the relationships between variables.”
It is still a bit unclear to me because the lines are really blurred, with the two overlapping in capabilities. Perhaps the difference is best shown by explaining what separates inference from prediction by means of an example.
When we aim to predict the outcome of a future race, as in the case of my Capstone Project, that is an example of a prediction, fairly obvious there. In applying machine learning techniques, we typically train a model on a training/test split; when we need a prediction, we pass the input variables to the trained model and take its output as the prediction.
Machine learning is better at predictions, but it can also do a good job at inference.
Inference is similar, but with a subtle difference. For example, say you are a Data Scientist in the Formula 1 organization, and it has been decided that a new race will be added to the global calendar. Your boss approaches you with this problem: can you create a statistical model that can infer which country, or which location, would be best for that new race?
This differs from prediction because we are not forecasting a single future outcome; instead, we are using past and current data to uncover relationships between variables and, from those relationships, arrive at the best country or location for the next race.
Statistical models tend to be better at making inferences, but they can also be good at making predictions.
Clear as mud, right?
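Maybe some code makes it more concrete. Here is a rough sketch, not from my Capstone and using made-up numbers, of the two goals in Python: the first model only has to output an accurate number, while the second is fitted so we can read off how the inputs relate to the outcome.

import pandas as pd
import statsmodels.api as sm
from sklearn.ensemble import RandomForestRegressor

# Toy data standing in for real race results: grid slot and constructor
# points vs. finishing position.
df = pd.DataFrame({
    'grid':               [1, 3, 5, 2, 8, 4, 10, 6],
    'constructor_points': [25, 18, 10, 22, 4, 12, 1, 8],
    'finish_position':    [1, 2, 4, 1, 9, 5, 12, 7],
})
X, y = df[['grid', 'constructor_points']], df['finish_position']

# Prediction: all we care about is how close the output is to reality.
model = RandomForestRegressor(random_state=42).fit(X, y)
print(model.predict(X.tail(1)))   # predicted finishing position for a "new" race

# Inference: we care about how the inputs relate to the outcome.
ols = sm.OLS(y, sm.add_constant(X)).fit()
print(ols.summary())              # coefficients, confidence intervals, p-values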
I originally planned to write something about this Data Science bootcamp every week, and up until last week I intended to. However, so many things came up that I was not able to. We were also required to submit our EDA (Exploratory Data Analysis) assignments that weekend, plus a few other things, so something had to give.
This week, we had a special guest brought in by our instructor. A Senior Data Scientist from a well-known international tech company came in and gave us an hour-and-a-half talk about his Data Science journey.
He came across as smart and resourceful, but what was more exciting was hearing about his recent Data Science projects, both at work and, more interestingly, his personal ones. Seeing these gave the cohort that dose of motivation to soldier on with the rest of the course, well, at least for me, that's for sure!
So yeah, I am well and truly into my Capstone project. I have started getting all the required data from my sources. Instead of taking the easier route of just downloading data from sites like Kaggle and Google Dataset Search, I have decided to find, extract and transform all the data myself. I have to experience how it feels to go through the process; I feel that this is the only way to learn.
I have also dumped all the race and results data into a MongoDB collection. It's been a long time since I last tinkered with Mongo, but it's still easy and wonderful to work with. I picked it not specifically for working with Python, but because I am planning to write an API and application around the Capstone, if time permits.
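The dump itself is only a few lines. Below is a rough sketch of that step, assuming a local MongoDB instance and the public Ergast results endpoint, and fetching a single season rather than the full 1950-onwards history:

import requests
from pymongo import MongoClient

connect = MongoClient('mongodb://localhost:27017')  # local instance assumed
db = connect.f1Oracle

# Fetch one season of race results from the Ergast API (the real dump loops
# over every season from 1950 onwards).
season = 2021
url = f'https://ergast.com/api/f1/{season}/results.json?limit=1000'
races = requests.get(url).json()['MRData']['RaceTable']['Races']

# Each race document already carries its 'Results' array, so store it as-is.
db.results.insert_many(races)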
Below is the script I used to prepare the race results DataFrame, the main data I need to commence EDA:
import pandas as pd

# 'connect' is the MongoClient instance and 'races' the list of race documents
# created when the data was dumped into Mongo earlier.
def create_results_dataframe_from_mongodb_collection():
    db = connect.f1Oracle
    collection = db.results
    # One list per column; every race result appends one value to each list.
    for_da_result = {'Season': [], 'Round': [], 'Race Name': [], 'Race Date': [], 'Race Time': [],
                     'Position': [], 'Points': [], 'Grid': [], 'Laps': [], 'Status': [],
                     'Driver': [], 'DOB': [], 'Nationality': [], 'Constructor': [],
                     'Circuit Name': [], 'Race Url': [], 'Lat': [], 'Long': [],
                     'Locality': [], 'Country': []}
    for race in races:
        # Pull the stored result documents for this season/round combination.
        race_results = list(collection.find({'season': f"{race['season']}", 'round': f"{race['round']}"}))
        for results in race_results:
            for item in results['Results']:
                for_da_result['Season'].append(f"{results['season']}")
                for_da_result['Round'].append(f"{results['round']}")
                for_da_result['Race Name'].append(f"{results['raceName']}")
                for_da_result['Race Date'].append(f"{results['date']}")
                # Some older races have no start time recorded, so fall back to a placeholder.
                for_da_result['Race Time'].append(f"{results['time']}" if 'time' in results else '10:10:00Z')
                for_da_result['Position'].append(f"{item['position']}")
                for_da_result['Points'].append(f"{item['points']}")
                for_da_result['Grid'].append(f"{item['grid']}")
                for_da_result['Laps'].append(f"{item['laps']}")
                for_da_result['Status'].append(f"{item['status']}")
                for_da_result['Driver'].append(f"{item['Driver']['givenName']} {item['Driver']['familyName']}")
                for_da_result['DOB'].append(f"{item['Driver']['dateOfBirth']}")
                for_da_result['Nationality'].append(f"{item['Driver']['nationality']}")
                for_da_result['Constructor'].append(f"{item['Constructor']['name']}")
                for_da_result['Circuit Name'].append(f"{results['Circuit']['circuitName']}")
                for_da_result['Race Url'].append(f"{results['url']}")
                for_da_result['Lat'].append(f"{results['Circuit']['Location']['lat']}")
                for_da_result['Long'].append(f"{results['Circuit']['Location']['long']}")
                for_da_result['Locality'].append(f"{results['Circuit']['Location']['locality']}")
                for_da_result['Country'].append(f"{results['Circuit']['Location']['country']}")
    return pd.DataFrame(for_da_result)

results_df = create_results_dataframe_from_mongodb_collection()
results_df
Ergast Motor Racing has been publishing Formula 1 results from 1950 up to the present; the majority of my data set will come from this API.
I will also be scraping some data from a few other sites.