
Have a go at classification

Create a machine learning model that recognises rock paper scissors hand gestures
Over the next two steps, you are going to create your very own rock paper scissors game using Google’s Teachable Machine. You’re going to use image classification to teach your computer to recognise the typical hand gestures used in the game. When you first arrive at the Teachable Machine site, you’ll need to select what type of project you want to make. I’m going to hold my hand up to the camera, so I’ll select Image Project using the standard image model. You will need a class for each gesture you want the computer to recognise, in this case one for rock and another for paper. So I’m going to change Class 1 to Rock, and Class 2 to Paper.
We’re going to need a third class for scissors, so I’m going to add a class and change the name.
Teachable Machine allows you to use your webcam to capture images for the algorithm to learn from. You’ll need around 50 to 100 samples for each class, so make sure you get a range of different angles. I’m going to click on webcam for rock, and I’m going to hold my hand up in the gesture to my webcam for rock. I’m going to hold down recording. You can see it’s getting a number of image samples. I’m going to move my hand around a little bit. And then I’m going to do the same for paper, this time doing the paper hand gesture. You can see the image samples there. And then finally, scissors.
Do the hand gesture for scissors. OK, now we have lots of image samples for each of the different classes. So now, the next step is to train the model. I’m going to press the Train model button, and Teachable Machine is going to take all of those image samples and it’s going to train the algorithm based on them.
Done. I can now test my model using the Preview box to see how accurate it is. So I’m going to hold my hand up with the rock gesture, and you can see the output says 100% rock, so that’s good. I’m going to try my paper, and again it’s picked up very quickly that that’s paper. And then finally, scissors. And it’s showing scissors. Let’s go from rock, to scissors, to paper, to scissors. How did the testing go? Were there any gestures the model struggled with? Share your experiences in the comments below.

Over the next two steps, you’re going to create your very own rock, paper, scissors game. You will use image classification to teach your computer to recognise gestures you hold up to your webcam. You’ll then turn this into a game where you compete against a program that randomly makes its choice.

First, you will use Google’s Teachable Machine to train your computer to recognise your rock, paper, and scissors gestures.

Step 1: Set up the project

Start by visiting Teachable Machine. If you intend to hold up an object, hand, or image to the camera to represent rock, paper, and scissors, then select Image Project. If you intend to stand further away from the camera and strike a pose, then select Pose Project. Bear in mind that to do a pose project, you might need someone to help you add the images (step 3 below). I will demonstrate by using hand gestures, so I will select Image Project.

Step 2: Create your classes

As you have three gestures you want to classify, add a class to the two already present so that you have three classes in total. Label them “Rock”, “Paper”, and “Scissors”.

GIF of steps taken to label each class

Step 3: Add the image samples

The next step is to add images (samples) to teach your computer what images you’d like to be considered to be rock, paper, and scissors. The image samples you collect are stored locally on your machine and won’t be uploaded anywhere. Before you start recording, think about what you’d like to teach the machine to recognise for each class. You might decide to print images that can be held up to the camera, or take the more traditional approach of making a hand gesture towards the camera.

Start with rock and select Webcam. If your browser asks whether to allow Teachable Machine to use your webcam and microphone, you will need to grant permission to complete this step.

When you’re ready, select Hold to Record and keep your mouse button held down whilst you record your samples. I recommend adding roughly 50 to 100 samples for each class.

Move your hand or image around while you hold the button, so that you get a range of samples at different angles. Keep in mind that the more samples you take, the longer it will take your computer to train the model.

GIF showing samples being added

Step 4: Train your model

Once you have collected samples for all three of your classes, select Train Model from the middle box on the screen. The speed at which your model is trained will depend on the speed of your processor as well as the settings for the training.

Click on the button labelled Advanced to show more options. You will explore all of these terms in greater detail later in this course, but for now I’ll briefly explain what each one does:

  • Epochs is the number of times the entire set of samples is passed through the model during training. Generally, the greater the number of epochs, the more accurate your model will be, but the longer it will take to train.
  • Batch size is the number of samples the algorithm processes together before it updates the model. If you had 160 samples and a batch size of 16, you would have 160/16 = 10 batches per epoch. The model is updated once per batch rather than once per individual sample, and using smaller batches will increase the total processing time.
  • Learning rate determines how much the machine learning algorithm updates the model based on the “error” between the model’s predictions and the current batch of data. The higher the learning rate, the less processing time is required; however, the model may be less accurate. The sketch after the diagram below shows how these three settings fit together in a simple training loop.

Labelled diagram of the training options
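
To get a feel for how these three settings interact, here is a minimal sketch of a training loop, written in TypeScript. This is not Teachable Machine’s actual training code (which trains a neural network in your browser); it fits a single number, w, to some toy data purely so you can see where epochs, batch size, and learning rate appear.

```typescript
// Toy training loop: fit prediction = w * x to the data below.
// This only illustrates where epochs, batch size, and learning rate fit in.

const xs = [1, 2, 3, 4, 5, 6, 7, 8];      // inputs
const ys = xs.map((x) => 3 * x);          // targets: the "right" answer is w = 3

const epochs = 20;          // how many full passes over the dataset
const batchSize = 4;        // samples processed before each model update
const learningRate = 0.01;  // how strongly each update nudges the model

let w = 0; // the model starts out knowing nothing

for (let epoch = 0; epoch < epochs; epoch++) {
  // Work through the dataset one batch at a time.
  for (let start = 0; start < xs.length; start += batchSize) {
    const batchX = xs.slice(start, start + batchSize);
    const batchY = ys.slice(start, start + batchSize);

    // Average gradient of the squared error over the batch:
    // error = w * x - y, and d(error^2)/dw = 2 * error * x.
    let gradient = 0;
    for (let i = 0; i < batchX.length; i++) {
      const error = w * batchX[i] - batchY[i];
      gradient += (2 * error * batchX[i]) / batchX.length;
    }

    // One update per batch; the learning rate scales the step size.
    w -= learningRate * gradient;
  }
  console.log(`epoch ${epoch + 1}: w = ${w.toFixed(3)}`);
}
```

More epochs give the loop more chances to improve w, a smaller batch size means more (but noisier) updates per epoch, and a larger learning rate takes bigger steps at the risk of overshooting.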

Step 5: Test your model

Once your model is trained, a preview window will appear on the right, showing the current image from your webcam. It’s time to test your model and see how accurately it has learnt. Try holding your image or pose up to the webcam and keep an eye on the Output menu. This menu shows a horizontal bar chart displaying the model’s estimated probability (confidence score) that the current image belongs to each of the classes you’ve created.

GIF showing the model being tested with different hand gestures. The confidence score for each class changes with each new gesture

You might find that the confidence score is low for some of your gestures, or that some gestures are being incorrectly classified. If this is the case, try adding more image samples and retraining your model to get better results.
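
Behind that bar chart is simply a set of confidence scores, one per class, and the model’s “answer” is the class with the highest score. The short TypeScript sketch below shows the idea; the class names and scores are made up for illustration and are not read from Teachable Machine here.

```typescript
// Hypothetical confidence scores of the kind shown in the Preview box.
const predictions = [
  { className: "Rock", probability: 0.02 },
  { className: "Paper", probability: 0.95 },
  { className: "Scissors", probability: 0.03 },
];

// The model's "answer" is simply the class with the highest confidence score.
const best = predictions.reduce((top, p) => (p.probability > top.probability ? p : top));

console.log(`Detected ${best.className} (${(best.probability * 100).toFixed(0)}%)`);
```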

Step 6: Export and upload

The last thing to do is to export your model so that you can use it as part of the game you will create.

To do this, select Export Model and select the Upload my model button. This creates a shareable URL of the location of your algorithm, which has been uploaded to the cloud (your training data images will remain on your machine). Keep a record of the URL as you will need it for the next step of this course.

Image showing shareable link to model
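
As a preview of how that URL is used, Teachable Machine’s uploaded image models can be loaded in the browser with its @teachablemachine/image JavaScript library (built on TensorFlow.js). The sketch below is one possible approach and assumes a placeholder model URL; the next step may use a different setup, but the idea is the same: load the model from its URL, then ask it to classify webcam frames.

```typescript
// Assumes the npm packages @teachablemachine/image and @tensorflow/tfjs are installed.
import * as tmImage from "@teachablemachine/image";

// Replace with the shareable URL you noted down after uploading your model.
const MODEL_URL = "https://teachablemachine.withgoogle.com/models/<your-model-id>/";

async function run(): Promise<void> {
  // Load the trained model and its class labels from the hosted URL.
  const model = await tmImage.load(MODEL_URL + "model.json", MODEL_URL + "metadata.json");

  // Set up a 200x200 webcam feed (the final argument mirrors the image).
  const webcam = new tmImage.Webcam(200, 200, true);
  await webcam.setup(); // asks the browser for camera permission
  await webcam.play();
  document.body.appendChild(webcam.canvas);

  // Grab the current frame and get one confidence score per class.
  webcam.update();
  const predictions = await model.predict(webcam.canvas);
  for (const p of predictions) {
    console.log(`${p.className}: ${(p.probability * 100).toFixed(1)}%`);
  }
}

run();
```

In the real game you would call webcam.update() and model.predict() repeatedly (for example inside a requestAnimationFrame loop) so the prediction follows your hand in real time.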

Discussion

  • Is your model worse at identifying one of your classes in particular? If so, why do you think that is?
  • What could you do to improve the accuracy of the algorithm?

Share your experiences in the comments.

This article is from the free online course Introduction to Machine Learning and AI.
