Amazon Rekognition Custom Labels


You can use Rekognition Custom Labels to train your own machine learning models that perform image classification (image level predictions) or object detection (object/bounding box level predictions).

Amazon Rekognition Custom Labels automatically selects the right machine learning algorithm to train your custom machine learning model based on the labeled data you provide without requiring deep learning expertise.

R users can access and use Amazon Rekognition Custom Labels using the fabulous paws package, an AWS SDK for R, created by David Kretch and Adam Banker.

In this article, we will use Rekognition Custom Labels to train an object detection model to detect the Swoosh, Nike’s famous logo that Carolyn Davidson designed in 1971:

The process of using Rekognition Custom Labels to train, evaluate, deploy and use image classification or object detection models is the same and consists of the following steps:

  • Step 0: Collect and preprocess your image data
  • Step 1: Create an S3 Rekognition Custom Labels default bucket in your region
  • Step 2: Upload your dataset to S3
  • Step 3: Create a Rekognition Custom Labels dataset
  • Step 4: Create your project
  • Step 5: Train your model
  • Step 6: Evaluate the training results
  • Step 7: Deploy your model
  • Step 8: Make real-time predictions for new data
  • Step 9: Stop your model

We will follow the steps described above to build our Swoosh detection model. Not all of the steps are supported by the Rekognition API. If necessary, we will switch over to the Amazon Rekognition Custom Labels console.

The entire code of this article is also part of a self-paced and fully reproducible workshop based on Rmarkdown that you can download from GitHub here.


In short, you need to have an IAM admin user with programmatic access to the AWS API and you need to save the user’s access key ID and secret access key as R environment variables:

  • You need to have access to an AWS account using an IAM user.

  • The IAM user needs security credentials that allow them to access AWS (1) programmatically via the API using an access key ID and a secret access key and (2) via the AWS Management Console.

  • For simplicity, you can use an IAM admin user to follow along: Attach the AdministratorAccess permissions either directly to your IAM user or to a group your user is a member of. See the official documentation for creating your first IAM admin user and group.

Local installations & configuration

  • Install paws from CRAN using install.packages('paws') and set the following environment variables in your .Renviron file which is easiest to do using usethis::edit_r_environ():
  • Make sure to install the remaining 8 R packages referenced in the next section on your machine.
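The environment variables mentioned above could look like this in your .Renviron file (the values below are placeholders, not real credentials):

```
AWS_ACCESS_KEY_ID=your-access-key-id
AWS_SECRET_ACCESS_KEY=your-secret-access-key
AWS_REGION=us-east-1
```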


At the time of writing this article in December 2020, Amazon Rekognition Custom Labels is available in 4 regions: us-east-1 (N. Virginia), us-east-2 (Ohio), us-west-2 (Oregon), and eu-west-1 (Ireland). Please make sure to update the AWS_REGION entry in your .Renviron file in case you configured another AWS region.

Amazon Rekognition Custom Labels is currently also part of the AWS Free Tier which means you can get started for free: The service-specific Free Tier lasts 3 months and includes 10 free training hours per month and 4 free inference hours per month.

Load the necessary libraries
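A possible set of libraries for this walkthrough; the exact list depends on the packages you end up using (EBImage is a Bioconductor package and is not installed from CRAN):

```r
library(paws)      # AWS SDK for R
library(purrr)     # functional iteration helpers
library(dplyr)     # data wrangling
library(tibble)    # tidy data frames for parsed API responses
library(jsonlite)  # parsing manifest and evaluation result files
library(magick)    # drawing bounding boxes on images
library(EBImage)   # image preprocessing (install via BiocManager::install("EBImage"))
```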

Step 0: Collect and preprocess your image data

We collected 75 free and publicly available images containing the Nike Swoosh logo from Pexels, preprocessed/scaled the images, and uploaded 70 of the preprocessed images as a dataset to Kaggle that we will use to train the Swoosh detector model. The remaining 5 preprocessed images come with this repository as a hold-out test set for making real-time predictions.

Please navigate to the Nike Swoosh Compilation dataset on Kaggle, click on the page’s Download button, and unzip once the download is complete.

The compilation below shows some of the images we collected:


The minimum and maximum image dimensions for training jobs and for inference with Rekognition Custom Labels are 64 x 64 pixels and 4096 x 4096 pixels, respectively. Always scale your images accordingly before you use Amazon Rekognition Custom Labels.

Additional requirements like supported image file formats, maximum image size, maximum number of labels per image are described in the official documentation here.

We used the purrr-EBImage-recipe below to scale the images from Pexels below the 4096 pixels threshold before we uploaded them to Kaggle. You can use the recipe in your future Rekognition Custom Labels projects but we don’t need to use it here:
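A sketch of such a purrr-EBImage recipe; the input and output folder names below are placeholders, not the folders used for the Kaggle upload:

```r
library(purrr)
library(EBImage)  # install via BiocManager::install("EBImage")

max_dim <- 4096  # upper dimension limit for Rekognition Custom Labels

# scale every image in images/raw down below the limit and
# write the result to images/scaled (hypothetical folder names)
list.files("images/raw", full.names = TRUE) |>
  walk(function(path) {
    img <- readImage(path)
    d <- dim(img)[1:2]  # width and height in pixels
    if (max(d) > max_dim) {
      scale_factor <- max_dim / max(d)
      img <- resize(img,
                    w = round(d[1] * scale_factor),
                    h = round(d[2] * scale_factor))
    }
    writeImage(img, file.path("images/scaled", basename(path)))
  })
```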

Step 1: Create S3 Rekognition Custom Labels default bucket

Creating the default S3 Rekognition Custom Labels bucket is a one-time step per region in which you want to use Amazon Rekognition Custom Labels. You don't need to repeat this step afterwards.


In the AWS console select Amazon Rekognition underneath services and then select Use Custom Labels in the left sidebar. Click on Get started in the middle of the Amazon Rekognition Custom Labels console and then on Create S3 bucket to create your Rekognition Custom Labels default bucket in your region:

You can safely ignore the prompt in the console to create your first Custom Labels project. We will do this later via the API.

Next, we create an S3 client to retrieve the name of the S3 Rekognition Custom Labels default bucket we just created. We will need the S3 bucket name later.
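A sketch for looking up the bucket name; it assumes the console created the bucket with its usual custom-labels-console-[region]-[suffix] naming scheme:

```r
s3 <- paws::s3()

bucket_names <- purrr::map_chr(s3$list_buckets()$Buckets, "Name")

# the console names the default bucket custom-labels-console-[region]-[suffix]
custom_labels_bucket <- bucket_names[grepl("^custom-labels-console-", bucket_names)]
custom_labels_bucket
```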

Step 2: Upload your dataset

We will create a new folder /assets in our S3 Custom Labels default bucket to which we will upload the Swoosh dataset.
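Creating the folder boils down to putting an empty object whose key ends with a slash; a minimal sketch:

```r
# an empty object with a trailing slash shows up as a "folder" in the S3 console
s3$put_object(
  Bucket = custom_labels_bucket,
  Key    = "assets/"
)
```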

Next, we will switch over to the S3 console and upload the unzipped folder swoosh_data of Swoosh images we downloaded from Kaggle.

Navigate to the /assets folder we just created and click on Upload:

Click on Add folder and select the /swoosh_data folder on your file system:

Important: Back on the Upload page, make sure to scroll down and click on Upload:

After the upload, the S3 folder structure should look like this and the /train subfolder should contain 70 images:

Step 3: Create a Rekognition Custom Labels dataset

Now, we will create a Rekognition Custom Labels dataset. A Rekognition Custom Labels dataset references the training/test dataset residing in S3 and allows you to add labels/bounding box metadata to your images. The labeling process will generate a manifest file that includes (1) the respective labels/bounding box information and (2) the references to the images stored in S3. Without a manifest file we won’t be able to start a Rekognition Custom Labels training job.

You can only create Rekognition Custom Labels datasets by using the Amazon Rekognition Custom Labels console. However, instead of creating the image labels/bounding boxes from scratch, which is a rather cumbersome manual process, you can also create a Custom Labels dataset based on an existing manifest file that already includes the respective label/bounding box information of your dataset.

And you’re lucky: We already created the manifest file with the necessary information for you. This will save you approximately 30-40 minutes and you won’t need to draw bounding boxes yourself.

Step 3.1: Edit and upload the manifest file

On your machine:

  • Download and open the manifest file you can get from GitHub here.

  • Replace the beginning of ALL 70 s3://[YOUR_CUSTOM_LABELS_DEFAULT_BUCKET]/.. resource identifiers with the correct name of your S3 Rekognition Custom Labels default bucket. You can get the bucket name by printing custom_labels_bucket to the R console. Save and close the manifest file.

Navigate to the S3 console. Upload the updated manifest file to the root folder of your Rekognition Custom Labels default bucket:

Step 3.2: Create Custom Labels dataset based on uploaded manifest file

Navigate to the Rekognition Custom Labels console. Click on Datasets in the left sidebar and then on Create dataset. Specify the following:

  • Dataset name: swoosh_dataset

  • Image location: Select Import images labeled by Amazon SageMaker Ground Truth

  • .manifest file location: The S3 path to the manifest file we uploaded in the previous step

After that, click on Submit at the bottom of the page:

Step 3.3: Check image labels and new manifest file

In the Rekognition Custom Labels console you should find the generated swoosh_dataset underneath Datasets. All 70 images of the dataset should include the respective label/bounding box information:

Important: Using an existing manifest file to create a Custom Labels dataset will also create a NEW manifest file output.manifest in your Custom Labels S3 default bucket underneath [YOUR_CUSTOM_LABELS_DEFAULT_BUCKET]/datasets/swoosh_dataset/manifests/output. This new manifest file will be the one that we’ll pass as a parameter when starting the training job later:

Step 4: Create your project

Rekognition Custom Labels projects help you to manage the life cycle of your machine learning models. A trained model always belongs to one project. A project just serves as an umbrella under which you train, deploy and manage one or more image classification/object detection models.

We will initialize a Rekognition client and create our first Custom Labels project via the API:
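A minimal sketch of both calls; the project name swoosh_detector is an assumption:

```r
rekognition <- paws::rekognition()

# create the Custom Labels project and keep its ARN for later
project     <- rekognition$create_project(ProjectName = "swoosh_detector")
project_arn <- project$ProjectArn
```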

create_project() returns the project’s Amazon Resource Name (ARN). We will store it in a separate variable which we will need later when defining the training job for our Swoosh detection model.

Alternatively, you can also use the following code snippet to retrieve the entire list of your Rekognition Custom Labels projects and select the project ARN of your choice:
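A sketch of the lookup:

```r
projects <- rekognition$describe_projects()$ProjectDescriptions

# inspect all project ARNs and pick the one you need
purrr::map_chr(projects, "ProjectArn")
```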

Step 5: Train your model

You train a model by calling create_project_version() which is not the most intuitive function name in this context. As you will see below, we don’t need to choose nor specify the training algorithm itself. Based on the provided labeled data Amazon Rekognition Custom Labels automatically selects the right machine learning algorithm, trains a model (in our case an object detection model), and provides model performance metrics at the end of the training.

To train a model, the following information is needed:

  • Name: A unique name for the model. Best practice is to use a project name - timestamp combination for the model name.

  • Project ARN: The Amazon Resource Name (ARN) of the project that will manage the model lifecycle and which we stored in project_arn.

  • Training dataset: A manifest file with the S3 location of the training image dataset and the image labeling information. This is the manifest file that was generated in datasets/swoosh_dataset/manifests/output/output.manifest when we created the Rekognition Custom Labels dataset in step 3 above.

  • Test dataset (optional): A manifest file of the test set generated like the training set manifest file via the Rekognition Custom Labels console. If not provided, Rekognition Custom Labels creates a test dataset with a random 80/20 split of the training dataset which is the option we will use here by setting AutoCreate = TRUE below.

  • Training results location – The Amazon S3 location where the training results are stored. We will store the results of training jobs in dedicated subfolders /output_folder/[project name]/[model name] in our S3 Rekognition Custom Labels default bucket.


Before you start the training job by executing the code chunk below, make sure to get a coffee. The training time for the Swoosh Detector model will be approximately one hour.
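A sketch of the training call using the parameters described above; the bucket name, project ARN and manifest path come from the earlier steps:

```r
# best practice: project name - timestamp combination as the model name
model_name <- paste0("swoosh_detector.", format(Sys.time(), "%Y-%m-%dT%H-%M-%S"))

training_job <- rekognition$create_project_version(
  ProjectArn  = project_arn,
  VersionName = model_name,
  OutputConfig = list(
    S3Bucket    = custom_labels_bucket,
    S3KeyPrefix = paste0("output_folder/swoosh_detector/", model_name)
  ),
  TrainingData = list(
    Assets = list(list(
      GroundTruthManifest = list(S3Object = list(
        Bucket = custom_labels_bucket,
        Name   = "datasets/swoosh_dataset/manifests/output/output.manifest"
      ))
    ))
  ),
  TestingData = list(AutoCreate = TRUE)  # random 80/20 split of the training data
)

model_arn <- training_job$ProjectVersionArn
```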

The response from create_project_version() is the ARN of the trained Swoosh detector model. You will use the model ARN in subsequent API requests when deploying the trained model, making real-time predictions, and stopping the running model.

Use the following command to get the current status of the training job. Training is complete when the status is TRAINING_COMPLETED.
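A sketch of the status check:

```r
# wait until this returns "TRAINING_COMPLETED"
rekognition$describe_project_versions(
  ProjectArn   = project_arn,
  VersionNames = list(model_name)
)$ProjectVersionDescriptions[[1]]$Status
```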

Step 6: Evaluate the training results

Once the training job has completed successfully, we can evaluate the model performance metrics on the test set. Amazon Rekognition Custom Labels provides various model performance metrics for object detection models: (1) metrics for the test set as a whole and (2) metrics for each predicted bounding box in your test set.

Via the API we can quickly have a look at the F1 score:
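A sketch that pulls the F1 score from the evaluation result of the trained model version:

```r
eval_result <- rekognition$describe_project_versions(
  ProjectArn   = project_arn,
  VersionNames = list(model_name)
)$ProjectVersionDescriptions[[1]]$EvaluationResult

eval_result$F1Score
```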

The F1 score is 0.646 for our Swoosh detector model. All other model performance metrics are accessible via the console or by downloading the evaluation results summary file, which we will do below:

Now, let us have a look at the other metrics:
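A sketch for downloading and parsing the evaluation results summary file; the structure of the parsed JSON is best inspected with str() first:

```r
library(jsonlite)

# S3 location of the evaluation results summary file
summary_loc <- rekognition$describe_project_versions(
  ProjectArn   = project_arn,
  VersionNames = list(model_name)
)$ProjectVersionDescriptions[[1]]$EvaluationResult$Summary$S3Object

obj <- s3$get_object(Bucket = summary_loc$Bucket, Key = summary_loc$Name)
evaluation_summary <- fromJSON(rawToChar(obj$Body))

str(evaluation_summary, max.level = 2)
```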

We see that 56 images went into the training set and 14 images into the test set. This matches the 80/20 split of our data set of 70 images when we set AutoCreate = TRUE during the training job specification.

The parsed evaluation summary results also include various model performance metrics besides just the F1 score:

According to the official documentation, the model performance metrics (precision, recall, F1 score) are calculated in the standard way. The Threshold above is the minimum confidence score above which a prediction counts as a positive; Rekognition Custom Labels sets it automatically based on the test set.

Each image of the test set has one or more ground truth bounding boxes. The Swoosh object detector, as an object detection model, predicts bounding boxes in an image. Each predicted bounding box is associated with a confidence score. There are three types of bounding box predictions:

  • True Positives (TP): The object is there and the model detects it with a bounding box associated with a confidence score above the threshold.

  • False Negatives (FN): The object is there and the model does not detect it OR the object is there and the model detects it with a bounding box associated with a confidence score below the threshold.

  • False Positives (FP): The object is not there but the model detects one.

We will have a look at the individual bounding box predictions in each image of the test set. For this, we will download and parse yet another JSON file that includes the details for each predicted bounding box:

The parsed response below shows us the test set results per predicted bounding box (box_id) grouped by image (image_id):

The image and bounding box identifiers match the ones you will find when evaluating the individual image prediction results via the Rekognition Custom Labels Console. We will use the fifth image of our test set to illustrate this fact:

We can now start analyzing the test set results in more detail:

As you can see, we have 17 ground truth bounding boxes in the 14 images of our test set. The Swoosh detector model detected 13 labels correctly (True Positives) and missed 4 (False Negatives). In total, the model falsely detected 7 Swooshes (False Positives) which were not part of the images.

Let us check out the False Positives a bit more:

We see that 3 of the 7 False Positives were caused just by a single image in our test set.

Based on the individual bounding box predictions, we are even able to calculate the model performance metrics ourselves:
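Using the counts from the parsed test set results above (13 True Positives, 4 False Negatives, 7 False Positives):

```r
tp <- 13; fn <- 4; fp <- 7  # counts from the test set results above

precision <- tp / (tp + fp)  # share of predicted boxes that are correct
recall    <- tp / (tp + fn)  # share of ground truth boxes that were found
f1        <- 2 * precision * recall / (precision + recall)

round(c(precision = precision, recall = recall, f1 = f1), 3)
#> precision    recall        f1
#>     0.650     0.765     0.703
```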

Interestingly, the model performance metrics we just calculated based on the individual bounding box predictions DO NOT match the model performance metrics calculated by Rekognition Custom Labels which we extracted from the evaluation results summary file at the beginning of this section. By comparison, the model performance metrics calculated by Rekognition Custom Labels seem to underestimate the true model performance.

Step 7: Deploy your model

It is time to deploy our Swoosh detection model to check its performance against the hold-out test set. You start the deployment by calling start_project_version() on the Rekognition client. We will go with the minimum number of inference units. The number of inference units decides the maximum number of transactions per second (TPS) a model endpoint can support for real-time predictions.
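A minimal sketch of the deployment call:

```r
rekognition$start_project_version(
  ProjectVersionArn = model_arn,
  MinInferenceUnits = 1L  # minimum capacity; increase for higher TPS
)
```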

Use the following command to get the current deployment status. Model deployment is complete when the status is RUNNING.
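A sketch of the deployment status check:

```r
# wait until this returns "RUNNING"
rekognition$describe_project_versions(
  ProjectArn   = project_arn,
  VersionNames = list(model_name)
)$ProjectVersionDescriptions[[1]]$Status
```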

Step 8: Make real-time predictions for new data

We will use the 5 images of the hold-out test set to test the deployed Swoosh detector model. Make sure to store the hold-out test set images underneath ./images/inference on your end. We will test each of the following images one by one:

As you can see, images 1 and 2 contain a single Swoosh each, image 3 contains no Swoosh, and images 4 and 5 include multiple Swooshes.

We will show you various best practices on how to parse the results from the Rekognition Custom Labels API and how to add the received bounding box coordinates to the original image using the magick package.


Your prediction results might differ slightly from the results below because the Swoosh object detection model was trained based on a random split of the training data set.

Image 1: A single Swoosh

We will read the first image into a raw vector and then send it to the model endpoint for prediction. Afterwards, we will parse the result into a tibble that will include one row per detected label with the respective label name and the confidence score.

Unlike what is described in the official Rekognition Custom Labels documentation, it is NOT necessary to pass the image as base64-encoded image bytes to detect_custom_labels().
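A sketch of the prediction call and response parsing for the first hold-out image; the file name below is an assumption:

```r
library(tibble)
library(purrr)

img_path  <- "images/inference/image_1.jpg"  # hypothetical file name
img_bytes <- readBin(img_path, what = "raw", n = file.size(img_path))

prediction <- rekognition$detect_custom_labels(
  ProjectVersionArn = model_arn,
  Image = list(Bytes = img_bytes)
)

# one row per detected label with its name and confidence score
map_dfr(prediction$CustomLabels, ~ tibble(
  label      = .x$Name,
  confidence = .x$Confidence
))
```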

In total, the model detected one Swoosh in the image with a high confidence.

Let us extract the bounding box coordinates from the response and add the bounding box to the original picture.
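A sketch using magick; it assumes the prediction response from the previous call is stored in prediction and the image path in img_path:

```r
library(magick)

img  <- image_read(img_path)
info <- image_info(img)  # width/height needed to scale the ratio coordinates

# Rekognition returns bounding box coordinates as ratios of the image size
box <- prediction$CustomLabels[[1]]$Geometry$BoundingBox

img <- image_draw(img)
rect(
  xleft   = box$Left * info$width,
  ybottom = (box$Top + box$Height) * info$height,
  xright  = (box$Left + box$Width) * info$width,
  ytop    = box$Top * info$height,
  border  = "red", lwd = 3
)
dev.off()
img
```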

Great! We see that our model detected the Swoosh in the image correctly. Let’s continue!

Image 2: Another single Swoosh

In total, the model detected one Swoosh in the second image. Let us add the bounding box to the image.

The model also detected this Swoosh correctly.

Image 3: No Swoosh

When Rekognition Custom Labels does not find a matching label in the image, it returns an empty response. The third image does not contain any Swoosh so the expected and parsed prediction result would be an empty tibble.

The result shows that the model also got this prediction correctly.

Image 4: Multiple Swooshes

Our fourth image of the hold-out test set includes 4 Swooshes.

The parsed response shows that 4 Swooshes were detected in the image with a high confidence score. We will now add the corresponding bounding boxes to the original image.


The purrr-magick recipe below allows you to extract the coordinates of ALL bounding boxes included in a Rekognition Custom Labels prediction response and add them to the original image. You can also use it for parsing results with a single label match.
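A sketch of such a recipe; it assumes the prediction response for the fourth image is stored in prediction, and the file name is hypothetical:

```r
library(magick)
library(purrr)

img  <- image_read("images/inference/image_4.jpg")  # hypothetical file name
info <- image_info(img)

img <- image_draw(img)
walk(prediction$CustomLabels, function(lbl) {
  box <- lbl$Geometry$BoundingBox  # ratios of the overall image size
  rect(
    xleft   = box$Left * info$width,
    ybottom = (box$Top + box$Height) * info$height,
    xright  = (box$Left + box$Width) * info$width,
    ytop    = box$Top * info$height,
    border  = "red", lwd = 3
  )
})
dev.off()
img
```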

The model detected all Swooshes in the image correctly.

Image 5: Even more Swooshes

Our final image from the hold-out test set contains 9 Swooshes in total. Let us see if our model will be able to detect all of them.

Surprisingly, the model response shows 10 detected Swooshes. Let us add the bounding boxes.

The visualized bounding boxes in the image above show why the model detected 10 Swooshes in an image with only 9 Swooshes: one of the Swooshes was detected and counted twice.


Step 9: Stop your model

After we used the hold-out test set for making real-time predictions, we will now stop our deployed model by calling stop_project_version():
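A minimal sketch of the stop call together with the status check:

```r
rekognition$stop_project_version(ProjectVersionArn = model_arn)

# check the status until it returns "STOPPED"
rekognition$describe_project_versions(
  ProjectArn   = project_arn,
  VersionNames = list(model_name)
)$ProjectVersionDescriptions[[1]]$Status
```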

The model has stopped running when the returned status is STOPPED.

You can always re-start a stopped model by calling start_project_version():

Summary


In this article we described how to build our own Swoosh detection model using Amazon Rekognition Custom Labels. What are our take home messages?

  • You can get started quickly building your own custom object detection and image classification models from scratch just by providing the labeled training data. You don’t need any Deep Learning expertise, and you can use Amazon Rekognition Custom Labels to start exploring this particular Machine Learning domain.

  • Even small training datasets can produce very robust models that might already satisfy your production requirements. You can also use Rekognition Custom Labels models as first baseline models.

  • Don’t disqualify trained models based on the model performance metrics too quickly, especially when the test set is relatively small. In our case, almost 50% of the False Positives in the test set were introduced by a single image. Despite mediocre model performance metrics, our Swoosh detection model got all 16 bounding box predictions in the 5 images of the hold-out test set right, with only one minor error when it counted a detected Swoosh twice.

  • You can easily integrate Amazon Rekognition Custom Labels models into your R and Shiny applications similar to other AWS AI Services which we describe here.
