Real-time image processing
In this tutorial you learn how to build a real-time image processing pipeline in Quix, using the Transport for London (TfL) traffic cameras, known as Jam Cams, the webcam on your laptop or phone, and a YOLO v3 machine learning model.
You'll use prebuilt Code Samples to build the pipeline. A prebuilt UI is also provided that shows you where the recognized objects are located around London.
The following screenshot shows the pipeline you build in this tutorial:
If you need any assistance while following the tutorial, we're here to help in The Stream community, our public Slack channel.
Tutorial live stream
If you'd rather watch a live stream, where one of our developers steps through this tutorial, you can view it here:
To get started, make sure you have a free Quix account.
You'll also need a free TfL account.
Follow these steps to locate your TfL API key:
Register for an account.
Log in and click the
You should have one product to choose from:
500 Requests per min.
Enter a name for your subscription into the box, for example "QuixFeed", and click
You can now find your API Keys on the profile page.
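As a quick check that your key works, you can query the TfL Unified API for the JamCam camera list. This sketch is not part of the tutorial itself; it assumes the standard `app_key` query parameter and the `https://api.tfl.gov.uk/Place/Type/JamCam` endpoint, and the actual network request is left commented out:

```python
from urllib.parse import urlencode

# Assumed TfL Unified API endpoint for the JamCam camera list
TFL_JAMCAM_URL = "https://api.tfl.gov.uk/Place/Type/JamCam"

def jamcam_request_url(api_key: str) -> str:
    """Build the request URL for the JamCam camera list."""
    return f"{TFL_JAMCAM_URL}?{urlencode({'app_key': api_key})}"

# To actually fetch the list (requires the `requests` package and your real key):
# import requests
# cameras = requests.get(jamcam_request_url("YOUR_KEY")).json()
# print(len(cameras), "cameras")
```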
Code Samples is a collection of ready-to-use components you can leverage to build your own real-time streaming solutions. Typically these components require minimal configuration.
Most of the code you need for this tutorial has already been written, and is available in the Code Samples.
When you are logged into the Quix Portal, click the Code Samples icon in the left-hand navigation to access the Code Samples.
The pipeline you will create
There are four stages to the processing pipeline you build in this tutorial:
- Webcam image capture
- TfL camera feed, or "Jam Cams"
- Frame grabbing from the TfL video feed
- Object detection within images

Web UI configuration

A simple UI showing:

- Images with identified objects
- A map with a count of detected objects at each camera's location
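Between these stages, image frames are typically passed between services as encoded strings rather than raw binary. A minimal sketch of base64 encoding and decoding a frame payload (the byte values below are placeholders, not a real image):

```python
import base64

def encode_frame(jpeg_bytes: bytes) -> str:
    """Encode raw JPEG bytes as a base64 string for transport."""
    return base64.b64encode(jpeg_bytes).decode("ascii")

def decode_frame(payload: str) -> bytes:
    """Recover the original JPEG bytes from the base64 payload."""
    return base64.b64decode(payload)

# Round trip on placeholder bytes standing in for a captured frame
frame = b"\xff\xd8\xff\xe0fake-jpeg-data"
assert decode_frame(encode_frame(frame)) == frame
```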
Now that you know which components are needed in the image processing pipeline, the following sections step through the creation of the required microservices.
The parts of the tutorial
This tutorial is divided up into several parts, to make it a more manageable learning experience. The parts are summarized here:
Connect the webcam video feed. You learn how to quickly connect a video feed from your webcam, using a prebuilt sample.
Object detection. You use a computer vision sample to detect a chosen type of object. You'll preview these events in the live preview. The object type to detect can be selected through a web UI, which is described later.
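Conceptually, detecting a chosen object type reduces to keeping model detections that match the selected label above a confidence threshold. A minimal sketch of that filtering step, using hypothetical `(label, confidence)` records rather than the sample's actual data model:

```python
def filter_detections(detections, wanted_label, threshold=0.5):
    """Keep detections of the selected object type above a confidence threshold."""
    return [d for d in detections if d[0] == wanted_label and d[1] >= threshold]

# Illustrative model output: label and confidence pairs
raw = [("car", 0.92), ("bus", 0.40), ("car", 0.30), ("person", 0.81)]
print(filter_detections(raw, "car"))  # → [('car', 0.92)]
```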
Connect the TfL video feed. You learn how to quickly connect the TfL traffic cam feeds, using a prebuilt sample. You can perform object detection across these feeds, as they are all sent into the object detection service in this tutorial.
Frame grabber. You use a standard sample to grab frames from the TfL video feed.
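A frame grabber samples still images from a video stream at a reduced rate, so the downstream detection service isn't flooded with every frame. An illustrative sketch of that sampling logic (not the sample's actual code), assuming frames arrive as an iterable:

```python
def grab_every_nth(frames, n):
    """Yield every nth frame from an iterable of frames (a simple frame grabber)."""
    for i, frame in enumerate(frames):
        if i % n == 0:
            yield frame

# Strings stand in for decoded video frames
video = (f"frame-{i}" for i in range(10))
print(list(grab_every_nth(video, 4)))  # → ['frame-0', 'frame-4', 'frame-8']
```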
Deploy the web UI. You then deploy a prebuilt web UI. This UI enables you to select an object type to detect across all of your input video feeds. It displays the location of detected objects, and a count of detections, on a map.
Summary. In this concluding part you are presented with a summary of the work you have completed, and also some next steps for more advanced learning about the Quix Platform.