Create a real-time object tracking camera with TensorFlow and Raspberry Pi

Are you just getting started with machine/deep learning, TensorFlow, or Raspberry Pi?

I created rpi-deep-pantilt as an interactive demo of object detection in the wild, and in this article, I’ll show you how to reproduce the video below, which depicts a camera panning and tilting to track my movement across a room.

This article will cover:

  1. Build materials and hardware assembly instructions.
  2. Deploying a TensorFlow Lite object-detection model (MobileNetV3-SSD) to a Raspberry Pi.
  3. Sending tracking instructions to pan/tilt servo motors using a proportional–integral–derivative (PID) controller.
  4. Accelerating inferences of any TensorFlow Lite model with Coral’s USB Edge TPU Accelerator and Edge TPU Compiler.
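As a preview of step 3, the tracking loop computes an error between the detected object's position and the center of the frame, then feeds that error into a PID controller to produce a servo correction. Here is a minimal, illustrative sketch of a PID controller; the class name and gain values are my own assumptions, not the ones used in rpi-deep-pantilt:

```python
class PIDController:
    """Proportional-integral-derivative controller for one servo axis."""

    def __init__(self, kp, ki, kd, setpoint=0.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint          # target value (e.g. 0 px offset)
        self._integral = 0.0              # accumulated error over time
        self._prev_error = 0.0            # error from the previous step

    def update(self, measurement, dt):
        """Return a correction given the current measurement and time step dt."""
        error = self.setpoint - measurement
        self._integral += error * dt
        derivative = (error - self._prev_error) / dt
        self._prev_error = error
        return self.kp * error + self.ki * self._integral + self.kd * derivative

# Example: drive the pan error toward zero. The object's center is
# 120 px right of the frame center, sampled every 100 ms.
pan_pid = PIDController(kp=0.05, ki=0.001, kd=0.002)
correction = pan_pid.update(measurement=120.0, dt=0.1)
```

The proportional term reacts to the current offset, the integral term removes steady-state drift, and the derivative term damps overshoot; tuning the three gains is what makes the camera settle on the target instead of oscillating around it.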

Terms and references

  • Raspberry Pi: A small, affordable computer popular with educators, hardware hobbyists, and robot enthusiasts.
  • Raspbian: The Raspberry Pi Foundation’s official operating system for the Pi. Raspbian is derived from Debian Linux.
  • TensorFlow: An open source framework for dataflow programming, used for machine learning and deep neural networks.
  • TensorFlow Lite: An open source framework for deploying TensorFlow models on mobile and embedded devices.
  • Convolutional neural network (CNN): A neural network architecture well suited to image classification and object detection tasks.
  • Single-shot detector (SSD): A CNN architecture specialized for real-time object detection, classification, and bounding-box localization.
  • MobileNetV3: A state-of-the-art computer vision model optimized for performance on modest mobile phone processors.
  • MobileNetV3-SSD: An SSD based on MobileNet architecture. This tutorial will use MobileNetV3-SSD models available through TensorFlow’s object-detection model zoo.
  • Edge TPU: A tensor processing unit (TPU) is an integrated circuit that accelerates computations performed by TensorFlow. The Edge TPU is a small-footprint TPU developed for mobile and embedded devices “at the edge.”

Cloud TPUs (left and center) accelerate TensorFlow model training and inference. Edge TPUs (right) accelerate inferences in mobile devices.

Build list

Essential

Optional

Looking for a project with fewer moving pieces? Check out Portable Computer Vision: TensorFlow 2.0 on a Raspberry Pi to create a hand-held image classifier.

Set up the Raspberry Pi

There are two ways you can install Raspbian to your MicroSD card:

  1. NOOBS (“New Out Of Box Software”) is a GUI operating system installation manager. If this is your first Raspberry Pi project, I’d recommend starting here.
  2. Write the Raspbian image to an SD card.

This tutorial and supporting software were written using Raspbian (Buster). If you’re using a different version of Raspbian or another platform, you’ll likely run into compatibility issues.

Before proceeding, you’ll need to:

Install software

  1. Install system dependencies:
    $ sudo apt-get update && sudo apt-get install -y python3-dev libjpeg-dev libatlas-base-dev raspi-gpio libhdf5-dev python3-smbus
  2. Create a new project directory:
    $ mkdir rpi-deep-pantilt && cd rpi-deep-pantilt
  3. Create a new virtual environment:
    $ python3 -m venv .venv
  4. Activate the virtual environment:
    $ source .venv/bin/activate && python3 -m pip install --upgrade pip
  5. Install TensorFlow 2.0 from a community-built wheel:
    $ pip install https://github.com/leigh-johnson/Tensorflow-bin/blob/master/tensorflow-2.0.0-cp37-cp37m-linux_armv7l.whl?raw=true
  6. Install the rpi-deep-pantilt Python package:
    $ python3 -m pip install rpi-deep-pantilt
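With the software installed, the remaining glue is turning each detection into pan and tilt errors: the signed offsets of the bounding-box center from the frame center, which the PID controllers then drive toward zero. A sketch of that geometry (the coordinate convention and 320×240 frame size are my assumptions, not rpi-deep-pantilt’s exact code):

```python
FRAME_WIDTH, FRAME_HEIGHT = 320, 240  # assumed capture resolution

def box_center(xmin, ymin, xmax, ymax):
    """Center of a bounding box in pixel coordinates."""
    return ((xmin + xmax) / 2, (ymin + ymax) / 2)

def tracking_errors(box):
    """Signed pixel offsets of the box center from the frame center.

    Positive x means the object is right of center (pan right);
    positive y means it is below center (tilt down).
    """
    cx, cy = box_center(*box)
    return cx - FRAME_WIDTH / 2, cy - FRAME_HEIGHT / 2

# A detection in the upper-left quadrant of the frame:
err_x, err_y = tracking_errors((40, 30, 120, 110))
```

Feeding `err_x` into the pan controller and `err_y` into the tilt controller each frame is what produces the smooth following motion in the video.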

Read the full article by Leigh Johnson.
