Installing the tensorflow object detection API: some notes

Recently I have become interested in computer vision problems and have been exploring object detection in images using the tensorflow API. I used several guides online to set up the various machines that I have experimented on and encountered a few issues.

I’ve written this micro-blog to detail the process that I went through to get the tensorflow object detection API working on the Campus GPU Desktop. It should help other data scientists get up and running if they want to apply these methods in project work.

To minimise the chance of problems with dependencies and package versions I first set up a virtual environment in which to work.

Setting up a virtual environment

We set up a virtual environment called ‘venv’ from the terminal with virtualenv venv, and activate it with source venv/bin/activate. When the virtual environment is active, its name appears in brackets before the terminal prompt. When we have finished working in the virtual environment we can exit with the deactivate command.
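In full, the sequence looks like this (assuming virtualenv is already installed on the machine):

```bash
# Create and activate an isolated environment for the object detection work.
virtualenv venv
source venv/bin/activate
# ... install packages and work here ...
deactivate
```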

Installing tensorflow

The tensorflow installation guide should be followed, ensuring that the tensorflow-gpu package is installed (not the standard tensorflow package, which will only run on CPUs).
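With the virtual environment active, that comes down to something like the following; the exact version to pin depends on the CUDA and cuDNN versions installed on the machine:

```bash
# GPU-enabled tensorflow; pin a version compatible with your CUDA/cuDNN install if needed.
pip install tensorflow-gpu
```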

Preparing the Object Detection API

In order to use the object detection API you are required to clone the whole of the tensorflow models repository into the virtual environment (from the terminal: git clone https://github.com/tensorflow/models.git). The object detection API files are located in the models/research/object_detection directory.

Follow the object detection tutorial here to ensure that the dependencies are set up. The key part of this installation is executing the ‘protoc’ command, which compiles all of the .proto files in the models/research/object_detection/protos directory into python scripts. I did have trouble when trying to do this on an iMac and worked around it by finding a repository on github with a precompiled version of the protos directory and replacing it.
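For reference, in the version of the installation guide I followed the compilation step was run from the models/research directory and looked like this:

```bash
# Run from models/research: compile the .proto definitions into python modules.
protoc object_detection/protos/*.proto --python_out=.
```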

Note that in order to run the API the models/research directory needs to be added to the python path using

export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim

This command needs to be run from the models/research directory (note the pwd substitutions) every time a new terminal window is opened.
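To avoid retyping it, the equivalent export can be added to ~/.bashrc instead; the paths below assume the repository was cloned into the home directory, so adjust them to wherever your clone lives:

```bash
# Optional convenience: make the python path addition permanent (assumes ~/models is the clone location).
echo 'export PYTHONPATH=$PYTHONPATH:$HOME/models/research:$HOME/models/research/slim' >> ~/.bashrc
```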

The final line in the object detection installation guide above provides a line of code that you can run to test that the API has been set up successfully. In addition, we can test the API by running the object detection tutorial notebook.
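In the version of the guide I followed, that test was a script run from the models/research directory, roughly:

```bash
# Quick check that the object detection API and its dependencies import correctly.
python object_detection/builders/model_builder_test.py
```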

Test the API is working

To test that the API is working, run jupyter notebook from the models/research/object_detection directory and open the object_detection_tutorial.ipynb notebook. Each code chunk can be executed to progress through the tutorial, testing one of the pretrained models on two test images. If you want to use your own test images, add them to models/research/object_detection/test_images, rename them to the form “imageX.jpg” where X is the next available number, and change the range in the code chunk where the test images are set so that it includes the newly added pictures (see the snippet below).
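In the version of the notebook I used, the relevant chunk looked roughly like this; extending the range is the only change needed:

```python
import os

# Approximate chunk from object_detection_tutorial.ipynb: the test images to run the detector on.
PATH_TO_TEST_IMAGES_DIR = 'test_images'
TEST_IMAGE_PATHS = [os.path.join(PATH_TO_TEST_IMAGES_DIR, 'image{}.jpg'.format(i))
                    for i in range(1, 3)]  # e.g. change to range(1, 5) after adding image3.jpg and image4.jpg
```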

![Anya the builder - can she fix it?](/images/object_detector/bob_anya_the_builders.png "Anya the Builder - can she fix it?")

Retraining a model

One option to quickly get working with your own data sets and labels is to retrain one of the pretrained models. Before we can retrain a model we need to create a training set.

There are several steps to retraining a model on custom data and labels. The first of these is to develop a training set that is compatible with the object_detection API. In order for tensorflow to accept the training data it needs to be in .tfrecord form, a tensorflow-specific file format.
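To make that format a little more concrete, here is a minimal sketch of how a .tfrecord file is written. The filename and feature keys below are illustrative only; the object detection API expects a fuller, specific set of keys (encoded image bytes, image dimensions, normalised bounding box coordinates and class labels), which its example conversion scripts document:

```python
import tensorflow as tf

# Minimal sketch: serialise one training example into a .tfrecord file.
# The feature keys/values here are illustrative, not the full set the API requires.
example = tf.train.Example(features=tf.train.Features(feature={
    'image/filename': tf.train.Feature(bytes_list=tf.train.BytesList(value=[b'image1.jpg'])),
    'image/object/bbox/xmin': tf.train.Feature(float_list=tf.train.FloatList(value=[0.1])),
    'image/object/class/label': tf.train.Feature(int64_list=tf.train.Int64List(value=[1])),
}))

with tf.python_io.TFRecordWriter('train.tfrecord') as writer:
    writer.write(example.SerializeToString())
```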

Developing a training set for the API requires that images have bounding boxes defined in either XML or JSON files for specific objects. This can easily be accomplished using LabelImg, which outputs files in the exact format required for the tensorflow API. I had some trouble getting the PyQt4 package installed for use with Python 2.7, and then had similar trouble getting PyQt5 installed for use with Python 3. As the default installation instructions use the apt-get installation command and hadn’t worked, I tried sudo pip3 install pyqt5, which was successful. I then executed make qt5py3 and used python3 to run labelImg (python3 labelImg.py) from the labelImg directory, as summarised below.
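For reference, the sequence that eventually worked for me, run from the labelImg directory, was:

```bash
# Workaround for the PyQt install problems described above (Python 3).
sudo pip3 install pyqt5   # instead of the apt-get based instructions
make qt5py3               # build the Qt5/Python 3 resources for labelImg
python3 labelImg.py       # launch the labelling tool
```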

