Training Pipeline for Autonomous Driving

The neural network model is the most crucial part of an autonomous vehicle, so fitting a proper model is an important task. Burro includes an easily customizable training pipeline for fitting neural networks on driving datasets. This post provides a brief description of it.

Training Structure & Components

The pipeline comprises components that are in fact just Python generator functions. They accept a tuple of image and steering values as input and output a tuple of transformed values. Depending on the nature of the component, it may act on the image, the steering value, or both. Below is an example that accepts a steering angle in radians and returns its sine:
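A minimal sketch of such a component (the function name is an assumption, not Burro's actual identifier):

```python
import math

def sine_of_angle(pairs):
    # Pipeline component: consume (image, angle) tuples, pass the image
    # through unchanged, and emit the sine of the steering angle.
    for image, angle in pairs:
        yield image, math.sin(angle)
```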

The function contains a simple for loop that iterates over tuples of input (image) and output (steering angle). It uses the yield keyword to emit one tuple at a time. The output consists of the same image as the input and the sine of the angle.



The generators are “linked” together at runtime to form a chain. A total of 19 components are available, spread over four modules: file-related, image-related, numpy-related, and miscellaneous. One special component scans a specified directory for image files with specially formatted file names and outputs them. This component always sits at the head of the chain.
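The linking step can be sketched as follows (a simplified illustration under assumed names, not Burro's actual wiring):

```python
def build_pipeline(source, *components):
    # Each component is a generator function that consumes the previous
    # stage's output, so composing them forms a lazy processing chain.
    stream = source
    for component in components:
        stream = component(stream)
    return stream

def double_angle(pairs):
    # Toy component: leave the image untouched, double the angle.
    for image, angle in pairs:
        yield image, angle * 2

pipeline = build_pipeline(iter([("img", 0.5)]), double_angle, double_angle)
# Consuming the chain yields ("img", 2.0)
```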

Training Pipeline

Using the components outlined above, a training pipeline is constructed. The figure below highlights the sequence of components in the pipeline included in Burro:

Training Pipeline in Burro

The function of each component is described in detail below:

  • filename_generator enumerates files present in the defined directory
  • nth_select either includes or excludes every nth item for cross-validation purposes
  • equalize_probs selectively skips examples based on the frequency of appearance, so as to equalize occurrences of different value ranges
  • image_generator loads a PIL image corresponding to the filename passed
  • image_mirror, image_resize and image_crop apply mirror, resize, and crop transformations to the image
  • array_generator converts the image to a numpy array
  • center_normalize zero-centers and normalizes image data
  • brightness_shifter randomly adjusts image brightness
  • gaussian_noise adds a randomized level of gaussian noise to the image
  • Finally, category_generator converts the scalar output value to a one-hot encoding, if specified
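
The last step, converting a scalar steering value to a one-hot encoding, could look like this (a sketch with assumed bin count and value range, not Burro's actual implementation):

```python
import numpy as np

def to_one_hot(value, bins=15, low=-1.0, high=1.0):
    # Bucket a scalar in [low, high] into `bins` categories and
    # return the corresponding one-hot vector.
    index = int((value - low) / (high - low) * (bins - 1) + 0.5)
    index = min(max(index, 0), bins - 1)  # clamp out-of-range values
    encoding = np.zeros(bins)
    encoding[index] = 1.0
    return encoding
```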

There are two pipelines available out of the box: the first trains categorical networks and the second trains regression networks. You can choose between them using a command line argument, as we will show in a bit.

Training a Model

The training pipeline is part of the training branch of Burro. The easiest way to get started is to download the install-trainer.sh file to your computer and run it. When run, it creates a new folder for the training infrastructure and sets up all necessary files. Burro does not install any system-wide packages, to avoid messing with your system, so you should have the necessary system-wide libraries already installed. In particular, you should have TensorFlow installed for your specific hardware (CPU or GPU).

Once you have the training variant of Burro installed, you can train a regression model like so:
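A command along these lines would start a regression run (the script name, flag, and dataset path here are assumptions; consult the Burro repository for the actual invocation):

```shell
# Assumed invocation -- check the repository's README for the real one
python train.py --type regression path/to/dataset
```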

or alternatively a categorical model like so:
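Correspondingly (again with assumed script name and flag):

```shell
# Assumed invocation -- check the repository's README for the real one
python train.py --type categorical path/to/dataset
```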

The training will start and Keras messages will periodically inform you of the progress.

Conclusion

This post presented a brief outline of the autonomous driving model training functionality in Burro. Using the method presented you can fit your own models on existing datasets, or even on your own datasets that you record while driving your autonomous vehicle around.

If you haven’t built an autonomous vehicle yet, now is the best time to do so. Backyard Robotics contains posts and tutorials to get you started with ready-made models and software.

If you do build your own vehicle or driving model, make sure to share your experience in the comments below!
