Here's a quick update on the state of the autonomous car platform and the TensorFlow/Keras models I'm building and testing in various environments. I've been working on an indoor model to drive around the dining table in our home. The aim is to have no markings at all, so that the car relies only on its environment to navigate. It is a simple oval path, but it is visually cluttered and quite tight, so it poses an interesting challenge.
I got a convolutional neural network to drive around fairly successfully. Here's a video showing a couple of successful laps, and some where the car gets stuck:
Autonomous car platform
The driving platform that the car uses is Burro. Burro is a fork of Donkey, and lately it has been following its own path. A Raspberry Pi 2 does all the heavy lifting to evaluate the neural network model, and is able to reach around 10-12 fps without overclocking. I'd like to try overclocking as well, but I have a couple more ideas I want to try for improving fps before I go on with it.
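To compare fps before and after tweaks like overclocking, a small benchmark helper is handy. This is a minimal sketch, not code from Burro; `infer` stands in for whatever callable evaluates the model on one camera frame:

```python
import time

def measure_fps(infer, frames, warmup=5):
    """Average frames per second of an inference callable.

    `infer` is any function evaluating one frame; `frames` is an
    iterable of frames. A few warm-up calls are made first so that
    one-time setup cost does not skew the measurement.
    """
    frames = list(frames)
    for f in frames[:warmup]:
        infer(f)
    start = time.perf_counter()
    for f in frames:
        infer(f)
    elapsed = time.perf_counter() - start
    return len(frames) / elapsed

# Example with a dummy "model" that takes ~5 ms per frame:
fps = measure_fps(lambda frame: time.sleep(0.005), range(50))
print(f"{fps:.1f} fps")
```

Timing a fixed batch of recorded frames, rather than the live camera feed, keeps the comparison repeatable between configurations.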
Neural network model
I trained the model using Keras/TensorFlow on a dataset of around 18000 images of the indoor scene. There are no sensors on board other than vision using a wide-angle camera. No ultrasound/LIDAR/TOF here. I'm planning to write up an extensive post outlining the process of data collection (including strategies for driving), pre-processing and configuring training parameters.
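As a rough illustration of the kind of network involved, here is a minimal Keras sketch: a small convolutional stack mapping a camera frame to a single steering value. The architecture, input size and loss are assumptions for illustration, not the actual model from this project:

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

def build_model(input_shape=(120, 160, 3)):
    """A small steering-regression CNN (illustrative, not the real one)."""
    model = keras.Sequential([
        layers.Input(shape=input_shape),
        layers.Rescaling(1.0 / 255),                      # normalize pixels
        layers.Conv2D(24, 5, strides=2, activation="relu"),
        layers.Conv2D(32, 5, strides=2, activation="relu"),
        layers.Conv2D(64, 3, strides=2, activation="relu"),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.2),
        layers.Dense(1, activation="tanh"),               # steering in [-1, 1]
    ])
    model.compile(optimizer="adam", loss="mse")
    return model
```

Keeping the network this small is what makes evaluation feasible on a Raspberry Pi class device.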
For efficient training, I developed a pipeline of Python generators for preprocessing and transforming images. The generators include transformations such as rotations, mirroring, color adjustments and other operations.
An RC car does autonomous laps on a cluttered indoor track using a convolutional neural network trained on 18000 images.
Do you have any ideas or comments on improving the autonomous RC car? Share your experience in the comments below.