Build an Autonomous Car with RPi, NAVIO2 and Tensorflow/Keras, Part II: The Software

In the previous post I outlined the hardware build of a simple autonomous car using a small RC chassis with a few modifications. The car drives using a convolutional neural network and monocular vision. If you followed the directions in the last post, you should be able to customize your RC car with a simple wooden or plastic platform, a Raspberry Pi, a camera and a PWM HAT [1] that can control a motor and a servo. For my build I also added an RC receiver, since my NAVIO2 HAT supports decoding of SBUS and PPM signals out of the box. This is optional, however; there are many ways to control your car, depending on what you have available (WiFi, for instance).



Even though the hardware is essential to a functioning autonomous car, at its heart it is the software and the algorithms that enable autonomy. In this post we will focus on building a simple software stack on the Raspberry Pi that can control the steering of an autonomous vehicle using a convolutional neural network (CNN).

Background and Aims

Let us elaborate a bit on the background and our goals. As mentioned earlier, the aim of this project is to build a car that can navigate itself around a course using vision, and vision alone. A single convolutional neural network (CNN) performs all decision making regarding steering and throttle: the network takes an image as input and outputs values corresponding to steering and throttle [2]. This type of decision making is known in machine learning as an “end-to-end” approach.
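Conceptually, the end-to-end controller is a single learned function from raw pixels to actuation values. As a purely illustrative stand-in for the CNN (a plain linear map, not the actual Donkey model), the idea looks like this:

```python
import numpy as np

def end_to_end_policy(image, weights):
    """Illustrative stand-in for the CNN: flatten the image, apply a
    linear map, and squash to (steering, throttle) in [-1, 1]."""
    x = image.astype(np.float32).ravel() / 255.0  # normalize pixels
    steering, throttle = np.tanh(weights @ x)     # two output units
    return float(steering), float(throttle)

# An "untrained" network with zero weights outputs neutral commands
frame = np.zeros((120, 160, 3), dtype=np.uint8)  # a camera frame
w = np.zeros((2, frame.size))
```

The real model replaces the linear map with several convolutional and dense layers, but the interface stays the same: image in, steering and throttle out.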

End-to-end model fitting is just one of a number of different approaches for autonomous vehicles. Another popular one is the so-called “robotics” approach, where data from a suite of different sensors (vision included) are “fused” algorithmically to produce a map of the vehicle's surroundings and localize the vehicle within it. Decision making then takes place as a separate step, sometimes consisting of hand-coded conditions and actions.

This blog post is not the place to debate the merits of one approach versus the other; the truth may lie in a compositional approach, for all we know [3]. Taking into account, however, the simplicity of this project, as well as the recent leaps in end-to-end neural-net approaches for autonomous cars, I feel it's worth a try. And so did quite a few people, including the Donkey team, whose default CNN model and pieces of code we'll be using in this build.

Autonomous car software Installation

For up-to-date installation instructions you are invited to visit the Burro repo on Github. The guide below is a snapshot and may become obsolete in the future.

This car build uses the Burro autonomous car software, freely available on Github. Burro is an adaptation of Donkey for the NAVIO2 HAT. While it borrows a lot of features from Donkey, Burro has a number of significant differences:

  • There is no separate server instance; an onboard WebSocket server serves all telemetry.
  • Car control is through RC (SBUS) or a gamepad (Logitech F710).
  • It works with either the NAVIO2 HAT or the Adafruit Motor HAT.

Currently Burro requires a Raspberry Pi 2 or 3 board with the NAVIO2 HAT. Before proceeding with the installation of Burro, you will need a working EMLID image installation; we strongly recommend getting the latest version. Please make sure you follow the instructions in the relevant EMLID docs.

Once this is complete, ssh into your Raspberry Pi, which by default should be reachable at navio.local if you are using the EMLID image.

First, wget the Burro install script, then change its permissions and run it.
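In other words, something along these lines (the raw URL below is a guess based on the repo name; check the Burro README for the actual link):

```shell
# Fetch the install script (URL is an assumption -- see the Burro repo)
wget https://raw.githubusercontent.com/yconst/burro/master/install-burro.sh

# Make it executable and run it
chmod +x install-burro.sh
./install-burro.sh
```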

This will install all required libraries, create a virtual environment, clone the Burro repo and set it up, and create symlinks for you. After successful completion, you end up with a fully working installation.


A warning: some steps of this script can take a significant amount of time. The numpy install step, for instance, takes around 20 minutes, since numpy unfortunately needs to be compiled from source due to a version incompatibility with the apt-get packages. Total installation time should be around 30 minutes. To ensure that your installation goes smoothly, run your Pi from either a power supply that can deliver at least 5V/2A, or a fully charged power bank or LiPo of sufficient capacity.

Configuring

I am using the software with a Turnigy Mini Trooper 1/16 RC car. If you have the same car, you only need to change your RC channels if necessary. The RC input channels are as follows: 0 – Yaw (i.e. steering), 2 – Throttle, 4 – Arm. The Arm, Yaw and Throttle channels are configurable via config.py. Arming the RC controller triggers a neutral-point calibration, so you only need to make sure that your sticks are centered before arming the car.
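The neutral-point calibration can be pictured as follows (a sketch of the assumed behavior, not Burro's actual code): the stick reading at the moment of arming becomes the zero point for subsequent readings.

```python
class NeutralCalibrator:
    """Capture the stick position at arm time and treat it as zero."""

    def __init__(self):
        self.neutral = 0.0

    def arm(self, reading):
        # Called when the arm switch is flipped; sticks should be centered
        self.neutral = reading

    def corrected(self, reading):
        # Subsequent readings are offset by the captured neutral point
        return reading - self.neutral

cal = NeutralCalibrator()
cal.arm(0.02)               # slight stick offset at arm time
print(cal.corrected(0.02))  # -> 0.0
```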

By default Burro outputs throttle on channel 2 of the NAVIO2 rail, and steering on channel 0. You may wish to change this.

You may also wish to configure the throttle threshold value above which images are recorded.
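Putting the above together, the relevant section of config.py might look roughly like this. The setting names and the threshold value shown are illustrative assumptions; consult the actual file for the real ones.

```python
# RC input channels (SBUS/PPM as decoded by the NAVIO2)
YAW_INPUT_CHANNEL = 0       # steering stick
THROTTLE_INPUT_CHANNEL = 2  # throttle stick
ARM_INPUT_CHANNEL = 4       # arming switch

# PWM output channels on the NAVIO2 servo rail
STEERING_OUTPUT_CHANNEL = 0
THROTTLE_OUTPUT_CHANNEL = 2

# Record training images only while normalized throttle exceeds this
RECORD_THROTTLE_THRESHOLD = 0.1
```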

See the Readme in the Burro repo for more instructions on how to edit your configuration.

Testing

After completing installation and configuration, you should be able to drive your autonomous car around either using the manual controls, or using a mix of CNN for steering and manual controls for throttle. Automatic throttle control is not yet available, but it will be in a future version.

To start a Burro instance, first ssh into your RPi, if you haven't done so already:
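That step can look like this (navio.local is the default hostname on the EMLID image; substitute your Pi's address and user if different):

```shell
# 'navio.local' is the default hostname on the EMLID image
ssh pi@navio.local
```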

Then type the following from the directory where your install-burro.sh script is located:
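A plausible invocation, assuming the directory layout created by the installer (the entry-point and virtualenv names are assumptions; the Burro README has the authoritative commands):

```shell
cd burro                 # directory created by the install script
source env/bin/activate  # virtualenv set up during installation
python burro/drive.py    # start the Burro driving loop
```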

Drive it!

Point your browser to your RPi's address (http://navio.local by default for the EMLID image) and the telemetry interface will come up. Choose your driving mode based on your controller; the default uses the F710 gamepad for steering and throttle. There are options for RC, gamepad, and mixed RC+CNN and gamepad+CNN driving, where the CNN controls the steering and you control the throttle.

Here’s a video from Burro running on an RC autonomous car in Mixed mode:

Autonomous Car – Next Steps

I like to think of the Burro project as part of the lively Donkey community, since it was spun out of Donkey after all. As such, it is worth taking a look at the many resources created by the Donkey developers.

If you’re interested in the development of autonomous small scale vehicles, you may wish to be part of the Slack community of Donkey, by requesting an invite.

Conclusion

This is the second post in a series discussing the software aspects of a small scale autonomous vehicle, using vision alone and end-to-end machine learning for control and navigation. The post went over installation of the Burro autonomous car platform and basic software configuration.

Autonomous vehicles are a very young and promising field of AI, and we will certainly be seeing very interesting competition in the near future.

Did this post help you build an RC autonomous car? Share your experience in the comments below!


  1. Like this one, for instance, used by Donkey
  2. Values can in fact be either continuous or categorical
  3. James McClelland has remarked that science is best served by pursuing integrated accounts that span multiple levels of analysis simultaneously.
8 replies on “Build an Autonomous Car with RPi, NAVIO2 and Tensorflow/Keras, Part II: The Software”
  1. Hi, many thanks for your fork with the NAVIO2. I have done initial trials with my 1:10 car and also my Mini-Z.

    I ran into a problem, however: the servo turns in the wrong direction on one of my cars (the controller thinks it turns left but the wheels go right, and vice versa), and I can't figure out how to change this in the code. Do you have any suggestions? I thought one could switch the max and min values in drive.py, but that had no effect…

    Thanks in advance.

    1. Hi,

      Apparently some servos are either mounted opposite with respect to the steering mechanism, or have different endpoints. Currently the easiest way to fix this is to edit the file drive.py inside the burro/ directory: change line 111 to pwm_val = 1.5 - value * 0.5. Make sure your tabs are OK, and that should do it.
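For reference, the mapping being edited can be sketched like this (illustrative; the real drive.py may differ in names and constants): a normalized steering value in [-1, 1] maps to a servo pulse width around a 1.5 ms neutral, and flipping the sign of the scale term reverses the servo direction.

```python
def steering_to_pwm(value, reverse=False):
    """Map steering in [-1, 1] to a pulse width in ms (1.0 to 2.0,
    neutral 1.5). reverse=True corresponds to the one-line fix above."""
    scale = -0.5 if reverse else 0.5
    return 1.5 + value * scale

print(steering_to_pwm(1.0))        # -> 2.0 (full stick, stock mapping)
print(steering_to_pwm(1.0, True))  # -> 1.0 (same stick, reversed servo)
```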

    2. There is also the chance that your RC controller may need the yaw channel to be reversed. I’m planning to provide configuration options for all these, btw.

      Oh, and if you have any videos with your setup, do share!

      1. Thanks, that solved it of course. Just first steps today; I had some ESC issues with my car. I am not there yet, as I am getting some angle errors on boot, which is why I am reinstalling the image from scratch.

        So while I can drive it and teach it, I have yet to get some decent autonomous drives on my circuit, which is tape on the floor.

        There is a picture of the car at https://ibb.co/bu3qnk; it is a custom rebuilt Mini-Z buggy with a separate ESC and servo card. Although it looks very crammed (it is), it drives very well and is 4WD. I am getting some jitter on the servo; something strange with the NAVIO2 PWM.

        Is it possible, I assume, to load several instances of cars, i.e. change from the default setup? This way one could teach the car different things in different sessions. Where would one change the vehicle in the code?

        I am rather a novice at this, as you can notice! Compared to the Donkey car, how much benefit does using the IMU of the NAVIO2 bring, in your experience? I assume it is great for countering drift, but have you seen any other remarkable benefits?

        1. Hey, looks nice. One suggestion I have: get the wide-angle camera. It's a $14-15 extra cost, but the default camera simply has too narrow a field of view to capture nearby features (lanes etc.). If you use the default camera, you could try raising its position and angle.
          Not sure about your second point…
          Re the gyro drift correction: this is just an adjustment applied post-steering, mainly to keep the car from slowly drifting due to miscalibration. I should add an integral (I) component as well to make this work better, but even now it works OK, in my opinion. It doesn't affect training or inference of the neural net, though.
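The drift correction described in the reply above is essentially a proportional controller on heading error, with the integral term mentioned as future work. A generic PI sketch (not Burro's actual code; gains are placeholders):

```python
class DriftCorrector:
    """PI correction on gyro heading error, applied after the steering
    command. The integral (I) term is the suggested addition."""

    def __init__(self, kp=0.5, ki=0.05):
        self.kp = kp          # proportional gain
        self.ki = ki          # integral gain
        self.integral = 0.0   # accumulated heading error

    def correct(self, steering, heading_error, dt):
        self.integral += heading_error * dt
        # Steer against the accumulated and instantaneous drift
        return steering - (self.kp * heading_error + self.ki * self.integral)
```

The P term counters instantaneous drift; the I term removes the steady-state offset caused by a miscalibrated neutral point.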
