Lab11 - Localization on the Real Robot

Objective

In this lab, Bayes-filter localization is performed on the actual robot car. Because the robot's motion is too noisy for the motion model to add useful information, we skip the prediction step and rely only on the update step, driven by full 360-degree scans from the ToF sensor.
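As a minimal sketch of what "update step only" means (illustrative names, not the exact skeleton API): the belief grid is simply re-weighted by the measurement likelihood and re-normalized, with no motion-prediction pass in between.

```python
import numpy as np

def update_step(bel, sensor_likelihood):
    """Update-only Bayes filter step.

    bel               -- prior belief over the pose grid, shape (nx, ny, ntheta)
    sensor_likelihood -- p(z | pose) evaluated at every grid cell, same shape
    """
    bel = bel * sensor_likelihood   # weight the prior by the measurement model
    return bel / np.sum(bel)        # normalize so the belief sums to 1
```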

An optimized version of the simulation localization code was provided for this lab.

Keywords

Localization, Bayes Filter

Lab Tasks

All the tasks in this lab build on the simulation from the previous lab (Lab 10).

Task 1 - Optimized Simulation Plot

Running the optimized simulation code produces the following localization plot:

opt_sim_plot

where the red line is the odometry measurement, the green line is the ground truth, and the blue line is the belief (the Bayes-filter prediction). The belief clearly matches the ground truth closely.
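For completeness, a minimal sketch of how these trajectories are drawn with the simulator's plotting tool; `cmdr` and the `plot_*` method names follow my recollection of the Lab 10 tools and should be treated as assumptions:

```python
# Each plot_* call appends one point to the corresponding trajectory;
# calling them once per simulation step traces the three curves above.
cmdr.plot_odom(odom_x, odom_y)  # red: odometry
cmdr.plot_gt(gt_x, gt_y)        # green: ground truth
cmdr.plot_bel(bel_x, bel_y)     # blue: Bayes-filter belief
```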

Task 2 - Observation Loop Implementation

A Jupyter notebook with skeleton localization code was provided to process the robot's observation data. The code assumes a uniform prior over the robot's pose.
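Initializing that prior is a one-liner; a sketch assuming a Lab 10-style grid of 12 x 9 x 18 cells (the exact dimensions are an assumption here):

```python
import numpy as np

NX, NY, NTHETA = 12, 9, 18  # assumed (x, y, theta) grid dimensions
bel = np.full((NX, NY, NTHETA), 1.0 / (NX * NY * NTHETA))  # uniform prior
```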

Now the car must perform a full 360-degree scan with the ToF sensor, transmit the data over Bluetooth, and pass it through the Bayes filter defined in Lab 10 to estimate the robot's current pose in the map.
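The notebook-side flow then looks roughly like this; the method names follow the provided localization skeleton as I recall it, so treat them as assumptions:

```python
# One localization pass: reset the prior, scan, and run the update step
loc.init_grid_beliefs()      # reset the belief to the uniform prior
loc.get_observation_data()   # triggers perform_observation_loop() on the car
loc.update_step()            # Bayes update from the 18 ToF readings
loc.plot_update_step_data(plot_data=True)  # plot belief vs. ground truth
```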

The observation loop was implemented as follows:

perform_observation_loop_code

This function reuses my code from the previous labs: the PID control parameters are initialized, the 360-degree rotation is executed, and the data transmission is handled here.
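The screenshot above has the actual implementation; below is only a rough sketch of its structure. The BLE command names, PID gains, and the notification handler are placeholders standing in for my code from the earlier labs:

```python
import time
import numpy as np

# Method of the provided RealRobot class; `ble` and `CMD` come from the
# notebook's BLE setup. All command names here are placeholders.
def perform_observation_loop(self, rot_vel=120):
    sensor_ranges, sensor_bearings = [], []

    def handler(uuid, byte_array):
        # Each notification carries one "angle|distance_mm" string
        angle, dist = ble.bytearray_to_string(byte_array).split("|")
        sensor_bearings.append(float(angle))
        sensor_ranges.append(float(dist))

    ble.start_notify(ble.uuid['RX_STRING'], handler)
    ble.send_command(CMD.SET_PID_GAINS, "4.0|0.5|0.2")  # rotation PID setup
    ble.send_command(CMD.DO_MAPPING, "")  # spin in 20-degree steps, 18 stops
    time.sleep(30)                        # wait for the scan to finish
    ble.send_command(CMD.SEND_DATA, "")   # stream the logged readings back
    time.sleep(5)                         # wait for all notifications

    # Column vectors, ranges converted from mm to meters as the filter expects
    return (np.array(sensor_ranges)[np.newaxis].T / 1000.0,
            np.array(sensor_bearings)[np.newaxis].T)
```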

Because the robot car records a distance/angle pair after each rotation step, I re-ordered the collected data so that the recordings start at 0 degrees and end at 340 degrees (the rotation step is 20 degrees). This matches the required collection order (data recorded from 0 degrees to 340 degrees) and does not alter the raw data.
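Concretely, that re-ordering is just a one-step circular shift; a tiny illustration with invented distances:

```python
import numpy as np

# If the first reading is logged after the first 20-degree turn, the raw
# bearings run 20, 40, ..., 340, 0; one circular shift restores 0, ..., 340.
raw_bearings = np.append(np.arange(20, 360, 20), 0)  # 20, 40, ..., 340, 0
raw_ranges = np.arange(18)                           # placeholder distances

bearings = np.roll(raw_bearings, 1)  # 0, 20, ..., 340
ranges = np.roll(raw_ranges, 1)      # keep each distance with its angle
```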

Task 3 - Test the Observation Loop

Four marked poses in the lab arena need to be tested, each given as (x, y, angle):

  • (-3 ft, -2 ft, 0 deg)
  • (0 ft, 3 ft, 0 deg)
  • (5 ft, -3 ft, 0 deg)
  • (5 ft, 3 ft, 0 deg)
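
Since the poses are marked in feet but the results are reported in meters, the conversion below (1 ft = 0.3048 m) gives the metric ground truths:

```python
FT_TO_M = 0.3048  # exact foot-to-meter conversion

poses_ft = [(-3, -2, 0), (0, 3, 0), (5, -3, 0), (5, 3, 0)]
poses_m = [(round(x * FT_TO_M, 3), round(y * FT_TO_M, 3), th)
           for x, y, th in poses_ft]
# [(-0.914, -0.61, 0), (0.0, 0.914, 0), (1.524, -0.914, 0), (1.524, 0.914, 0)]
```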

After running the code at each marked pose, we obtain the predicted pose and the corresponding ground truth.

Each result group below is displayed in units of (m, m, deg):

  • (-3 ft, -2 ft, 0 deg)

p3

p3_plot

  • (0 ft, 3 ft, 0 deg)

p1

p1_plot

  • (5 ft, -3 ft, 0 deg)

p4

p4_plot

  • (5 ft, 3 ft, 0 deg)

p2

p2_plot

Task 4 - Results Discussion

As for the position errors, the (5 ft, -3 ft, 0 deg) and (5 ft, 3 ft, 0 deg) points both show errors: the former is off by 1 ft in both x and y, while the latter is off by 1 ft in x. Given the noise in the ToF readings and in the rotation control, these errors are acceptable. To improve the performance, increasing the grid resolution and fine-tuning the rotation control could be considered.
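A quick check of those errors in metric units:

```python
import numpy as np

FT_TO_M = 0.3048

# (5 ft, -3 ft) pose: off by 1 ft in both x and y
print(np.hypot(1, 1) * FT_TO_M)  # ~0.431 m total position error
# (5 ft, 3 ft) pose: off by 1 ft in x only
print(1 * FT_TO_M)               # ~0.305 m
```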

As for the angle errors, the first three points all show a -10 degree error: the ground-truth angle is 0 degrees and the predicted angle is -10 degrees. Since the theta-axis resolution is 20 degrees and 0 degrees falls on a cell boundary, a 10-degree error, i.e. the distance to the nearest cell center, is the expected best case. The last test point, however, shows a larger 30-degree theta error, probably caused by a subtle error in the orientation control that then propagated into the theta prediction.
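
A small sketch of why the 10-degree floor arises, assuming the theta grid spans [-180, 180) in 18 cells of 20 degrees with centers at odd multiples of 10:

```python
import numpy as np

theta_centers = np.arange(-170, 180, 20)  # -170, -150, ..., 170

# A true heading of 0 deg sits exactly on the boundary between the cells
# centered at -10 and +10, so a 10-degree error is the best achievable.
nearest = theta_centers[np.argmin(np.abs(theta_centers - 0))]
print(nearest)  # -10 (ties resolve to the first/lower center)
```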