Vector Field Histogram (VFH) Player/Stage

It’s been a long time since I blogged about anything; time to change that. I’ll force myself (glued to the desk) to write every week about my progress on my Master’s thesis. Since I’m about 6 weeks behind, I’ll cover two blog posts: the first about obstacle avoidance and the VFH algorithm, and a second about localization with a particle filter.

Avoiding Obstacles

A small recap of my Master’s thesis: I’m trying to improve the localization of a robot using DASH7 tags. The idea is that I can localize a robot using a laser and a map of the environment, but that localization is somewhat inaccurate. Using the DASH7 tags I can tell the robot more about its environment and thus improve the localization.

But before all that can happen, the robot first needs to drive around autonomously without hitting any static objects (closets, chairs, tables, etc.) or dynamic objects (humans, cats, dogs, etc.). This is the first part of my research: we already know that we are going to use the VFH algorithm, so an important next step is to configure it.

Heading over to the Player/Stage manual, I found out that the VFH algorithm is implemented and lets you configure almost every calculation, resulting in a staggering 25+ different parameters. Holy cow.

Setting up the different tests

I decided to test two different parts. First, the robot should drive to a specific location. When using the VFH algorithm we get a new interface (planner) that allows sending the robot to a specific location relative to its starting position (which is always 0,0).

The first test should drive the robot to a specific location (1,1). The second test will see if the robot can drive sequentially to different coordinates, making it drive in a square.

The next three tests deal with obstacle avoidance. All the tests are quite easy: the robot drives 5 m straight, I place different objects in front of it, and the robot should avoid them. If the robot drives into the objects or fails to avoid them in a reasonable time, the parameters are changed and tested again.

Firstly, we test whether the robot can avoid typical domestic objects. I tested a flowerpot, a closet, a table with chairs, loose chairs and anything else I could find. Secondly, we test whether the robot can pass through a door (typically 80 cm here in Belgium). Finally, we also test dynamic objects: I walk towards the robot and the robot should avoid me.

Results and Parameters changed

I’m only going to talk about my findings and what parameters I changed to avoid a typical obstacle. If I don’t mention a parameter, it means I never changed it and left it at its default value.

max_speed

I first changed this to 1.2 since the Pioneer robot can handle around 1.2 m/s. Boy, I couldn’t have been more wrong: I got a lot of warnings in the console about exceeding speed and velocity limits. After lowering the value drastically I ended up with a safe 0.4 m/s, which is twice the default. In practice the robot drives blazingly fast around obstacles, so I’m happy.

safety_dist

The safety distance determines what distance the robot must keep from an object at all times (regardless of the free_space_cutoff). It comes in two versions: one for standing still and one for moving at 1 m/s. This parameter greatly influences whether the robot will pass between two objects or not. If I leave both at 0.10 (the default), the robot can pass through a door opening and under a chair. If I increase them to 0.20 it won’t go under the chair and avoids it, but it can no longer pass through a door.

free_space_cutoff

This parameter also comes in two versions: standing still and moving at 1 m/s. It influences when the robot starts avoiding an object. A high value (2000000.0) will drive the robot nearly against the obstacle before avoiding it, while a lower value makes the robot start the avoidance early. The parameter doesn’t have a unit, so it’s a bit of guessing; I picked the following values based on my test results: free_space_cutoff_1ms 100000 & free_space_cutoff_0ms 200000.

min_turnrate & max_turnrate

I changed them and then got a lot of warnings about the right and left wheel velocities. Not that good, so I left them at their defaults.

distance_epsilon & angle_epsilon

When the robot drives to a goal, the distance and angle epsilon determine what error is acceptable. If I increase the parameters I don’t really care whether the robot arrives exactly at the goal; if I lower both values I want a precise result (based on the slipping odometry). The robot will keep driving around and turning until it arrives at its goal, so with a higher precision it can take a while before the robot thinks it’s at the goal. I increased the accuracy: distance_epsilon 0.2 & angle_epsilon 5.
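For reference, the values I ended up with would sit in the vfh driver section of the Player configuration file roughly like this. The provides/requires lines are placeholders for whatever your own setup uses; the parameter names follow the Player VFH driver documentation:

```
driver
(
  name "vfh"
  provides ["position2d:1"]
  requires ["position2d:0" "laser:0"]

  max_speed 0.4
  safety_dist_0ms 0.1       # 0.2 avoids the table with chairs but blocks doors
  safety_dist_1ms 0.1
  free_space_cutoff_0ms 200000.0
  free_space_cutoff_1ms 100000.0
  distance_epsilon 0.2
  angle_epsilon 5
)
```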

So all the tests succeeded and the robot could avoid all the obstacles. A big problem remains, as mentioned under the safety_dist parameter: the robot can avoid a table with chairs (using 0.2 for safety_dist) but then cannot pass through a door. If I change the parameter so the robot can go through a door (using 0.1 for safety_dist), the robot will also attempt to go through the table with chairs and gets stuck.

I still have to test whether any other parameters can solve this problem. I’m thinking about obs_cutoff, which doesn’t have an explanation on the VFH page of the Player project.

The last test, a person walking towards the robot, also works quite well. The VFH algorithm doesn’t take the velocity of objects into account, so a person moving towards the robot is quite difficult. The test showed that the person also needs to take a small step to one side (right or left), and the robot will do the same. If you keep walking straight, you either step on the robot or the robot drives over your toes.

Conclusion

The robot can avoid static and dynamic objects. If I find a solution for my door-table problem I’ll post an update (it is quite important for a domestic environment).

Particle Filter Implementation in Player/Stage

Player/Stage has a particle filter built in, called the Advanced Monte-Carlo Localization (AMCL) driver. I’ve done some tests with it but I’m not really happy with the results:

• A starting position and orientation have to be given (not strictly a requirement, but otherwise the algorithm tends to fail)
• Particles can move through walls or be placed on walls
• The initial particle distribution does not fill the complete map

This makes the algorithm quite useless for my purposes. Concerning my Master’s thesis I have two options: adapt the AMCL driver so it works as expected, or create my own. For now I decided to write my particle filter from scratch; later on I’ll decide whether to adapt the AMCL driver or keep my own. The code is based on the online lessons from Udacity (by Sebastian Thrun).

The setup of the particle filter

The particle filter setup, including 6 landmarks

So we have a small area, the robot and 6 landmarks. As you can see, the Field of View (FOV) is big enough that I always see at least one landmark.

Bits and pieces

So we have everything set up; now the code. I started from the original Python code I wrote for the Udacity course, and just had to port it from Python to C++ in Player/Stage style.

Obstacle Avoidance & Random Driving

First I created some code so the robot could drive around randomly and avoid the walls in the setup. I decided to keep it very simple and basic. The wander code looks as follows:
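A minimal sketch of that wander logic; here the random pick is isolated in a pure function, and the actual Player call (Position2dProxy::SetSpeed) only appears in a comment. The 30-degree turn limit is an illustrative assumption:

```cpp
#include <cmath>
#include <cstdlib>

// A wander command: forward speed plus turn rate, as Player expects them.
struct WanderCmd { double speed; double turnrate; };

// Pick a random forward speed in [0, 1] m/s and a random turn rate in
// [-30, 30] degrees, converted to radians (Player works in rad/s).
WanderCmd pickWanderCmd() {
  WanderCmd cmd;
  cmd.speed = static_cast<double>(std::rand()) / RAND_MAX;        // 0..1 m/s
  double deg = (static_cast<double>(std::rand()) / RAND_MAX) * 60.0 - 30.0;
  cmd.turnrate = deg * M_PI / 180.0;                              // to rad/s
  return cmd;
}

// In the main loop you would then send the command to the robot, e.g.:
//   WanderCmd cmd = pickWanderCmd();
//   pp.SetSpeed(cmd.speed, cmd.turnrate);   // pp is a Position2dProxy
```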

So we create random variables and check that they are within the right ranges. Speed in Player/Stage must always be between 0 and 1, and the turn rate should be in radians. I used the following code to avoid obstacles (walls):
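The avoidance logic can be sketched like this. The threshold, the turn rate and the left/right split down the middle of the scan are illustrative assumptions, not my exact values; the ranges vector would be filled from the RangerProxy readings:

```cpp
#include <algorithm>
#include <vector>

// Decide a turn rate from the range readings: if the closest reading on
// one side is below the threshold, turn away from that side.
double avoidTurnrate(const std::vector<double>& ranges,
                     double threshold = 0.5, double turn = 0.6) {
  if (ranges.size() < 2) return 0.0;   // sensors not (fully) initialized yet
  size_t mid = ranges.size() / 2;
  double minRight = *std::min_element(ranges.begin(), ranges.begin() + mid);
  double minLeft  = *std::min_element(ranges.begin() + mid, ranges.end());
  if (minRight < threshold && minRight <= minLeft) return  turn;  // turn left
  if (minLeft  < threshold)                        return -turn;  // turn right
  return 0.0;                                                     // path clear
}
```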

Again very basic; it works, but that’s about it. Note the check I made to see if we got any range sensors (rp.GetRangeCount()). For some reason Player can crash during the first iteration of your while/for loop because the range sensors are not completely initialized yet.

I decided to keep track of my particles in a struct (pf_particle_t); it allows me to easily add extra variables if needed. My particles contain the following data: (x, y, yaw, weight). Next I should initialize a number of particles with random locations and spread them out over my map.

I used a vector that holds all my particles; it’s really easy to use, especially for iterating. In the initialization I multiply the random value by 15 and then subtract 7.5: my map has a dimension of 15 and the robot is placed at the origin, so this makes sure my particles are generated inside the map and not outside it. Next I should also grab all the locations of the fiducials (landmarks). You can do this hard-coded (look at your world file) or you can request them in your code:
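A sketch of the struct and that initialization; the particle count and the use of rand() are illustrative:

```cpp
#include <cmath>
#include <cstdlib>
#include <vector>

// One particle: a pose hypothesis (x, y, yaw) plus an importance weight.
struct pf_particle_t {
  double x, y, yaw, weight;
};

// Spread n particles uniformly over a 15 x 15 map centred on the origin:
// rand in [0,1] * 15 - 7.5 gives coordinates in [-7.5, 7.5].
std::vector<pf_particle_t> initParticles(int n) {
  std::vector<pf_particle_t> particles;
  particles.reserve(n);
  for (int i = 0; i < n; ++i) {
    pf_particle_t p;
    p.x = (static_cast<double>(std::rand()) / RAND_MAX) * 15.0 - 7.5;
    p.y = (static_cast<double>(std::rand()) / RAND_MAX) * 15.0 - 7.5;
    p.yaw = (static_cast<double>(std::rand()) / RAND_MAX) * 2.0 * M_PI;
    p.weight = 1.0;                 // all particles start equally likely
    particles.push_back(p);
  }
  return particles;
}
```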

If you simulate in Stage you can add the SimulationProxy and request various properties of the objects in your world file. A little trick: you can also add the Graphics3dProxy to draw your particles. This code should be in your main loop (while/for) and draws all the particles; again I used a vector to hold all the points to draw:

The Particle Filter code

Good, now the particle filter code. The idea is to measure the distance from the robot to the detected landmarks, then measure the distance from the particles to those landmarks and assign each particle a weight based on how well the measurements match. Finally we re-sample the particles so that particles with lower weights are less likely to reappear in the new particle set.

Robot measurements

First we request the position of the robot (using the Position2dProxy; note: in the configuration the robot should be set to odom for localization, not gps!), then we get the location of the fiducials and calculate the distance between the two coordinates.

Apply motion and measurement model

Now we apply the same motion as the robot to the particles. First we keep track of our previous and current location and find the distance between them. Because every particle can have a different heading, we also have to include this in our motion model. We add random Gaussian noise; a popular method is the Box-Muller transform:

$\rho = \sqrt{-2 \ln(1-Y)}$

$X = \rho \cos(2 \pi Z)$

This accounts for small possible errors and spreads our particles. Finally we apply the motion of the robot to the particles.
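The motion step can be sketched as follows: a Box-Muller Gaussian sampler, then the robot's displacement applied in each particle's own heading. The noise sigmas (0.05) are illustrative assumptions:

```cpp
#include <cmath>
#include <cstdlib>

// Draw a sample from N(mean, sigma^2) with the Box-Muller transform:
// rho = sqrt(-2 ln(1 - Y)), X = rho * cos(2*pi*Z), with Y, Z uniform.
double gaussianSample(double mean, double sigma) {
  double y = static_cast<double>(std::rand()) / RAND_MAX;
  double z = static_cast<double>(std::rand()) / RAND_MAX;
  if (y >= 1.0) y = 0.999999;                  // keep the log argument > 0
  double rho = std::sqrt(-2.0 * std::log(1.0 - y));
  return mean + sigma * rho * std::cos(2.0 * M_PI * z);
}

// Apply the robot's measured displacement (dist, dturn) to one particle
// pose, with noise, in the particle's own heading direction.
void applyMotion(double& x, double& y, double& yaw,
                 double dist, double dturn) {
  yaw += gaussianSample(dturn, 0.05);          // noisy turn
  x   += gaussianSample(dist, 0.05) * std::cos(yaw);
  y   += gaussianSample(dist, 0.05) * std::sin(yaw);
}
```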

After that we need to assign a weight to every particle: we go over every fiducial found, calculate the distance from the fiducial to the particle, and then evaluate the Gaussian distribution. We also keep track of the highest weight; we need this for the re-sampling.
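The weighting for a single fiducial can be sketched as below; the sensor noise sigma is an assumption. The particle's weight is the product of this probability over all fiducials seen:

```cpp
#include <cmath>

// Likelihood of one measurement under a Gaussian sensor model: how well
// does the particle-to-landmark distance (predicted) match the
// robot-to-landmark distance (measured)?
double measurementProb(double predicted, double measured,
                       double sigma = 0.2) {
  double diff = predicted - measured;
  return std::exp(-(diff * diff) / (2.0 * sigma * sigma))
         / std::sqrt(2.0 * M_PI * sigma * sigma);
}
// For each particle: weight = product of measurementProb over all
// fiducials; afterwards remember the highest weight for re-sampling.
```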

Resampling

Re-sampling can be done in multiple ways; this approach is known as the roulette wheel and has a complexity of O(n log n). Not that good, but it has proven to work well for localization.
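A sketch of that roulette-wheel re-sampling, using a cumulative weight sum plus a binary search per draw (which is where the O(n log n) comes from). It returns the indices of the particles to copy into the new set:

```cpp
#include <algorithm>
#include <cstdlib>
#include <numeric>
#include <vector>

// Each particle owns a slice of the "wheel" proportional to its weight;
// heavier particles are therefore picked more often.
std::vector<size_t> resample(const std::vector<double>& weights) {
  std::vector<double> wheel(weights.size());
  std::partial_sum(weights.begin(), weights.end(), wheel.begin());
  double total = wheel.back();
  std::vector<size_t> picks;
  picks.reserve(weights.size());
  for (size_t i = 0; i < weights.size(); ++i) {
    // Spin: a uniform draw in [0, total], located by binary search.
    double r = (static_cast<double>(std::rand()) / RAND_MAX) * total;
    size_t idx =
        std::upper_bound(wheel.begin(), wheel.end(), r) - wheel.begin();
    if (idx == wheel.size()) idx = wheel.size() - 1;   // guard r == total
    picks.push_back(idx);
  }
  return picks;
}
```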

The Result

My first experiment with an animated gif; better than yet another particle filter video on YouTube, I guess.

Little animated gif shows the robot and particle filter

Next up is to see how I can remove particles that are invalid (they went through a wall), use the laser measurements instead of fixed landmarks, and make other improvements.

Monte-Carlo Localization

Today a bit more about figuring out the robot’s position. Again I’ll try to explain the basics behind Monte-Carlo Localization without going into a bunch of maths or robotics jargon.

My post is based on the following paper:

Fox, D., Burgard, W., Dellaert, F. & Thrun, S. (1999). Monte Carlo localization: Efficient position estimation for mobile robots. Proceedings of the National Conference on Artificial Intelligence, pp. 343-349.

Monte-Carlo Localization

Localization can be done in a couple of ways. A popular method was the grid-based approach; without going into details, its main disadvantage was the cost of updating the grid on 80s-90s computers. Also, some problems like robot kidnapping (the robot being picked up and moved by the user) were practically impossible to solve with a grid-based approach.

Monte-Carlo Localization is based on Markov localization, the key concept is to represent the posterior belief $Bel(l)$ by a set of $K$ weighted, random particles

$S = \{ s_i \mid i = 1 \ldots K \}$

A particle contains the following information: $[(x, y, \theta), p]$, i.e. the robot’s $x, y$ position and heading $\theta$, plus a weighting factor $p$ that is analogous to a discrete probability. A sample set represents an approximation of a probability distribution, which means that the weights of all the particles in the sample set should sum to 1. Monte-Carlo Localization works in two steps: robot motion and sensor readings.

Step 1: Robot Motion

When the robot drives, $K$ new samples are generated that approximate the robot’s position after the motion. Each sample is generated randomly by drawing from the previous sample set with a likelihood determined by the samples’ $p$-values.

Step 2: Sensor Readings

After the robot motion a sensor reading is done. When the robot’s sensor measurements are available, they are incorporated into the algorithm and the weight of every sample is recalculated. Let $(l, p)$ be a sample; then
$p = \alpha P(s|l)$
is the new weight, where $s$ is the sensor reading and $\alpha$ is a normalizing factor that enforces a probability distribution, i.e. that the sum of all the weights is 1.
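Written out over the $K$ samples, that normalizing factor is simply

$\alpha = \left( \sum_{i=1}^{K} P(s \mid l_i) \right)^{-1}$

so that the recomputed weights $p_i = \alpha P(s \mid l_i)$ sum to 1.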