# Vector Field Histogram (VFH) in Player/Stage

It’s been a long time since I blogged about anything; time to change that. I’ll force myself (glued to the desk) to write every week about my progress on my Master’s thesis. Since I’m about six weeks behind, I’ll start with two blog posts: the first about obstacle avoidance and the VFH algorithm, and a second about localization with a particle filter.

# Avoiding Obstacles

A small recap of my Master’s thesis: I’m trying to improve the localization of a robot using DASH7 tags. The idea is that I can localize a robot using a laser and a map of the environment, but that localization is somewhat inaccurate. Using the DASH7 tags I can tell the robot more about its environment and thus improve the localization.

But before all that can happen, the robot first needs to drive around autonomously without hitting any static objects (closets, chairs, tables, etc.) or dynamic objects (humans, cats, dogs, etc.). This is the first part of my research: we already know that we are going to use the VFH algorithm, so an important next step is to configure it.

Heading over to the Player/Stage manual, I found out that the VFH algorithm is implemented and allows configuring almost every calculation, resulting in a staggering 25+ different parameters. Holy cow.

# Setting up the different tests

I decided to test two different parts. First, the robot should drive to a specific location. When using the VFH algorithm we get a new interface (planner) that allows sending the robot to a specific location relative to its starting position (which is always 0,0).

The first test drives the robot to a specific location (1,1). The second test checks whether the robot can drive sequentially to different coordinates, making it drive in a square.
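I won’t paste my full test client, but a minimal sketch of what it can look like with libplayerc++ follows below. The host, port and proxy index are assumptions based on a default Player setup, and GetPathDone() is used to wait for the planner:

```cpp
#include <libplayerc++/playerc++.h>

int main()
{
    using namespace PlayerCc;

    PlayerClient robot("localhost", 6665); // default Player host/port
    PlannerProxy planner(&robot, 0);       // planner interface provided by VFH

    // Test 2: drive a 1x1 m square relative to the start pose (0,0).
    double goals[4][2] = { {1, 0}, {1, 1}, {0, 1}, {0, 0} };

    for (int i = 0; i < 4; ++i)
    {
        planner.SetGoalPose(goals[i][0], goals[i][1], 0.0);
        do {
            robot.Read(); // pump data until the planner reports it is done
        } while (!planner.GetPathDone());
    }
    return 0;
}
```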

The next three tests deal with obstacle avoidance. All the tests are quite simple: the robot drives 5 m straight ahead, I place different objects in front of it, and the robot should avoid them. If the robot drives into the objects or fails to avoid them in a reasonable time, the parameters are changed and tested again.

Firstly, we test whether the robot can avoid typical domestic objects: a flowerpot, a closet, a table with chairs, loose chairs and anything else I could find. Secondly, we test whether the robot can pass through a door (typically 80 cm wide here in Belgium). Finally, we also test dynamic objects: I walk towards the robot and the robot should avoid me.

# Results and Parameters changed

I’m only going to talk about my findings and the parameters I changed to avoid a typical obstacle. If I don’t mention a parameter, it means I never changed it and left it at its default value.

### max_speed

I first changed this to 1.2 since the Pioneer robot can handle around 1.2 m/s. Boy, I couldn’t have been more wrong: I got a lot of warnings in the console about exceeding speed and velocity limits and other crap. After lowering the value drastically I ended up with a safe 0.4, which is twice the default. In practice the robot drives blazingly fast around obstacles, so I’m happy.

### safety_dist

The safety distance determines what distance the robot must keep from an object at all times (regardless of the free_space_cutoff). It comes in two versions: one for standing still and one for moving at 1 m/s. This parameter greatly influences whether the robot passes between two objects or not. If I leave both at 0.10 (the default), the robot can pass through a door opening and underneath a chair. If I increase them to 0.20, it won’t go under the chair anymore and avoids it, but it can no longer pass through a door.

### free_space_cutoff

This parameter also comes in two versions: one for standing still and one for moving at 1 m/s. It influences when the robot starts avoiding an object. A high value (2000000.0) drives the robot nearly against the obstacle before avoiding it, while a lower value makes the robot start avoiding much earlier. The parameter doesn’t have a unit, so it’s a bit of guesswork; I picked the following values based on my test results: free_space_cutoff_1ms 100000 & free_space_cutoff_0ms 200000.

### min_turnrate & max_turnrate

When I changed these I got a lot of warnings about the right and left wheel velocities. Not that good, so I left them at their defaults.

### distance_epsilon & angle_epsilon

When the robot drives to a goal, the distance and angle epsilon determine what error is acceptable. If I increase the parameters, I don’t really care whether the robot arrives exactly at the goal; if I lower both values, I want a precise result (limited by the slipping odometry). The robot will keep driving around and turning until it thinks it has arrived at its goal, so with a higher precision it can take a while before the robot considers itself there. I increased the accuracy: distance_epsilon 0.2 & angle_epsilon 5.
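For reference, this is roughly what the VFH driver block in my Player configuration file ends up looking like with the values discussed above (the provides/requires lines are assumptions based on a standard Pioneer setup):

```
driver
(
  name "vfh"
  provides ["position2d:1"]
  requires ["position2d:0" "laser:0"]

  max_speed 0.4
  safety_dist_0ms 0.1
  safety_dist_1ms 0.1
  free_space_cutoff_0ms 200000.0
  free_space_cutoff_1ms 100000.0
  distance_epsilon 0.2
  angle_epsilon 5
)
```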

So all the tests succeeded and the robot could avoid all the obstacles. One big problem remains, as mentioned under the safety_dist parameter: the robot can avoid a table with chairs (using 0.2 for safety_dist) but cannot pass through a door. If I change the parameter so the robot can go through a door (using 0.1 for safety_dist), the robot also attempts to go under the table with chairs and gets stuck.

I still have to test whether any other parameters can solve this problem. I’m thinking about obs_cutoff, which has no explanation on the VFH page of the Player project.

The last test, a person walking towards the robot, also works quite well. The VFH algorithm doesn’t take the velocity of objects into account, so a person moving towards the robot is quite difficult. The test showed that the person also needs to take a small step to one side (right or left) and the robot will do the same. If you walk straight at it, you either step on the robot or the robot drives over your toes.

# Conclusion

The robot can avoid static and dynamic objects. If I find a solution for my door-table problem I’ll post an update (it is quite important for a domestic environment).

# Particle Filter Implementation in Player/Stage

Player/Stage has a particle filter built in, the Advanced Monte-Carlo Localization (AMCL) driver. I’ve done some tests with it, but I’m not really happy with the results:

• A starting position and orientation have to be given (not really a requirement, but the algorithm tends to fail otherwise)
• Particles can move through walls or be placed on walls
• The initial particle distribution does not fill the complete map

This makes the algorithm quite useless for my purposes. I have two options for my Master’s thesis: create my own particle filter or change the AMCL driver so it works as expected. For now I decided to write a particle filter from scratch; later on I’ll decide whether to adapt the AMCL driver or turn my own filter into a proper driver. The code is based on the online lessons from Udacity (by Sebastian Thrun).

# The setup of the particle filter

The particle filter setup, including 6 landmarks

So we have a small area, the robot and 6 landmarks. As you can see, the field of view (FOV) is big enough that I always see at least one landmark.

# Bits and pieces

So we’ve got everything set up; now the code. I started from the Python code I wrote during the Udacity course and ported it to C++ in Player/Stage style.

### Obstacle Avoidance & Random Driving

First I created some code so the robot could drive around randomly and avoid the walls in the setup. I decided to keep it very simple and basic.
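The wander code boils down to a sketch like this (the exact ranges and the helper name are my assumptions):

```cpp
#include <cstdlib>
#include <libplayerc++/playerc++.h>

using namespace PlayerCc;

// Pick a random forward speed (0..1 m/s) and a random turn rate
// (-45..45 degrees, converted to radians) and send them to the robot.
void Wander(Position2dProxy &pp)
{
    double speed = static_cast<double>(std::rand()) / RAND_MAX;
    double turnrate = dtor(std::rand() % 91 - 45);
    pp.SetSpeed(speed, turnrate);
}
```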

So we create random values and make sure they stay within the right ranges. Speed in Player/Stage must always be between 0 and 1, and the turn rate is given in radians. Next, the code I used to avoid obstacles (the walls).
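A minimal sketch of that avoidance logic (the threshold and beam indexes are assumptions); note the rp.GetRangeCount() check, which I discuss below:

```cpp
#include <libplayerc++/playerc++.h>

using namespace PlayerCc;

// If anything straight ahead is closer than ~0.5 m, stop driving forward
// and turn towards the side with more free space.
void AvoidObstacles(RangerProxy &rp, Position2dProxy &pp)
{
    if (rp.GetRangeCount() == 0)
        return; // ranger not initialized yet, skip this iteration

    unsigned n = rp.GetRangeCount();
    double right = rp.GetRange(0);     // rightmost beam
    double front = rp.GetRange(n / 2); // middle beam
    double left  = rp.GetRange(n - 1); // leftmost beam

    if (front < 0.5)
        pp.SetSpeed(0.0, (left > right) ? dtor(30) : dtor(-30));
}
```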

Again very basic; it works, but that’s about it. Note the check I made to see if we got any range readings at all (rp.GetRangeCount()). For some reason Player can crash during the first iteration of your while/for loop because the ranger sensors are not completely initialized yet.

I decided to keep track of my particles in a struct (pf_particle_t); it makes it easy to add additional variables if needed. My particles contain the following data: (x, y, yaw, weight). Next I initialize a number of particles at random locations, spread out over my map.
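Sketched out, with the struct and an initialization helper (only pf_particle_t and its fields are fixed; the other names are assumptions):

```cpp
#include <cmath>
#include <cstdlib>
#include <vector>

// One particle: a pose hypothesis plus an importance weight.
struct pf_particle_t
{
    double x, y, yaw, weight;
};

std::vector<pf_particle_t> particles;

void InitParticles(int n)
{
    for (int i = 0; i < n; ++i)
    {
        pf_particle_t p;
        // Map is 15 m wide and centred on (0,0): scale by 15, shift by 7.5.
        p.x = (static_cast<double>(std::rand()) / RAND_MAX) * 15.0 - 7.5;
        p.y = (static_cast<double>(std::rand()) / RAND_MAX) * 15.0 - 7.5;
        p.yaw = (static_cast<double>(std::rand()) / RAND_MAX) * 2.0 * M_PI;
        p.weight = 1.0;
        particles.push_back(p);
    }
}
```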

As you can see, I used a vector to hold all my particles; it’s really easy to use, especially for iterating over it. In the initialization you can see that I multiply by 15 and then subtract 7.5: my map is 15 m wide and the robot is placed at the origin, which makes sure my particles are generated inside the map and not outside it. Next I also need the locations of the fiducials (landmarks). You can hard-code them (look at your world file) or request them in your code:
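A sketch using the SimulationProxy; the model names ("fiducial0" to "fiducial5") are assumptions and must match the names in your .world file:

```cpp
#include <cstdio>
#include <vector>
#include <libplayerc++/playerc++.h>

using namespace PlayerCc;

// Ask Stage for the pose of every landmark model defined in the world file.
std::vector<player_pose2d_t> GetLandmarks(SimulationProxy &sp)
{
    std::vector<player_pose2d_t> landmarks;
    for (int i = 0; i < 6; ++i)
    {
        char name[16];
        snprintf(name, sizeof(name), "fiducial%d", i);

        player_pose2d_t pose;
        sp.GetPose2d(name, pose.px, pose.py, pose.pa);
        landmarks.push_back(pose);
    }
    return landmarks;
}
```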

If you simulate in Stage you can add the SimulationProxy and request various properties of the objects in your world file. A little trick: you can also add the Graphics3dProxy to draw your particles. The drawing code should run in your main (while/for) loop and draws all the particles; again I used a vector, this time to hold all the points to draw:
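A sketch, reusing the pf_particle_t struct from above (the Draw call assumes the Graphics3dProxy point-drawing mode):

```cpp
#include <vector>
#include <libplayerc++/playerc++.h>

using namespace PlayerCc;

// Draw every particle as a point; call this once per main-loop iteration.
void DrawParticles(Graphics3dProxy &gp,
                   const std::vector<pf_particle_t> &particles)
{
    std::vector<player_point_3d_t> points;
    for (size_t i = 0; i < particles.size(); ++i)
    {
        player_point_3d_t pt;
        pt.px = particles[i].x;
        pt.py = particles[i].y;
        pt.pz = 0.0;
        points.push_back(pt);
    }
    gp.Clear(); // remove the points from the previous iteration
    gp.Draw(PLAYER_DRAW_POINTS, &points[0], static_cast<int>(points.size()));
}
```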

# The Particle Filter code

Good, now for the particle filter code. The idea is to measure the distance from the robot to the detected landmarks, then measure the distance from each particle to those landmarks and assign a weight based on how well the two sets of measurements agree. Finally, we re-sample the particles so that particles with lower weights are less likely to reappear in the new particle set.

### Robot measurements

First we request the position of the robot (using the Position2dProxy; note that in the configuration the robot should be set to odom for localization, not gps!), then we get the location of the fiducials and calculate the distance between the two coordinates.
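In sketch form: since the fiducial poses are reported relative to the robot, the distance to a landmark is just the norm of the detected pose (collecting the ids alongside is my own convention):

```cpp
#include <cmath>
#include <vector>
#include <libplayerc++/playerc++.h>

using namespace PlayerCc;

// Distance from the robot to every fiducial currently in view.
std::vector<double> MeasureLandmarks(FiducialProxy &fp, std::vector<int> &ids)
{
    std::vector<double> distances;
    for (int i = 0; i < fp.GetCount(); ++i)
    {
        player_fiducial_item_t item = fp.GetFiducialItem(i);
        ids.push_back(item.id); // remember which landmark we saw
        distances.push_back(std::sqrt(item.pose.px * item.pose.px +
                                      item.pose.py * item.pose.py));
    }
    return distances;
}
```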

### Apply motion and measurement model

Now we apply the same motion the robot made to the particles. First we keep track of our previous location and our current location and compute the distance between them. Because every particle can have a different heading, we also have to include the heading in the motion model. We add random Gaussian noise; a popular way to generate it is the Box-Muller transform:

$\rho = \sqrt{-2 \ln(1 - Y)}$

$X = \rho \cos(2 \pi Z)$

Here $Y$ and $Z$ are uniform random numbers in $[0, 1)$ and $X$ is a zero-mean Gaussian sample (scale it by $\sigma$ for other standard deviations).

This accounts for small errors and spreads out our particles. Finally we apply the motion of the robot to the particles:
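A reconstruction of both steps, reusing pf_particle_t (the noise sigmas are assumptions):

```cpp
#include <cmath>
#include <cstdlib>
#include <vector>

// Box-Muller: turn two uniform samples into one N(0, sigma^2) sample.
double Gaussian(double sigma)
{
    double y = static_cast<double>(std::rand()) / (RAND_MAX + 1.0);
    double z = static_cast<double>(std::rand()) / (RAND_MAX + 1.0);
    return sigma * std::sqrt(-2.0 * std::log(1.0 - y)) * std::cos(2.0 * M_PI * z);
}

// Apply the robot's measured displacement to every particle along that
// particle's own heading, plus some Gaussian noise.
void ApplyMotion(std::vector<pf_particle_t> &particles,
                 double distance, double dyaw)
{
    for (size_t i = 0; i < particles.size(); ++i)
    {
        pf_particle_t &p = particles[i];
        p.yaw += dyaw + Gaussian(0.05);
        double d = distance + Gaussian(0.05);
        p.x += d * std::cos(p.yaw);
        p.y += d * std::sin(p.yaw);
    }
}
```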

After that we need to assign a weight to every particle. We go over every fiducial found, calculate the distance from the fiducial to the particle, and evaluate a Gaussian centred on the robot’s measured distance. We also keep track of the highest weight; we need it for the re-sampling.
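Sketched out below; the measurement sigma and the assumption that a fiducial id indexes directly into the landmark vector are mine:

```cpp
#include <cmath>
#include <vector>
#include <libplayerc++/playerc++.h>

// Probability of a measurement given a predicted distance, under a
// Gaussian measurement model.
double MeasurementProb(double predicted, double measured, double sigma)
{
    double diff = predicted - measured;
    return std::exp(-(diff * diff) / (2.0 * sigma * sigma)) /
           std::sqrt(2.0 * M_PI * sigma * sigma);
}

// Weight every particle against all fiducials seen this iteration and
// return the highest weight (needed by the re-sampling step).
double WeighParticles(std::vector<pf_particle_t> &particles,
                      const std::vector<player_pose2d_t> &landmarks,
                      const std::vector<int> &ids,
                      const std::vector<double> &measured)
{
    double max_weight = 0.0;
    for (size_t i = 0; i < particles.size(); ++i)
    {
        particles[i].weight = 1.0;
        for (size_t j = 0; j < ids.size(); ++j)
        {
            const player_pose2d_t &lm = landmarks[ids[j]];
            double dx = lm.px - particles[i].x;
            double dy = lm.py - particles[i].y;
            particles[i].weight *=
                MeasurementProb(std::sqrt(dx * dx + dy * dy), measured[j], 0.5);
        }
        if (particles[i].weight > max_weight)
            max_weight = particles[i].weight;
    }
    return max_weight;
}
```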

### Resampling

Re-sampling can be done in multiple ways; this approach is known as the roulette wheel and has a complexity of O(n log n). Not that good, but it has proven to work well for localization.
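A reconstruction of that re-sampling wheel, as taught in the Udacity class, reusing pf_particle_t; note how the highest weight from the previous step bounds the random stride:

```cpp
#include <cstdlib>
#include <vector>

// Re-sampling wheel: step around the wheel with a random stride of at most
// 2 * max_weight, so heavier particles are picked proportionally more often.
void Resample(std::vector<pf_particle_t> &particles, double max_weight)
{
    std::vector<pf_particle_t> resampled;
    size_t n = particles.size();
    size_t index = std::rand() % n;
    double beta = 0.0;

    for (size_t i = 0; i < n; ++i)
    {
        beta += (static_cast<double>(std::rand()) / RAND_MAX) * 2.0 * max_weight;
        while (particles[index].weight < beta)
        {
            beta -= particles[index].weight;
            index = (index + 1) % n;
        }
        resampled.push_back(particles[index]);
    }
    particles = resampled;
}
```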

# The Result

My first experiment with a moving gif, better than another particle filter video on YouTube I guess.

Little animated gif shows the robot and particle filter

Next up is seeing how I can remove invalid particles (ones that went through a wall), use the laser measurements instead of fixed landmarks, and make other improvements.

# Home Automation using DASH7

The goal of the project was to implement basic home automation using DASH7. We had limited resources, worked in a group of 7 people and had only 8 days to finish the project.

# The goal of the project

The project overview

The idea is illustrated in the image above. A server runs a self-learning algorithm that learns the preferences of the user. The user has a smartphone and can adjust the environment; these adjustments are registered on the server and fed into the self-learning algorithm.

A little example: I walk into the room and the light turns on; I find it too bright, so I change it to 70% from the original 100%. I do this every day for 3 days, and on day 4 the light switches on at only 70% instead of 100%. The system learned my preference and applied it.

The smartphone communicates with the server using WiFi, and the server communicates with the appliance (light bulb, oven, water heater, lamp, light strip, …) using DASH7 technology.

We were free to choose the appliances and the type of mobile application (dedicated, universal web-based or hybrid). There were only 8 days, so we were allowed to take shortcuts (a.k.a. dirty code and tricks) to get a result.

## DASH7 Technology

Everyone has an idea of what a smartphone application might look like, but the DASH7 part is a big unknown. There are developer kits from WizziLab for a rather hefty price (€100). A PhD student at my university made a smaller tag that costs only around €10, or even less.

Combine those two items and you can program the DASH7 tags. Right now the DASH7 Alliance is building a complete OSI stack for DASH7, writing the code with the help of researchers and PhD students. They have already finished the Physical and Data-Link Layers; we programmed on top of this stack at the Data-Link Layer.

This also means that we had to program embedded devices. Luckily we could use C code, and the library from the DASH7 Alliance includes most of the important commands to send, receive and handle interrupts. Quite cool to be using cutting-edge technology.

So we (me and 6 colleagues) started to draft a general concept/flowchart so we could divide the project into subsections, each developed by one responsible person. We came up with the following flowchart.

Our basic flowchart, every colour represents a different person.

My job was to program the DASH7 tags and make sure the applications worked. Another student was on the same subsection as me, but his task was to wire the electronics to the GPIO pins of the DASH7 tags. I’ll only talk about my part because I don’t really know all the details of his.

# DASH7 Application

We picked the following application. We have a house with 2 rooms: a living room with a TV and a workroom with a desk and a computer. When the user walks to the TV, it should automatically play a video; if he walks away, the video should pause. When the user enters the workroom, the laptop should automatically open certain applications. Whenever the user enters or leaves a room, the light should switch on or off accordingly.

The DASH7 application with two rooms

A couple of things come together here. Firstly, this is a proximity application: if a user comes near a tag, something should happen. Secondly, we defined three types of DASH7 tags: a server tag, an application tag and a mobile tag. Finally, we switch 230 V light bulbs using the 3.3 V GPIO of the DASH7 tag.

# Communications in the DASH7 application

A little note: I made this communication protocol because it’s easy, dirty and the way to go for our little application. It is in no way useful if more than 3 tags are used, as way too much overhead would creep in. Because of the time constraints I picked this easy, dirty solution.

### Commands

Every command/message is sent using the same protocol I also used in my ArduinoPi code, with one big difference: every command is sent as a single byte (0x00-0xFF). A small recap: every command starts with @ and ends with :, the first number indicates the command and the second the value. I also included the comma between the command and the value, but it’s not really needed since every value is only one byte. The reason is that later on I can always switch back to dynamically sized commands and values.
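A small sketch of the framing with hypothetical names; every message is exactly five bytes:

```cpp
#include <cstddef>
#include <stdint.h>

// Build "@<command>,<value>:" where command and value are raw bytes
// (0x00-0xFF). Returns the number of bytes written (always 5).
size_t BuildCommand(uint8_t *buf, uint8_t command, uint8_t value)
{
    size_t i = 0;
    buf[i++] = '@';      // start-of-message marker
    buf[i++] = command;  // one-byte command id
    buf[i++] = ',';      // separator (not strictly needed, see above)
    buf[i++] = value;    // one-byte value
    buf[i++] = ':';      // end-of-message marker
    return i;
}
```

The mobile tag’s heartbeat from the next section would then be something like BuildCommand(buf, 0x01, 0x00) (the value byte is unused there).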

### Communications

So the communication is based on the following image (yes I love images).

Communications in the DASH7 application

Dotted lines indicate wireless communication and solid lines wired communication. It’s quite simple: every command has its own number, followed by a value (if any). The mobile tag broadcasts command 1 every second; the user wears this tag (it can be a watch or a small keychain).

The application tag receives the 1, extracts the dBm (signal strength) value and sends it as command 2 to the server tag. The server tag forwards the dBm value to the server over a serial connection, and the server decides whether the user is close enough to switch the light. Note that one shortcut is included: the application tag also sends the dBm value directly to the computer, which starts the video if the user is close enough. Normally the server should decide this, but for various reasons we didn’t implement that.

The server tag also receives the 1 and passes its own dBm value to the server, which decides whether the user is close enough to switch the lamp or start the applications on the computer.

Other communication is also included because we had to read out sensors; I used the same scheme for the two light sensors.

# Result

The following video demonstrates our project. We failed to include the smartphone (an HTML5-based jQuery Mobile application) because we were unable to connect it with the server/DASH7 tags. We used a WCF Service Library, and apparently it is unable to start, talk or do anything when it receives commands from the smartphone. We only figured this out on the last day and tried to convert it to a WCF Service, but we ran out of time and decided to remove it. A shame, really.