Looking for ways for a robot to localize itself in a house

I'm hacking a robot vacuum cleaner to control it with a microcontroller (Arduino). I want to make it clean the room more efficiently. For now, it just drives straight and turns when it bumps into something.

But I'm having trouble finding the best algorithm or method for it to know its position in the room. I'm looking for an idea that stays cheap (less than $100) and simple (one that doesn't require a PhD in computer vision). I can add some discreet markers to the room if necessary.

Right now, my robot has:

  • One webcam
  • Three proximity sensors (range about 1 meter)
  • A compass (currently unused)
  • Wi-Fi
  • Speed that varies depending on whether the battery is full or nearly empty
  • An Eee PC netbook mounted on the robot

Do you have any ideas for doing this? Is there a standard approach to this kind of problem?

Note: if this question belongs on another site, please move it; I couldn't find a better place than Stack Overflow.

+47
algorithm robotics geolocation arduino robot
Jun 29 '11 at 12:23
11 answers

The problem of determining a robot's position in its environment is called localization. Computer scientists have been trying to solve this problem for many years, with limited success. One issue is that you need reasonably good sensory input to figure out where you are, and sensory input from webcams (i.e. computer vision) is far from a solved problem.

If that hasn't scared you off: the approach to localization that is easiest for me to understand is particle filtering. The idea goes something like this (a code sketch follows the list):

  • You keep track of a bunch of particles, each of which represents one possible location in the environment.
  • Each particle also has an associated probability that tells you how confident you are that the particle really represents your true location in the environment.
  • When you start, all of these particles might be distributed uniformly throughout your environment and given equal probabilities. Here the robot is gray and the particles are green. [image: initial particle filter]
  • When your robot moves, you move each particle. You might also degrade each particle's probability to represent the uncertainty in how the motors actually drive the robot. [image: particles after movement]
  • When your robot observes something (e.g. a landmark seen with the webcam, a Wi-Fi signal, etc.), you can increase the probability of the particles that agree with that observation. [image: particles after observation]
  • You might also periodically replace the lowest-probability particles with new particles based on observations.
  • To decide where the robot actually is, you can use the highest-probability particle, the highest-probability cluster, the weighted average of all particles, etc.

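Here is a minimal sketch of that loop in Python, assuming a toy 1-D world: a hallway of known length with a single forward range sensor. The hallway model, noise values, and all names are illustrative, not something from the question:

```python
import math
import random

N = 500
HALL_LENGTH = 10.0     # m, assumed hallway length
MOTION_NOISE = 0.05    # m, std dev of odometry uncertainty
SENSOR_NOISE = 0.20    # m, std dev of the range measurement

def gaussian(mu, sigma, x):
    # Unnormalized Gaussian; fine here since weights get renormalized.
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

# 1. Particles start spread uniformly, with equal weights.
particles = [random.uniform(0.0, HALL_LENGTH) for _ in range(N)]
weights = [1.0 / N] * N

def step(particles, weights, moved, measured_range):
    # 2. Move every particle by the commanded distance, plus motion noise.
    particles = [p + moved + random.gauss(0.0, MOTION_NOISE) for p in particles]
    # 3. Reweight: particles whose predicted sensor reading matches win.
    weights = [w * gaussian(HALL_LENGTH - p, SENSOR_NOISE, measured_range)
               for p, w in zip(particles, weights)]
    total = sum(weights) or 1e-12
    weights = [w / total for w in weights]
    # 4. Estimate = weighted mean; then resample to cull unlikely particles.
    estimate = sum(p * w for p, w in zip(particles, weights))
    particles = random.choices(particles, weights=weights, k=N)
    return particles, [1.0 / N] * N, estimate

# The robot drives 0.3 m, then measures 6.2 m to the far wall.
particles, weights, estimate = step(particles, weights, 0.3, 6.2)
print(round(estimate, 2))   # particles cluster near 3.8 m
```

In 2-D you would track (x, y, heading) per particle instead of a single number, but the move/reweight/resample loop stays the same.
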
If you search around a little, you will find many examples: e.g. a video of a robot using particle filtering to determine its location in a small room.

Particle filtering is nice because it is pretty easy to understand, which makes implementing and tweaking it a little less complicated. There are other related methods (such as Kalman filters) that may be more theoretically sound but can be harder to get your head around.

+30
Jun 29 '11 at 14:09

If you can put some markers in the room, using the camera could be an option. If two known markers have an angular offset (from left to right), then the camera and the markers lie on a circle whose radius is related to the measured angle between the markers. I don't remember the exact formula offhand, but the arc segment (on that circle) between the markers will be twice the angle you see (the inscribed angle theorem). If the markers have a known height and the camera is at a fixed angle, you can calculate the distance to the markers. Either method alone can nail down your position, given enough markers. Using both will let you do it with fewer markers.
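
Here is a hedged sketch of both measurements in Python; the marker sizes, angles, and function names are illustrative, and the angular values would come from pixel offsets scaled by your webcam's field of view:

```python
import math

def circle_radius(baseline, subtended_angle):
    """Inscribed angle theorem: a camera that sees two markers a known
    baseline apart under 'subtended_angle' (radians) lies somewhere on a
    circle of this radius passing through both markers."""
    return baseline / (2.0 * math.sin(subtended_angle))

def distance_from_height(marker_height, angular_height):
    """Distance to a marker of known physical height, from the angular
    height it spans in the image (simple pinhole camera model)."""
    return marker_height / (2.0 * math.tan(angular_height / 2.0))

# Markers 2 m apart seen 30 degrees apart: the camera lies on a 2 m circle.
print(circle_radius(2.0, math.radians(30)))          # -> 2.0
# A 20 cm marker spanning 4 degrees vertically is roughly 2.9 m away.
print(distance_from_height(0.2, math.radians(4)))    # -> ~2.86
```

Each circle or distance constrains your position to a curve; intersecting two or more such constraints pins the position down.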

Unfortunately, these methods are imperfect because of measurement errors. You work around that by using a Kalman estimator to fold multiple noisy measurements into a good position estimate; you can then feed in some dead-reckoning information (which is also imperfect) to refine it further. This part gets pretty deep into math, but I'd say it's a requirement for doing an excellent job at what you're attempting. You can do OK without it, but if you want an optimal solution (in terms of the best position estimate for a given input), there is no better way. If you really want a career in autonomous robotics, this will play a big part in your future.
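
To make that concrete, here is a minimal scalar Kalman filter sketch (one dimension only, with illustrative noise variances) fusing a dead-reckoning prediction with a noisy marker-based position fix:

```python
def kalman_step(x, p, u, z, q=0.01, r=0.25):
    """x: position estimate, p: its variance,
    u: odometry displacement, z: measured position,
    q: process noise variance, r: measurement noise variance."""
    # Predict: dead reckoning moves the estimate and grows the uncertainty.
    x_pred = x + u
    p_pred = p + q
    # Update: blend in the measurement, weighted by the Kalman gain.
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

x, p = 0.0, 1.0
x, p = kalman_step(x, p, u=0.30, z=0.42)   # drove ~0.3 m, marker fix says 0.42 m
print(round(x, 3), round(p, 3))
```

The gain k balances trust automatically: a small r (accurate markers) pulls the estimate toward the measurement, while a small q (accurate odometry) keeps it near the prediction.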

Once you can determine your position, you can cover the room in any pattern you like. Keep using the bump sensor to help build a map of the obstacles, and then work out a sweeping pattern that takes those obstacles into account.

I'm not sure of your math background, but here is a good book on the subject: http://books.google.com/books/about/Applied_optimal_estimation.html?id=KlFrn8lpPP0C

+6
Jun 29 '11 at 13:58

QR Code

A QR Code poster in each room would not only make an interesting piece of modern art, it would also be relatively easy to spot with the camera!
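
For what it's worth, recent versions of OpenCV (4.x) ship a QR detector, so spotting a poster could be a few lines of Python; the frame source and the room IDs encoded in the posters are assumptions for illustration:

```python
import cv2

detector = cv2.QRCodeDetector()
frame = cv2.imread("webcam_frame.jpg")   # placeholder: grab from the webcam instead

# detectAndDecode returns the decoded text plus the code's corner points.
data, points, _ = detector.detectAndDecode(frame)
if data:
    # The corners' position and apparent size also give a rough bearing and
    # range to the poster, not just which room you are in.
    print("Saw room marker:", data)
    print("Corners in the image:", points.reshape(-1, 2))
```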

+6
Jun 29 '11

This doesn't replace the accepted answer (which is great, thanks!), but I would recommend getting a Kinect and using that instead of your webcam, either through the recently released official Microsoft drivers or through the hacked drivers if your Eee PC doesn't run Windows 7 (presumably it doesn't).

That way the positioning will be improved by the 3D vision: observing a landmark will now tell you how far away the landmark is, not just where it is in the field of view.

Apart from that, the accepted answer doesn't really address how to pick out landmarks in the field of view; it simply assumes that you can. While the Kinect drivers may already include feature detection (I'm not sure), you can also use OpenCV to detect features in the image.
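
As a hedged example of the OpenCV route, ORB (a free feature detector that ships with OpenCV) can pull out trackable keypoints; the file name here is a placeholder:

```python
import cv2

frame = cv2.imread("webcam_frame.jpg")   # placeholder: grab frames from the camera
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Detect up to 200 corner-like features and compute binary descriptors.
orb = cv2.ORB_create(nfeatures=200)
keypoints, descriptors = orb.detectAndCompute(gray, None)

# Matching these descriptors across frames (e.g. with a Hamming-distance
# BFMatcher) lets you re-recognize the same landmarks as the robot moves.
print(len(keypoints), "features found")
```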

+4
Jun 30 '11 at 11:37

One solution would be to use a strategy similar to flood fill (Wikipedia). For the controller to sweep accurately, it needs a notion of distance. You can calibrate your bot using the proximity sensors: e.g. driving forward for 1 s changes the proximity reading by xx. With that information you can move your bot an exact distance and sweep the room using a flood fill, as sketched below.
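
Here is a minimal sketch of the flood-fill idea over a grid map, assuming you discretize the room into cells about the size of the robot (the map layout is illustrative):

```python
from collections import deque

def flood_fill_order(grid, start):
    """Breadth-first flood fill over an occupancy grid (0 = free,
    1 = obstacle), returning the order in which to visit free cells."""
    rows, cols = len(grid), len(grid[0])
    seen = {start}
    queue = deque([start])
    order = []
    while queue:
        r, c = queue.popleft()
        order.append((r, c))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return order

room = [[0, 0, 0],
        [0, 1, 0],    # one obstacle cell in the middle
        [0, 0, 0]]
print(flood_fill_order(room, (0, 0)))   # covers all 8 free cells
```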

+3
Jun 29 '11 at 12:35

Assuming you're not looking for a generic solution, you may actually know the room's shape, size, potential obstacles, etc. When a bot leaves the factory it has no information about its future operating environment, which is what makes it inefficient from the very beginning. If that's your case, you can program that information in and then use basic measurements (e.g. rotary encoders on the wheels + the compass) to determine its location in the room/house; a dead-reckoning sketch follows. In my opinion, there's no need for Wi-Fi triangulation or crazy sensor setups. At least for a start.
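
A minimal pose-update sketch, assuming encoder ticks and a calibrated ticks-per-meter constant (both illustrative); using the compass for heading sidesteps the battery-dependent speed issue mentioned in the question:

```python
import math

TICKS_PER_METER = 1200.0   # assumed value from calibrating the encoders

def update_pose(x, y, ticks, heading_deg):
    """Advance the pose by the distance the wheel encoders report,
    along the direction the compass reports."""
    d = ticks / TICKS_PER_METER
    theta = math.radians(heading_deg)
    return x + d * math.cos(theta), y + d * math.sin(theta)

# 600 ticks at a 90 degree heading: ~0.5 m along the +y axis (math convention).
x, y = update_pose(0.0, 0.0, ticks=600, heading_deg=90)
print(round(x, 2), round(y, 2))   # -> 0.0 0.5
```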

+3
Jun 29 '11 at 12:50

Have you ever considered GPS? Every position on Earth has unique GPS coordinates, with a resolution of 1 to 3 meters; with differential GPS you can get down to a range of 10 cm. More information here:

http://en.wikipedia.org/wiki/Global_Positioning_System

And Arduino has many options for GPS modules:

http://www.arduino.cc/playground/Tutorials/GPS

After you have collected all the key coordinates of the house, you can then write a routine for the Arduino to move the robot from point to point (as collected above), provided it avoids all the obstacles; a bearing-to-waypoint sketch follows.
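
As a hedged sketch of the point-to-point routine (in Python rather than Arduino C, and noting that consumer GPS usually fails indoors), the standard great-circle formula gives the initial bearing to steer, which you can compare against the compass heading:

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial bearing (degrees clockwise from north) from the current
    GPS fix to the next waypoint."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

# Steer until the compass heading matches this bearing, then drive.
print(bearing_deg(48.8582, 2.2945, 48.8584, 2.2950))   # coordinates illustrative
```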

More detailed information can be found here:

http://www.google.com/search?q=GPS+localization+robots&num=100

And within that list I found this one, specific to your case (Arduino + GPS + localization):

http://www.youtube.com/watch?v=u7evnfTAVyM

+1
Jul 01 '11

I have thought about this problem too, and I don't understand why you couldn't just triangulate. Set up two or three beacons (e.g. IR LEDs blinking at different frequencies) and a rotating IR "eye" sensor on a servo. You could then get a nearly continuous fix on your position; a sketch follows below. I'd expect accuracy in the low-centimeter range, and it would be cheap. You could then easily map out whatever you bump into.
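
Here is a hedged sketch of the two-beacon fix, assuming the compass turns the servo's relative readings into absolute bearings (beacon positions and bearings are illustrative):

```python
import math

def triangulate(b1, b2, bearing1, bearing2):
    """b1, b2: (x, y) beacon positions; bearing1/2: absolute bearings
    (radians) from the robot TO each beacon. Returns the robot position
    as the intersection of the two sight lines."""
    c1, n1 = math.cos(bearing1), math.sin(bearing1)
    c2, n2 = math.cos(bearing2), math.sin(bearing2)
    ex, ey = b1[0] - b2[0], b1[1] - b2[1]
    det = n1 * c2 - c1 * n2            # sin(bearing1 - bearing2)
    if abs(det) < 1e-9:
        raise ValueError("beacons and robot are collinear; no unique fix")
    s1 = (c2 * ey - n2 * ex) / det     # distance from robot to beacon 1
    return b1[0] - s1 * c1, b1[1] - s1 * n1

# Robot actually at (1, 1); beacons at (0, 3) and (4, 0) with matching bearings.
print(triangulate((0, 3), (4, 0), math.atan2(2, -1), math.atan2(-1, 3)))
```

With three beacons you get a redundant fix, which helps reject a bad bearing measurement.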

Perhaps you could also use any interruptions of the beams to map objects that are farther away from the robot.

+1
May 22 '13 at 11:55

Use an ultrasonic sensor such as the HC-SR04 or equivalent. As mentioned above, measure the distance from the robot to the walls with the sensors, and identify which part of the room you're in with a QR code.

When you approach a wall, turn 90 degrees, move forward the width of your robot, turn 90 degrees again (i.e. 90 degrees to the left), and drive back across the room. I think this will help :)
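
For the ranging part, the HC-SR04 reports distance as an echo pulse width (read with pulseIn() on an Arduino); here is a minimal conversion sketch with an illustrative reading:

```python
SPEED_OF_SOUND = 343.0   # m/s in air at about 20 degrees C

def echo_to_distance_m(pulse_width_s):
    """The echo pulse covers the round trip out to the wall and back,
    so halve it before converting time to distance."""
    return pulse_width_s * SPEED_OF_SOUND / 2.0

print(echo_to_distance_m(0.00583))   # ~1.0 m to the wall
```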

+1
Oct 30 '14 at 13:34

You said you have a camera? Have you thought about looking at the ceiling? There is little chance that two rooms have exactly the same ceiling dimensions, so you can identify which room you are in; your position within the room can be calculated from the angular distances to the ceiling's edges, and your orientation can probably be extracted from the positions of the doors.

This will require some image processing, but a vacuum cleaner that moves slowly in order to clean efficiently will have plenty of time to compute.

Good luck

0
Jan 12 '15 at 20:37

I am working on a Python computer vision project on a Raspberry Pi that lets a robot without encoders track its movement more accurately using the camera. It doesn't solve the problem of navigating the room (which also interests me); there are some good examples of that using ROS and LIDAR. Anyway, this is my small contribution, if you're interested (work in progress). See the link for details: https://github.com/pageauc/motion-track/tree/master/cam-track

0
Sep 06 '16 at 13:52
