Google Driverless Car – The Obstacle Detection Unit



The obstacle detection unit includes sonar, stereo cameras, a localization camera, a laser, and a radar detection unit, which together help the car's computer form its driving strategy.

In one of our past posts, we discussed the computer, the display, and the map & navigation system that Google's driverless car carries under its hood.

We are studying the patents filed by Google to write these articles so that we can give you a holistic view of the driverless car before anyone else. You can access the full series here.

Today’s post is about the obstacle detection unit of Google's driverless car, which includes sonar, stereo cameras, a localization camera, a laser, and a radar detection unit.

These sensors help the robotic chauffeur (the driverless car's computer) identify, track, and predict the movements of pedestrians, bicycles, and other vehicles on the roadway. Based on the data provided by these sensors, the robotic chauffeur makes its driving strategies.

We will talk about the fields of view and ranges of these sensors later in today’s post. Let’s first find out where each sensor is located and how it helps in self-driving.

Position and Use of Various Obstacle Detection Units in Driverless Car

According to Chris Urmson (the tech lead of the driverless car project), the laser range finder of the car is the “heart of the system”.

The car carries a Velodyne 64-beam laser on its roof. It measures the distance between the vehicle and the object surfaces facing it by spinning on its axis and changing its pitch, in other words, by rotating.

Radar detection units, used for adaptive cruise control, are located on the front and back of the car as well as on either side of the front bumper. The sonar is used for the same purpose.

Several cameras are mounted on the car, separated from each other by a small distance. The parallax between images captured by the different cameras is used to compute the distance to various objects.
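To make the parallax idea concrete, here is a minimal sketch of depth-from-disparity; the focal length, baseline, and pixel values below are illustrative assumptions, not figures from Google's design.

```python
# Sketch of stereo depth-from-parallax: an object that appears shifted
# (disparity) between two cameras a known distance apart (baseline)
# is at depth = focal_length * baseline / disparity.

def stereo_depth(focal_length_px: float, baseline_m: float, disparity_px: float) -> float:
    """Distance to an object from the pixel disparity between two cameras."""
    if disparity_px <= 0:
        raise ValueError("object must appear shifted between the two images")
    return focal_length_px * baseline_m / disparity_px

# A feature shifted 20 px between cameras 0.5 m apart (700 px focal length)
# works out to 17.5 m away -- within the stereo pair's ~30 m range.
print(stereo_depth(700.0, 0.5, 20.0))  # 17.5
```

Note how distance falls as disparity grows: nearby objects shift a lot between the two views, distant ones barely at all, which is why the stereo pair's useful range is limited.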

The purpose behind using units with different fields of view and ranges is that together they provide superb situational awareness and work in all types of weather.

For example, on a foggy day, chances are that the camera may provide wrong inputs; the radar, however, will work efficiently in such weather conditions.

Obstacle Detection Unit detecting objects in real time


Range of Obstacle Detection Units of Driverless Car:

The sonar has a horizontal field of view of approximately 60 degrees for a maximum distance of approximately 6 meters.

The stereo cameras have an overlapping region with a horizontal field of view of approximately 50 degrees, a vertical field of view of approximately 10 degrees, and a maximum distance of approximately 30 meters.

The localization camera has a horizontal field of view of approximately 75 degrees, a vertical field of view of approximately 90 degrees and a maximum distance of approximately 10 meters.

The laser has a horizontal field of view of approximately 360 degrees, a vertical field of view of approximately 30 degrees, and a maximum distance of 100 meters.

The radar has a horizontal field of view of 60 degrees for the near beam, 30 degrees for the far beam, and a maximum distance of 200 meters.
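As a rough illustration of how these overlapping envelopes complement each other, the sketch below encodes the figures quoted above and asks which sensors could register a given object. The data structure and the simplified geometry (every sensor facing forward, mounting positions and vertical fields of view ignored) are our own assumptions, not part of the patent.

```python
# Horizontal field of view (degrees) and maximum range (meters) for each
# obstacle-detection sensor, as quoted in the post above.
SENSORS = {
    "sonar":        (60, 6),
    "stereo":       (50, 30),
    "localization": (75, 10),
    "laser":        (360, 100),
    "radar_near":   (60, 200),
    "radar_far":    (30, 200),
}

def sensors_covering(bearing_deg: float, distance_m: float) -> list[str]:
    """Which sensors could see an object at this bearing and distance?

    Bearing is measured from the sensor axis; each sensor sees +/- half
    its horizontal field of view. A deliberate simplification.
    """
    return [
        name
        for name, (fov, max_range) in SENSORS.items()
        if abs(bearing_deg) <= fov / 2 and distance_m <= max_range
    ]

# A pedestrian 25 m ahead, 10 degrees off-axis: the stereo cameras, laser,
# and both radar beams can register it; sonar is well out of range.
print(sensors_covering(10, 25))  # ['stereo', 'laser', 'radar_near', 'radar_far']
```

The overlap is the point: at almost any bearing and distance at least two sensor types see the same object, so a camera blinded by fog still leaves the radar and laser reporting.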

That covers the various detection units, their locations in the car, and their ranges. You may now be wondering how the car uses the obstacle detection unit on the road. To find that out, I recommend you read this – How Driverless Car Predicts Movement of Vehicles on Road

Hey, we are also active on Twitter. How about following each other and discussing driverless cars over DMs?


18 Responses

  1. INSIGHTS says:

    That’s damn neat! It reminds me of the James Bond movie Tomorrow Never Dies, with the car that drives itself via voice recognition. Slowly, I guess, we are getting there now :)

    • GreyB says:

      Hi Vinay,

      Yes, you are right. Whatever was fiction ten years back is going to be a reality in the next 5 to 10 years.
      This car is awesome. We have just introduced our readers to the driverless car with these four posts; a string of posts on GDLC will be coming.

      Stay tuned 😉 and keep reading…

  2. Christian says:

    Continue the great posts. I like reading stuff like this.

  3. TanMan says:

    I have a question that’s not been answered yet. How does the obstacle detection unit differentiate between leaves or paper (or snow or rain) blowing across the road and more substantial obstacles, such as small critters, children, a cart, etc.?

    • GreyB says:

      Hi Tanman,

      Great question mate. I really appreciate that you have given me a chance to answer that intriguing question.

      The autonomous driving computer is capable of determining the road's weather condition using the obstacle detection unit.

      The computer collects laser data from the road while driving. It has already stored laser data of the road for dry weather conditions, and it compares the newly collected data with this stored dry-weather data to determine whether it is raining or snowing.

      Apart from this, it also compares the intensities of the laser data it collects. For example, if the intensity values shift toward the darker end, it signifies that rain is falling and the road has become wet and darker. The computer also identifies rainfall by detecting water being kicked up by vehicles' tires, and by detecting puddles and wet tire tracks left by other vehicles.

      Similarly, if the intensity values shift toward the brighter end, it signifies that the road is covered with snow, which makes it brighter than in normal conditions.
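      A minimal sketch of that intensity comparison; the baseline and threshold numbers here are illustrative assumptions, not values from the patents.

```python
# Compare mean laser-return intensity against a stored dry-road baseline:
# darker than the baseline suggests a wet road, brighter suggests snow.

def classify_road(intensities: list[float], dry_baseline: float, tolerance: float = 10.0) -> str:
    """Classify road surface from laser-return intensities."""
    mean_intensity = sum(intensities) / len(intensities)
    if mean_intensity < dry_baseline - tolerance:
        return "wet"    # shifted toward the darker end -> rain-soaked road
    if mean_intensity > dry_baseline + tolerance:
        return "snowy"  # shifted toward the brighter end -> snow cover
    return "dry"

print(classify_road([52.0, 48.0, 50.0], dry_baseline=80.0))  # wet
```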

      It can also detect pedestrians, kids, and bicycles on the road. For this it compares their sizes, their behavior on the road, the part of the road on which they are traveling, their speed, and the like.

      I suggest you read this post – http://greybmusings.wordpress.com/2014/06/26/google-driverless-car-data-stored-in-car-memory/

      The car stores various kinds of data in its memory, including the sizes and typical road behavior of such objects, and it builds its control strategies on them.

  4. sremani says:

    The laser has a 360-degree horizontal view, so does the GCar yield to faster traffic from the rear? Any plans for an off-road version of the GCar? What kind of porting challenges did the GCar face in being built on an electric platform rather than ICE cars?
    I really love the idea of driverless cars from Minority Report and I, Robot. It's great that Google is working on this with a great amount of seriousness.

    • GreyB says:

      Google is working on an autonomous car computer and is using a group of 10 cars for testing. These are:
      * 6 Toyota Prius
      * 1 Audi TT
      * 3 Lexus RX450

      Thus, Google is not manufacturing cars but developing only the computer, and this computer can work with any kind of car. I think they are not going to face too many porting challenges.

  5. Adam Brown says:

    I can’t help but wonder if any of the sensing technology is at risk of interfering with other driverless vehicles, or with common ways police determine vehicle speed. Namely, will the laser array cause problems with laser speed guns, or the radar system confuse radar guns or set off radar detectors in other drivers' cars? Has anyone done any tests to make sure the sonar doesn’t bother bats?

    • GreyB says:

      No, they are not going to interfere with other driverless vehicles. However, they are going to communicate with each other and pass information to one another for better and safer driving on the road. This is called vehicle-to-vehicle communication.

      I think measures will be in place to protect such living species. Bats fly in the air, and the sonar has a horizontal field of view; it will not be scanning much of the area above the car's roof. :)

  6. Thank you for sharing this information, and continue the great posts. I like reading stuff like this.

  7. Very informative post on Google driverless cars. All along I thought that Google was manufacturing these cars, but after reading this and the comments I realize that Google is using Toyota, Audi & Lexus cars to come up with driverless models. Thanks for sharing.

  1. August 22, 2014

    […] of various devices installed in the vehicle like the condition of brakes, tires and the like. The obstacle detection sensors give the information related to the  road condition, traffic information and the like. After […]

  2. September 9, 2014

    […] software integrates all the data from these remote sensing systems (as much as 1GB per second) to build a map of the car’s position. Other cars are rendered as […]

  6. September 16, 2014

    […] cannot rely on it alone, but the information can be supplemented with radar. Related details can be found here […]
