In one of our recent posts, we discussed the computer, display, and map & navigation system that Google’s driverless car carries under its hood.
We are studying patents filed by Google to write these articles so that we can give you a holistic view of the technology behind driverless cars before anyone else.
Besides Google, there are more than 30 companies working on driverless vehicles.
Today’s post is about a patent revealing the obstacle detection unit of Google’s driverless car, which includes sonar, stereo cameras, a localization camera, a laser, and a radar detection unit.
These sensors help the robotic chauffeur (the driverless car’s computer) identify, track, and predict the movements of pedestrians, bicycles, and other vehicles on the roadway. Based on the data these sensors provide, the robotic chauffeur forms its driving strategy.
We will talk about the fields of view and ranges of these sensors later in today’s post. Let’s first find out where each sensor is located and how it helps in self-driving.
Position and Use of Various Obstacle Detection Units in Driverless Car
According to Chris Urmson (the tech lead of the driverless car project), the car’s laser range finder is the “heart of the system”.
The car carries a Velodyne 64-beam laser on its roof. It measures the distance between the vehicle and the object surfaces facing it by spinning on its axis and changing its pitch – in other words, by rotating.
Radar detection units, located on the front and back of the car as well as on either side of the front bumper, are used for adaptive cruise control. Sonar is used for the same purpose.
Several cameras are mounted on the car, separated from each other by a small distance. The parallax between the images captured by the different cameras is used to compute the distance to various objects.
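The patent does not spell out the math, but the parallax-to-distance calculation described above is standard stereo geometry: a nearby object shifts more between the two camera images than a distant one. Here is a minimal sketch in Python; the focal length, baseline, and disparity values are illustrative, not taken from the patent.

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Distance to an object from stereo parallax: Z = f * B / d,
    where f is the focal length in pixels, B the distance between the
    cameras, and d the pixel shift (disparity) between the two images."""
    if disparity_px <= 0:
        raise ValueError("zero disparity: object at infinity or unmatched")
    return focal_px * baseline_m / disparity_px

# A nearby object produces a large disparity, a distant one a small disparity:
near = depth_from_disparity(focal_px=700, baseline_m=0.3, disparity_px=70)  # about 3 m
far = depth_from_disparity(focal_px=700, baseline_m=0.3, disparity_px=7)    # about 30 m
```

Note how halving the disparity doubles the estimated distance, which is why depth precision from stereo cameras degrades quickly with range.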
The purpose of using units with different fields of view and ranges is that together they provide superb situational awareness and work in all types of weather.
For example, on a foggy day the camera may provide unreliable input, whereas the radar will still work efficiently in such weather conditions.

Range of Obstacle Detection Units of Driverless Car:
The sonar has a horizontal field of view of approximately 60 degrees for a maximum distance of approximately 6 meters.
The stereo cameras have an overlapping region with a horizontal field of view of approximately 50 degrees, a vertical field of view of approximately 10 degrees, and a maximum distance of approximately 30 meters.
The localization camera has a horizontal field of view of approximately 75 degrees, a vertical field of view of approximately 90 degrees and a maximum distance of approximately 10 meters.
The laser has a horizontal field of view of approximately 360 degrees, a vertical field of view of approximately 30 degrees, and a maximum distance of 100 meters.
The radar has a horizontal field of view of 60 degrees for the near beam, 30 degrees for the far beam, and a maximum distance of 200 meters.
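The figures above can be gathered into a small lookup table. The sketch below is my own simplification (it assumes every sensor points straight ahead, which the real units do not, and the field names are mine): it asks which sensors could, in principle, cover an object at a given distance and horizontal bearing, using the approximate numbers from the patent.

```python
# Approximate horizontal field of view and maximum range per sensor,
# as listed in the patent. Names and structure are my own.
SENSORS = {
    "sonar":        {"h_fov_deg": 60,  "max_range_m": 6},
    "stereo":       {"h_fov_deg": 50,  "max_range_m": 30},
    "localization": {"h_fov_deg": 75,  "max_range_m": 10},
    "laser":        {"h_fov_deg": 360, "max_range_m": 100},
    "radar_near":   {"h_fov_deg": 60,  "max_range_m": 200},
    "radar_far":    {"h_fov_deg": 30,  "max_range_m": 200},
}

def sensors_covering(range_m: float, bearing_deg: float) -> list[str]:
    """Sensors whose field of view and range include a point at the given
    distance and bearing off the car's forward axis. Simplification: all
    sensors are treated as forward-facing and centered on that axis."""
    return sorted(
        name for name, s in SENSORS.items()
        if range_m <= s["max_range_m"] and abs(bearing_deg) <= s["h_fov_deg"] / 2
    )

print(sensors_covering(5, 0))     # a close, straight-ahead object falls in every envelope
print(sensors_covering(150, 10))  # ['radar_far', 'radar_near'] - only the radar reaches 150 m
```

The overlap this table shows is the point of the design: most nearby objects are seen by several sensors at once, so the failure of any one of them in bad weather still leaves coverage.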
That covers the various detection units, their locations on the car, and their ranges. Now you may be wondering how the car uses the obstacle detection unit on the road.
To find that out, read its next part – Google Driverless car data storage and robust privacy.
That’s damn neat! It reminds me of the James Bond movie Tomorrow Never Dies, with the car that drives itself via voice recognition. I guess we are slowly getting there now 🙂
Hi Vinay,
Yes, you are right. What was fiction ten years back is going to be a reality in the next 5 to 10 years.
This car is awesome. We have just introduced our readers to the driverless car with these four posts. A string of posts on GDLC will be coming.
Stay tuned 😉 and keep reading….
Continue the great posts. I like reading stuff like this.
Thanks for the kind words, Christian. Here are some suggested reads for you – http://greybmusings.wordpress.com/tag/google/
I have a question that’s not been answered yet. How does the obstacle detection unit differentiate between leaves or paper (or snow or rain) blowing across the road and more substantial obstacles, such as small critters, children, a cart, etc.?
Hi Tanman,
Great question, mate. I really appreciate that you have given me a chance to answer such an intriguing question.
The autonomous driving computer can determine the road’s weather condition using the obstacle detection unit.
The computer collects laser data from the road while driving. It already has laser data for the road in dry conditions stored in memory. By comparing the newly collected data with that stored dry-weather data, it can tell whether it is raining or snowing.
Apart from this, it also compares the intensities of the laser data it collects. For example, if the intensity values shift toward the darker end, that signifies rain is falling, and because of it the road has become wet and darker. The computer also identifies rainfall by detecting water being kicked up by a vehicle’s tires, and by detecting puddles and wet tire tracks left by other vehicles.
Similarly, if the intensity values shift toward the brighter end, that signifies the road is covered with snow, which makes it brighter than in normal conditions.
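The intensity comparison I just described can be sketched roughly as follows. The patent talks about comparing collected laser intensities against stored dry-weather data; the threshold values and the simple mean-shift rule here are my own assumptions, for illustration only.

```python
def classify_road_surface(current_intensities, dry_baseline_mean,
                          wet_shift=-20.0, snow_shift=20.0):
    """Compare the mean of the newly collected laser intensities against the
    stored dry-road baseline. Darker than the baseline suggests a wet road;
    brighter suggests snow cover. Thresholds are illustrative assumptions."""
    mean_now = sum(current_intensities) / len(current_intensities)
    shift = mean_now - dry_baseline_mean
    if shift <= wet_shift:
        return "wet"   # intensities shifted toward the darker end
    if shift >= snow_shift:
        return "snow"  # intensities shifted toward the brighter end
    return "dry"

print(classify_road_surface([55, 60, 58], dry_baseline_mean=100))    # wet
print(classify_road_surface([130, 125, 128], dry_baseline_mean=100)) # snow
```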
It can also detect pedestrians, kids, and bicycles on the road. For this it compares their sizes, their behavior on the road, the part of the road on which they are traveling, their speed, and the like.
I will suggest you to read this post – http://greybmusings.wordpress.com/2014/06/26/google-driverless-car-data-stored-in-car-memory/
The car stores various kinds of data in its memory, including the sizes and typical road behavior of such objects, and based on them it forms its control strategies.
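As a toy illustration of that idea, imagine the stored data as a table of typical sizes and speeds per object class, matched against each new detection. The classes, ranges, and matching rule below are my own simplification, not the patent’s actual method.

```python
# Hypothetical stored profiles: (min_height_m, max_height_m, max_speed_mps).
OBJECT_PROFILES = {
    "pedestrian": (0.5, 2.2, 3.0),
    "bicycle":    (1.0, 2.0, 12.0),
    "vehicle":    (1.2, 4.5, 50.0),
}

def classify_object(height_m: float, speed_mps: float) -> list[str]:
    """Return every stored class consistent with the detection's size and speed."""
    return [
        name for name, (h_min, h_max, v_max) in OBJECT_PROFILES.items()
        if h_min <= height_m <= h_max and speed_mps <= v_max
    ]

print(classify_object(1.7, 1.5))  # ['pedestrian', 'bicycle', 'vehicle'] - ambiguous on size alone
print(classify_object(1.7, 8.0))  # ['bicycle', 'vehicle'] - too fast for a pedestrian
```

Size alone is often ambiguous, which is why the post mentions that the car also weighs behavior, speed, and which part of the road the object occupies.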
The laser has a 360-degree horizontal view, so does GCar yield to faster traffic from the rear? Any plans for an off-road version of GCar? What kind of porting challenges did GCar face in having to be built on an electric platform from ICE cars?
I really love the idea of driverless cars from Minority Report and I, Robot. It’s great that Google is working on this with a great amount of seriousness.
Google is working on an autonomous car computer. Google is using a group of 10 cars for the test. These are:
* 6 Toyota Prius
* 1 Audi TT
* 3 Lexus RX450
Thus, Google is not manufacturing a car but developing only a computer, and this computer can work with any kind of car. I don’t think they are going to face too many porting challenges.
I can’t help but wonder if any of the sensing technology is at risk of interfering with other driverless vehicles, or with common ways police determine vehicle speed. Namely, will the laser array cause problems with laser speed guns, or the radar system confuse radar guns or set off radar detectors in other drivers’ cars? Has anyone done any tests to make sure the sonar doesn’t bother bats?
No, they are not going to interfere with other driverless vehicles. On the contrary, they are going to communicate with each other, passing information back and forth for better and safer driving on the road. This is called vehicle-to-vehicle communication.
As for the bats, I think measures will be in place to protect such living species. Bats fly in the air, and the sonar has a horizontal field of view, so it will not be scanning a large area above the car’s roof. 🙂
Thank you for sharing this information, and continue the great posts. I like reading stuff like this…..
Very informative post on Google driverless cars. All along I thought that Google was manufacturing these cars, but after reading this and the comments I realize that Google is using Toyota, Audi, and Lexus cars to come up with driverless models. Thanks for sharing.
Thank you for your effort in developing this blog post.