LiDAR design challenges

Optical cameras and sensors for the safety of autonomous vehicles

April 17th, 2017 | Innovation

Photo Credit: NXP©

One of the major challenges facing the automotive industry is the autonomous vehicle. To carry out its mission of comfort and safety, the autonomous vehicle is equipped with the cameras and optical sensors it needs to position itself in dense traffic.


Many situations are impossible to test in real conditions because they are too risky for the safety of passengers or other road users. This is particularly the case when an autonomous car has to brake in an emergency in front of a pedestrian crossing: if the vehicle's algorithm is not yet fully developed, it will strike the pedestrian. To remedy these safety problems, manufacturers must be able to test their products virtually, which requires that the sensor simulation be physically correct.

For these reasons, OPTIS is developing an autonomous-vehicle simulator that makes the same reliable decisions as a real-world connected vehicle. This tool aims to eliminate costly and risky real-world tests.


The sensors of the autonomous vehicle are mainly video cameras and LiDAR. LiDAR is a remote measurement technique based on analyzing the properties of a beam of light returned to its emitter.
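The core of this measurement principle is time of flight: the distance to a target follows from the round-trip time of a laser pulse. A minimal sketch of that calculation (the function name and numbers are illustrative, not from OPTIS):

```python
# Time-of-flight distance: a laser pulse travels to the target and back,
# so the one-way distance is half the round trip at the speed of light.
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def distance_from_round_trip(round_trip_seconds: float) -> float:
    """Distance to the target, given the pulse's round-trip time."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse that returns after ~667 ns corresponds to a target ~100 m away.
print(distance_from_round_trip(667e-9))
```

Because light covers 100 m in about a third of a microsecond, the sensor's timing precision directly limits its range resolution.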


Courtesy of Volvo


OPTIS has re-created virtual cameras and LiDARs as digital optical-sensor components. After developing ultra-realistic 3D roads, available in its library for the VRX simulator, OPTIS is now adding optical sensors. The objective is for the autonomous vehicles of tomorrow to drive on these virtual roads thanks to the VRX simulator, which, in addition to simulating external lighting precisely, also simulates the sensors accurately.


LiDAR was developed for the automotive industry with the ambition of eliminating the main drawbacks of stereo-camera configurations: these cameras produce a depth-estimation error that grows with distance, are sensitive to glare, and depend on external light sources. LiDAR sensors handle these problems well today, although they are much more expensive than stereo-camera setups.


A LiDAR consists of a laser-beam emitter and a sensor that detects what the beam hits. The result is a point cloud (3D scan) of the environment. A LiDAR sensor can be modeled similarly to a camera sensor, the main differences being that it operates at a different wavelength (e.g., near infrared) and that, unlike a camera, it emits its own light in the form of laser pulses. LiDARs mainly generate two beam patterns: a flat horizontal fan and a 360-degree sweep.
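The point cloud mentioned above is built by converting each return of a 360-degree sweep (azimuth, elevation, measured range) into Cartesian coordinates. A small sketch of that conversion, with an assumed sensor mounting height (all names and values here are illustrative):

```python
import math

def sweep_to_points(returns, sensor_height=1.8):
    """Convert (azimuth_deg, elevation_deg, range_m) LiDAR returns
    from a 360-degree sweep into Cartesian (x, y, z) points."""
    points = []
    for az_deg, el_deg, r in returns:
        az, el = math.radians(az_deg), math.radians(el_deg)
        x = r * math.cos(el) * math.cos(az)  # forward
        y = r * math.cos(el) * math.sin(az)  # left
        z = sensor_height + r * math.sin(el)  # up, from the ground
        points.append((x, y, z))
    return points

# One return straight ahead (azimuth 0, elevation 0) at 10 m:
print(sweep_to_points([(0.0, 0.0, 10.0)]))  # [(10.0, 0.0, 1.8)]
```

Repeating this over every pulse in a sweep yields the 3D scan of the environment that downstream algorithms consume.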


The OPTIS LiDAR virtual components have been structured to follow the physical path of the laser beam:  

  1. Beam emission  
  2. Propagation in air  
  3. Reflections  
  4. Propagation in air 
  5. Measurement of the beam returning to the LiDAR.

Recreating this physical path takes into account the beam directions and divergence, the emitted power, the weather, the road surface and, of course, the sensitivity of the sensor.
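The five steps of the physical path can be sketched as a simplified link budget: emitted power is attenuated over the two-way path through air, scattered by the surface, and only a fraction reaches the receiver, which must exceed the sensor's sensitivity to count as a return. This is a deliberately simplified model (Lambertian target, Beer-Lambert attenuation); all parameter names and values are assumptions, not OPTIS's actual implementation:

```python
import math

def received_power(p_emit_w, range_m, reflectivity, aperture_area_m2,
                   atmospheric_atten_per_m=1e-4):
    """Simplified LiDAR link budget for a diffusely reflecting target."""
    # Two-way atmospheric transmission (Beer-Lambert law); a higher
    # attenuation coefficient models fog or rain.
    transmission = math.exp(-2.0 * atmospheric_atten_per_m * range_m)
    # Fraction of the diffusely scattered light captured by the
    # receiver aperture (falls off with the square of the range).
    geometry = aperture_area_m2 / (math.pi * range_m ** 2)
    return p_emit_w * reflectivity * transmission * geometry

def is_detectable(power_w, sensor_sensitivity_w=1e-9):
    """Step 5: does the returning power exceed the sensor's threshold?"""
    return power_w >= sensor_sensitivity_w

# A dark pedestrian (10% reflectivity) seen at 50 m vs. 100 m:
p50 = received_power(1.0, 50.0, 0.1, 1e-3)
p100 = received_power(1.0, 100.0, 0.1, 1e-3)
print(p50, p100, is_detectable(p100))
```

Even this toy model shows why surface material, weather and sensor sensitivity all matter: the returned power drops with the square of the range and exponentially with atmospheric attenuation.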

Photo source: USDOT

These developments allow users to obtain 'ground truth' data, from which a material classification (e.g., a fabric) and an object classification (e.g., a pedestrian) can be obtained, as well as their distance and the actual orientation of each of their surfaces, for each ray.


In SPEOS / VRX, the user can view the results, which cover several quantities such as distance, angle, reflected power, and the impacted material and object.



