Autonomous vehicles must not only detect the moving traffic around them, but also be able to determine what is not moving.
At first glance, camera-based perception might appear adequate for making these judgements. However, poor illumination, bad weather, and scenes with heavily occluded objects can all impair what cameras see. Redundant sensors such as radar must therefore be able to perform this function as well. Yet simply adding more radar sensors that rely on conventional analysis may not be enough.
This article looks at how AI can improve autonomous vehicle perception by addressing the limitations of conventional radar signal processing in differentiating between moving and stationary objects.
Radar signals are traditionally processed by reflecting them off of nearby objects and measuring the strength and density of the returned reflections. If a sufficiently dense cluster of strong reflections is detected, and that cluster also happens to be moving over time, it is presumably a car.
While this method can work well for inferring a moving vehicle, it often does not for a stationary one. In that case, the object produces a substantial cluster of reflections but does not move. As far as conventional radar processing is concerned, the object could be a railing, a broken-down car, a highway overpass, or something else entirely; the method frequently has no way to tell which.
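As a rough illustration of the conventional pipeline described above, the following sketch clusters radar returns by position and then uses Doppler velocity to decide whether a cluster is moving. The detection values, thresholds, and field layout are hypothetical, and the clustering step uses scikit-learn's DBSCAN purely for convenience.

```python
import numpy as np
from sklearn.cluster import DBSCAN  # density-based clustering of radar returns

# Hypothetical radar detections: x, y position (m), Doppler radial velocity (m/s),
# and return strength (dB). Values and field layout are illustrative.
detections = np.array([
    # x,    y,    doppler, strength
    [12.0,  3.1,   8.5,    22.0],
    [12.3,  3.0,   8.4,    19.5],
    [12.1,  2.8,   8.6,    21.0],
    [45.0, -1.2,   0.0,    25.0],   # strong but stationary cluster -> ambiguous
    [45.2, -1.0,   0.1,    24.0],
    [45.1, -1.4,  -0.1,    23.5],
])

MIN_STRENGTH_DB = 15.0     # ignore weak returns
DOPPLER_MOVING_MPS = 0.5   # |radial velocity| above this counts as moving

strong = detections[detections[:, 3] > MIN_STRENGTH_DB]
labels = DBSCAN(eps=1.0, min_samples=3).fit(strong[:, :2]).labels_

for cluster_id in set(labels) - {-1}:          # -1 marks noise points
    cluster = strong[labels == cluster_id]
    if np.abs(cluster[:, 2]).mean() > DOPPLER_MOVING_MPS:
        print(f"cluster {cluster_id}: dense and moving -> presumably a vehicle")
    else:
        # The gap: a guardrail, an overpass, and a parked car all look alike here.
        print(f"cluster {cluster_id}: dense but stationary -> ambiguous")
```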
Radar DNN introduction
One way to work around the limitations of this method is to use AI in the form of a deep neural network (DNN).
Before the DNN can be trained, however, the sparsity of radar data must be addressed. Because radar reflections can be quite sparse, it is nearly impossible for humans to visually recognise and categorise cars from radar data alone.
Lidar, on the other hand, uses laser pulses to build a 3D picture of the surrounding objects. To provide ground truth data for the DNN, bounding box labels from the corresponding lidar dataset were therefore propagated onto the radar data, as illustrated in Figure 1. This effectively transfers the visual car identification and labelling skills of a human labeller into the radar domain.
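The label propagation itself is not detailed here, but the idea can be sketched as follows: take each lidar-derived bounding box and assign its class label to the radar detections that fall inside it in a bird's-eye view. The box format, field names, and the `propagate_lidar_labels` helper are illustrative assumptions rather than the actual tooling.

```python
import numpy as np

def propagate_lidar_labels(radar_xy, lidar_boxes):
    """Assign each radar point the label of the lidar bounding box it falls inside.

    radar_xy:    (N, 2) radar detections in the ego frame (meters).
    lidar_boxes: list of dicts with 'center' (x, y), 'size' (length, width),
                 'yaw' (rad), and 'label' -- a simplified, hypothetical box format.
    Returns an array of labels, with None for unmatched points.
    """
    labels = np.full(len(radar_xy), None, dtype=object)
    for box in lidar_boxes:
        # Transform radar points into the box frame so the test becomes axis-aligned.
        offset = radar_xy - np.asarray(box["center"])
        c, s = np.cos(box["yaw"]), np.sin(box["yaw"])
        local_x =  offset[:, 0] * c + offset[:, 1] * s
        local_y = -offset[:, 0] * s + offset[:, 1] * c
        half_l, half_w = box["size"][0] / 2, box["size"][1] / 2
        inside = (np.abs(local_x) <= half_l) & (np.abs(local_y) <= half_w)
        labels[inside] = box["label"]
    return labels

# Example: one lidar-labelled car propagates its label to the radar returns inside it.
radar_xy = np.array([[10.2, 1.9], [10.5, 2.1], [30.0, -4.0]])
boxes = [{"center": (10.3, 2.0), "size": (4.5, 1.9), "yaw": 0.1, "label": "car"}]
print(propagate_lidar_labels(radar_xy, boxes))  # ['car' 'car' None]
```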
Through this process, the radar DNN also learns to recognise cars' 3D shape, dimensions, and orientation, something traditional approaches struggle to do.
With this extra information, the radar DNN can distinguish between different types of obstacles, even stationary ones, increase the rate of true positive detections, and reduce the likelihood of false positives.
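The actual network architecture is not described in detail here. As a minimal sketch of the idea, assuming PyTorch and a bird's-eye-view (BEV) radar grid as input, a small convolutional network can output a per-cell vehicle confidence alongside regressed 3D box parameters (position offset, dimensions, and heading); `RadarBEVNet` and all layer sizes are hypothetical.

```python
import torch
import torch.nn as nn

class RadarBEVNet(nn.Module):
    """Toy convolutional network over a bird's-eye-view radar grid.

    Input:  (batch, channels, H, W) grid of accumulated radar returns, e.g. one
            channel for return strength and one for Doppler velocity.
    Output: per-cell vehicle confidence plus regressed 3D box parameters
            (x/y offset, length, width, height, sin/cos of heading).
    """
    def __init__(self, in_channels: int = 2):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.cls_head = nn.Conv2d(64, 1, 1)   # vehicle vs. background confidence
        self.box_head = nn.Conv2d(64, 7, 1)   # dx, dy, length, width, height, sin(yaw), cos(yaw)

    def forward(self, bev):
        features = self.backbone(bev)
        return torch.sigmoid(self.cls_head(features)), self.box_head(features)

# One 80 m x 80 m BEV grid at 0.5 m resolution: 160 x 160 cells, 2 channels.
confidence, boxes = RadarBEVNet()(torch.zeros(1, 2, 160, 160))
print(confidence.shape, boxes.shape)  # (1, 1, 160, 160) and (1, 7, 160, 160)
```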
The radar DNN’s improved 3D perception gives the AV's prediction, planning, and control algorithms the confidence to drive more safely, even in difficult situations. Radar makes it possible to detect stationary vehicles, as well as vehicles under highway overpasses and in other traditionally challenging scenarios, with far fewer failures.
The radar DNN output integrates seamlessly with conventional radar processing. Together, these two components form the radar obstacle perception software stack.
This stack is designed to provide either radar-only input to planning and control or fusion with camera- or lidar-based obstacle perception software, in addition to offering full redundancy for camera-based obstacle perception.
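How the two paths are actually combined is not specified, but the overall shape of the stack can be sketched as follows: conventional processing supplies confident moving targets, the DNN supplies classified stationary obstacles, and camera- or lidar-based detections can optionally be fused in. The `Obstacle` record, thresholds, and nearest-neighbour gating below are simplifying assumptions, not the production design.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Obstacle:
    x: float                 # position in the ego frame (m)
    y: float
    moving: bool             # from Doppler velocity or track state
    confidence: float        # detection confidence in [0, 1]
    source: str              # "conventional", "dnn", or "camera"

def radar_obstacle_stack(conventional: List[Obstacle],
                         dnn: List[Obstacle],
                         camera: Optional[List[Obstacle]] = None) -> List[Obstacle]:
    """Sketch of the combined stack: conventional processing keeps its strength on
    moving targets, the DNN contributes stationary obstacles it has classified, and
    camera-based obstacles can optionally be fused in. The association step is a
    naive distance gate purely for illustration."""
    fused = [o for o in conventional if o.moving]          # trust Doppler for movers
    fused += [o for o in dnn if o.confidence > 0.5]        # DNN resolves stationary cases
    if camera is not None:                                  # optional cross-sensor fusion
        for cam_obs in camera:
            near = any(abs(cam_obs.x - o.x) < 2.0 and abs(cam_obs.y - o.y) < 2.0
                       for o in fused)
            if not near:
                fused.append(cam_obs)
    return fused
```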
These extensive radar perception capabilities allow autonomous vehicles to assess their surroundings with confidence.