US-based self-driving technology company Waymo is using artificial intelligence to generate camera images for simulation, drawing on sensor data collected by its autonomous vehicles. Waymo uses simulation environments to train and test its system before deploying it on the road.
There are several ways to build such simulations, such as simulating mid-level object representations, but these can be complex. The AI-based method, termed SurfelGAN, is a simpler, data-driven approach to simulating sensor data. SurfelGAN uses a texture-enhanced surfel map representation: an easy-to-construct scene representation that preserves sensor information while retaining reasonable computational efficiency.
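To make the idea of a texture-enhanced surfel map concrete, here is a minimal sketch of what such a representation might look like: each surfel is a small textured disc reconstructed from lidar returns. All field and function names here are illustrative assumptions for exposition, not Waymo's actual data structures.

```python
from dataclasses import dataclass

@dataclass
class Surfel:
    """One textured surface element (surfel) reconstructed from lidar.

    Field names are hypothetical; the real representation is described
    in the SurfelGAN work, not reproduced here.
    """
    center: tuple   # (x, y, z) position in world coordinates
    normal: tuple   # estimated unit surface normal
    radius: float   # disc radius covering the local neighbourhood
    texture: list   # small grid of colour samples taken from camera images

def build_surfel_map(points, colours, radius=0.1):
    """Rough sketch: one surfel per lidar point, textured with the colour
    of the camera pixel it projects to (here supplied directly)."""
    surfels = []
    for point, colour in zip(points, colours):
        surfels.append(Surfel(center=point,
                              normal=(0.0, 0.0, 1.0),  # placeholder normal
                              radius=radius,
                              texture=[[colour]]))
    return surfels
```

A real pipeline would estimate normals from neighbouring points and aggregate colour samples from many camera frames per surfel; the point of the structure is that it is cheap to build yet keeps both geometry and appearance information.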
SurfelGAN takes input from the vehicle's real-world lidar sensors and cameras. From this sensor data, the AI generates and maintains rich information about the 3D geometry, semantics, and appearance of all objects in the scene. The system then renders the simulated scene from different distances and viewing angles.
“We’ve developed a new approach that allows us to generate realistic camera images for simulation directly using sensor data collected by a self-driving vehicle,” a Waymo spokesperson reportedly said.
To handle moving objects such as vehicles, SurfelGAN draws on the Waymo Open Dataset. Data from lidar scans are collected and used in the simulation. Using this data, Waymo can create reconstructions of cars and pedestrians that can be placed at any location in a scene.
A module in SurfelGAN converts surfel image renderings into realistic images. These synthetic examples, along with real examples from a training dataset, are fed to discriminators, which attempt to tell the two apart. Both the generators and the discriminators improve until the discriminators can no longer distinguish real examples from synthesised ones.
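The adversarial loop described above can be sketched in miniature. The toy example below (an assumption-laden illustration, not SurfelGAN itself) trains a one-parameter-family generator against a logistic discriminator on scalar data: the discriminator is pushed to score real samples high and fakes low, while the generator is pushed to fool it, exactly the alternating dynamic the paragraph describes.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def discriminator(x, w, c):
    # Probability that sample x is real.
    return sigmoid(w * x + c)

def generator(z, a, b):
    # Map noise z to a synthetic sample.
    return a * z + b

def train_gan(real_mean=3.0, steps=2000, lr=0.05, seed=0):
    """Toy adversarial training: real data ~ N(real_mean, 0.5),
    generator starts producing samples near 0 and learns to drift
    toward the real distribution. Parameters are illustrative."""
    rng = random.Random(seed)
    a, b = 1.0, 0.0   # generator parameters
    w, c = 0.1, 0.0   # discriminator parameters
    for _ in range(steps):
        # Discriminator step: push D(real) -> 1 and D(fake) -> 0.
        x_real = rng.gauss(real_mean, 0.5)
        x_fake = generator(rng.gauss(0.0, 1.0), a, b)
        d_real = discriminator(x_real, w, c)
        d_fake = discriminator(x_fake, w, c)
        # Gradients of -log D(real) - log(1 - D(fake)) w.r.t. w, c.
        w -= lr * (-(1 - d_real) * x_real + d_fake * x_fake)
        c -= lr * (-(1 - d_real) + d_fake)
        # Generator step: push D(fake) -> 1.
        z = rng.gauss(0.0, 1.0)
        x_fake = generator(z, a, b)
        d_fake = discriminator(x_fake, w, c)
        # Gradient of -log D(fake), chained through x_fake = a*z + b.
        g = -(1 - d_fake) * w
        a -= lr * g * z
        b -= lr * g
    return a, b, w, c
```

As training proceeds, the generator's offset `b` drifts toward the real mean, because fooling the discriminator requires producing samples it scores as real. SurfelGAN applies this same adversarial principle to images, with the surfel renderings conditioning the generator.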