Getting Autonomous Vehicles Ready for an Unpredictable Life with ANYVERSE
Jun 12, 2020 | By Jane Maynard, Digital Marketing Manager

Interview with Ángel Tena, CTO of ANYVERSE, discussing how synthetic data solutions support ADAS and AV system development

ANYVERSE synthetic data solutions support ADAS and AV development: ANYVERSE Variations Example

Autonomous vehicle companies use simulation and machine learning to train, test and validate their self-driving systems for roadways. Even with massive data capture efforts, real-world data is not enough to cover all possible driving scenarios. Synthetic data fills the gap by modeling roadway environments, complete with people, traffic lights, empty parking spaces, and more.

ANYVERSE is a synthetic data provider that supports a wide range of applications, including autonomous vehicle (AV) and advanced driver assistance system (ADAS) development. The parent company behind ANYVERSE is Next Limit, a Spanish software company that is an Oscar-winning pioneer in 3D simulation and rendering.

ANYVERSE can model any driving situation using geographically stylized urban, suburban, rural and highway environments. Their solutions cover the full range of real-life conditions, including lighting, weather, varying physical conditions, color ranges, and vehicle and pedestrian behaviors.

We interviewed Ángel Tena, CTO of ANYVERSE, to learn how ANYVERSE helps autonomous system developers mirror the wide-ranging scenarios of reality in simulation and machine learning.

Ángel Tena, CTO of ANYVERSE

Q: How can ANYVERSE synthetic data accelerate the development of autonomous systems?

Tena: It is very well known in the machine learning world that to train a model properly, in addition to the quantity of data, the quality and diversity of the data are also important. We have a clear example in the automotive industry. Simply having thousands of driven miles where nothing special happens is worthless from the machine learning model's point of view. In this case the model will start overfitting very quickly. At ANYVERSE, diversity and quality are the two most important features, allowing us to tackle this problem. ANYVERSE includes a high-fidelity spectral rendering system that allows us to create images that are very close to real-world images. It also includes a procedural scene system that allows us to create a large number of different scenes automatically.

Q: Traditional approaches to simulated data generally struggle to match the specifications of real sensors, such as lidar. How does ANYVERSE overcome that challenge?

Tena: In the case of the image sensor, we simulate all the components that are part of the acquisition process. In our rendering system, light is modeled using wavelengths, just as in the real world. This allows us to simulate different modules inside the sensor in a very precise way. For example, QE (quantum efficiency) curves, uniform and non-uniform noise, and more.
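The acquisition stages Tena mentions can be sketched in a few lines. The following is a simplified, hypothetical model, not ANYVERSE's actual pipeline: per-wavelength photon counts are weighted by a QE curve, then Poisson shot noise and Gaussian read noise are added. All parameter values are illustrative.

```python
import numpy as np

def simulate_sensor(spectral_photons, qe_curve, exposure_s=0.01,
                    read_noise_e=2.0, full_well_e=10000, rng=None):
    """Toy image-sensor acquisition model (illustrative, not ANYVERSE's).

    spectral_photons: (H, W, N) photon flux per wavelength bin (photons/s).
    qe_curve: (N,) quantum efficiency (electrons per photon) per bin.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Mean electrons collected: photon flux weighted by QE, summed over bins.
    mean_electrons = exposure_s * np.tensordot(spectral_photons, qe_curve,
                                               axes=([-1], [0]))
    # Shot noise: photon arrival is Poisson-distributed.
    electrons = rng.poisson(mean_electrons).astype(float)
    # Read noise: Gaussian, uniform across the array; non-uniformities such
    # as fixed-pattern noise could be added per pixel in the same way.
    electrons += rng.normal(0.0, read_noise_e, size=electrons.shape)
    # Clip to full-well capacity and normalize to [0, 1].
    return np.clip(electrons, 0, full_well_e) / full_well_e
```

Modeling light per wavelength, as described in the interview, is what makes a spectral QE curve like this meaningful; an RGB-only renderer could not apply it precisely.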

In the case of lidar, we take a geometric approach and leverage our physically-based definition of materials to mimic the returns from a real lidar. For instance, the roughness of the material determines how strong the return from the lidar sensor is. There are some limitations in our model that we are now removing, for instance, the ability to simulate beam divergence. We are very close to simulating the characteristics of any lidar within our system.

Q: Data customization plays an important role in training AV and ADAS solutions for the unpredictable world of our roadways. How does ANYVERSE put control over customization into the hands of the developer?

Tena: There are many parameters that can be adjusted to provide dataset customization. Usually we start off with the type of scene: urban, suburban, highway, parking lot or other scenarios. Then we can continue with the position and the type of dynamic objects that will populate the base scene: vehicles, pedestrians, props, etc. Environmental conditions, like weather, time of day and sky cover, can also be defined.

Additionally, we have developed a procedural scene-generation engine that is able to automatically recreate any plausible driving scenario, based on standard sources like OpenStreetMap and OpenDRIVE. This module can dramatically expand the number of different scenarios and dynamic conditions.

ANYVERSE synthetic data solutions support AV and ADAS development

Q: Tell us about your Velodyne lidar sensor simulation model and how it can help autonomous systems developers.

Tena: Based on our ray tracing technology, we cast a ray from the lidar sensor and compute the geometric intersection with the scene. We evaluate the material properties and the surface normal at the intersection point to filter valid points (valid returns). For every intersection, we provide the 3D position of the point (either in the device or world coordinate system), the distance from the lidar sensor, the class of the object, and the material of that object. We can implement any sweeping pattern simply by providing a few parameters, for instance points per second, vertical/horizontal field of view, vertical/horizontal angular resolution, and rotation rate. We can also combine lidar and camera optics to simulate advanced sensors with both capabilities.
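The sweep-from-parameters idea can be illustrated with a minimal geometric sketch, assuming a spinning lidar and the simplest possible scene (a flat ground plane with the sensor at 1.5 m). This is not ANYVERSE's implementation; a real simulator would intersect rays with full scene geometry and attach class and material labels to each return, as described above.

```python
import numpy as np

def sweep_directions(h_fov=360.0, v_fov=(-15.0, 15.0), h_res=1.0, v_res=2.0):
    """Unit ray directions for one rotation, from FOV and angular resolution."""
    az = np.deg2rad(np.arange(0.0, h_fov, h_res))
    el = np.deg2rad(np.arange(v_fov[0], v_fov[1] + 1e-9, v_res))
    az, el = np.meshgrid(az, el)
    return np.stack([np.cos(el) * np.cos(az),
                     np.cos(el) * np.sin(az),
                     np.sin(el)], axis=-1).reshape(-1, 3)

def cast_ground_plane(dirs, sensor_z=1.5, max_range=100.0):
    """Intersect each ray with the plane z = 0; keep in-range returns."""
    origin = np.array([0.0, 0.0, sensor_z])
    points, distances = [], []
    for d in dirs:
        if d[2] >= 0:             # ray never reaches the ground
            continue
        t = -sensor_z / d[2]      # distance along the ray to z = 0
        if t <= max_range:
            points.append(origin + t * d)
            distances.append(t)
    return np.array(points), np.array(distances)
```

With the default parameters (a 360-degree sweep, 30-degree vertical field of view), only the downward-pointing channels produce ground returns; the points-per-second and rotation-rate parameters Tena mentions would additionally set the timing of each firing.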

Q: ANYVERSE can be used in more than AV and ADAS applications. For instance, how can it help unmanned aerial vehicle (UAV) developers?

Tena: Yes, ANYVERSE can definitely be used for many other applications in addition to AV and ADAS. In general, we can always help when a perception system is used for data acquisition. UAVs are broadly used nowadays for visual inspection and defect detection in different areas: for instance, detecting distress on airport runways, defects in power line insulators, or issues with any other infrastructure that requires a lot of inspection cycles.

About Velodyne Lidar

Velodyne provides smart, powerful lidar solutions for autonomy and driver assistance. Headquartered in San Jose, Calif., Velodyne is known worldwide for its portfolio of breakthrough lidar sensor technologies. Velodyne’s Founder, David Hall, invented real-time surround view lidar systems in 2005 as part of Velodyne Acoustics. Mr. Hall’s invention revolutionized perception and autonomy for automotive, new mobility, mapping, robotics, and security. Velodyne’s high-performance product line includes a broad range of sensing solutions, including the cost-effective Puck™, the versatile Ultra Puck™, the autonomy-advancing Alpha Prime™, the ADAS-optimized Velarray™ and the groundbreaking software for driver assistance, Vella™.

© Velodyne Lidar, Inc. 2020 All Rights Reserved