Deepen Delivers Annotation Excellence for Autonomous Development
Mar 18, 2020 | By Velodyne Lidar
Lidar point cloud used for Deepen annotation process, using Velodyne lidar.

Interview with Andrew Lee, Vice President of Business Development at Deepen

One of the most foundational building blocks of any modern autonomous system is the perception module, which helps a machine interpret information from the real world through sensors such as cameras, lidar and radar. In recent years, advances in deep learning have spawned an entirely new generation of perception algorithms, trained on the huge amounts of data captured by these sensors.

Deepen plays directly into this technology trend, providing AI-powered software tools and services that help process sensor data efficiently and accurately. Deepen helps companies curate, annotate and validate real-world, multi-sensor data for perception development.  

To learn more about Deepen and how its solutions help autonomous development, we connected with Andrew Lee, Vice President of Business Development at Deepen.

V: Can you please explain to our readers what image annotation is and discuss its role in developing autonomous vehicles and robots?

AL: Most modern perception systems using deep learning techniques require vast amounts of labeled images from various sensors such as cameras, lidars and radars. These images help train their algorithms. Raw images from these sensors are labeled, or assigned semantic meaning, in order to teach machines to understand what key objects are, such as vehicles, pedestrians, traffic signs and more. With that understanding, machines can then make decisions on how to navigate around their surroundings.
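The labeling step described above can be pictured as attaching a structured record to each object in a raw frame. Below is a minimal sketch in Python of what one labeled lidar frame might look like; the class and field names are illustrative assumptions, not Deepen's actual annotation schema.

```python
from dataclasses import dataclass

# Hypothetical annotation record for one labeled object in a lidar frame.
# Field names are illustrative, not Deepen's actual schema.
@dataclass
class BoxAnnotation:
    label: str                          # semantic class, e.g. "vehicle", "pedestrian"
    center: tuple                       # (x, y, z) box center in meters, sensor frame
    size: tuple                         # (length, width, height) in meters
    yaw: float                          # heading angle in radians

# A single annotated frame: raw point cloud plus its human-assigned labels.
frame_annotations = [
    BoxAnnotation("vehicle", (12.4, -3.1, 0.8), (4.5, 1.9, 1.6), 0.05),
    BoxAnnotation("pedestrian", (6.0, 2.2, 0.9), (0.6, 0.6, 1.7), 1.57),
]

# A training pipeline would pair each raw frame with records like these.
labels = [a.label for a in frame_annotations]
print(labels)  # → ['vehicle', 'pedestrian']
```

Thousands of frames labeled this way become the ground truth that teaches a perception model to recognize those same object classes on its own.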

V: How pivotal is developing perception models that involve multiple inputs, including lidar, for advanced driver assistance systems (ADAS) and autonomy? 

AL: Depth sensors, such as lidars, provide another dimension of information that is especially useful for many perception models out there. This information is not natively available from cameras today. In our view, most companies developing advanced autonomous systems adopt multimodal sensor suites that include lidars. This is especially true for autonomous vehicle (AV) makers.

Lidar point cloud used for Deepen annotation process.

V: How can the combination of Velodyne’s rich computer perception data and Deepen annotation AI software improve ADAS and autonomous driving development?

AL: Most of our AV clients use Velodyne lidars to collect data for perception model training. We offer a buffet of tools, services and IP to help them with that development, customized to their pipelines, which are often non-standardized.

Our tools can ingest and visualize Velodyne lidar point clouds together with data from other sensors such as cameras and radars with spatial and temporal accuracy. Users can then assign semantic labels to the fused data at scale, the output of which can be used for model training or system output validation. 
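Fusing lidar with camera data with spatial accuracy typically means projecting each 3D point into the image plane using the sensors' calibration. Here is a minimal sketch of that projection step, assuming a standard pinhole camera model with made-up extrinsic and intrinsic matrices; it is not Deepen's implementation.

```python
import numpy as np

def project_points(points_lidar, T_cam_lidar, K):
    """Project (N, 3) lidar-frame points into pixel coordinates.

    T_cam_lidar: 4x4 extrinsic transform (lidar frame -> camera frame).
    K: 3x3 camera intrinsic matrix.
    """
    n = points_lidar.shape[0]
    homog = np.hstack([points_lidar, np.ones((n, 1))])   # (N, 4) homogeneous
    cam = (T_cam_lidar @ homog.T).T[:, :3]               # (N, 3) in camera frame
    in_front = cam[:, 2] > 0                             # keep points ahead of the camera
    cam = cam[in_front]
    px = (K @ cam.T).T                                   # pinhole projection
    px = px[:, :2] / px[:, 2:3]                          # normalize by depth
    return px, in_front

# Toy calibration: identity extrinsics and a simple 1280x720 intrinsic matrix.
T = np.eye(4)
K = np.array([[700.0,   0.0, 640.0],
              [  0.0, 700.0, 360.0],
              [  0.0,   0.0,   1.0]])
pts = np.array([[0.0, 0.0, 10.0],   # point straight ahead
                [1.0, 0.5,  5.0]])  # point offset to the side
uv, mask = project_points(pts, T, K)
print(uv)  # → [[640. 360.] [780. 430.]]
```

With the point cloud aligned to the image like this, a label drawn on one modality can be transferred to the other, which is what makes fused, multi-sensor annotation possible at scale.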

We can be considered as a perception development partner and have at times helped our customers with other parts of the perception stack as well. This of course does not include building lidar sensors 😊.

V: What’s coming next for Deepen?

AL: We continue to see lidars playing a key role in sensor stacks for most AV makers. We believe penetration will continue, especially as lidars become more economical and more models are introduced to the market by companies such as Velodyne. We have been handling lidar data for years and have developed a lot of techniques to address its properties. We believe enough technology has been built collectively to push this sensor modality further into AVs and other applications – smart cities, agriculture, delivery, robotics and more. Many of our customers use Velodyne lidars already, a trend we expect to continue for the foreseeable future.

Velodyne Lidar Sensors: Alpha Prime™, Puck 32MR, and Puck™

About Velodyne Lidar

Velodyne Lidar provides smart, powerful lidar solutions for autonomous vehicles, driver assistance, delivery solutions, robotics, smart cities, security, and more. As the leading lidar provider, Velodyne is known worldwide for its portfolio of breakthrough lidar sensor technologies. Velodyne’s high-performance product line includes a broad range of sensing solutions, including the cost-effective Puck™, the versatile Ultra Puck™, the autonomy-advancing Alpha Prime™, the ADAS-optimized Velarray™, and the groundbreaking software for driver assistance, Vella™. As a market leader, Velodyne has served more than 300 customers including nearly all of the leading global automotive original equipment manufacturers.

© Velodyne Lidar, Inc. 2020 All Rights Reserved