Perception Software for Autonomous Vehicles

A Milestone in Automotive-Grade Intelligence

Innoviz was a pioneer in the LiDAR industry, being among the first to develop a comprehensive, proprietary perception software suite to accompany its hardware. Designed to extract and interpret complex 3D point cloud data, this software provided the foundation for a deep understanding of driving scenes and played a critical role in the first generation of LiDAR-enabled autonomous vehicles.


3D LiDAR Point Cloud Perception Software Solution

Enhancing Safety by Deploying Both LiDAR Software & Hardware

When deployed alongside other sensors’ perception algorithms, Innoviz’s software complemented them to enhance performance and safety. The perception software was highly resource-efficient and required little computing power, making it compatible with a wide range of automotive-grade computing platforms. This efficiency was a key differentiator against hardware-only LiDAR providers.

Key Features

Object Detection, Classification & Tracking

Pixel Collision Classification

Continuous Calibration

Blockage and Range Estimation

ISO 26262 Compliant

Lane Marking

Object Detection, Classification, and Tracking

Detected objects with high precision using two independent detectors: one identified objects (e.g., cars, trucks, motorcycles, pedestrians, bicycles) by their shape and other visual attributes, the other by their movement. The software delivered both high-quality object detection and more advanced object tracking, i.e., the ability to recognize the same object across consecutive frames.
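As an illustration of the tracking step described above, the sketch below associates detected object centroids between two consecutive frames by greedy nearest-neighbour matching. This is a minimal stand-in for frame-to-frame tracking in general; Innoviz's actual algorithm is proprietary, and the gating distance and data layout here are assumptions.

```python
import math

def associate_tracks(prev_objects, curr_objects, max_dist=2.0):
    """Greedy nearest-neighbour association of object centroids between
    consecutive frames. prev_objects maps track IDs to (x, y) centroids
    from the previous frame; curr_objects is a list of (x, y) centroids
    detected in the current frame. Returns {track_id: detection_index}.
    (Illustrative sketch only, not Innoviz's actual tracker.)"""
    matches = {}
    used = set()
    for track_id, (px, py) in prev_objects.items():
        best, best_d = None, max_dist  # gate: ignore matches beyond max_dist
        for i, (cx, cy) in enumerate(curr_objects):
            if i in used:
                continue
            d = math.hypot(cx - px, cy - py)
            if d < best_d:
                best, best_d = i, d
        if best is not None:
            matches[track_id] = best
            used.add(best)
    return matches

# Two tracked objects reappear slightly displaced in the next frame:
prev = {7: (10.0, 2.0), 8: (25.0, -1.0)}
curr = [(25.5, -1.2), (10.8, 2.1)]
print(associate_tracks(prev, curr))  # {7: 1, 8: 0}
```

A production tracker would typically add motion prediction and handle track creation and deletion, but the core idea of carrying an identity across frames is the same.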

Pixel Collision Classification

Accurately identified the drivable area for autonomous and semi-autonomous vehicles by classifying each object in the 3D environment as collision-relevant or non-collision-relevant with pixel-level accuracy. Collision-relevant subclasses were broken down into “objects” (e.g., car, truck, bicycle) and “obstacles”, the latter consisting of anything not classified as an object (e.g., tire, debris).
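The object/obstacle/non-relevant split described above can be sketched as a toy per-point classifier. The height thresholds and decision logic here are illustrative assumptions, not Innoviz's actual rules; the point is only to show the three-way labelling of individual returns.

```python
def classify_point(z, on_detected_object, min_height=0.15, max_height=3.5):
    """Toy per-point collision-relevance classifier.

    z                  -- point height above the road surface, in metres
    on_detected_object -- True if the point belongs to a detected object
                          (car, truck, bicycle, ...)

    Points on a detected object are labelled 'object'; other returns that
    protrude above the drivable surface but below the vehicle clearance
    ceiling are 'obstacle' (tire, debris, ...); everything else is
    'not_relevant'. Thresholds are hypothetical placeholders.
    """
    if on_detected_object:
        return "object"
    if min_height <= z <= max_height:
        return "obstacle"
    return "not_relevant"

# A point on a tracked car, loose debris, and the road surface itself:
print(classify_point(1.0, True))    # object
print(classify_point(0.3, False))   # obstacle
print(classify_point(0.0, False))   # not_relevant
```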

Continuous Sensor & LiDAR Calibration

Calibrated the transformation between the sensor’s coordinate system and the vehicle’s coordinate system after the sensor was mounted on the assembly line. Also corrected pixels’ spatial displacement caused by driving over bumps and potholes, and by acceleration and deceleration.
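At its core, applying such an extrinsic calibration is a rigid-body transform from the sensor frame to the vehicle frame. The sketch below uses a generic ZYX-Euler formulation; the angles and offsets would come from the calibration estimate rather than being hard-coded, and nothing here reflects Innoviz's internal representation.

```python
import math

def sensor_to_vehicle(point, yaw, pitch, roll, translation):
    """Map a 3D point from the sensor frame to the vehicle frame using a
    rigid-body transform: rotation (ZYX Euler angles, radians) followed by
    translation (metres). Generic illustration of extrinsic calibration."""
    x, y, z = point
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    # Rotation matrix R = Rz(yaw) @ Ry(pitch) @ Rx(roll)
    r = [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]
    tx, ty, tz = translation
    return (
        r[0][0] * x + r[0][1] * y + r[0][2] * z + tx,
        r[1][0] * x + r[1][1] * y + r[1][2] * z + ty,
        r[2][0] * x + r[2][1] * y + r[2][2] * z + tz,
    )

# With zero rotation, the transform is a pure offset of the mounting position:
print(sensor_to_vehicle((0.0, 0.0, 0.0), 0.0, 0.0, 0.0, (1.0, 2.0, 3.0)))
```

Continuous calibration would re-estimate these six parameters while driving, so that transient tilt from bumps, potholes, or braking does not displace the point cloud relative to the vehicle.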
