
Robust Visual-Inertial Sensor Fusion For Navigation, Localization, Mapping, and 3D Reconstruction

UC Case No. 2015-346


SUMMARY:

UCLA researchers in the Computer Science Department have invented a novel model for visual-inertial navigation systems (VINS) with applications in navigation, localization, mapping, and 3D reconstruction.


BACKGROUND:

Vision-augmented navigation, or VINS, is central to augmented and virtual reality, robotics, autonomous vehicles, and navigation on mobile phones. The future growth of these applications depends on reliable navigation in dynamic environments, so improving these systems is important. Current methods rely on low-level processing of visual data for 3D motion estimation. However, this processing is highly unreliable: typically 60–90% of the sparse features selected and tracked across frames are inconsistent with a single rigid motion, owing to illumination effects, occlusions, and independently moving objects. These effects are global to the scene, while low-level processing is local to the image, so significant improvements cannot realistically be expected from the vision front-end alone. Instead, it is critical for vision-based algorithms to leverage other sensory modalities, such as inertial sensing.


INNOVATION:

Researchers led by Professor Soatto have developed a novel sensor fusion system that integrates inertial and vision measurements to estimate the 3D position and orientation of a sensor platform, along with a point-cloud model of the surrounding 3D world. The invention achieves better robustness and performance than other leading VINS schemes, such as Google Tango, at the same computational footprint. The technology addresses the problem of inferring the ego-motion of a sensor platform from visual and inertial measurements, with particular focus on handling outliers in the visual data.
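Outlier handling of this kind is commonly implemented by gating each visual measurement against the motion predicted from inertial data. The sketch below is illustrative only and is not the inventors' algorithm: it assumes an EKF-style setup in which each tracked feature's innovation (its tracked image position minus its inertially predicted position) is tested with a 2-DoF chi-square (Mahalanobis) gate. The `mahalanobis_gate` helper, feature counts, and noise levels are all hypothetical.

```python
import numpy as np

def mahalanobis_gate(innovations, S, chi2_threshold=5.991):
    """Flag measurements whose innovation is statistically consistent
    with the inertially predicted motion (2-DoF chi-square gate;
    5.991 is the 95% threshold)."""
    S_inv = np.linalg.inv(S)  # innovation covariance, assumed shared
    inliers = []
    for r in innovations:
        d2 = r @ S_inv @ r  # squared Mahalanobis distance
        inliers.append(d2 <= chi2_threshold)
    return np.array(inliers)

# Toy example: inertially predicted feature positions vs. tracked ones.
rng = np.random.default_rng(0)
predicted = rng.uniform(0, 100, size=(50, 2))
tracked = predicted + rng.normal(0, 1.0, size=(50, 2))  # consistent features
tracked[:10] += rng.uniform(20, 40, size=(10, 2))       # moving-object outliers

S = np.eye(2)  # assumed 1-pixel measurement noise per axis
inliers = mahalanobis_gate(tracked - predicted, S)
```

In this toy setup the ten features displaced by an independently moving object fail the gate, while nearly all consistent features pass, so the downstream pose update sees only measurements that agree with a single rigid motion.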


POTENTIAL APPLICATIONS:

- Augmented and virtual reality

- Robotics

- Autonomous vehicles and flying robots

- Indoor localization in GPS-denied areas

- Ego-motion estimation


ADVANTAGES:

- Fuses inertial and vision measurements in a single estimator

- Improved robustness and performance over leading VINS schemes at the same computational footprint

- Explicit handling of outliers in the visual data


RELATED MATERIALS:

Patent Information:
For More Information:
Joel Kehle
Business Development Officer
joel.kehle@tdg.ucla.edu
Inventors:
Stefano Soatto
Konstantine Tsotsos