SUMMARY
UCLA researchers in the Department of Electrical and Computer Engineering developed a diffractive optical network training strategy that guides the design toward a scale-, shift- and rotation-invariant solution.
BACKGROUND
Optical neural networks have gained popularity over their electronic counterparts in recent years due to their power efficiency, parallelism and computational speed. Diffractive deep neural networks (D2NNs) form such optical networks by using light-matter interactions and free-space propagation of light through a series of trainable surfaces to compute a given learning task. Despite recent progress in this field, existing diffractive optical network designs are sensitive to transformations and/or scaling of the input objects: the trained surfaces produce inference inaccuracies when the objects deviate from the presentation seen during training. New training methods are needed that reduce this sensitivity and increase the robustness of diffractive optical networks for dynamic machine vision applications.
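As a rough illustration of how such a network computes (not the UCLA design itself), the sketch below simulates a cascade of phase-only trainable surfaces separated by free-space propagation, modeled with the angular spectrum method; the grid size, wavelength, layer count and layer spacing are hypothetical placeholders.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex optical field a distance z in free space
    using the angular spectrum method (evanescent waves suppressed)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                      # spatial frequencies
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)               # free-space transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

def d2nn_forward(input_field, phase_layers, wavelength, dx, layer_spacing):
    """Pass an input field through phase-only surfaces separated by free-space
    propagation; return the intensity pattern at the output plane."""
    field = input_field
    for phase in phase_layers:
        field = angular_spectrum_propagate(field, wavelength, dx, layer_spacing)
        field = field * np.exp(1j * phase)            # light-matter interaction at the surface
    field = angular_spectrum_propagate(field, wavelength, dx, layer_spacing)
    return np.abs(field) ** 2                         # detectors measure intensity

# Hypothetical parameters: 200x200 grid, 0.4 mm pixels, 0.75 mm illumination, 3 layers.
n, dx, wavelength, spacing = 200, 0.4e-3, 0.75e-3, 3e-2
phases = [np.random.uniform(0, 2 * np.pi, (n, n)) for _ in range(3)]
obj = np.zeros((n, n), dtype=complex)
obj[80:120, 80:120] = 1.0                             # simple amplitude-encoded object
output_intensity = d2nn_forward(obj, phases, wavelength, dx, spacing)
```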
INNOVATION
UCLA researchers in the Department of Electrical and Computer Engineering developed a training strategy that enables diffractive optical networks to perform statistical inference while remaining resilient to lateral translation, scaling and rotation of the input objects. This training strategy significantly increases the robustness of diffractive networks against such undesired object field transformations while preserving their low-latency, memory- and power-efficient inference. Furthermore, the resulting diffractive optical networks achieve significantly higher inference accuracies than existing designs in dynamic environments.
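As a hedged sketch of this kind of invariance-driven training (not the researchers' exact implementation), the example below applies random shifts, scalings and rotations to every input object while optimizing the phase surfaces of a small differentiable diffractive model; the model architecture, detector layout, transformation ranges, dataset and hyperparameters are all placeholders.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.transforms import RandomAffine

class TinyDiffractiveNet(nn.Module):
    """Placeholder differentiable D2NN: trainable phase-only layers separated by
    angular-spectrum free-space propagation, read out on 10 detector regions."""
    def __init__(self, n=64, num_layers=3, wavelength=0.75e-3, dx=0.4e-3, z=3e-2):
        super().__init__()
        self.phases = nn.ParameterList(
            [nn.Parameter(torch.zeros(n, n)) for _ in range(num_layers)]
        )
        fx = torch.fft.fftfreq(n, d=dx)
        FX, FY = torch.meshgrid(fx, fx, indexing="ij")
        arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
        kz = 2 * math.pi / wavelength * torch.sqrt(arg.clamp(min=0.0))
        # Transfer function: unit magnitude for propagating waves, zero for evanescent.
        self.register_buffer("H", torch.polar((arg > 0).float(), kz * z))

    def propagate(self, field):
        return torch.fft.ifft2(torch.fft.fft2(field) * self.H)

    def forward(self, amplitude):                     # amplitude: (B, n, n) real objects
        field = amplitude.to(torch.complex64)
        for phase in self.phases:
            field = self.propagate(field)
            field = field * torch.polar(torch.ones_like(phase), phase)  # phase modulation
        intensity = self.propagate(field).abs() ** 2
        # Average the output intensity over 10 coarse detector regions -> class scores.
        return F.adaptive_avg_pool2d(intensity.unsqueeze(1), (2, 5)).flatten(1)

# Hypothetical shift/scale/rotation ranges; transforming every training object keeps
# the learned surfaces from relying on a fixed object position, size or orientation.
augment = RandomAffine(degrees=10, translate=(0.1, 0.1), scale=(0.9, 1.1))

model = TinyDiffractiveNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Placeholder data standing in for amplitude-encoded objects (e.g. handwritten digits).
loader = [(torch.rand(16, 1, 64, 64), torch.randint(0, 10, (16,)))]

for images, labels in loader:
    images = torch.stack([augment(img) for img in images])  # independent transform per object
    scores = model(images.squeeze(1))
    loss = loss_fn(scores, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```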
POTENTIAL APPLICATIONS
ADVANTAGES
RELATED MATERIALS
STATUS OF DEVELOPMENT
First successful demonstration of scale-, shift- and rotation-invariant diffractive optical networks.