System & Method for Extended Spectrum Ultrasound Training Using Animate & Inanimate Training Objects (Case No. 2022-027)

Summary

UCLA researchers have developed a system and method for extended-spectrum ultrasound training using animate (live human) and inanimate (mannequin or model) training objects. The system combines tags, a reader, and rotational three-degree-of-freedom (3-DOF) motion tracking to let trainees practice locating image windows and acquiring optimal ultrasound views in a virtual, simulated environment.

Background

Ultrasound training requires mastering two key psychomotor skills: (1) locating the correct “image window” (i.e., placing the probe over the anatomical site that gives access to the organ or structure of interest) and (2) adjusting probe orientation to acquire the optimal view (correct angle, plane, and positioning). Traditional training relies on hands-on clinical practice, expensive 6-DOF-tracked mannequins, or supervised scanning on patients—approaches that are costly, limited in availability, and often inconvenient. There is a need for an affordable, flexible, realistic simulator that supports both live (animate) and static (inanimate) practice and gives feedback on probe location and view orientation.

Innovation

This invention uses passive or active tags (e.g., NFC or RFID) placed on animate or inanimate models; each tag is mapped to a corresponding location on a virtual body model. A reader device detects which tag is in proximity, and a 3-DOF rotation tracker senses the orientation of the probe. The system updates a virtual probe and ultrasound image in real time on a display as the trainee moves and rotates the sensor/reader over the tags. This enables training in both where to place the probe (image window) and how to orient it (optimal view), without requiring full 6-DOF tracking or costly mannequins. The continuation patents build on and refine this system.
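The core idea—combining tag proximity (which image window) with 3-DOF rotation (which view)—can be sketched in a few lines. This is an illustrative mock-up, not the patented implementation: the tag IDs, window names, and the `update_probe_state` helper are all hypothetical, and a real system would read them from NFC/RFID hardware and an orientation sensor.

```python
from dataclasses import dataclass

# Hypothetical mapping of detected tag IDs to anatomical image windows
# on the virtual body model (illustrative values only).
TAG_TO_WINDOW = {
    "tag_01": "parasternal",
    "tag_02": "apical",
    "tag_03": "subcostal",
}

@dataclass
class ProbeState:
    window: str   # image window inferred from the tag currently in proximity
    yaw: float    # 3-DOF orientation from the rotation tracker (degrees)
    pitch: float
    roll: float

def update_probe_state(tag_id: str, yaw: float, pitch: float, roll: float) -> ProbeState:
    """Combine tag proximity (probe position) with 3-DOF rotation (probe view).

    A real system would use this state to select and render the matching
    simulated ultrasound image in real time.
    """
    window = TAG_TO_WINDOW.get(tag_id, "unknown")
    return ProbeState(window, yaw % 360.0, pitch, roll)

state = update_probe_state("tag_02", 370.0, 15.0, -5.0)
print(state.window, state.yaw)  # apical 10.0
```

The point of the sketch is that full 6-DOF tracking is unnecessary: discrete tagged locations stand in for position, so only rotation needs continuous sensing.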

Advantages

  • Enables ultrasound trainees to practice both probe placement and probe orientation in a simulated environment.

  • Lower cost and more accessible than 6-DOF motion-tracking mannequins and full clinical equipment.

  • Flexible use: works with animate models (live persons) and inanimate training objects (mannequins or models).

  • Real-time visual feedback of virtual ultrasound images tied to actual probe motion and tag position.

  • Simplified sensor hardware (3-DOF rotation + tagged locations) reduces complexity and expense.

  • Scalable for training programs, allowing repeatable, standardized practice across multiple users.

Potential Applications

  • Medical education (ultrasound courses for physicians, sonographers, OB/GYN, emergency medicine).

  • Simulation centers and skills labs in medical schools or teaching hospitals.

  • Remote or low-resource settings where access to expensive ultrasound trainers is limited.

  • Continuing medical education and credential maintenance for ultrasound skills.

  • Integration into virtual or augmented reality training platforms.

Patent

US 10,380,919 B2 — System and method for extended spectrum ultrasound training using animate and inanimate training objects
Continuation patents:

  • US 11,315,439 B2 — System and method for extended spectrum ultrasound training using animate and inanimate training objects

  • US 11,594,150 B1 — System and method for extended spectrum ultrasound training using animate and inanimate training objects

  • US 12,249,250 B1 — System and method for extended spectrum ultrasound training using animate and inanimate training objects

For More Information:
Nikolaus Traitler
Business Development Officer (BDO)
nick.traitler@tdg.ucla.edu
Inventors:
Gabe Nataneli
Dan Katz
Eric Savitsky