Digital Pathology Technologies

Interactive Reporting of Histopathological Image Analysis Performed by Artificial Intelligence (UCLA Case No. 2023-090)

Dr. Anthony Chen and his research team have developed an interactive and hierarchical reporting tool for histopathological image analysis, which combines AI diagnostics optimized for explainability with the pathologist's expertise in detecting abnormal tissue samples. This tool enhances traditional slide viewers with a top-down approach, presenting the diagnosis, critical criteria, samples meeting each criterion's conditions, and flagged abnormal regions within each sample. Users can modify features at each stage, granting ultimate diagnostic control to the pathologist and fostering trust.
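
The hierarchy described above can be pictured as a small, editable data structure. The Python sketch below uses hypothetical names and fields (FlaggedRegion, Criterion, Report are illustrative, not part of the invention) to show how pathologist overrides at each level of the report might be represented.

# Minimal sketch (hypothetical field names) of the hierarchical report structure:
# diagnosis -> criteria -> supporting regions, where each level can be edited by
# the reviewing pathologist.
from dataclasses import dataclass, field
from typing import List

@dataclass
class FlaggedRegion:
    slide_id: str
    bbox: tuple            # (x, y, width, height) in slide coordinates
    ai_score: float        # model confidence for the flagged abnormality
    accepted: bool = True  # pathologist may reject an AI-flagged region

@dataclass
class Criterion:
    name: str
    satisfied: bool
    supporting_regions: List[FlaggedRegion] = field(default_factory=list)

@dataclass
class Report:
    diagnosis: str
    criteria: List[Criterion] = field(default_factory=list)

    def override_criterion(self, name: str, satisfied: bool) -> None:
        # Pathologist edits at the criterion level propagate to the report.
        for criterion in self.criteria:
            if criterion.name == name:
                criterion.satisfied = satisfied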

The Growing Role of Digital Pathology and Machine Learning in Cancer Diagnostics (UCLA Case No. 2021-134)

Professor Corey Arnold and his research team have developed a novel method for training pathology image analysis algorithms. Their approach automatically differentiates mislabeled data within two categories of histopathological complexity: difficult-to-classify and easily classified samples. As a result, the method reduces time expenditure and enhances accuracy. This invention advances active learning (AL) in medical image analysis by streamlining the training process. The methodology demonstrates promising results in prostate cancer Gleason grading, achieving a 40% reduction in the number of required annotations while maintaining performance comparable to fully supervised learning approaches.
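
As a purely illustrative sketch of the general active-learning idea (not the inventors' specific criteria), the snippet below separates confidently classified samples, likely mislabeled samples, and uncertain samples so that annotation effort is concentrated on the most informative patches; the function name and thresholds are hypothetical.

# Generic uncertainty-based triage step for an active-learning round.
import numpy as np

def triage(probs: np.ndarray, labels: np.ndarray, easy_thr=0.95, noise_thr=0.9):
    """probs: (N, C) predicted class probabilities; labels: (N,) current annotations."""
    confidence = probs.max(axis=1)
    predicted = probs.argmax(axis=1)
    easy = (confidence >= easy_thr) & (predicted == labels)      # no re-annotation needed
    suspect = (confidence >= noise_thr) & (predicted != labels)  # likely mislabeled
    hard = ~(easy | suspect)                                     # most informative to annotate
    return np.where(hard)[0], np.where(suspect)[0], np.where(easy)[0]

# Example use: send the 'hard' indices to a pathologist in the next annotation round.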

A Uniform AI Framework for Automated Detection of Disease-Related Risk Factors in 3D Medical Imaging Data (UCLA Case No. 2023-155)

Dr. Avram and his research team have developed an innovative framework called SLice Integration by Vision Transformers (SLIViT) for detecting clinical features in volumetric imaging data. SLIViT employs cutting-edge computer vision techniques to extract features from each 2D layer and comprehensively integrates them to generate a single diagnostic prediction. Importantly, the model's unique architecture allows pre-training on 2D-annotated medical imaging data, which are more affordable to annotate and thus more accessible. This pre-training enables SLIViT to effectively learn visual features and leverage them to make predictions on 3D imaging data, for which annotations are typically scarcer. SLIViT surpasses domain-specific state-of-the-art computer vision models across various learning tasks and data modalities, demonstrating its utility and generalizability. When trained on fewer than 700 annotated volumes, SLIViT also matched the performance of trained clinical specialists while running approximately 5,000x faster, illustrating its potential to reduce the burden on clinicians and expedite ongoing research.
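
The following PyTorch sketch illustrates the general pattern of per-slice 2D feature extraction followed by transformer-based integration into a single prediction; the class name, layer sizes, and backbone are placeholders rather than the published SLIViT architecture.

import torch
import torch.nn as nn

class SliceIntegrator(nn.Module):
    def __init__(self, feat_dim=256, n_heads=4, n_layers=2, n_outputs=1):
        super().__init__()
        # 2D backbone applied independently to every slice (in practice this is
        # where pre-training on 2D-annotated medical images would be leveraged).
        self.backbone2d = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, feat_dim),
        )
        encoder_layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=n_heads,
                                                   batch_first=True)
        self.integrator = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.head = nn.Linear(feat_dim, n_outputs)

    def forward(self, volume):                    # volume: (B, D, H, W)
        b, d, h, w = volume.shape
        slices = volume.reshape(b * d, 1, h, w)
        feats = self.backbone2d(slices).reshape(b, d, -1)   # per-slice features
        fused = self.integrator(feats).mean(dim=1)          # integrate across slices
        return self.head(fused)                             # single diagnostic prediction

pred = SliceIntegrator()(torch.randn(2, 16, 64, 64))        # output shape (2, 1)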

Multi-Resolution Model With Attention for Whole Slide Image Analysis Trained Using Weak Labels (UCLA Case No. 2019-943)

Dr. Arnold and colleagues developed a two-stage, attention-based multiple instance learning (MIL) model for slide-level cancer grading and weakly supervised region-of-interest (ROI) detection. In contrast to previous models that require manual identification of ROIs by pathologists, this method is trained solely on slide-level labels (weak labels) readily obtainable from pathology reports. The two-stage model first employs a lower-resolution pass to pinpoint potential regions of interest, then conducts a more in-depth analysis at higher resolution, mimicking the process a pathologist follows during a manual histology review. Remarkably, this model achieved state-of-the-art performance, with an 85% accuracy rate in classifying benign, low-grade, and high-grade biopsy slides on an independent test set.
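
The attention-based pooling at the core of such a MIL model can be sketched as follows; this is a generic, hedged example (class name and dimensions are illustrative), and the low-resolution tile-selection stage is omitted.

import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, feat_dim=512, hidden=128, n_classes=3):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(feat_dim, hidden), nn.Tanh(),
                                  nn.Linear(hidden, 1))
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, tile_feats):               # (n_tiles, feat_dim) for one slide
        weights = torch.softmax(self.attn(tile_feats), dim=0)   # attention over tiles
        slide_feat = (weights * tile_feats).sum(dim=0)          # weighted slide embedding
        return self.classifier(slide_feat), weights             # slide logits + ROI weights

logits, attn = AttentionMIL()(torch.randn(200, 512))   # e.g., benign / low / high grade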

Novel Method to Group Pixels for Medical Imaging (UCLA Case No. 2022-290)

Dr. Yeager developed an algorithm that allows radiologists to identify statistically significant focal T2-intense regions whose global clustering patterns have been implicated as a defining characteristic of prostate cancer. By improving the diagnostic sensitivity of MRI, this algorithm and analysis pipeline could improve cancer screening accuracy and reduce the need for physical biopsy. The algorithm could also simplify today's complex prostate MRI scanning protocols down to a single sequence and significantly reduce the total scan time per patient. Additionally, automating the algorithm could streamline the radiologist's workload, reducing the time spent reading each prostate study and improving consistency.
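
As a hedged illustration of pixel grouping (not the inventors' specific statistical test), the sketch below thresholds a voxel-wise significance map and groups contiguous significant voxels into focal clusters whose sizes and locations can then be summarized for downstream clustering-pattern analysis.

import numpy as np
from scipy import ndimage

def group_significant_voxels(p_values: np.ndarray, alpha: float = 0.05):
    significant = p_values < alpha                     # voxel-wise test result
    labeled, n_clusters = ndimage.label(significant)   # connected-component grouping
    sizes = ndimage.sum(significant, labeled, index=range(1, n_clusters + 1))
    centroids = ndimage.center_of_mass(significant, labeled,
                                       index=range(1, n_clusters + 1))
    return labeled, list(zip(sizes, centroids))        # per-cluster size and location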
 

Single-Shot Autofocusing of Microscopy Images Using Deep Learning (DEEP-R) (UCLA Case No. 2020-795)

This innovation is an offline, deep learning-based autofocusing method that overcomes image drift by rapidly and blindly autofocusing a single-shot microscopy image of a specimen acquired at an arbitrary out-of-focus plane. The method has been successfully used with fluorescence and brightfield imaging to autofocus snapshots under different scenarios, including uniform axial defocus as well as sample tilt within the field of view. Deep-R is 15x faster than current online algorithmic autofocusing methods and opens up new opportunities for rapid microscopic imaging of large sample areas.
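
Conceptually, inference reduces to a single forward pass of a trained image-to-image network on one defocused snapshot; the toy model below only illustrates that interface and is not the published Deep-R network.

import torch
import torch.nn as nn

refocus_net = nn.Sequential(                  # stand-in for a trained Deep-R model
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)

defocused = torch.rand(1, 1, 512, 512)        # single out-of-focus snapshot
with torch.no_grad():
    refocused = refocus_net(defocused)        # one forward pass, no z-stack required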
 

Quantitative COVID-19 Scores Using High Resolution Computed Tomography (UCLA Case No. 2022-225)

UCLA researchers in the Department of Radiology, led by Dr. Grace Kim, have created a machine-learning algorithm that uses visual patterns from COVID-19 CT scans to determine the extent of lung damage caused by pulmonary disease. This method includes two new markers of disease that other algorithms do not consider, resulting in a more accurate representation of pulmonary health. The invention may enable medical professionals to assess a patient's pulmonary health quantitatively and provide more tailored treatment. It also offers a means of studying the respiratory condition of chronically critically ill patients who cannot undergo traditional respiratory tests.
 

Process for Aligning and Displaying Serial Tissue Sections from Pathology Incorporated with AI (UCLA Case No. 2023-050)

Professor Arnold and his research team have invented a novel system for aligning tissue ribbons from different glass slides. Compared with the traditional method of aligning tissue ribbons, this system can track structures through all available levels and stains from different glass slides quickly and more reliably. The method processes and digitizes tissue ribbons to form a volumetric image using computer vision algorithms and AI solutions. The final reconstructed volume can be augmented and viewed at 20X magnification, similar to microscopic examination. Additionally, the system's AI algorithms can be further trained on different tissue ribbons and volumetric pathology images to improve diagnostic certainty in identifying potential cancers at an earlier stage.
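
One alignment step can be sketched with simple phase correlation as a stand-in for the inventors' computer-vision and AI registration; the function below (assuming a list of same-size grayscale section images) estimates the shift between consecutive digitized sections and stacks them into a pseudo-volume.

import numpy as np
from skimage.registration import phase_cross_correlation
from scipy.ndimage import shift as nd_shift

def align_stack(sections):                    # list of 2D grayscale section images
    reference = sections[0]
    aligned = [reference]
    for sec in sections[1:]:
        offset, _, _ = phase_cross_correlation(reference, sec)  # estimated (dy, dx)
        aligned.append(nd_shift(sec, offset))                   # warp section into place
        reference = aligned[-1]               # chain alignment through the ribbon
    return np.stack(aligned)                  # pseudo-volumetric image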
 

Wide-Field Computational Imaging of Pathology Slides Using Lensfree On-Chip Microscopy (UCLA Case No. 2014-9AG)

Aydogan Ozcan and his team have developed a lens-free on-chip microscopy device, offering a cost-effective and efficient method for scanning biological samples, such as pathology slides. The innovation boasts a high numerical aperture of 1.4 across a 20.5 mm² field-of-view, enabling digital focus and color imaging of transparent tissue samples. This compact and straightforward design presents an attractive alternative to traditional, bulky microscopy setups. Potential applications for this technology include microscopy, digital pathology, histology, and diagnostic image analysis. The invention has been successfully demonstrated and offers advantages such as high resolution, wide-field imaging, and improved signal-to-noise ratio.
 

Neural Networks for Adipose Tissue Segmentation on Magnetic Resonance Images (UCLA Case No. 2022-088)

Dr. Holden Wu and team in the Department of Radiological Sciences have developed a 3D neural network to automatically, accurately, and rapidly segment visceral adipose tissue (VAT) and subcutaneous adipose tissue (SAT) from MRI. Their method uses combined multi-contrast MR images as inputs and a more sophisticated loss function for training. Using MRI scans from 629 participants, all of whom had a high body mass index (BMI) in the overweight or obese range, the researchers demonstrated that the method could successfully segment SAT and VAT. The method can be extended to segment SAT into two compartments, superficial and deep SAT, as well as brown adipose tissue (BAT) and epicardial adipose tissue (EAT). It was also successfully tested for segmenting SAT and VAT on free-breathing abdominal MRI scans obtained from children. Taken together, this novel neural network for adipose tissue segmentation could advance the utility of MRI for characterizing cardiometabolic disease risk.
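
As a hedged illustration of the training setup (the inventors' exact network and loss are not reproduced here), the sketch below stacks multi-contrast MR images as input channels and combines Dice and cross-entropy terms, a common choice for multi-class adipose segmentation.

import torch
import torch.nn.functional as F

def dice_ce_loss(logits, target, n_classes=3, eps=1e-6):
    """logits: (B, C, D, H, W); target: (B, D, H, W) with 0=background, 1=SAT, 2=VAT."""
    ce = F.cross_entropy(logits, target)
    probs = torch.softmax(logits, dim=1)
    onehot = F.one_hot(target, n_classes).permute(0, 4, 1, 2, 3).float()
    dims = (0, 2, 3, 4)
    inter = (probs * onehot).sum(dims)
    union = probs.sum(dims) + onehot.sum(dims)
    dice = (2 * inter + eps) / (union + eps)
    return ce + (1 - dice.mean())

# Example: water/fat/in-phase contrasts could be stacked as channels of a 3D input.
logits = torch.randn(1, 3, 16, 64, 64)                  # network output (placeholder)
labels = torch.randint(0, 3, (1, 16, 64, 64))           # ground-truth segmentation
loss = dice_ce_loss(logits, labels)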
 

Holographic Image Reconstruction With Phase Recovery and Autofocusing Using Recurrent Neural Networks (UCLA Case No. 2021-210)

This innovation is a recurrent neural network (RNN)-based holographic reconstruction algorithm that can efficiently reconstruct the phase and amplitude of an imaged sample. The algorithm improves reconstructed image quality by 40% and runs roughly 15-fold faster than other deep learning-based iterative reconstruction processes. In addition, the invention reduces the complexity of the reconstruction process by removing the free-space back-propagation (FSP) step required by existing deep learning-based holographic reconstruction algorithms. The algorithm also allows for a greater depth of field due to its integrated autofocusing feature. This innovation provides an invaluable tool for various coherent imaging applications.
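
A highly simplified sketch of the recurrent idea follows; it assumes, as an illustrative choice not stated above, that several raw holograms of the same sample are fed in sequence, a convolutional hidden state is updated at each step, and amplitude and phase are decoded directly without an explicit free-space back-propagation step. Layers and names are placeholders, not the published architecture.

import torch
import torch.nn as nn

class RecurrentHolo(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.ch = ch
        self.encode = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU())
        self.update = nn.Conv2d(2 * ch, ch, 3, padding=1)   # fuse hidden state with new frame
        self.decode = nn.Conv2d(ch, 2, 3, padding=1)        # amplitude + phase channels

    def forward(self, holograms):                # (B, T, H, W): T holograms per sample
        b, t, h, w = holograms.shape
        hidden = torch.zeros(b, self.ch, h, w, device=holograms.device)
        for i in range(t):
            feat = self.encode(holograms[:, i:i + 1])        # encode one hologram
            hidden = torch.tanh(self.update(torch.cat([hidden, feat], dim=1)))
        return self.decode(hidden)               # (B, 2, H, W): amplitude and phase maps

out = RecurrentHolo()(torch.rand(1, 3, 128, 128))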
 

Deep Learning-Based Color Holographic Microscopy (UCLA Case No. 2019-737)

A novel deep-neural-network-based method was created to achieve accurate color holographic microscopy. This generative adversarial network (GAN)-based technology requires only a single super-resolved hologram acquired under wavelength-multiplexed illumination. The innovation achieves performance similar to that of the state-of-the-art absorbance spectrum estimation method, with more than a four-fold enhancement in data throughput. The technology was successfully demonstrated on stained lung and prostate tissue sections. This method significantly simplifies the data acquisition procedures, the associated data processing and storage steps, and the imaging hardware required.
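
A toy sketch of the implied GAN setup follows; the generator and discriminator architectures and the loss weighting are placeholders rather than the published network.

import torch
import torch.nn as nn
import torch.nn.functional as F

generator = nn.Sequential(                     # maps one multiplexed hologram to RGB
    nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1),
)
discriminator = nn.Sequential(                 # patch-level real/fake critic
    nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, 4, stride=2, padding=1),
)

hologram = torch.rand(1, 1, 256, 256)          # single super-resolved hologram
target = torch.rand(1, 3, 256, 256)            # ground-truth color image
fake = generator(hologram)
adv = torch.mean((discriminator(fake) - 1) ** 2)   # least-squares adversarial term
pix = F.l1_loss(fake, target)                      # pixel-wise fidelity term
g_loss = pix + 0.01 * adv                          # illustrative weighting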
 

A Method for Digital Pathology Using Augmented Reality (UCLA Case No. 2018-748)

Professor Jalali and colleagues have developed a novel method for digitizing pathological analyses. The method relies on the phase stretch transform (PST), a computational imaging algorithm for improved image feature detection. The image is processed in multiple steps, first using PST to create a feature library and then using machine learning to identify specific regions of interest (ROIs). The microscope stage is aligned with the desired ROI, and the subsequent quantitative analysis and tissue grading are carried out. The information is then stored for physician decision support. PST exhibits superior edge detection compared to previous best-in-class techniques on visually degraded images with high noise and low contrast.
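
A simplified NumPy sketch of the PST edge-detection step is shown below: a frequency-dependent phase kernel is applied in the Fourier domain and the phase of the result is read out as the feature map. The function name and parameter values (warp and strength) are illustrative.

import numpy as np

def pst_edges(image, warp=12.0, strength=0.5):
    rows, cols = image.shape
    u = np.fft.fftfreq(rows).reshape(-1, 1)
    v = np.fft.fftfreq(cols).reshape(1, -1)
    r = np.sqrt(u**2 + v**2)                        # radial spatial frequency
    kernel = r * warp * np.arctan(r * warp) - 0.5 * np.log1p((r * warp) ** 2)
    kernel = strength * kernel / kernel.max()       # normalize and scale the phase kernel
    spectrum = np.fft.fft2(image)
    transformed = np.fft.ifft2(spectrum * np.exp(-1j * kernel))
    return np.angle(transformed)                    # output phase encodes edge information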
 

A Deep Learning, Computer Vision-Based Tissue Countdown to Cancer (UCLA Case No. 2019-490)

UCLA researchers in the Department of Radiological Sciences have developed a novel computer vision-based approach that analyzes serial screening and diagnostic medical images to predict the risk of cancer onset using deep learning algorithms. The system incorporates clinical, imaging, and genetic data from patients with known genetic risk for cancer, compares changes in the microenvironmental tissue states observed in serial imaging scans, and derives the probability of cancer formation based on an assessment of these changes. The detection of subtle changes in tissue imaging features can help increase the radiologist's diagnostic certainty in identifying potential cancers earlier and inform decisions about treatment.


High Spatial and Temporal Resolution Dynamic Contrast-Enhanced Magnetic Resonance Imaging (UCLA Case No. 2013-144)

Dr. Peng Hu and colleagues in UCLA's Department of Radiology have developed a novel acquisition and reconstruction method for dynamic contrast-enhanced MR angiography (CE-MRA). The invention utilizes a magnitude subtraction-based compressed sensing algorithm to achieve high acquisition acceleration (10x) and improved reconstruction image quality compared with previously available approaches. It enables very fast reconstruction, with image volumes available within 10 to 15 minutes, and offers multiple possible extensions in combination with parallel imaging or view-sharing techniques. The invention provides a promising technique for dynamic CE-MRA of complex vascular anatomy and for improved tissue perfusion quantification.
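
As a hedged illustration of the general idea (not necessarily the exact published formulation), a magnitude-subtraction compressed-sensing reconstruction can be posed as

    \min_x \; \| F_u x - y \|_2^2 + \lambda \, \| \Psi ( |x| - |x_{ref}| ) \|_1

where F_u is the undersampled Fourier sampling operator, y the acquired k-space data, \Psi a sparsifying transform, and |x_{ref}| the magnitude of a pre-contrast reference image, so that sparsity is enforced on the magnitude difference rather than on a complex-valued subtraction.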

 

For More Information:
Joel Kehle
Business Development Officer
joel.kehle@tdg.ucla.edu