CRD's Ushizima to Discuss Using ML Algorithms to Screen Lung Images for COVID-19
October 15, 2020
Contact: [email protected]
Although testing is seen as critical to slowing the spread of COVID-19, RT-PCR tests are not always available to the general population due to a global shortage of reagents, and results can be delayed during high-demand periods. But there may be a better option in the works.
On Friday, Oct. 16, scientist Daniela Ushizima will discuss early results of using computer vision algorithms to scan medical images of lungs and automatically identify lesions that could indicate COVID-19. Ushizima, who has appointments at Lawrence Berkeley National Laboratory (Berkeley Lab), UC Berkeley, and UC San Francisco (UCSF), will present her research at the 2020 Annual Meeting of the Academic Data Science Alliance.
"When it comes to COVID-19 testing, one big question is whether the use of lung scans is suitable for frontline prescreening," said Ushizima, a staff scientist in Berkeley Lab’s Computational Research Division (CRD). "In several other countries, the use of lung scans is already impacting the screening protocols for inpatients with suspected COVID-19 infections."
Ushizima's talk, "ACTS: Accelerating COVID-19 Testing with Screening," will cover research on computational methods that analyze lung scans, such as computed tomography (CT) and chest X-rays (CXR). Read more about her research in this area.
"By using machine learning associated with computer vision algorithms, we aim to characterize lung abnormalities that might indicate an early warning of COVID-19 infection," she said. "We will illustrate some preliminary results using algorithms that might amplify COVID-19 testing and surveillance, hopefully allowing patient comparison and ranking."
The discussion will include examples of pneumothorax in CT and CXR images, along with encouraging results from a fully automated algorithm that detects the boundary of the lungs, an important first step toward identifying lung lesions. Ushizima will also share lessons learned using public datasets, such as those available at https://nihcc.app.box.com/v/DeepLesion and https://github.com/ieee8023/.
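The article does not describe the team's algorithm in detail, but the idea of lung-boundary detection as a preprocessing step can be illustrated with a minimal, hypothetical sketch: threshold the darker lung regions of a normalized scan, clean up the mask with morphological operations, and keep the two largest connected components. This is a classical baseline for illustration only, not the researchers' method, and the synthetic image below stands in for a real CT or CXR slice.

```python
import numpy as np
from scipy import ndimage

def segment_lungs(image, threshold=0.5):
    """Return a binary mask of candidate lung regions in a normalized
    (0-1) chest image, where lungs appear darker than surrounding tissue."""
    # Threshold: keep low-intensity (air-filled) pixels.
    mask = image < threshold
    # Morphological cleanup: remove speckle, then fill interior holes.
    mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
    mask = ndimage.binary_fill_holes(mask)
    # Keep only the two largest connected components (left and right lung).
    labeled, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labeled, range(1, n + 1))
    keep = np.argsort(sizes)[-2:] + 1  # labels of the two largest regions
    return np.isin(labeled, keep)

# Synthetic "scan": bright background with two dark elliptical lung fields.
yy, xx = np.mgrid[:64, :64]
img = np.ones((64, 64))
img[((yy - 32) ** 2 / 400 + (xx - 20) ** 2 / 64) < 1] = 0.1
img[((yy - 32) ** 2 / 400 + (xx - 44) ** 2 / 64) < 1] = 0.1
mask = segment_lungs(img)
```

In a real pipeline, the resulting mask would restrict the search for lesions to lung tissue, so that downstream machine learning models are not distracted by ribs, the heart, or image borders.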
Previously, Ushizima and her team adapted algorithms used in materials research to screen for cervical cancer.
"Through partnerships with UC Berkeley, UCSF, and universities worldwide, my team and I have been translating lessons learned in materials science into tools that will accelerate cell analysis and vice-versa," Ushizima said.
Contributing to the research are Robbie Sadre (CRD), Kenneth Higa (ETA), Daniel Pelt (Centrum Wiskunde & Informatica, Netherlands), Jason Crane and Sharmila Majumdar (UCSF), Duygu Tosun (UCSF/San Francisco VA Medical Center), and Baskaran Sundaram, Crystal Lee, Jacob Garcia, and Spencer Liem (Sidney Kimmel Medical College, Thomas Jefferson University). This project used high-performance computing resources and staff support, in particular from Cory Snavely, at the National Energy Research Scientific Computing Center, a Department of Energy Office of Science user facility at Berkeley Lab. The research was funded by the Berkeley Lab Laboratory Directed Research and Development (LDRD) program.