Asian Scientist Magazine (Jun. 24, 2022) — Medical imaging is a cornerstone of modern healthcare, improving the precision and reliability of diagnosis and guiding the development of treatments for a wide range of diseases. In recent years, artificial intelligence (AI) has further enhanced the process.
However, conventional AI-based medical image diagnosis requires large amounts of annotated data as supervision signals for model training. To produce accurate labels for the AI algorithms, radiologists first write a radiology report for each patient; annotation staff then extract and verify structured labels from those reports using human-defined rules and existing natural language processing (NLP) tools. The accuracy of the extracted labels therefore hinges on the quality of both the human work and the NLP tools, and the process is labour intensive and time consuming.
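To give a sense of the rule-based label extraction described above, here is a deliberately simplified sketch of keyword matching with crude negation handling. All names, finding lists, and negation cues below are illustrative assumptions; real labelling tools used in practice are far more elaborate.

```python
import re

# Hypothetical finding keywords and negation cues (illustrative only).
FINDINGS = ["pneumonia", "pleural effusion", "cardiomegaly"]
NEGATIONS = ["no", "without", "negative for", "ruled out"]

def extract_labels(report: str) -> dict:
    """Extract structured labels (1 = present, 0 = negated) from a
    free-text radiology report via simple keyword and negation rules."""
    text = report.lower()
    labels = {}
    for finding in FINDINGS:
        if finding not in text:
            continue
        # Mark the finding negative if a negation cue appears as a whole
        # word shortly before the finding mention.
        start = text.find(finding)
        window = text[max(0, start - 30):start]
        negated = any(
            re.search(rf"\b{re.escape(neg)}\b", window) for neg in NEGATIONS
        )
        labels[finding] = 0 if negated else 1
    return labels

report = "Heart size is enlarged, consistent with cardiomegaly. No pleural effusion."
print(extract_labels(report))  # → {'pleural effusion': 0, 'cardiomegaly': 1}
```

Even this toy version hints at why the conventional pipeline is brittle: every new phrasing a radiologist uses may require a new rule, which is exactly the labelling burden REFERS is designed to remove.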
To get around this challenge, a team of researchers at the University of Hong Kong (HKU) has developed a new approach, REFERS (Reviewing Free-text Reports for Supervision), which cuts human annotation cost by 90 percent by automatically acquiring supervision signals from hundreds of thousands of radiology reports at once. Its predictions are highly accurate, surpassing those of conventional AI-based diagnosis methods trained on human annotations. The work was published in Nature Machine Intelligence.
“AI-enabled medical image diagnosis has the potential to support medical specialists in reducing their workload and improving the diagnostic efficiency and accuracy, including but not limited to reducing the diagnosis time and detecting subtle disease patterns,” said Professor Yu Yizhou, leader of the team from HKU’s Department of Computer Science under the Faculty of Engineering.
“We believe abstract and complex logical reasoning sentences in radiology reports provide sufficient information for learning easily transferable visual features. With appropriate training, REFERS directly learns radiograph representations from free-text reports without the need to involve manpower in labelling,” said Professor Yu.
To train REFERS, the research team used a public database of 370,000 X-ray images and their associated radiology reports, covering 14 common chest diseases including atelectasis, cardiomegaly, pleural effusion, pneumonia and pneumothorax.
REFERS achieves this by accomplishing two report-related tasks: report generation and radiograph–report matching.
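The radiograph–report matching task can be illustrated with a minimal sketch: each radiograph embedding should be more similar to the embedding of its own report than to any other report in the batch, which a contrastive (InfoNCE-style) loss captures. This is not the authors' code; the function names, toy embeddings, and temperature value are assumptions, and the companion report-generation loss is omitted for brevity.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return dot(u, v) / (math.sqrt(dot(u, u)) * math.sqrt(dot(v, v)))

def matching_loss(image_embs, report_embs, temperature=0.1):
    """InfoNCE-style contrastive loss: image i should match report i
    more strongly than any other report in the batch."""
    loss = 0.0
    n = len(image_embs)
    for i in range(n):
        sims = [cosine(image_embs[i], r) / temperature for r in report_embs]
        m = max(sims)  # subtract the max for numerical stability
        log_softmax = sims[i] - m - math.log(sum(math.exp(s - m) for s in sims))
        loss += -log_softmax
    return loss / n

# Toy batch of 2D embeddings: image i is aligned with report i.
images   = [[1.0, 0.0], [0.0, 1.0]]
reports  = [[0.9, 0.1], [0.1, 0.9]]
swapped  = [reports[1], reports[0]]  # deliberately mismatched pairing

assert matching_loss(images, reports) < matching_loss(images, swapped)
```

In a full system these embeddings would come from an image encoder and a text encoder trained jointly, so that supervision flows from every word of the free-text report rather than from manually extracted labels.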
“Compared to conventional methods that heavily rely on human annotations, REFERS has the ability to acquire supervision from each word in the radiology reports. We can substantially reduce the amount of data annotation by 90 percent and the cost to build medical artificial intelligence. It marks a significant step towards realizing generalized medical artificial intelligence,” said the paper’s first author, Dr. ZHOU Hong-Yu.
Source: The University of Hong Kong; Photo: Unsplash