Artificial intelligence model revolutionises breast cancer surgery
Artificial intelligence (AI) and machine learning (ML) have been gaining significant attention lately, primarily in discussions about their responsible utilisation. However, these technologies possess a wide spectrum of practical applications, ranging from predicting natural disasters to addressing social disparities. Now, AI is making its mark in the field of cancer surgery, particularly breast cancer surgery.
A collaboration between the University of North Carolina’s Department of Surgery, the Joint UNC-NCSU Department of Biomedical Engineering, and the UNC Lineberger Comprehensive Cancer Center has given birth to an AI model capable of determining whether cancerous tissue has been completely excised during breast cancer surgery. Their remarkable findings were recently published in the prestigious Annals of Surgical Oncology.
Breast surgery is a complex procedure where the surgeon aims to remove both the tumour and surrounding healthy tissue to ensure the complete eradication of cancer. However, microscopic cancer cells that may linger at the tissue’s edge are nearly impossible to detect visually during the surgery. This challenge prompted senior author Kristalyn Gallagher, DO, section chief of breast surgery in the Division of Surgical Oncology and UNC Lineberger member, to lead the development of an AI tool designed to address this critical issue.
Gallagher explained that while some cancers can be felt and seen, microscopic cancer cells that may be present at the edge of the removed tissue cannot, and other cancers are entirely microscopic. The AI tool would allow surgeons to analyse surgically removed tumours more accurately and in real time, increasing the chance that all cancer cells are excised during the initial operation and sparing patients the need for a second or third surgery.
The surgery proceeds by resecting the tumour along with a small amount of surrounding healthy tissue; the removed tissue is known as the specimen. A mammography machine is then used to image the specimen, and the surgical team reviews the images to confirm that any abnormalities have been removed. Following this, the specimen is sent to pathology for further examination.
Pathologists examine the specimen to determine if cancer cells extend to its outer edge, known as the pathological margin. If cancer cells are found at this margin, there is a risk that some cancerous tissue remains in the breast, necessitating additional surgery. However, this evaluation process can take up to a week after the initial surgery, while specimen mammography, or capturing X-ray images of the specimen, can be performed immediately in the operating room.
To train their AI model to distinguish between positive and negative margins, researchers utilised hundreds of specimen mammogram images, each paired with the final specimen report from pathologists. Patient and tumour data, including age, race, tumour type, and tumour size, were also incorporated to enhance the model’s accuracy.
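The pairing described above, image-derived features combined with patient-level data and a pathology-confirmed label, can be sketched as a simple fused classifier. The sketch below is illustrative only: it uses synthetic data and a plain logistic regression, not the published model or its architecture, and every feature name is an assumption.

```python
# Hypothetical sketch: fusing specimen-image features with patient/tumour
# features to predict margin status (positive vs negative).
# All data are synthetic; this is not the published model.
import numpy as np

rng = np.random.default_rng(0)

def make_synthetic_case():
    """One synthetic case: image features + tabular features, with a label."""
    margin_positive = rng.random() < 0.5
    # Pretend the image features shift when the margin is positive.
    image_feats = rng.normal(loc=1.0 if margin_positive else -1.0, size=8)
    tabular_feats = np.array([rng.integers(30, 80),   # age (illustrative)
                              rng.random() * 5.0])    # tumour size in cm
    return np.concatenate([image_feats, tabular_feats]), int(margin_positive)

X, y = zip(*(make_synthetic_case() for _ in range(400)))
X, y = np.array(X), np.array(y)
X = (X - X.mean(axis=0)) / X.std(axis=0)   # standardise all features

# Minimal logistic regression trained by gradient descent.
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))     # predicted probability
    w -= 0.5 * (X.T @ (p - y)) / len(y)    # gradient step on weights
    b -= 0.5 * (p - y).mean()              # gradient step on bias

accuracy = ((1 / (1 + np.exp(-(X @ w + b))) > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

In a real system the image features would come from a computer-vision backbone rather than being hand-supplied, but the fusion idea, concatenating image and tabular inputs before classification, is the same.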
After evaluating the model’s ability to predict pathologic margin status, researchers compared its performance to the accuracy of human interpretation of the same specimen images. Remarkably, the AI model performed on par with, if not better than, human experts.
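A comparison like this typically scores both the model's calls and a human reader's calls against the pathology ground truth. The snippet below is a minimal sketch of that bookkeeping using invented predictions, not the study's data:

```python
# Hypothetical sketch: scoring margin calls from a model and a human
# reader against pathology ground truth. All values are invented.
def sensitivity_specificity(preds, truth):
    """Fraction of true positives caught, and true negatives caught."""
    tp = sum(p and t for p, t in zip(preds, truth))
    tn = sum(not p and not t for p, t in zip(preds, truth))
    fp = sum(p and not t for p, t in zip(preds, truth))
    fn = sum(not p and t for p, t in zip(preds, truth))
    return tp / (tp + fn), tn / (tn + fp)

truth = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]   # pathology margin status
model = [1, 1, 0, 0, 0, 0, 1, 0, 1, 0]   # model's margin calls
human = [1, 0, 0, 0, 1, 0, 1, 0, 1, 0]   # one reader's margin calls

for name, preds in [("model", model), ("human", human)]:
    sens, spec = sensitivity_specificity(preds, truth)
    print(f"{name}: sensitivity={sens:.2f} specificity={spec:.2f}")
```

Here sensitivity matters most clinically: a missed positive margin means cancerous tissue may be left behind, which is exactly the failure mode the tool aims to reduce.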
First author Kevin Chen, MD, a general surgery resident in the Department of Surgery, commented that AI models can support doctors’ and surgeons’ decision-making in the operating room using computer vision. The team found that the AI model matched or slightly surpassed humans in identifying positive margins.
Gallagher highlighted the model’s potential significance in patients with higher breast density, where distinguishing between cancer and healthy tissue can be particularly challenging due to their similar appearance on mammograms.
Moreover, the AI model could be a valuable tool for under-resourced hospitals that lack specialist surgeons, radiologists, or pathologists readily available to inform decision-making during surgery.
Although the AI model is still in its early stages, ongoing efforts involve expanding the dataset with images from more patients and different surgeons. Rigorous validation in further studies is required before clinical implementation. Researchers anticipate that the model’s accuracy will continue to improve as they accumulate more knowledge about the appearance of normal tissue, tumours, and margins.