A group of researchers demonstrated that deep-learning algorithms can achieve high accuracy in the diagnostic interpretation of cystic hygroma in the first trimester (the earliest phase of pregnancy, counted from the first day of a woman’s last period, before conception, through the end of the 13th week). The results were further validated against expert clinical assessment.
What is cystic hygroma? A cystic hygroma, also known as a lymphangioma, is a birth abnormality that manifests as a thin-walled, sac-like structure, most frequently on an infant’s head and neck. It can form while the baby develops inside the womb, from fragments of material containing fluid and white blood cells; this substance is referred to as embryonic lymphatic tissue. A cystic hygroma typically appears as a soft bulge under the skin soon after birth, although it may not be noticed at birth and is occasionally not diagnosed until adulthood.
The Method
The team sought to develop a deep-learning model to analyse fetal ultrasound images and accurately identify cases of cystic hygroma compared to normal controls. The study was carried out at The Ottawa Hospital, a multi-site tertiary-care facility in Ottawa, Canada.
Digital Imaging and Communications in Medicine (DICOM)-formatted first-trimester ultrasound images acquired between March 2014 and March 2021 were downloaded from the institution’s Picture Archiving and Communication System (PACS). A clinical specialist reviewed and validated both the cases and the normal images. Because the data were fully anonymised and de-identified before model training, patient consent was not required, and the patients’ right to privacy was preserved.
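The study does not describe its de-identification pipeline in detail, but the general idea is to blank or drop the DICOM header fields that carry personal health information before the images reach the training set. The sketch below illustrates this on a plain dictionary standing in for a parsed DICOM dataset; the tag names follow standard DICOM keywords, and the field list is illustrative, not the study’s actual one.

```python
# Header fields that identify the patient or institution (standard
# DICOM keywords; an illustrative subset, not an exhaustive list).
PHI_TAGS = {
    "PatientName", "PatientID", "PatientBirthDate",
    "InstitutionName", "ReferringPhysicianName", "AccessionNumber",
}

def deidentify(record):
    """Return a copy of the record with PHI fields removed,
    keeping the pixel data and non-identifying study metadata."""
    return {tag: value for tag, value in record.items()
            if tag not in PHI_TAGS}

# Toy record standing in for a parsed DICOM ultrasound study.
scan = {
    "PatientName": "DOE^JANE",
    "PatientID": "12345",
    "Modality": "US",        # ultrasound
    "PixelData": b"...",     # image bytes, kept for model training
}
clean = deidentify(scan)     # identifying fields are gone
```

In a real pipeline the same logic would run over parsed DICOM datasets (e.g. via a library such as pydicom) rather than dictionaries, and would also scrub private vendor tags and any text burned into the pixels.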
“A 4-fold cross-validation (4CV) design was used, whereby the same deep-learning architecture was tested and trained four different times using randomly partitioned versions (folds) of the image dataset. For each fold, 75% of the dataset was used for model training, and 25% was used for model validation,” the paper reads.
To maximise the performance of the deep-learning models on the limited dataset, the 4-fold cross-validation (4CV) design was adopted rather than the more common 10-fold cross-validation (10CV) approach. Any patient personal health information (PHI) was removed before the images were analysed.
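The 4-fold design described above can be sketched as follows: the dataset is shuffled once and partitioned into four folds, and the model is trained four times, each time holding out one fold (25% of the images) for validation and training on the other three (75%). This is a minimal stdlib-only illustration of the partitioning, not the study’s code.

```python
import random

def four_fold_splits(n_items, seed=0):
    """Randomly partition indices 0..n_items-1 into 4 folds.
    For each fold, 25% of the data is held out for validation
    and the remaining 75% is used for training."""
    idx = list(range(n_items))
    random.Random(seed).shuffle(idx)          # one random partition of the data
    fold_size = n_items // 4
    folds = [idx[i * fold_size:(i + 1) * fold_size] for i in range(4)]
    splits = []
    for i in range(4):
        val = folds[i]                        # held-out 25%
        train = [j for f in range(4) if f != i for j in folds[f]]  # other 75%
        splits.append((train, val))
    return splits

# Four (train, validation) index pairs for a 100-image dataset;
# each image appears in the validation set exactly once.
splits = four_fold_splits(100)
```

Reporting the mean accuracy across the four validation folds, as the paper does, gives a more stable performance estimate on a small dataset than a single train/validation split.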
Images were classified as “normal” or “cystic hygroma” using a DenseNet convolutional neural network (CNN) architecture. The team also applied Gradient-weighted Class Activation Mapping (Grad-CAM), a method widely used to explain deep-learning models visually. All four models performed well on the validation set, with an overall mean accuracy of 93 per cent.
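Grad-CAM builds its heat map from the CNN’s final convolutional feature maps: each channel is weighted by the spatial average of the gradient of the class score with respect to that channel, the weighted maps are summed, and negative values are clipped (ReLU) so only regions supporting the predicted class light up. The pure-Python sketch below shows that computation on toy arrays; a real pipeline would take the activations and gradients from the trained DenseNet.

```python
def grad_cam(activations, gradients):
    """Grad-CAM heat map from K feature maps (each H x W) and the
    gradients of the class score with respect to those maps.

    Each channel's weight is the spatial mean of its gradient (how
    important that feature is for the target class); the weighted
    maps are summed and passed through ReLU."""
    k = len(activations)
    h, w = len(activations[0]), len(activations[0][0])
    # Channel weights: global-average-pool the gradients.
    weights = [sum(sum(row) for row in g) / (h * w) for g in gradients]
    # Weighted sum over channels at each pixel, then ReLU.
    return [[max(0.0, sum(weights[c] * activations[c][i][j]
                          for c in range(k)))
             for j in range(w)] for i in range(h)]

# Toy example: two 2x2 feature maps, one supporting the class
# (uniform positive gradient) and one opposing it.
acts = [[[1.0, 0.0], [0.0, 1.0]],
        [[0.0, 2.0], [2.0, 0.0]]]
grads = [[[1.0, 1.0], [1.0, 1.0]],       # channel weight +1.0
         [[-1.0, -1.0], [-1.0, -1.0]]]   # channel weight -1.0
heat = grad_cam(acts, grads)             # highlights only the first pattern
```

In practice the resulting low-resolution map is upsampled and overlaid on the ultrasound image, which is how the heat maps discussed below are produced.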
Foetal image acquisition and interpretation are difficult because of the small foetal anatomy, involuntary foetal movements, and low picture quality, yet ultrasound is essential for monitoring foetal growth and development. Heat maps created with deep learning and Grad-CAM may now accurately highlight anomalies in the foetal head and neck region.
Other use cases
Recently, a novel technique developed in a collaboration between researchers at IVIRMA Valencia and AIVF, Israel, used computer vision and artificial intelligence (AI) to read cell activity from time-lapse imaging, doing away with the need for invasive cell biopsy in IVF treatment. The ground-breaking technique may also effectively distinguish euploid from aneuploid embryos, identifying which are best suited for IVF therapy.
To sum up, clinical diagnostics, and medical imaging in particular, increasingly rely on machine learning and artificial intelligence (AI) models. Deep learning, a class of machine-learning models inspired by artificial neural networks, can process enormous volumes of data to find key traits that are predictive of desired outcomes. As data accumulate, model performance can be continuously and gradually improved.
Source: indiaai.gov.in