Alzheimer’s disease (AD) is a neurodegenerative disorder that affects a significant proportion of the older population worldwide. It causes irreparable damage to the brain and severely impairs patients’ quality of life. AD cannot yet be cured, but early detection allows medication to manage symptoms and slow the progression of the disease.
Functional magnetic resonance imaging (fMRI) is a noninvasive diagnostic technique for brain disorders. It measures minute changes in blood oxygen levels within the brain over time, giving insight into the local activity of neurons. Despite its advantages, fMRI has not been used widely in clinical diagnosis, for two reasons. First, the changes in fMRI signals are so small that they are easily obscured by noise, which can throw off the results. Second, fMRI data are complex to analyze. This is where deep-learning algorithms come into the picture.
In a recent study published in the Journal of Medical Imaging, scientists from Texas Tech University employed machine-learning algorithms to classify fMRI data. They developed a type of deep-learning algorithm known as a convolutional neural network (CNN) that can differentiate among the fMRI signals of healthy people, people with mild cognitive impairment, and people with AD.
CNNs can autonomously extract features from input data that are hidden from human observers. They obtain these features through training, which requires a large amount of pre-classified data. CNNs are predominantly used for 2D image classification, so four-dimensional fMRI data (three spatial dimensions plus one temporal) present a challenge: they are incompatible with most existing CNN designs.
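To see the mismatch concretely, the snippet below contrasts the tensor shape of a standard 2D image with that of a single fMRI scan; the sizes are hypothetical and chosen only for illustration.

```python
import torch

# Hypothetical sizes, for illustration only.
image = torch.zeros(3, 224, 224)      # 2D RGB image: (channels, height, width)
fmri = torch.zeros(140, 64, 64, 48)   # one fMRI scan: (timepoints, x, y, z)

# A standard 2D convolutional layer expects batches shaped (channels, height, width),
# so it handles the image but cannot ingest the extra spatial and temporal axes.
conv2d = torch.nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3)
print(conv2d(image.unsqueeze(0)).shape)  # torch.Size([1, 16, 222, 222])
# conv2d(fmri.unsqueeze(0))              # would raise an error: too many dimensions
```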
To overcome this problem, the researchers developed a CNN architecture that can handle fMRI data with minimal pre-processing. The first two layers of the network extract features from the data based solely on temporal changes, without regard for 3D structural properties. The three subsequent layers then extract spatial features at different scales from the previously extracted temporal features. This yields a set of spatiotemporal characteristics that the final layers use to classify the input fMRI data as coming from a healthy subject, a subject with early or late mild cognitive impairment, or a subject with AD.
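As a rough illustration of this temporal-then-spatial design (not the authors’ exact network), the PyTorch sketch below filters each voxel’s time course first, reshapes the resulting temporal features into a 3D volume, and then applies three spatial convolutions at progressively coarser scales before classifying into four groups (healthy, early MCI, late MCI, AD). The layer counts, kernel sizes, and the pooling step that collapses the time axis are all assumptions made for brevity.

```python
import torch
import torch.nn as nn

class SpatiotemporalFMRINet(nn.Module):
    """Illustrative sketch only: temporal filtering per voxel, then 3D spatial
    convolutions over the temporal features, then a small classifier head."""

    def __init__(self, n_classes=4):
        super().__init__()
        # Stage 1: convolve along time only, so each voxel's time course is
        # filtered independently of its neighbors (no spatial mixing yet).
        self.temporal = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(8, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # collapse time, keep 16 temporal features
        )
        # Stage 2: three 3D convolutions with stride 2, extracting spatial
        # structure from the temporal-feature volume at coarser and coarser scales.
        self.spatial = nn.Sequential(
            nn.Conv3d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(64, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                        # x: (batch, T, X, Y, Z)
        b, t, xs, ys, zs = x.shape
        v = x.permute(0, 2, 3, 4, 1).reshape(-1, 1, t)           # one row per voxel
        v = self.temporal(v)                                      # (voxels, 16, 1)
        v = v.reshape(b, xs, ys, zs, 16).permute(0, 4, 1, 2, 3)   # (B, 16, X, Y, Z)
        v = self.spatial(v).flatten(1)                            # (B, 64)
        return self.classifier(v)                                 # class logits

# Example with a tiny random batch (hypothetical dimensions).
logits = SpatiotemporalFMRINet()(torch.randn(2, 30, 16, 16, 12))
print(logits.shape)  # torch.Size([2, 4])
```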
This strategy offers many advantages over previous attempts to combine machine learning with fMRI for AD diagnosis. Harshit Parmar, a doctoral student at Texas Tech University and lead author of the study, explains that the most important aspect of their work lies in the qualities of their CNN architecture. The new design is simple yet effective at handling complex fMRI data, which can be fed to the CNN without any significant manipulation or modification of the data structure. This, in turn, reduces the computational resources needed and lets the algorithm make predictions faster.
Can deep learning methods improve the field of AD detection and diagnosis? Parmar thinks so. “Deep learning CNNs could be used to extract functional biomarkers related to AD, which could be helpful in the early detection of AD-related dementia,” he explains.
The researchers trained and tested their CNN with fMRI data from a public database, and the initial results were promising: the classification accuracy of their algorithm was as high as or higher than that of other methods.
If these results hold up for larger datasets, their clinical implications could be tremendous. “Alzheimer’s has no cure yet. Although brain damage cannot be reversed, the progression of the disease can be reduced and controlled with medication,” according to the authors. “Our classifier can accurately identify the mild cognitive impairment stages which provide an early warning before progression into AD.”