Brain MRIs and AI predict brain age, cancer survival, and other diseases

A new AI model from Mass General Brigham learns from unlabeled brain MRIs, then tackles tasks from brain age to tumor genetics.

Written By: Shy Cohen
Edited By: Joseph Shavit
BrainIAC, a new AI model, analyzes brain MRIs to predict dementia risk, tumor mutations, and brain cancer survival. (CREDIT: AI-generated image / CC BY-SA 4.0)

Mass General Brigham researchers are betting that the next big leap in brain medicine will come from teaching artificial intelligence to “read” MRI scans in a more flexible way.

The team, led by Benjamin Kann, MD, in the Artificial Intelligence in Medicine (AIM) Program at Mass General Brigham, built a new AI foundation model called BrainIAC. In a study published in Nature Neuroscience, the model handled many brain MRI jobs at once, from estimating brain age to predicting dementia risk. It also looked for tumor gene changes and helped forecast survival in brain cancer.

“BrainIAC has the potential to accelerate biomarker discovery, enhance diagnostic tools and speed the adoption of AI in clinical practice,” Kann said. “Integrating BrainIAC into imaging protocols could help clinicians better personalize and improve patient care.”

BrainIAC is a generalizable foundation model for neuroimaging that supports diverse downstream clinical tasks, from molecular subtype prediction to brain aging and survival estimation, by enabling efficient adaptation across a wide range of applications. (CREDIT: Divyanshu Tak, Mass General Brigham)

Why brain MRIs trip up many AI tools

If you follow medical AI news, you have seen a pattern. Many models do one thing well, then struggle outside their home hospital. Brain MRI makes that problem worse. Scans can look different across institutions, scanner brands, and settings. Even the same patient can have several MRI “sequences,” each showing tissue in a different way.

Common sequences include T1-weighted, T2-weighted, FLAIR, and T1-weighted with contrast enhancement (T1CE). Hospitals do not always collect the same set. Scanner strength can vary from 1.5T to 7T. Imaging settings also shift brightness and contrast. That mix can confuse models trained on narrow, labeled datasets.

You also run into a basic bottleneck. Many AI systems need lots of labeled images, meaning experts must mark findings by hand. For rare diseases or specialized scans, those labels can be hard to get.

A model that learns from unlabeled scans

To get around that, the Mass General Brigham team designed BrainIAC as a general-purpose “encoder” for full 3D MRI volumes. Instead of learning mainly from labeled examples, it used self-supervised learning. That method lets the model learn patterns from scans without annotations.

BrainIAC was pretrained on 32,015 multiparametric MRIs pulled from 16 datasets that covered 10 medical conditions. Across the full set of experiments, the researchers curated 48,965 scans from 34 datasets.

The pretraining approach was based on SimCLR, a contrastive learning method. The model saw many cropped patches from 3D scans. It learned to treat two altered views of the same brain region as related, and to keep patches from different regions farther apart in its internal “map” of features.
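As a rough illustration of the contrastive idea, here is a minimal NT-Xent-style loss in plain Python. This is a sketch of the general SimCLR objective, not the authors' implementation, which works on 3D MRI patches with a vision transformer: embeddings of two views of the same patch are pulled together, and all other embeddings in the batch act as negatives.

```python
import math

def cosine(u, v):
    # Cosine similarity between two feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def nt_xent_loss(views_a, views_b, temperature=0.1):
    """Contrastive (NT-Xent) loss over a batch of paired views.

    views_a[i] and views_b[i] are embeddings of two augmentations of
    the same region; every other embedding serves as a negative.
    """
    embeddings = views_a + views_b
    n = len(views_a)
    loss = 0.0
    for i in range(len(embeddings)):
        j = (i + n) % len(embeddings)  # index of this view's positive pair
        denom = sum(math.exp(cosine(embeddings[i], e) / temperature)
                    for k, e in enumerate(embeddings) if k != i)
        pos = math.exp(cosine(embeddings[i], embeddings[j]) / temperature)
        loss += -math.log(pos / denom)
    return loss / len(embeddings)
```

When matched views have similar embeddings, the loss is low; when the pairing is scrambled, it is high. That pressure is what teaches the encoder a useful feature map without any labels.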

BrainIAC is a general-purpose foundation model for brain MRI analysis, trained using a contrastive SSL approach and validated on seven diverse downstream applications. (CREDIT: Nature Neuroscience)

The researchers tested three options for the model’s backbone and chose SimCLR-ViT-B. That version performed most consistently when the team gave it only a few labeled examples.

Putting BrainIAC through seven tough tests

After pretraining, the team evaluated BrainIAC on seven tasks that range from simple sorting to hard clinical prediction. Those tasks included MRI sequence classification, brain age prediction, early dementia prediction, time-to-stroke prediction, IDH mutation status prediction in glioma, survival prediction for glioblastoma, and adult glioma tumor segmentation.

They compared BrainIAC with three baselines. One model trained from scratch. Another used transfer learning from MedicalNet, a 3D medical imaging pretrained model. A third was a segmentation-focused foundation model called BrainSegFounder.

In MRI sequence classification, BrainIAC stood out when training data was limited. Using the BraTS 2023 dataset, BrainIAC hit 90.8% balanced accuracy with only 10% of the training data. With more data, it rose to 97.2%.

For brain age prediction, the team used 6,249 T1-weighted scans and measured error in years. On an external test set with 20% training availability, BrainIAC reached a mean absolute error of 6.55 years. Other models posted larger errors.
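Mean absolute error here is simply the average gap, in years, between predicted and actual age. A two-patient example with made-up numbers:

```python
def mean_absolute_error(predicted_ages, true_ages):
    # Average absolute difference between predictions and ground truth.
    assert len(predicted_ages) == len(true_ages)
    return sum(abs(p - t) for p, t in zip(predicted_ages, true_ages)) / len(true_ages)

# Hypothetical values: the model says 70 and 55; true ages are 65 and 58.
print(mean_absolute_error([70, 55], [65, 58]))  # → 4.0
```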

When the goal is to predict what eyes cannot see

Some of the hardest tasks involve information you cannot easily spot on a scan. One example is predicting IDH mutation status in low-grade glioma. That gene change can shape treatment plans and outcomes. With just 10% training availability, BrainIAC reached an AUC of 0.68. With full training data, it reached 0.79.
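AUC measures how often the model ranks a true positive case above a true negative one: 0.5 is chance, 1.0 is perfect ranking. A minimal version with made-up scores:

```python
def auc(scores, labels):
    """AUC as the probability that a random positive outranks a random negative."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical risk scores for four patients, two mutation-positive (1).
print(auc([0.9, 0.8, 0.3, 0.2], [1, 0, 1, 0]))  # → 0.75
```

On this scale, BrainIAC's 0.68 with 10% of the training data means it ranked mutation-positive scans above mutation-negative ones well more often than chance.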

Representative axial FLAIR images with predicted tumor segmentation masks (red overlay) from BrainIAC models fine-tuned with 10%, 100%, K = 1 and K = 5 of the training data. (CREDIT: Nature Neuroscience)

The team also tested survival prediction for glioblastoma, using the UPENN-GBM dataset. The target was survival at one year post-treatment. At 10% training availability, BrainIAC reached an AUC of 0.62 and outperformed the comparison models. With full training data, it reached 0.72. The researchers also split patients into high- and low-risk groups. They reported significant separation in survival curves in several settings.

BrainIAC also performed well on early dementia-related prediction. Using OASIS-1, the task was mild cognitive impairment (MCI) versus healthy control classification. With full training data, BrainIAC reached an AUC of 0.88.

The “almost no labels” reality check

Real clinics do not always have big, labeled datasets waiting for a model. So the team ran few-shot tests. In these trials, the model learned from K = 1 or K = 5 labeled examples per class.

BrainIAC generally held up better than the alternatives. In sequence classification with K = 1, it reached 0.53 balanced accuracy. In IDH mutation prediction with K = 1, it reached an AUC of 0.64. In tumor segmentation, it reached a Dice score of 0.51 with only one sample.
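The Dice score used for segmentation measures overlap between the predicted tumor mask and the expert-drawn mask, from 0 (no overlap) to 1 (exact match). A toy version on flattened binary masks, not the study's pipeline:

```python
def dice_score(pred_mask, true_mask):
    """Dice overlap between two binary masks, flattened to 0/1 lists."""
    intersection = sum(p and t for p, t in zip(pred_mask, true_mask))
    total = sum(pred_mask) + sum(true_mask)
    # Convention: two empty masks count as a perfect match.
    return 2 * intersection / total if total else 1.0

# Hypothetical 4-voxel masks: prediction marks two voxels, truth marks one.
print(dice_score([1, 1, 0, 0], [1, 0, 0, 0]))  # → 0.666...
```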

The researchers also ran “linear probing,” where the core BrainIAC encoder stays frozen and only a small task head is trained. Across all seven tasks, the frozen BrainIAC features still supported strong results, which suggests the model learned broadly useful patterns.
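Linear probing can be sketched as follows. The "encoder" below is a tiny stand-in for the real frozen BrainIAC network, and the toy data is invented; the point is that only the small logistic head's weights ever update.

```python
import math

def frozen_encoder(scan):
    # Stand-in for the pretrained encoder: its behavior never changes.
    # Here it extracts two hypothetical summary features from a "scan".
    return [sum(scan) / len(scan), max(scan) - min(scan)]

def train_linear_probe(scans, labels, lr=0.5, epochs=200):
    """Fit only a logistic-regression head on top of frozen features."""
    feats = [frozen_encoder(s) for s in scans]  # encoder output, fixed
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(feats, labels):
            z = w[0] * x[0] + w[1] * x[1] + b
            p = 1 / (1 + math.exp(-z))
            g = p - y  # gradient of the log-loss w.r.t. z
            w = [w[0] - lr * g * x[0], w[1] - lr * g * x[1]]
            b -= lr * g
    return w, b

def predict(scan, w, b):
    x = frozen_encoder(scan)
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
```

If the frozen features already separate the classes, as in this sketch, a head this small is enough, which is the sign that the pretrained representation itself carries the signal.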

The team tested robustness too. They added synthetic changes meant to mimic real MRI issues, like contrast shifts and imaging artifacts. BrainIAC stayed more stable than the other models, especially in low-data settings.

The researchers say the model still has limits. It focuses on standard structural sequences, including T1w, T2w, FLAIR, and T1CE. It did not include diffusion-weighted imaging or functional MRI. It also used skull-stripped images, which narrows use to intracranial analysis. The team says larger datasets and more imaging types could push performance further.

Practical implications of the research

If BrainIAC or similar models move into wider testing, you could see faster progress in brain biomarkers, the measurable signs of disease that help guide care. A foundation model that works with fewer labels could help hospitals build tools even when expert annotations are scarce.

You could also see more consistent AI performance across sites. BrainIAC was designed to handle the messy reality of MRI variation. That may help reduce the gap between lab demos and real clinic use.

For patients, better prediction tools could support earlier risk estimates for dementia, sharper planning for brain cancer care, and stronger guidance when time matters, like after a stroke. For researchers, a reusable brain MRI model could speed studies by cutting the time needed to build new systems from scratch.

Research findings are available online in the journal Nature Neuroscience.

The original story "Brain MRIs and AI predict brain age, cancer survival, and other diseases" was published by The Brighter Side of News.





Shy Cohen
Science and Technology Writer

Shy Cohen is a Washington-based science and technology writer covering advances in artificial intelligence, machine learning, and computer science. He reports news and writes clear, plain-language explainers that examine how emerging technologies shape society. Drawing on decades of experience, including long tenures at Microsoft and work as an independent consultant, he brings an engineering-informed perspective to his reporting. His work focuses on translating complex research and fast-moving developments into accurate, engaging stories, with a methodical, reader-first approach to research, interviews, and verification.