
MIT Develops Medical AI Model That Embraces Nuance, 'Talks' to Itself

Rather than spitting out a one-and-done answer, Tyche accounts for anomalies it's previously spotted to build a more 'educated' response.
By Adrianna Nine
[Image: A man closely inspecting an X-ray. Credit: National Cancer Institute/Unsplash]

If there were ever a technology to prove adaptable to a wide range of medical applications, it's artificial intelligence. Over the last few years, we've seen AI spot signs of heart disease in the retina, help radiologists inspect X-rays, and predict people's odds of developing various types of cancer. These potentially life-saving implementations rely on segmentation, the process by which medical images are divided into distinct regions and carefully scrutinized for red flags. But even with AI's rapid recent progress, the above examples sometimes fall short of what a trained medical professional might detect under particularly complicated circumstances.

MIT's Tyche aims to fill this gap. Named after the Greek divinity of chance, this AI system embraces uncertainty, marries anomalous observations across segmentations, and doesn't require conventional retraining. Together, these assets could improve patient outcomes without complicating medical professionals' workflows. 

In a preprint paper on arXiv, computer scientists and bioimage analysts at MIT, Massachusetts General Hospital, and the Broad Institute of MIT and Harvard write that most AI-powered segmentation models produce a single, "deterministic" result for a given input. This leaves little room for nuance and doesn't let the model weigh previously spotted anomalies as it inspects a new one. Where such a model might spit out a one-and-done answer for an MRI or a CT scan, a group of five trained humans, called annotators, might produce five different answers, each worth investigating.

[Image: A woman sitting in front of a computer displaying a CT image; a patient undergoes a CT scan in the background. The early days of computed tomography (CT) scan technology, when computers were bulky and beige. Credit: National Cancer Institute/Unsplash]

Tyche circumvents this issue using what MIT calls a "context set." Rather than retraining Tyche every time they use the system—something most medical professionals find time-consuming and likely aren't prepared to do—the user provides a 16-image context set that allows Tyche to "understand" that ambiguity might be a factor. The system then moves through segmentation layers, producing multiple candidate answers for each layer. As it analyzes the image, Tyche looks for where its answers overlap, enabling it to home in on an increasingly confident response by the end of the process.
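The overlap idea can be illustrated with a minimal sketch. This is not Tyche's actual implementation (which is a neural network conditioned on the 16-image context set); the `candidate_masks` function below is a hypothetical stand-in that perturbs a crude threshold segmentation to produce several plausible masks, and `consensus` shows how per-pixel agreement across candidates separates confident regions from ambiguous ones:

```python
import numpy as np

def candidate_masks(image, n_candidates=5, seed=None):
    """Hypothetical stand-in for a stochastic segmentation model:
    returns several plausible binary masks for one image by perturbing
    a crude threshold segmentation."""
    rng = np.random.default_rng(seed)
    base = image > image.mean()  # crude threshold "segmentation"
    masks = []
    for _ in range(n_candidates):
        flip = rng.random(image.shape) < 0.05  # perturb ~5% of pixels
        masks.append(np.logical_xor(base, flip))
    return masks

def consensus(masks):
    """Per-pixel agreement: 1.0 where every candidate includes the
    pixel in the region, 0.0 where none do, ~0.5 where they disagree."""
    return np.stack(masks).astype(float).mean(axis=0)

image = np.random.default_rng(0).random((64, 64))
masks = candidate_masks(image, n_candidates=5, seed=1)
agreement = consensus(masks)
# Values near 1.0 or 0.0 mark high-confidence pixels; values near 0.5
# flag ambiguous regions worth a human annotator's attention.
```

In this toy version, the agreement map plays the role of the "increasingly confident response": where the candidate answers overlap, confidence is high; where they diverge, the ambiguity itself becomes useful information.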

Tyche's ability to "talk" to itself enables it to supply human-like analyses. "It is like rolling dice," Marianne Rakic, an MIT computer science PhD candidate, said in a release. "If your model can roll a two, three, or four but doesn't know you already have a two and a four, then either one might appear again."

Tyche reportedly works faster than conventional segmentation models and captures a level of ambiguity previously only seen in human annotators. While humans will (hopefully) always be a part of the medical image annotation process, systems like Tyche might make that process more efficient while catching elements missed by busy professionals.
