University of Texas at Austin

News

'Realistic Goals for AI in Oncology' Charts the Course for the Future of Healthcare

By Aira Balasubramanian

Published Sept. 19, 2024

Speakers Jayashree Kalpathy-Cramer, Satwik Rajaram, Emily Greenspan, and Maria Rodriguez Martinez pictured alongside Tom Yankeelov, who hosted the event. Credit: Joanne Foote.

What does the hospital of the future look like? Recent advances in Artificial Intelligence (AI), deep learning, and digital twin technologies that predict tumor growth, analyze drug activity, and serve as diagnostic tools make this ‘tomorrowland’ look more possible than ever before. 

'Realistic Goals for AI in Oncology,' a half-day workshop held on September 16 and hosted by Tom Yankeelov, director of the Center for Computational Oncology at the Oden Institute for Computational Engineering and Sciences, considered just that. The workshop, part of The University of Texas at Austin's Year of AI, featured experts at the forefront of bridging medicine with Artificial Intelligence. Speakers and attendees connected our wildest dreams for futuristic cancer diagnosis with scientifically, ethically, and economically sound principles for applying AI in oncological healthcare. “The goal of this workshop is an attempt to strip away some of that hype to identify problems in oncology for which AI is well-suited to practically solve,” shared Yankeelov as attendees and speakers filtered into the Peter O'Donnell Building's Avaya Auditorium.

The goal of this workshop is an attempt to strip away some of that hype to identify problems in oncology for which AI is well-suited to practically solve.

— Tom Yankeelov

The event began with a talk by Maria Rodriguez Martinez, an associate professor of biomedical informatics and data science at the Yale School of Medicine. Initially trained as a physicist, she integrates mechanistic and AI models to understand the role of B and T cells in cancer and autoimmune diseases. By using interpretable deep learning methods to uncover the rules behind model predictions, she seeks to open the 'black box' surrounding how some AI models reach their conclusions. This is key to ensuring that models developed through computational research are accepted in clinical practice, as it allows physicians to understand the 'why' behind a model's conclusion, building trust in its validity.

This emphasis on trust was developed further by Satwik Rajaram, assistant professor at UT Southwestern Medical School's Lyda Hill Department of Bioinformatics. His talk covered the role of AI and deep learning models in analyzing cancer morphologies. Though tumors are often colloquially discussed as a monolithic collection of cells, molecular and cellular analysis reveals their complexity and heterogeneity, which adds to the challenge of crafting effective treatments. Rajaram outlined the principles of designing mathematical models that predict morphological changes during tumor evolution, but emphasized that we do not consistently understand how these models make predictions. “I want to get to a place where models can make hypotheses about tumor development, rather than simply a conclusion,” he noted, adding that this ability “goes deeper, provides more clinical insight, and serves as a reality check of model rationale.”


Jayashree Kalpathy-Cramer. Credit: Joanne Foote.

The contrast between the limitations and risks of AI and its benefits was further considered by Jayashree Kalpathy-Cramer, professor of ophthalmology at the University of Colorado. While discussing the role of deep learning in image reconstruction, Kalpathy-Cramer noted the risk that biases or insufficiencies in model training data may lead to hallucinations and inaccuracies in model predictions. When applied to healthcare, these errors can be catastrophic, and serve to erode the trust of medical practitioners.

“Model bias can be present without us recognizing it,” she noted, adding that AI advances in healthcare must be “matched by mechanistic models” to ensure that the results provided by these technologies are technically sound.


CSEM students Sophia Epstein, Graham Pash, and Casey Stowers (left to right) lead a panel discussion on the role of AI in oncology with the guest speakers. Credit: Joanne Foote.

Many of these ideas were reemphasized in a graduate student panel featuring Computational Oncology CSEM students Sophia Epstein, Graham Pash, and Casey Stowers. Epstein shared that it was “refreshing to see that AI is more than just a buzzword - it is increasingly permeating every aspect of life, including oncology,” noting that she was encouraged by the emphasis on “deep learning methods to uncover the rules behind model predictions.” Pash concurred, sharing that it was “exciting to get a look into the future of clinical AI applications.”

The panel blended insightful questions about the factors necessary for clinicians to trust AI models in their practice with discussion of what hospitals of the future may look like. Panelists, attendees, and Oden Institute faculty members shared insights developed through their research and practice. Thomas J.R. Hughes, lead of the Computational Mechanics Group at the Oden Institute, noted that convincing clinicians relied on “convincing clinical trials,” while Dr. Boone Goodgame, medical director for oncology at Ascension Seton, shared that he required “proof that models predict what they say they do” when applied to his practice.


Dr. Boone Goodgame and Thomas J.R. Hughes discuss their takeaways from the event. Credit: Joanne Foote.

Hughes went on to praise the event's keynote speaker, Dr. Emily Greenspan, who serves as program director of a collaborative AI development effort between the U.S. Department of Energy and the National Cancer Institute, for her “comprehensive overview of activity in the field from the government point of view.” Greenspan's talk outlined the NCI's strategic pillars for AI in oncology, joking that unregulated implementation was akin to “letting the horse out of the barn” and underscoring that democratic access to datasets, patient outreach, and security are essential to ensuring safe, ethical clinical outcomes.


Emily Greenspan delivering her keynote address. Credit: Joanne Foote.

Greenspan spoke with attendee Alaa Melek, a Ph.D. student in UT Austin's Dynamic Medical Image and Computing Lab within the Department of Biomedical Engineering, about the role that bias plays in AI models and the steps that can be taken to mitigate it.

“AI in medicine has the potential to exacerbate biases or close the gap,” shared Melek. Greenspan outlined the Biden Administration's trans-agency roadmap for developing ethical AI technologies that support the sustainability and fairness of model development and use.

'Realistic Goals for AI in Oncology' provided a rare framework for clinicians and computational researchers to collaborate on how they see the future of healthcare. “One of my big takeaways was getting to see conversations between clinicians and modelers - it's cool to see them argue on where they agree and disagree, and find where they can collaborate,” said CSEM student Casey Stowers. Often, AI in healthcare is discussed in absolutes; this event centered on tangible collaborations and necessary limitations.

“Attendees were able to hear a deep presentation and discussion of where AI makes an impact in oncology, as well as the areas where it can cause problems - these points are not frequently presented in the popular press or at conferences,” shared Tom Yankeelov, as he delivered a closing note. “I was happy these points were raised at the workshop.”