University of Texas at Austin


Endless Power - George Biros

By John Holden

Published June 16, 2022

Many computational scientists begin as mathematicians. Theirs is a community that believes our salvation lies in the universal language of numbers. Many societal problems, particularly those where we must predict future behaviors or outcomes, are best solved using complex applied mathematical tools — notably one known as partial differential equations (PDEs). Such problems include everything from designing new spacecraft to developing novel materials to determining what the climate will look like 500 years from now.

George Biros is one such computational scientist. His research touches on a variety of interdisciplinary computational applications in healthcare, defense, fluid dynamics and additive manufacturing. He also gets “under the hood” to advance the fundamental mathematical tools underpinning computational science and engineering (CSE) and high performance computing (HPC).

Biros is excited about the age of exascale supercomputing — which we recently entered with the unveiling of the Frontier system at Oak Ridge National Laboratory, capable of more than one exaflop, or a billion billion (10¹⁸), calculations per second. Supercomputers happen to be very good at solving PDEs. Although computational models can be realized using alternative mathematical and scientific tools, PDEs are used ubiquitously for modeling or simulating various physical, biological and chemical systems. And the increasing power, performance and storage capacity of HPC facilities, coupled with more sophisticated modeling techniques, means CSE now provides such accuracy that it has moved into almost every applicable area of science imaginable.

A professor of mechanical engineering and computer science at The University of Texas at Austin, Biros also holds the W. A. "Tex" Moncrief Chair in Simulation-Based Engineering Sciences at the Oden Institute for Computational Engineering and Sciences.

His research as head of the Parallel Algorithms for Data Analysis and Simulation Group at the Oden Institute tackles various fundamental problems in mathematics. Improving the underlying math can accelerate methods used in data analysis and machine learning, inverse problems, uncertainty quantification and simulation. These can be applied to many real-world scenarios, from medical image analysis to computational oncology.


Machines for Math Momentum

The Texas Advanced Computing Center (TACC) at UT Austin is home to more than a dozen different supercomputers, many designed for specific computing tasks. “Some supercomputing systems are primarily focused on machine learning,” said Bill Barth, Director of Future Technologies at TACC. “Other machines run what might be described as a more conventional style of computing. Then we have some systems focused entirely on storage.”

As one of UT Austin’s HPC gurus, Biros has developed close ties with TACC and is frequently called upon to provide expertise to both industry and government peers working in the HPC space. He uses several systems at TACC to find solutions to real-world challenges where numbers might hold the answers, and currently has two projects in health-related areas. “We have been studying tumor growth by interpreting MRI scans of patients for over a decade with the NIH,” he said. The focus is on aggressive brain tumors: tracing where a tumor may have started in order to characterize its malignancy. “We believe that might have some correlation with the specific mutations of a tumor.”

In this context, the PDEs model the tumor growth and its interaction with the surrounding tissue. Biros' team has been working on HPC numerical algorithms that can trace back the origin of the tumor based on observed images of each individual patient. This type of process is common in CSE and is referred to as an inverse problem. In this particular project, his workflow also includes advanced deep-learning methods for image segmentation, which are used to drive the inverse-problem solution.
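
The core idea of an inverse problem can be shown with a deliberately tiny sketch: fit a single growth-rate parameter of a toy exponential growth model to synthetic observations by minimizing a least-squares misfit. The model, the data and the grid-search solver below are illustrative assumptions only, not the group's actual PDE-constrained, MRI-driven solvers.

```python
# Toy inverse problem: recover the growth rate of a simple
# exponential growth model from "observed" data.
import math

def forward(r, t):
    """Forward model: quantity growing exponentially at rate r."""
    return math.exp(r * t)

# Synthetic observations generated with a "true" rate of 0.3.
true_r = 0.3
times = [0.0, 1.0, 2.0, 3.0, 4.0]
observed = [forward(true_r, t) for t in times]

def misfit(r):
    """Sum-of-squares mismatch between model predictions and observations."""
    return sum((forward(r, t) - y) ** 2 for t, y in zip(times, observed))

# Solve the inverse problem by minimizing the misfit; a crude grid
# search stands in for the large-scale optimizers used in practice.
candidates = [i / 1000 for i in range(1001)]
best_r = min(candidates, key=misfit)
print(f"recovered growth rate: {best_r:.3f}")  # prints 'recovered growth rate: 0.300'
```

Real tumor inverse problems replace the scalar model with a full PDE simulation and the grid search with large-scale optimization, but the structure (forward model, misfit, minimization) is the same.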

In a recent study, Biros and collaborators from the University of Pennsylvania applied the method retrospectively to a 450-subject dataset. “We were able to achieve excellent overlap between the predicted and observed tumor growth,” he said.

Biros also looks at blood rheology, or how blood flows in very fine capillaries. This research is closely related to fundamental fluid dynamics work undertaken in earlier studies, but the mathematical tools used in CSE frequently overlap from one application to the next. In this case, Biros found his methods to be applicable in the design of more efficient microfluidic devices used to treat deep vein thrombosis and other conditions.

In a recent study, the team designed a microfluidic device that separates normal from abnormal red blood cells, e.g., cells infected with the malaria parasite, by passing the cells through a “forest” of appropriately shaped micropillars. 

“This configuration of pillars is called a deterministic lateral displacement device,” he added. “The methods used involved novel mathematical formulations for the blood flow, numerical methods, reduced order model construction, and derivative-free optimization methods.”
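
Of the ingredients Biros lists, derivative-free optimization is the easiest to sketch. Below is a minimal, hypothetical 1-D compass (pattern) search that uses only function evaluations, never derivatives; the quadratic objective is a stand-in for the expensive flow simulations a real device-design loop would call.

```python
# Minimal derivative-free optimizer: a 1-D compass (pattern) search
# that shrinks its step whenever no trial move improves the objective.
def compass_search(f, x0, step=1.0, tol=1e-6, max_iter=1000):
    """Minimize f using only function evaluations (no derivatives)."""
    x = x0
    for _ in range(max_iter):
        if step < tol:
            break
        # Try a move in each direction; keep the first improvement.
        if f(x + step) < f(x):
            x = x + step
        elif f(x - step) < f(x):
            x = x - step
        else:
            step *= 0.5  # no improvement: refine the step size
    return x

# Stand-in objective with its minimum at x = 2.
best = compass_search(lambda x: (x - 2.0) ** 2, x0=0.0)
print(f"minimizer found: {best:.4f}")  # prints 'minimizer found: 2.0000'
```

This family of methods suits objectives computed by black-box simulations, where gradients are unavailable or unreliable.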

In addition to these two areas, Biros is co-principal investigator on a $17 million Predictive Science Academic Alliance Program (PSAAP III) project, funded by the Department of Energy's National Nuclear Security Administration, to develop a computational model of a plasma torch that will provide insights into a number of key strategic defense challenges. His work in additive manufacturing, a type of 3D printing using metal alloy powders, is funded by the Department of Energy and brings together Brookhaven and Oak Ridge National Laboratories, MIT and Texas A&M.


Biros' group is looking at vesicles, closed membranes possessing tension and bending energies, for modeling blood cell motion and the transport of drug-carrying capsules.

Higher Performance Computing

Biros doesn’t just exploit the formidable power of TACC’s systems. He is also finding new ways to improve efficiencies and scale performance and capacity to the next level.

That next level happens to be the exascale: computing systems capable of one exaflop, or a billion billion (10¹⁸), calculations per second.

Akin to an interior architect who redesigns and retrofits existing structures to enable greater efficiencies, Biros and his research team are working to improve existing algorithms, already in use, by providing the mathematical upgrades required to enable their application at larger scales.

“Monolithic algorithms are not going to be appropriate for the future systems that comprise heterogeneous compute and memory subsystems,” he said. “Novel spatio-temporal, nonlinear, domain-decomposition methods are required that can simultaneously minimize or avoid communication altogether, localize computation, and combine multiple algorithms tailored to the underlying architectures.”

Unsurprisingly, this mathematical retrofitting toolkit has come in handy as TACC prepares for a major expansion of its facilities through the NSF-funded Leadership-Class Computing Facility (LCCF) at the J.J. Pickle Research Campus. George Biros has been a central advisor, providing the NSF both with a list of areas where supercomputing will be needed most in the future and with an assessment of what it will take to equip a system with the power and capacity to solve the Grand Challenges of tomorrow.

“People like George, who run several kinds of problems on our machines and publish a lot of important papers in the field of HPC, will do a lot of benchmarking as they compare the performance of their algorithms on as many resources as they can get their hands on,” said Bill Barth.

Centers like TACC have early access to new technologies as vendors and designers of new HPC products and services are keen to repair any potential bugs before getting their products into the hands of end-users. “George is chomping at the bit all the time to get his hands on those new resources and give us the advice that we in turn give back to the vendor,” added Barth. “It's in his interest. But it's also in the interest of industry to receive free advice from one of the top supercomputing scholars before making a new tool more widely available.”

The Biros Scale

As we move into the exascale era, challenges remain, Biros says.  

The gap between memory bandwidth and computation continues to widen, making many existing algorithms prohibitively expensive. The new architectures also require extremely careful tuning and optimization.

Rapid prototyping will become quite challenging at scale. The driving applications require much tighter integration of massive streaming datasets with data analysis and simulation software, which in turn requires a whole range of new algorithms for inverse problems, optimal control, data assimilation, Bayesian inference and decisions under uncertainty.

His response to the question of how we might overcome these challenges may help explain why George Biros is a two-time winner of the Gordon Bell Prize, the most prestigious award in the field of high performance computing.

Dismissing any doubts, he casually explained, “We are currently aiming for this benchmark and we’ll reach it. But this is a relentless quest that can never be satisfied. Whatever is the next biggest, that’s what we aim for.”

What are Partial Differential Equations?

Partial differential equations (PDEs) are essential in fields like physics and engineering, as they supply useful mathematical descriptions of fundamental behaviors in a variety of systems – from heat to quantum mechanics.   

These equations generally involve two or more independent variables, an unknown function (dependent on those variables), and partial derivatives of the unknown function with respect to the independent variables. Many different types of PDEs exist, and most cannot be solved directly. However, they can be studied systematically as mathematical objects. For example, one of the most important PDEs is the “heat equation,” developed by Joseph Fourier in 1822 to model how a quantity such as heat diffuses through a given region. Scientists have found diffusive behavior in a number of systems that don’t initially appear closely related to heat propagation, including, for example, descriptions of how colorful spots, stripes and other patterns develop in various animal species. Properties of the well-understood heat equation can be used to explain, study, and develop methods for computationally simulating approximate solutions to an entire family of related PDEs.
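
As a minimal illustration of how such simulation works, the sketch below advances the one-dimensional heat equation u_t = α·u_xx in time with an explicit finite-difference scheme; the diffusivity, grid sizes and initial “hot spot” are arbitrary illustrative choices.

```python
# Explicit finite-difference solution of the 1-D heat equation
# u_t = alpha * u_xx, with fixed (zero) values at both ends of the rod.
alpha = 0.01        # diffusivity (illustrative value)
dx, dt = 0.1, 0.1   # space and time steps, chosen so the scheme is stable
n_steps = 200

# Initial condition: a single "hot spot" in the middle of a cold rod.
u = [0.0] * 21
u[10] = 1.0

for _ in range(n_steps):
    # Update interior points; the endpoints stay at 0 (Dirichlet boundary).
    u = [0.0] + [
        u[i] + alpha * dt / dx**2 * (u[i + 1] - 2 * u[i] + u[i - 1])
        for i in range(1, len(u) - 1)
    ] + [0.0]

print(f"peak after diffusion: {max(u):.3f}")  # heat has spread; peak well below 1
```

Each step replaces every interior value with a weighted average of itself and its neighbors, which is exactly the smoothing, spreading behavior the heat equation describes.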

Partial differential equations provide useful mathematical descriptions of physical, and other, problems where many variables exist. This has made them central to our modern scientific understanding of sound, heat, diffusion, electrostatics, thermodynamics, fluid dynamics, general relativity and quantum mechanics.