University of Texas at Austin


Texascale Days with Frontera: A Win-Win for Researchers and TACC

By Jorge Salazar and Joanne Foote

Published Feb. 28, 2023

TACC's Frontera, the fastest academic supercomputer in the U.S., is a strategic national capability computing system funded by the National Science Foundation.

Texascale Days at the Texas Advanced Computing Center (TACC) allow awarded scientists full access each quarter to the capabilities of the Frontera supercomputer, the most powerful academic system in the U.S., which is funded by the National Science Foundation (NSF).

During Fall 2022, scientists from around the world, including three from The University of Texas at Austin, two of whom represent the Oden Institute for Computational Engineering and Sciences, benchmarked, tested, and completed production runs of codes that needed at least half of Frontera's 8,192 compute nodes. Several projects used the whole system, remarkably running on over 450,000 computer processors, or cores. The event supports experiments that need far more than 2,000 nodes, the more typical size of a run on Frontera. Researchers also stress-test new techniques for scaling up their codes. The Texascale Days projects listed below demonstrate what they accomplished.


The material silicon is of interest across all length scales, especially owing to the pursuit of smaller electronic devices. Some of these devices approach the nanoscale, on the order of one billionth of a meter. In this size regime, the properties of silicon can differ from those of the bulk material. Although systems of this size are physically small, they still contain large numbers of atoms.

"Our approach enables us to study larger nanocrystals of silicon and access this size regime for the first time using quantum-based methods," said James Chelikowsky of the Oden Institute for Computational Engineering and Sciences (Oden Institute), UT Austin.

His team uses a unique, real-space based method that can scale up more easily on modern computational platforms. They've invented new algorithms using this method that are designed specifically for nanoscale systems.

"These algorithms allow us to rapidly solve complex eigenvalue problems without an emphasis on any individual energy state," Chelikowsky said. "We also avoid computational bottlenecks involved with communication between processors that plague other approaches."
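The team's actual algorithms, designed for real-space electronic-structure calculations on hundreds of thousands of cores, are far more sophisticated than anything that fits here. As a toy illustration of the underlying idea, though — approximating an eigenvalue through repeated matrix-vector products rather than a full diagonalization — here is a minimal power-iteration sketch in Python (the matrix and numbers are invented for illustration):

```python
# Toy illustration of an iterative eigensolver: repeated matrix-vector
# products converge to the dominant eigenpair, with no need to store or
# factorize the full matrix. Real nanoscale calculations use far more
# advanced filtered subspace methods on matrices with millions of rows.
def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def norm(v):
    return sum(x * x for x in v) ** 0.5

def power_iteration(A, v, steps=200):
    for _ in range(steps):
        w = matvec(A, v)
        n = norm(w)
        v = [x / n for x in w]          # renormalize each iteration
    # Rayleigh quotient approximates the dominant eigenvalue
    lam = sum(x * y for x, y in zip(v, matvec(A, v)))
    return lam, v

# Small symmetric test matrix with eigenvalues 1, 2, and 4
A = [[2.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
lam, vec = power_iteration(A, [1.0, 0.0, 0.0])
print(round(lam, 6))  # 4.0, the dominant eigenvalue
```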

His team's project ran a system with more than 200,000 atoms on 8,192 nodes using methods uniquely designed to work with such large numbers of computational nodes.

"Modern materials science is driven by a unique synergy between computation and experiment," Chelikowsky said. "As computational scientists, we're able to guide and aid experimental scientists thanks to our access to cutting-edge supercomputers."


"Our Texascale Days runs were meant to push our flagship software EPW and the PHonon code of the Quantum ESPRESSO materials simulation suite to extreme scaling," said Feliciano Giustino of the Oden Institute.

Giustino is the co-developer of the "Electron-phonon Wannier" (EPW) open-source F90/MPI code which calculates properties related to the electron-phonon interaction using Density-Functional Perturbation Theory and Maximally Localized Wannier Functions.

EPW is a leading software for calculations of finite-temperature materials properties and phonon-mediated quantum processes. The code enables predictive atomic-scale calculations of functional properties such as charge carrier mobilities in semiconducting materials, the critical temperature of superconductors, phonon-mediated indirect optical absorption spectra, and angle-resolved photoelectron spectra.

The code is developed by an international team of researchers and led from UT Austin by its lead software developer, Hyungjun Lee. Lee performed extensive tests during previous editions of Texascale Days and successfully demonstrated near-unity strong scaling of the code in full-system runs on Frontera.

The Fall 2022 Texascale Days provided Giustino's team with the opportunity to benchmark the low-I/O mode they recently implemented in the EPW and PHonon codes of the Quantum ESPRESSO suite. One of the main bottlenecks in large-scale calculations is file I/O. In particular, file-per-process (FPP) I/O, one of the most common parallel I/O strategies and the one employed in the EPW and PHonon codes, can overwhelm the file system by creating a huge number of files in large-scale calculations.

To address this issue, his team added a low-I/O feature to EPW and PHonon that keeps intermediate data in memory instead of on the file system.
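The contrast between the two I/O strategies can be sketched as follows. This is not the actual EPW/PHonon implementation — the class and method names are invented — but it shows why the file count, and with it the metadata pressure on the file system, grows linearly with the number of MPI ranks under FPP:

```python
# Illustrative sketch (invented names, not the EPW/PHonon code) contrasting
# file-per-process (FPP) I/O, where every MPI rank creates its own scratch
# files, with a "low-I/O" mode that keeps the same data in memory.
class ScratchStore:
    def __init__(self, low_io: bool):
        self.low_io = low_io
        self.buffers = {}        # in-memory scratch used by low-I/O mode
        self.files_created = 0   # stand-in for file-system metadata load

    def write(self, rank: int, name: str, data: bytes) -> None:
        if self.low_io:
            # Low-I/O mode: stash the data in RAM; no file-system traffic.
            self.buffers[(rank, name)] = data
        else:
            # FPP mode: each rank opens its own file per quantity, so the
            # file count grows linearly with the number of ranks.
            self.files_created += 1

# At full-system scale (448,000 cores), a single scratch quantity per
# rank already means 448,000 files under FPP -- and zero under low-I/O.
fpp, low = ScratchStore(low_io=False), ScratchStore(low_io=True)
for rank in range(448_000):
    fpp.write(rank, "wavefunction", b"")
    low.write(rank, "wavefunction", b"")
print(fpp.files_created, low.files_created)  # 448000 0
```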

Giustino's team used the Frontera supercomputer at half (224,000 cores) and full (448,000 cores) scale to carry out electron-phonon calculations with EPW for the superconductor MgB2, and lattice-dynamics calculations with PHonon for the type-II Weyl semimetal WP2.

"We demonstrated that in both cases the full-system run achieves more than 75 percent of the ideal speedup relative to the half-system run. We believe that with this low-I/O feature we are further pushing the limits of EPW and PHonon in massively parallel calculations," Giustino said.
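Since going from half system to full system doubles the node count, the ideal outcome is a 2x speedup; parallel efficiency measures how much of that ideal is achieved. A quick check with hypothetical timings (these are not the team's actual measurements) shows how the 75 percent figure is computed:

```python
# Strong-scaling arithmetic (illustrative numbers, not the team's timings):
# doubling the node count ideally halves the wall time, so efficiency is
# the measured speedup divided by the ideal factor of 2.
def parallel_efficiency(t_half: float, t_full: float) -> float:
    """Efficiency of a full-system run relative to a half-system run."""
    speedup = t_half / t_full      # measured speedup from doubling nodes
    return speedup / 2.0           # fraction of the ideal 2x speedup

# Hypothetical timings: 100 s on half the machine, 60 s on the full machine.
eff = parallel_efficiency(100.0, 60.0)
print(f"{eff:.0%}")                # 83% -- above the 75 percent threshold
```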


In this project, the team of Brian R. La Cour, Center for Quantum Research, Applied Research Laboratories, UT Austin, used random quantum circuits to test the limits of classical digital computing over quantum computers.

Quantum circuits are the basic instructions used to program a quantum computer, which, instead of the "bits" of value 0 or 1 found in digital computers, uses "qubits" that can be 0, 1, or in a state in between.

"We use random quantum circuits to create problems that are hard to solve with classical computers but are expected to be more easily solved by a quantum computer," La Cour said. "Although they do not solve a practical problem, they are used to demonstrate the computational advantage of quantum computers over their classical counterparts."
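Classically simulating such a circuit means tracking a vector of 2^n complex amplitudes for n qubits and applying each gate as a small linear transformation on it — which is why memory, not arithmetic, becomes the limit. The following toy state-vector sketch (pure Python, written for illustration and not related to the team's simulator) shows the basic building block:

```python
# Toy state-vector simulation: an n-qubit state is 2**n complex
# amplitudes; a single-qubit gate is a 2x2 matrix applied across pairs
# of amplitudes that differ only in that qubit's bit.
from math import sqrt

def apply_1q(state, gate, q):
    """Apply a 2x2 gate to qubit q of a state vector of length 2**n."""
    (u00, u01), (u10, u11) = gate
    out = list(state)
    for i0 in range(len(state)):
        if not (i0 >> q) & 1:           # index with qubit q equal to 0
            i1 = i0 | (1 << q)          # partner index with qubit q = 1
            out[i0] = u00 * state[i0] + u01 * state[i1]
            out[i1] = u10 * state[i0] + u11 * state[i1]
    return out

# Hadamard gate: sends |0> to an equal superposition of |0> and |1>
H = ((1 / sqrt(2), 1 / sqrt(2)),
     (1 / sqrt(2), -1 / sqrt(2)))

state = [1.0, 0.0]                  # one qubit, initialized to |0>
state = apply_1q(state, H, 0)       # now [0.7071..., 0.7071...]
```

Random circuits chain many such gate applications; every additional qubit doubles the length of `state`, and with it the memory footprint.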

Fully simulating large quantum circuits on a classical computer requires an enormous amount of memory. Previously, the largest such simulation consisted of 45 quantum bits (qubits) and 0.5 petabytes of memory.
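The article's figures follow from simple arithmetic, assuming one double-precision complex amplitude (16 bytes) per basis state and binary petabytes:

```python
# Back-of-the-envelope memory check: a full state vector needs one
# complex amplitude (assumed 16 bytes, double precision) per basis state,
# and there are 2**n basis states for n qubits.
def state_vector_bytes(n_qubits: int, bytes_per_amplitude: int = 16) -> int:
    return bytes_per_amplitude * 2 ** n_qubits

PB = 2 ** 50                         # one (binary) petabyte
print(state_vector_bytes(45) / PB)   # 0.5 -- the prior 45-qubit record
print(state_vector_bytes(46) / PB)   # 1.0 -- the 46-qubit Frontera run
```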

During this Texascale event, La Cour's team performed a full-scale run on Frontera with all 8,192 nodes to complete a 46-qubit simulation.

"This is the largest such simulation ever performed and required a combination of one petabyte's worth of real and virtual memory," La Cour added. "Running such memory-intensive simulations is extremely challenging due to the complexity of inter-node communication and the difficulty of coordinating with local storage for virtual memory. Large quantum circuit simulations serve as a benchmark for assessing the accuracy and computational advantage of future quantum computers."


In addition to benefiting researchers, Texascale Days gives TACC an opportunity to stress-test Frontera with simulations far larger than the bulk of awarded allocations, which typically utilize about 2,000 nodes.

"We use the Texascale Days events to prepare our users for the next system, the Leadership-Class Computing Facility (LCCF), and to push our resources to the limits of their capabilities," said John Cazes, TACC's director of High Performance Computing. A list of other research projects run during Texascale Days can be found on the TACC website.

"These scaling studies during Texascale Days help scientists determine whether their applications can handle four times as many nodes and tasks as they normally use. This will help enormously as the LCCF launch nears," he added.


Adapted from a press release published by the Texas Advanced Computing Center