Letter From The Director

TACC Executive Director Dan Stanzione reflects on past year and new developments

As 2025 draws to a close, we can look back on a challenging year, but also on the promise of things to come.

It has been an exceptionally difficult year for science funding in the U.S., requiring significant effort to navigate the shifting landscape. Despite this, TACC remains a priority for the National Science Foundation and the federal government, with the NSF Leadership-Class Computing Facility continuing to advance. While global attention is drawn to the AI arms race, demand for physical modeling across key scientific and industrial fields continues to drive progress.

The NSF LCCF moved from concept to reality this year — we have watched the new data center move toward completion; Ranch, the new archive system, is in production; and the first compute racks for Horizon are just a few weeks from arriving as I write this column. The software has been advancing as well: we now have a year of experience with the integrated NVIDIA CPU-GPU chips on Vista, adding maturity to our simulation and AI applications.

This work has happened in the context of the unprecedented infrastructure investment in AI, now reaching into the trillions of dollars globally. We still grapple with the implications for the science workflow, but the technical computing marketplace is irrevocably changed. 

Horizon will usher in a new era of computing infrastructure for us, but it is far from the end of the road — we need to push on two ideas in computing: 

  • How can we do AI more efficiently? 
  • How can we use hardware built for AI to do scientific computing? 

In many ways, this marks a third age of supercomputing. The first was defined by the rise of original computing architectures and advances in materials and circuit design, culminating in the “big iron” custom vector supercomputers of the late 1980s and early 1990s. 

The second began with the Beowulf revolution, shifting from custom silicon to mass-produced microprocessors — chips built for WordPerfect and games, not “serious” science. This era demanded that software evolve with hardware, moving from a few vectorized threads and shared memory to massive parallelism and distributed memory algorithms. It began with a handful of well-suited problems, but decades of work by thousands of clever programmers adapted the full range of scientific computing to this new kind of machine.

Today, with the immense financial and power demands of AI infrastructure, we must repeat that trick. We need to effectively exploit hardware built for AI through mixed-precision algorithms or innovative ways to map stencil computations to GPU tensor cores. We can build the systems, but the software must evolve.
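
To make that concrete, here is a minimal sketch of one such mixed-precision idea, iterative refinement, written in plain NumPy rather than for real tensor-core hardware; float32 stands in for the low-precision arithmetic, and the routine and names are illustrative, not TACC code.

    import numpy as np

    def solve_mixed_precision(A, b, refinements=5):
        """Sketch of mixed-precision iterative refinement for A x = b.

        The solves run in float32 (a stand-in for the FP16/TF32 arithmetic
        AI accelerators provide), while residuals are computed in float64
        so the final answer recovers full precision.
        """
        A_lo = A.astype(np.float32)                    # low-precision copy of the operator
        x = np.linalg.solve(A_lo, b.astype(np.float32)).astype(np.float64)
        for _ in range(refinements):
            r = b - A @ x                              # residual in full float64 precision
            dx = np.linalg.solve(A_lo, r.astype(np.float32))  # low-precision correction
            x += dx
        return x

    # Example: a small, well-conditioned system
    rng = np.random.default_rng(0)
    n = 500
    A = rng.standard_normal((n, n)) + n * np.eye(n)    # diagonally dominant, refines quickly
    b = rng.standard_normal(n)
    x = solve_mixed_precision(A, b)
    print(np.linalg.norm(A @ x - b))                   # residual near float64 round-off

A production version would factorize the matrix once in low precision and reuse it, pushing the bulk of the arithmetic onto tensor cores while the float64 residuals preserve scientific accuracy.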

More than 120,000 researchers have used TACC for computational workflows over the past two decades, and I’m confident we’ll find the way forward again.

