John Holden
From its humble beginnings with just a handful of staff housed in modest surroundings to becoming one of the leading academic supercomputing centers in the world, TACC has not stopped evolving since opening its doors in June 2001.
That year, TACC’s first supercomputer boasted 50 gigaFLOPS (the average home computer today manages around 60 gigaFLOPS). In 2021, the center’s most powerful supercomputer, Frontera, is 800,000 times more powerful, capable of 40 petaFLOPS, or 40 quadrillion (10^15) floating point operations per second.
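To make that scale concrete, here is a minimal back-of-the-envelope check of the figures quoted above (a rough sketch only; peak FLOPS ratings vary by benchmark, and the numbers below are simply the ones cited in this article):

    # Rough check of the performance comparison quoted above.
    first_system_flops = 50e9   # TACC's first system: 50 gigaFLOPS
    frontera_flops = 40e15      # Frontera: 40 petaFLOPS, i.e. 40 x 10^15 ops/second

    speedup = frontera_flops / first_system_flops
    print(f"Frontera is roughly {speedup:,.0f} times more powerful")  # ~800,000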
As TACC marks its 20th anniversary, let’s pause and reflect upon the center’s success thus far. Because the TACC story is only getting started.
High performance computing (HPC) in fact had a home at UT Austin long before TACC was established. Its precursor, the Center for High Performance Computing, dates back to the 1960s. But things have improved dramatically since then, as J. Tinsley Oden — one of UT’s most decorated scientists, engineers, and mathematicians, widely considered to be the father of computational science and engineering — explains.
“I came to the university for the first time in 1973 as a visitor,” he said. “The computing was still very primitive, even by the standards of the time. People were using desk calculators, which shocked me.”
Oden was central to nurturing an HPC-friendly environment in Texas. But supercomputing was also beginning to grow rapidly nationwide.
What is Supercomputing?
The term "supercomputing" refers to the processing of massively complex or data-laden problems using the compute resources of multiple computer systems working in parallel (i.e. a "supercomputer"). The term supercomputing also denotes a system working at the maximum potential performance of any computer, typically measured today in petaFLOPS. Use cases include weather, energy, life sciences, and manufacturing.
Despite UT’s standing as a top-tier university, enthusiasm for supercomputing at UT ebbed and flowed over the years. As a result, HPC headquarters shifted locations across campus (including a brief stint when operations were run out of a stairwell in the UT Tower). “It’s remarkable how far things have come since then,” Oden added.
"The most important thing about TACC is the people here who make us able to achieve the scientific work of our users. We’re first and foremost a place for scientists to make discoveries."
— Dan Stanzione, Executive Director, TACC
Calls from HPC advocates like Oden and many others had grown so loud throughout the 1980s and 1990s that university leadership couldn’t ignore them any longer.
“When I became president of UT in 1998, there was already a lot of discussion around the need to prioritize advanced computing — or ‘big iron’ as we called it back then — at the university,” said former UT President Larry Faulkner.
In 1999, Faulkner named Juan M. Sanchez as Vice President for Research (VPR), an appointment that proved central to the TACC story. Instrumental in a variety of university initiatives, Sanchez echoed the calls from HPC advocates to build a home for advanced computing at UT.
Two years later those calls were answered. In 2001, a dedicated facility was established at UT’s J.J. Pickle Research Campus in north Austin with a small staff led by Jay Boisseau, who influenced the future direction of TACC and the wider HPC community.
TACC grew rapidly thanks in part to Boisseau’s willingness to accept hand-me-down hardware, his aggressive pursuit of external funding, and his success in forging strong collaborations with technology partners, notably hometown success story Dell Technologies.
The center also enjoyed access to a rich pipeline of scientific and engineering expertise at UT.
One entity in particular became a key partner — the Institute for Computational Engineering and Sciences (ICES). Established in 2003, ICES also enjoyed the support of then-VPR Sanchez, another calculated risk that proved successful. Later renamed the Oden Institute for Computational Engineering and Sciences in recognition of its founder, J. Tinsley Oden, the institute quickly came to be regarded as one of the leading computational science and engineering (CSE) institutes in the world.
Thanks to the unwavering support of Texan educational philanthropists Peter and Edith O’Donnell, Oden was able to recruit the most talented computational scientists in the field and build a team that could not only expand the mathematical agility of CSE as a discipline, but also grow the number of potential real-world applications.
“ICES was such a successful enterprise, it produced great global credibility for Texas as a new center for HPC,” Faulkner said.
"TACC systems have been constantly powering discoveries with profound societal impact in all areas one can imagine."
— Ritu Arora, Assistant Vice President of Research Computing, University of Texas at San Antonio
Computational science and advanced computing tend to move forward symbiotically, which is why Oden Institute faculty have been instrumental in planning TACC’s largest supercomputers, providing insights into the types of computing environment that researchers require to deliver impactful research outcomes.
“Computational science and engineering today is foundational to scientific and technological progress — and HPC is in turn a critical enabler for modern CSE,” said Omar Ghattas, who holds the John A. and Katherine G. Jackson Chair in Computational Geosciences with additional appointments in the Jackson School of Geosciences and Mechanical Engineering.
Ghattas has served as co-principal investigator on the Ranger (2008) and Frontera (2018) supercomputers and is a member of the team planning the next-generation system at TACC.
“Every field of science and engineering, and increasingly medicine and the social sciences, relies on advanced computing for modeling, simulation, prediction, inference, design, and control,” he said. “The partnership between the Oden Institute and TACC has made it possible to anticipate future directions in CSE. Having that head start has allowed us to deploy systems and services that empower researchers to define that future.”
"One of the things that fascinated me is how all of the staff pitch in to help the overall center progress. That collab-orative spirit makes TACC special to me."
— Marques Bland, Senior Program Coordinator, TACC
No(de) Time for Complacency
Over its first 10 years, TACC deployed several new supercomputers, each larger than the last, and gave each a moniker appropriate to the confident assertion that everything is bigger and better in Texas — Lonestar, Ranger, Stampede, Frontera.
While every system was and still is treated like a cherished member of TACC’s family, current Executive Director Dan Stanzione doesn’t flinch when asked to pick a favorite child. “We really got on the map with Ranger,” said Stanzione, who was a co-principal investigator on the Ranger project.
The successful acquisition of the Ranger supercomputer in 2007, the first “path to petascale” system deployed in the U.S., catapulted TACC to national supercomputing stardom. At the time, Stanzione was the Director of High Performance Computing at Arizona State University but was playing an increasingly important role in TACC operations.
Ranger was slated to be the largest open science system in the world at the time, and TACC was still a relatively small center compared to the other institutions bidding for the same $59 million NSF grant. TACC’s bid had its fair share of skeptics.
However, Stanzione (who officially joined TACC as deputy director soon after the successful Ranger bid) and Boisseau were a formidable pair.
“TACC has a reputation for punching above its weight,” Oden said. “Jay and Dan just knew how to write a proposal that you simply could not deny.”
Stanzione continued the winning approach when he took over as executive director in 2014. In addition, he has held the title of associate vice president for research at UT since 2018.
“Juan Sanchez also made the decision to appoint Stanzione as Jay Boisseau’s successor,” Faulkner noted. “And while the success of TACC has, of course, come from those actually working in the field, Sanchez deserves great credit for establishing an environment where that success could thrive.”
The environment at TACC is certainly unique within academia, a world rarely known for producing agile organizations. In particular, the center stands out for the entrepreneurial culture it has cultivated over the years.
Stanzione describes the approach as being akin to a startup. “We were hungry to win the Ranger contract and, against the odds, it paid off.”
"The science community looks at TACC as a part-ner. That collaborative attitude, the ability to take risks, push the envelope, and do the right thing is something I’ve always appreciated about TACC."
— Manish Parashar, Office Director, Office of Advanced Cyberinfrastructure, National Science Foundation
The Formula for Success
TACC has kept its edge for 20 years by prioritizing the needs of researchers and maintaining strong partnerships.
“TACC has repeatedly demonstrated integrity and commitment to the relationships it has with academics, industry, and government,” said Irene Qualters, associate laboratory director for Simulation and Computation at Los Alamos National Laboratory and former head of advanced computing at NSF, speaking in a personal capacity.
“The work that has been done with Dell Technologies, for example, a company that was only just emerging as a player in the HPC space when the connection was originally made, has really endured. I think there’s something uniquely Texan to TACC’s success. TACC has always done things their own way and, in doing so, they’ve become a national exemplar.”
As important as TACC systems are as a resource for the academic community, it’s the people of TACC that make the facility so special. “They’re not just leaders in designing and operating frontier systems, they also support numerous users across campus, teach HPC courses, and collaborate on multiple research projects,” Ghattas said.
The center’s mission and purpose have always focused on the researcher and on enabling discovery in open science. A comprehensive list of advances would include everything from forecasting storm surge during hurricanes to confirming the discovery of gravitational waves by LIGO to identifying one of the most promising new materials for superconductivity.
But over the past two decades, the nature of computing and of computational science has changed. Whereas researchers have historically used the command line to access supercomputers, today a majority of scholars use supercomputers remotely through web portals and gateways, uploading data and running analyses through an interface that would be familiar to any Amazon or Google customer. TACC’s portal team — nonexistent at launch — is now the largest group at the center, encompassing more than 30 experts and leading a nationwide institute to develop best practices and train new developers for the field.
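As a purely hypothetical illustration of that shift, the sketch below shows a portal-style workflow in which a script (or a browser behind the scenes) talks to a web API that queues the job on the researcher’s behalf, rather than the researcher logging in and typing commands. The gateway URL, endpoints, and field names are invented for this example and are not TACC’s actual portal API:

    # Hypothetical portal-style job submission (illustrative only; the
    # gateway URL, endpoints, and payload fields are not a real TACC API).
    import requests

    GATEWAY = "https://portal.example.edu/api"              # placeholder gateway address
    HEADERS = {"Authorization": "Bearer my-access-token"}   # placeholder credential

    # Upload an input file, then ask the gateway to run an analysis on it.
    with open("sample_data.csv", "rb") as f:
        upload = requests.post(f"{GATEWAY}/files", headers=HEADERS, files={"file": f})

    job = requests.post(
        f"{GATEWAY}/jobs",
        headers=HEADERS,
        json={"app": "data-analysis", "input": upload.json()["id"], "nodes": 4},
    )
    print(job.json()["status"])  # e.g. "QUEUED"; the supercomputer does the rest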
Likewise, life scientists were a very small part of TACC’s user base in 2001. Today, they are among the largest and most advanced users of supercomputers, leveraging TACC systems to model the billion-atom coronavirus or run complex cancer data analyses.
The sheer growth of data, in parallel with rapid advances in data science, machine learning, and AI, marked another major shift for the center, requiring new types of hardware, software, and expertise that TACC integrated into its portfolio.
"I think we at TACC are native risk-takers. And we are 1000 percent about impact, whether it's on COVID-19, hurricanes, or many other things I’ve seen over the years, TACC wants to be part of the solution."
— Kelly Gaither, Director of Health Analytics, TACC
“The Wrangler system, which operated from 2014 to 2020, was the most powerful data analysis system for open science in the U.S.,” Stanzione said. “Two current systems, Maverick and Longhorn, were custom-built to handle machine and deep learning problems, and are leading to discoveries in areas from astrophysics to drug discovery.”
With almost 200 staff members, TACC now has a mission that extends well beyond operating and maintaining the big iron. The center also includes large, active groups in scientific visualization, code development, data management and collections, and CS education and outreach.
The Next Decades
The past 20 years have been hugely successful by anyone’s standards. Still, they are just the prelude to what’s to come. In 2019, TACC won its bid to build and operate Frontera, a $120 million NSF-funded project that created not just the fastest supercomputer at any university worldwide, but one of the most powerful systems on the planet (it ranked #13 on the November 2021 Top500 list).
The NSF grant further stipulated that TACC would develop a plan for a Leadership-Class Computing Facility (LCCF) that would operate for at least a decade and deploy a system 10 times as powerful as Frontera.
TACC now has an opportunity not just to build a bigger machine, but to define how computational science and engineering progress in the coming decade.
"We’ll help lead the HPC community, particularly in computational science and machine learning, which will both play greater roles than ever before,” Stanzione said.
“The LCCF will be the open science community's premier resource for catalyzing a new generation of research that addresses societal grand challenges of the next decade,” said Ghattas.
The design, implementation, and management of the next system, as well as a new facility, means that TACC could be about to experience its most transformative period of change to date. Which raises one final question: If TACC has made it this far since 2001, what might we be celebrating at its 40th anniversary?
“I don’t know what TACC will look like two years from now,” Stanzione said. “But in another 20 years, I hope it looks entirely different. Then at least I’ll know the center’s legacy has persisted.”