Meet TACC’s Scalable Computational Intelligence Group
Fusing AI and supercomputing to accelerate real-world science
The Scalable Computational Intelligence (SCI) group focuses on bridging the divide between artificial intelligence, machine learning, and high-performance computing for researchers solving real-world problems.
Using TACC’s new SambaNova Suite, the SCI team is creating new tools, chatbots, and other resident AI services.
Juliana Duncan
SCI Manager
What is an interesting AI project you have been involved in at TACC?
The SCI team is working on several exciting projects. Many projects rely on large language models (LLMs) trained on vast amounts of data. However, some critical problems lack abundant data. One project I worked on involved child maltreatment cases in the U.S., where limited data made it difficult to correlate cases with demographic information. Optimizing ML in data-scarce situations is an exciting and important problem to solve.
How do you ensure the accuracy and reliability of AI models?
We have AI models now that are so powerful that it is easy to over-trust them. We build trust in the models by going back to the basics: evaluating ML models on unseen scenarios, exploring edge cases where your model may fail, and using feedback to improve model performance.
What do you enjoy about using SambaNova Suite on supercomputers to integrate AI inference into research?
I am particularly excited to see how we can use LLMs to solve problems. They know languages but are not great at problem solving. Our group is exploring how to leverage LLMs to solve problems using technologies like AI agents. AI inference is a critical component of all these technologies.
Gabriel Jaffe
Research Associate
What is an interesting AI project you have been involved in at TACC?
I added the world’s largest and most advanced algorithms from 2025 to a text editor written in 1991, and at the end of the day found it quite useful. I also engage in larger AI research projects such as CosmicAI, which is building an astronomy coding assistant and chatbot and using modern AI techniques to accelerate simulations of star formation.
How do you ensure the accuracy and reliability of AI models?
A classic way is to have the AI model take a comprehensive test that covers everything you plan to do with it. The grade the model earns on the test tells you its accuracy and which subject areas it might struggle with.
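The "comprehensive test" idea can be sketched as a per-subject grading loop. Everything below is invented for illustration: a real benchmark would supply the subjects, model answers, and answer key.

```python
from collections import defaultdict

def grade_by_subject(examples):
    """Score a model's benchmark answers, broken down by subject area.

    `examples` is a list of (subject, model_answer, correct_answer)
    tuples -- hypothetical data standing in for a real test suite.
    """
    totals = defaultdict(int)
    correct = defaultdict(int)
    for subject, answer, truth in examples:
        totals[subject] += 1
        if answer == truth:
            correct[subject] += 1
    # Per-subject accuracy exposes where the model struggles.
    return {s: correct[s] / totals[s] for s in totals}

results = grade_by_subject([
    ("astronomy", "B", "B"),
    ("astronomy", "C", "C"),
    ("chemistry", "A", "D"),
    ("chemistry", "B", "B"),
])
# → {"astronomy": 1.0, "chemistry": 0.5}
```

Here a perfect astronomy score alongside a 50% chemistry score would tell you exactly which subject needs more scrutiny before trusting the model.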
What do you enjoy about using SambaNova Suite on supercomputers to integrate AI inference into research?
SambaNova is a dedicated hardware space for users who want to run large LLMs, which in my personal definition means models we can’t run on a single node of a supercomputer — anything with more than 40 billion parameters. SambaNova’s hardware can keep several of these large models preloaded in memory, allowing quick switching between them if your application requires it.
Sikan Li
Scientist Associate
What is an interesting AI project you have been involved in at TACC?
One of the most fascinating projects I have worked on is the Momentum-Enabled Kronecker-Factor-Based Optimizer Using Rank-1 Updates, which enhances the training time and convergence properties of deep neural networks. Collaborating with experts across disciplines, I was able to integrate this optimizer into real-world workflows, demonstrating measurable improvements in both speed and accuracy.
How do you ensure the accuracy and reliability of AI models?
Ensuring accuracy and reliability involves rigorous manual checks, which remain an effective and reliable method. In addition, I incorporate cross-validation techniques and monitor performance metrics across diverse datasets to detect potential biases or inconsistencies.
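A minimal sketch of the cross-validation technique mentioned above, written in plain Python for clarity (a real workflow would more likely use a library such as scikit-learn; the `fit` and `score` callables are placeholders for whatever model is being validated):

```python
def k_fold_indices(n, k):
    """Yield (train, test) index lists for k-fold cross-validation.

    A minimal sketch: items beyond the last full fold are ignored
    when n is not evenly divisible by k.
    """
    fold_size = n // k
    indices = list(range(n))
    for i in range(k):
        test = indices[i * fold_size:(i + 1) * fold_size]
        train = indices[:i * fold_size] + indices[(i + 1) * fold_size:]
        yield train, test

def cross_validate(data, labels, fit, score, k=5):
    """Average a model's score over k held-out folds."""
    scores = []
    for train, test in k_fold_indices(len(data), k):
        model = fit([data[i] for i in train], [labels[i] for i in train])
        scores.append(score(model,
                            [data[i] for i in test],
                            [labels[i] for i in test]))
    return sum(scores) / len(scores)
```

Because each fold is scored on data the model never saw during fitting, a large gap between folds is itself a signal of inconsistency worth investigating.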
What do you enjoy about using SambaNova Suite on supercomputers to integrate AI inference into research?
The SambaNova Suite’s advanced hardware is ideal for hosting large models and delivering premium speed that meets the needs of researchers. I particularly appreciate its seamless integration with TACC’s infrastructure, which allows for efficient deployment and scaling of AI workloads.
Amit Gupta
Research Engineering Scientist
What is an interesting AI project you have been involved in at TACC?
I worked on the Domain Information Vocabulary Extraction project which extracts entities from technical articles for the purpose of curation. I also worked on the Scalable Object Detection project which involves pedestrian detection in traffic camera videos for the City of Austin.
How do you ensure the accuracy and reliability of AI models?
It depends on several factors, like the quality and volume of training data and the model architecture. Current research explores both directions: architectures for greater interpretability and control, and achieving higher accuracy when high-quality, high-volume training data is not available.
What do you enjoy about using SambaNova Suite on supercomputers to integrate AI inference into research?
I enjoy learning about the custom hardware architecture, software stack, and performance optimizations. Inference being an important and emerging use case, I am also interested in customizing the system for specific use cases we see at TACC.
Luke Smith
Research Associate
What is an interesting AI project you have been involved in at TACC?
I put together an AI model of a supernova simulation as part of our ML tutorial. I have also been involved in the group’s natural hazards modeling effort, building an agentic system (one in which the AI operates more autonomously) for particle simulations of landslides, floods, and more.
How do you ensure the accuracy and reliability of AI models?
By understanding ML architecture, the metrics upon which they are trained, and a few statistics about your dataset, you can generate benchmarks that provide a reasonable sense of what a model has learned versus what it has not learned.
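One benchmark that can be generated from a single dataset statistic is the majority-class baseline. The sketch below is an illustration of the idea, not a method attributed to the group: a model that cannot beat this number has likely learned little beyond the dataset's class imbalance.

```python
from collections import Counter

def majority_baseline_accuracy(labels):
    """Accuracy of always predicting the dataset's most common label.

    Serves as a floor: any trained model should clear this bar
    before its accuracy is taken as evidence of learning.
    """
    most_common_count = Counter(labels).most_common(1)[0][1]
    return most_common_count / len(labels)

# On a dataset that is 75% one class, 75% accuracy is the floor.
floor = majority_baseline_accuracy(["a", "a", "a", "b"])
# → 0.75
```

Comparing a model against such a floor gives exactly the "reasonable sense of what a model has learned versus what it has not" described above.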
What do you enjoy about using SambaNova Suite on supercomputers to integrate AI inference into research?
The SambaNova Suite gives users a machine that is specialized in evaluating LLMs. This significantly speeds up inference time while also freeing up TACC’s other GPU resources for model training and development.
Meet Tejas: TACC’s AI Inference System
TACC has rolled out the SambaNova Suite, supercharging its artificial intelligence and generative inference capabilities for science. The system, dubbed Tejas to honor the center’s Texas-inspired naming tradition, brings powerful scientific models online, making it possible to run inference directly within the research workflow. With Tejas, AI is not just supporting discovery — it is accelerating it.
What are AI generative inference services?
They are tools or systems that let you use an AI model to generate something new, such as text, images, or code, based on a request you give it.
- Generative means the AI is creating new content, not just retrieving or matching something that already exists.
- Inference is the technical word for when the AI model takes your input (like a question or prompt) and produces an output (such as an answer or paragraph). It can also summarize or query a collection of documents.
- Services means these capabilities are offered to users, usually through software, application programming interfaces, or platforms.
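In practice, a generative inference request to such a service is typically a small JSON payload sent to an API endpoint. The sketch below follows the widely used chat-completions convention; the model name is a placeholder, not a reference to TACC's actual service.

```python
import json

def build_inference_request(prompt, model="example-model", max_tokens=256):
    """Build a chat-style generative-inference request body.

    Field names follow the common chat-completions convention;
    the default model name is a hypothetical placeholder.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

# The serialized body would then be POSTed to the service's endpoint.
body = json.dumps(build_inference_request("Summarize these observations."))
```

The service runs inference on the prompt and returns generated text, which is how a chatbot or coding assistant slots into a research workflow.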