Powering Discoveries

AI Tools for Natural Hazards Research

Q&A with Geotechnical Engineer Krishna Kumar

When a natural disaster occurs, time is of the essence. For this reason, urgent computing systems have become increasingly important over the last decade, supporting the rapid alerting and forecasting frameworks used to monitor and manage natural hazards.

HPC systems, with their ability to process large datasets in very little time, offer effective computing environments for accurately predicting disaster scenarios.

Now, AI is becoming part of the equation, too. UT Austin Professor Krishna Kumar is a geotechnical engineer working in natural hazards research. Kumar and Professor Ellen Rathje, the Janet S. Cockrell Centennial Chair in Engineering, are leading the integration of AI tools into the NSF-funded DesignSafe natural hazards research infrastructure.

Q: Why is DesignSafe important?

Over the past eight years, DesignSafe has been training civil engineers and helping them use supercomputing and HPC systems in their work. DesignSafe has become the place where researchers go to publish their natural hazards data sets. There are about six petabytes of natural hazards data in the DesignSafe Data Depot, a wealth of information for the open science community. To give you an idea of the quantity: if one petabyte were composed of printed books, the stack would be as tall as 77,000 Empire State Buildings, which is over 21,000 miles high!

Q: How do we translate this large amount of data into useful knowledge?

Well, that’s the next evolution. With GPT (generative pre-trained transformer) entering the conversation, there has been a huge explosion of interest in AI. GPT is a type of AI that excels at natural language processing. It can assist in summarizing text and generating code, and it can serve as a natural language interface for research, taking AI and natural hazards research to the next level. We have the data and all of the computing power behind it at TACC. Now, with these new AI tools, we are asking how we can make this useful for engineers and researchers. Here’s an example: let’s say a researcher wants to analyze a particular building against an earthquake. Traditionally, a graduate student trained in the relevant software would pull data, create a numerical model, run a simulation, and then post-process the data to see if they could get an accurate analysis. The question then becomes, “Can we use an AI tool to automate this structural analysis workflow just by talking to a system?” We are several years away from accomplishing this; however, these new tools provide a different way to look at a problem.
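Conceptually, such an assistant would translate a plain-language request into the ordered steps a graduate student performs today. The sketch below illustrates that idea only; the step names and the function are hypothetical placeholders, not DesignSafe or TACC APIs.

```python
# Hypothetical sketch: mapping a natural-language request onto the
# traditional structural-analysis workflow described above.
# The step names and plan_workflow() are invented for illustration;
# they are not real DesignSafe or TACC interfaces.

WORKFLOW_STEPS = ["pull_data", "create_numerical_model",
                  "run_simulation", "post_process"]

def plan_workflow(request: str) -> list[str]:
    """Return the ordered analysis steps an assistant might schedule
    for an earthquake analysis request. Purely illustrative."""
    if "earthquake" in request.lower():
        return list(WORKFLOW_STEPS)
    return []  # request not recognized; no workflow planned

plan = plan_workflow("Analyze this building against an earthquake")
```

In a real system, an AI model rather than a keyword check would produce the plan, but the output would still be a sequence of concrete pipeline steps like this one.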

The bias of AI object detection in two different neighborhoods. The top image matches the training data; the bottom image lacks suitable training data. Image credits: Sean McLean, Paul Navrátil, and Krishna Kumar

Q: What is the primary way AI tools can help with research?

The big one is knowledge discovery. Currently, we cannot search for contextual information on DesignSafe. Although DesignSafe contains the information needed to answer a question like “Under what ground shaking levels do I get liquefaction failures in dense sand?”, it is not readily accessible because it lacks context. To date, we’ve built a working prototype of a knowledge graph that uses an AI model like GPT to mine the data sets on DesignSafe. With this prototype, researchers can ask questions and receive contextual information, similar to how typing “Pablo Picasso” into Google generates a box of knowledge. Thus, we are trying to use all of the data published in DesignSafe to build our own knowledge graph to answer questions on natural hazards.
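At its core, a knowledge graph stores facts as subject-relation-object triples that can be matched against a question. The toy sketch below shows that structure; the triples are invented examples, not DesignSafe data, and a real prototype would extract triples with an AI model rather than hard-code them.

```python
# Toy knowledge graph: facts as (subject, relation, object) triples,
# queried by pattern matching. The triples are invented examples.

triples = [
    ("dense sand", "exhibits", "liquefaction"),
    ("loose sand", "exhibits", "liquefaction"),
    ("liquefaction", "triggered_by", "strong ground shaking"),
]

def query(subject=None, relation=None, obj=None):
    """Return every triple matching the fields that are not None."""
    return [
        t for t in triples
        if (subject is None or t[0] == subject)
        and (relation is None or t[1] == relation)
        and (obj is None or t[2] == obj)
    ]

# Which materials exhibit liquefaction?
materials = [s for s, _, _ in query(relation="exhibits", obj="liquefaction")]
```

Chaining such queries is what lets a knowledge graph return contextual answers, e.g. linking ground shaking levels to liquefaction and liquefaction to soil types.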

Q: How are you using the Jupyter notebook interface for AI learning?

Jupyter provides researchers with an interactive web-based workspace to access and analyze the large natural hazards datasets available on DesignSafe. Within Jupyter, researchers can write and execute code in languages like Python while seeing results in real time. This interactive interface facilitates exploration and insight generation by leveraging the computational power and data storage of TACC. Our goal is to transition researchers from limited local computing resources to Jupyter on DesignSafe for more advanced natural hazards analyses.
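A typical cell in such a notebook might look like the sketch below. The peak ground acceleration (PGA) values are synthetic placeholders; a real notebook on DesignSafe would load a published dataset instead.

```python
# Sketch of the kind of interactive cell a researcher might run in
# Jupyter. The PGA readings below are synthetic, for illustration only.
from statistics import mean, stdev

pga_g = [0.12, 0.34, 0.27, 0.45, 0.19]  # synthetic PGA readings, in g

summary = {
    "n": len(pga_g),
    "mean_g": round(mean(pga_g), 3),
    "stdev_g": round(stdev(pga_g), 3),
    "max_g": max(pga_g),
}
```

Because the code runs server-side next to the data, a researcher sees `summary` immediately and can refine the analysis cell by cell without downloading anything locally.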

Q: How can you help researchers accomplish the goal?

This is where the AI GPT tools help again, because we can make a request like, “Write Python code to analyze triaxial tests in sands on DesignSafe.” It seems simple, but here’s the tricky part: GPT is only trained on knowledge that is open, so if you ask it something more specific, it might not know the answer. In fact, it will hallucinate, which is a fancy way of saying that it will generate a false answer. The challenge is to help it say “I don’t know” rather than make things up. So, what we have done is teach it to answer only from the knowledge it actually has. We are hoping to roll out this Code Assistant feature to DesignSafe Jupyter users by Spring 2024.
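The grounding idea can be sketched in miniature: only answer when the question sufficiently overlaps the documents the assistant has actually been given, and otherwise say “I don’t know.” The corpus entries and the overlap threshold below are invented placeholders; a production system would use an AI model with retrieved context, not word counting.

```python
# Minimal sketch of grounded answering: respond only when the question
# overlaps known documents; otherwise admit ignorance. The corpus and
# threshold are hypothetical, for illustration only.

corpus = [
    "triaxial tests measure soil strength under controlled stress paths",
    "liquefaction occurs in saturated sand under strong ground shaking",
]

def grounded_answer(question: str, min_overlap: int = 2) -> str:
    q_words = set(question.lower().split())
    best = max(corpus, key=lambda doc: len(q_words & set(doc.split())))
    if len(q_words & set(best.split())) >= min_overlap:
        return best
    return "I don't know"
```

The key design choice is the refusal path: rather than always producing an answer, the assistant checks whether its knowledge actually supports one.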

Q: Why is this aspect of your work exciting?

It is inspiring to find niche research areas where we can make transformative progress in science. We can use AI to mine this vast wealth of knowledge to discover where gaps exist and enable understanding while solving bigger problems more quickly and equitably. A whole new paradigm of possibilities exists in this space.

AI in Action

If you’ve ever asked Siri or Alexa a question or used predictive text on Google, you’ve already used AI. Diving a bit deeper, GPT is a type of AI that’s adept at understanding and creating human language. Picture this: you’re drafting an email, and the system predicts what you’re about to type. That’s GPT in action.