Five scientists in blue and white coats lean over wind turbine blades covered in orange and yellow material.

ORNL researchers reached a significant milestone by building an entire 6.5-foot turbine blade tip using novel materials. The team then tested it against the forces of simulated lightning in a specialized lab at Mississippi State University, where the blade tip emerged pristine after tests that isolate the effects of high voltage. 

The Summit supercomputer logo on a computer cabinet, with a row of seven cabinets extending to the left.

The Summit supercomputer did not have its many plugs pulled as planned after its five years of service. Instead, a new DOE Office of Science-backed allocation program called SummitPLUS was launched, extending Summit's production for another year. What did we learn during Summit’s bonus year of scientific discovery? Here are five projects with important results.

Scientists used neutron scattering to study how tweaking the ionic clusters in ionizable polymer solutions affects their structure. The polymer building blocks are marked in gold and the ionizable groups in red. Findings could open doors to lighter, more efficient clean energy devices. Credit: Phoenix Pleasant/ORNL, U.S. Dept. of Energy

Electrolytes that convert chemical energy to electrical energy underlie the search for new zero-emission power sources, among them fuel cells that produce electricity.

A microscopic image of C. thermocellum bacteria on biomass.

Using a best-of-nature approach developed by researchers working with the Center for Bioenergy Innovation at the Department of Energy’s Oak Ridge National Laboratory and Dartmouth College, startup company Terragia Biofuel is targeting commercial biofuel production that relies on renewable plant waste and consumes less energy. The technology can help meet the demand for billions of gallons of clean liquid fuels needed to reduce emissions from airplanes, ships and long-haul trucks.

A small sample from the Frontier simulations reveals the evolution of the expanding universe in a region containing a massive cluster of galaxies, from billions of years ago to the present day.

In early November, researchers at the Department of Energy’s Argonne National Laboratory used the fastest supercomputer on the planet to run the largest astrophysical simulation of the universe ever conducted. The achievement was made using the Frontier supercomputer at Oak Ridge National Laboratory. 

Seven members of the ORBIT research team accept their award from Tom Tabor (center).

ORNL has been recognized in the 21st edition of the HPCwire Readers’ and Editors’ Choice Awards, presented at the 2024 International Conference for High Performance Computing, Networking, Storage and Analysis in Atlanta, Georgia.

Black computing cabinets in a row on a white floor in the data center that houses the Frontier supercomputer at Oak Ridge National Laboratory.

Two and a half years after breaking the exascale barrier, the Frontier supercomputer at the Department of Energy’s Oak Ridge National Laboratory continues to set new standards in computing speed and performance.

A graphic representation of an AI model that designs proteins.

Researchers used the world’s fastest supercomputer, Frontier, to train an AI model that designs proteins, with applications in fields like vaccines, cancer treatments, and environmental bioremediation. The study earned a finalist nomination for the Gordon Bell Prize, recognizing innovation in high-performance computing for science.

Nine scientists stand in a line in front of the Frontier supercomputer.

Researchers at Oak Ridge National Laboratory used the Frontier supercomputer to train the world’s largest AI model for weather prediction, paving the way for hyperlocal, ultra-accurate forecasts. This achievement earned them a finalist nomination for the prestigious Gordon Bell Prize for Climate Modeling.

Nine men pose for a group photo in front of a window, five standing and four sitting.

A research team led by the University of Maryland has been nominated for the Association for Computing Machinery’s Gordon Bell Prize. The team is being recognized for developing a scalable, distributed training framework called AxoNN, which leverages GPUs to rapidly train large language models.