A scientist stands at a podium in front of a group; the stage has green and blue lights

ORNL welcomed attendees to the inaugural Southeastern Quantum Conference, held Oct. 28 – 30 in downtown Knoxville, to discuss innovative ways to use quantum science and technologies to enable scientific discovery. 

Seven scientists' headshots are arranged horizontally in a graphic representing the Battelle Distinguished Inventors

Seven scientists affiliated with ORNL have been named Battelle Distinguished Inventors in recognition of being granted 14 or more United States patents. Since Battelle began managing ORNL in 2000, 104 ORNL researchers have reached this milestone.

Scientists used neutron scattering to study how tweaking the ionic clusters in ionizable polymer solutions affects their structure. The polymer building blocks are marked in gold and the ionizable groups in red. Findings could open doors to lighter, more efficient clean energy devices. Credit: Phoenix Pleasant/ORNL, U.S. Dept. of Energy

Electrolytes, which convert chemical energy into electrical energy, underpin the search for new zero-emission power sources, including fuel cells that produce electricity.

A small sample from the Frontier simulations reveals the evolution of the expanding universe in a region containing a massive cluster of galaxies from billions of years ago to present day (left).

In early November, researchers at the Department of Energy’s Argonne National Laboratory used the fastest supercomputer on the planet to run the largest astrophysical simulation of the universe ever conducted. The achievement was made using the Frontier supercomputer at Oak Ridge National Laboratory. 

Seven people from the ORBIT research team accept their award from Tom Tabor (middle)

ORNL has been recognized in the 21st edition of the HPCwire Readers’ and Editors’ Choice Awards, presented at the 2024 International Conference for High Performance Computing, Networking, Storage and Analysis in Atlanta, Georgia.

Black computing cabinets in a row on a white floor in the data center that houses the Frontier supercomputer at Oak Ridge National Laboratory

Two-and-a-half years after breaking the exascale barrier, the Frontier supercomputer at the Department of Energy’s Oak Ridge National Laboratory continues to set new standards for its computing speed and performance.

Oak Ridge National Laboratory entrance sign

The Department of Energy’s Quantum Computing User Program, or QCUP, is releasing a Request for Information to gather input from all relevant parties on the current and upcoming availability of quantum computing resources; conventions for measuring, tracking and forecasting quantum computing performance; and methods for engaging with the diversity of stakeholders in the quantum computing community. Responses to the RFI will inform QCUP on both immediate and near-term availability of hardware, software tools and user engagement opportunities in the field of quantum computing.

Graphic representation of an AI model that identifies proteins

Researchers used the world’s fastest supercomputer, Frontier, to train an AI model that designs proteins, with applications in fields like vaccines, cancer treatments, and environmental bioremediation. The study earned a finalist nomination for the Gordon Bell Prize, recognizing innovation in high-performance computing for science.

Pictured here are nine scientists standing in a line in front of the Frontier supercomputer logo

Researchers at Oak Ridge National Laboratory used the Frontier supercomputer to train the world’s largest AI model for weather prediction, paving the way for hyperlocal, ultra-accurate forecasts. This achievement earned them a finalist nomination for the prestigious Gordon Bell Prize for Climate Modeling.

Nine men pose for a group photo in front of a window, five standing and four sitting.

A research team led by the University of Maryland has been nominated for the Association for Computing Machinery’s Gordon Bell Prize. The team is being recognized for developing a scalable, distributed training framework called AxoNN, which leverages GPUs to rapidly train large language models.