
Phong Le is a computational hydrologist at ORNL who is putting his skills in hydrology, numerical modeling, machine learning and high-performance computing to work quantifying water-related risks for humans and the environment.

ORNL’s annual workshop has become the premier forum for molten salt reactor, or MSR, collaboration and innovation, convening industry, academia and government experts to further advance MSR research and development. This year’s event attracted a record-breaking 365 participants from across the country, highlighting the momentum to bring MSRs online.

Researchers at Stanford University, the European Centre for Medium-Range Weather Forecasts, or ECMWF, and ORNL used the lab’s Summit supercomputer to better understand atmospheric gravity waves, which influence significant weather patterns that are difficult to forecast.

Scientists conducted a groundbreaking study of genetic data from more than half a million U.S. veterans, using tools developed at Oak Ridge National Laboratory to analyze 2,068 traits from the Million Veteran Program.

The Summit supercomputer did not have its many plugs pulled as planned after five years of service. Instead, a new allocation program backed by the DOE Office of Science, called SummitPLUS, extended Summit’s run for another year. What did we learn during Summit’s bonus year of scientific discovery? Here are five projects with important results.

In early November, researchers at the Department of Energy’s Argonne National Laboratory used the Frontier supercomputer at Oak Ridge National Laboratory, the fastest on the planet, to run the largest astrophysical simulation of the universe ever conducted.

Two and a half years after breaking the exascale barrier, the Frontier supercomputer at the Department of Energy’s Oak Ridge National Laboratory continues to set new standards for computing speed and performance.

Researchers used the world’s fastest supercomputer, Frontier, to train an AI model that designs proteins, with applications in vaccine development, cancer treatment and environmental bioremediation. The study earned a finalist nomination for the Gordon Bell Prize, which recognizes innovation in high-performance computing for science.

A research team led by the University of Maryland has been nominated for the Association for Computing Machinery’s Gordon Bell Prize. The team is being recognized for developing a scalable, distributed training framework called AxoNN, which leverages GPUs to rapidly train large language models.
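As general background on the kind of work a distributed training framework such as AxoNN automates, here is a minimal sketch of GPU data-parallel training using PyTorch’s DistributedDataParallel. This is not AxoNN’s actual interface, which the article does not detail; the toy model, random token data and hyperparameters are hypothetical placeholders for illustration only.

```python
# Minimal data-parallel training sketch (NOT AxoNN's API).
# Launch with: torchrun --nproc_per_node=<num_gpus> train_sketch.py
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # torchrun sets RANK, LOCAL_RANK and WORLD_SIZE for each GPU process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Toy stand-in for a language model; real frameworks train transformers
    # with billions of parameters, often sharded across many nodes.
    vocab_size, hidden = 32000, 512
    model = torch.nn.Sequential(
        torch.nn.Embedding(vocab_size, hidden),
        torch.nn.Linear(hidden, vocab_size),
    ).cuda(local_rank)

    # DDP replicates the model on every GPU and all-reduces gradients,
    # so each process sees a consistent update after backward().
    model = DDP(model, device_ids=[local_rank])
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
    loss_fn = torch.nn.CrossEntropyLoss()

    for step in range(10):
        # Random tokens as placeholder data; each rank draws its own batch.
        tokens = torch.randint(0, vocab_size, (8, 128), device="cuda")
        logits = model(tokens)  # shape: (batch, seq_len, vocab)
        loss = loss_fn(logits.view(-1, vocab_size), tokens.view(-1))
        opt.zero_grad()
        loss.backward()  # gradients synchronized across all GPUs here
        opt.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

Pure data parallelism like the above replicates the whole model on every GPU; frameworks in this space typically combine it with tensor and pipeline parallelism so that models too large for a single GPU’s memory can still be trained at scale.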