Computational scientists and neutron structural biologists from Oak Ridge National Laboratory developed an integrated workflow using small-angle neutron scattering (SANS), atomistic molecular dynamics (MD) simulation, and an autoencoder-based deep learning model.
A team of Oak Ridge National Laboratory (ORNL) scientists working across cybersecurity, statistical methods, control systems, and dynamical modeling reported a foundational approach to securing physical systems that are interfaced with IT systems.
A graph convolutional neural network (GCNN) was trained to accurately predict the formation energy and mechanical properties of solid-solution alloys crystallized in different lattice structures, thereby advancing the design of alloys with improved mechanical properties.
Analyzing the logs of even the smallest information technology (IT) system can be a challenge, since such a system can generate millions of lines of log data in a very short time.
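One common first step in taming high-volume logs, sketched below, is to collapse raw lines into templates by masking the variable fields and counting each template. This is a generic illustration, not the highlighted project's method; the log format and masking rules are assumptions.

```python
import re
from collections import Counter

def template(line: str) -> str:
    # Mask variable fields (hex IDs, then decimal numbers) so that lines
    # differing only in those fields collapse to one template.
    line = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", line)
    line = re.sub(r"\d+", "<NUM>", line)
    return line

# Illustrative log lines; a real system would stream millions of these.
logs = [
    "conn 4821 opened from host 10.0.0.7",
    "conn 4822 opened from host 10.0.0.9",
    "disk error at 0xdeadbeef",
]
counts = Counter(template(line) for line in logs)
for tmpl, n in counts.most_common():
    print(n, tmpl)
```

Two of the three lines collapse into the same connection template, so the counter summarizes them as a single entry with a count of 2.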
In this work we focus on dynamics problems described by waves, i.e. by hyperbolic partial differential equations.
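As a concrete example of the class of problems involved (not the paper's own method), the 1D wave equation u_tt = c^2 u_xx is a prototypical hyperbolic PDE; a minimal sketch using a standard second-order leapfrog finite-difference scheme with fixed ends follows. Grid sizes and the initial Gaussian pulse are illustrative choices.

```python
import numpy as np

c, L, nx, nt = 1.0, 1.0, 201, 400
dx = L / (nx - 1)
dt = 0.9 * dx / c           # CFL number 0.9 < 1 keeps the scheme stable
r2 = (c * dt / dx) ** 2

x = np.linspace(0.0, L, nx)
u_prev = np.exp(-200.0 * (x - 0.5) ** 2)   # initial displacement
u = u_prev.copy()                           # zero initial velocity

for _ in range(nt):
    u_next = np.empty_like(u)
    # Leapfrog update: u^{n+1} = 2 u^n - u^{n-1} + r^2 * (second difference)
    u_next[1:-1] = 2 * u[1:-1] - u_prev[1:-1] + r2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    u_next[0] = u_next[-1] = 0.0            # fixed (Dirichlet) boundaries
    u_prev, u = u, u_next

print(float(np.abs(u).max()))
```

Because the Courant number is below 1, the pulse splits into left- and right-moving waves that reflect off the fixed ends without the solution blowing up.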
This work develops an approach for engineering non-Gaussian photonic states in discrete frequency bins.
A research team from ORNL, Pacific Northwest National Laboratory, and Arizona State University has developed a novel method to detect out-of-distribution (OOD) samples in continual learning without forgetting the learned knowledge of preceding tasks.
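For readers unfamiliar with OOD detection, a classic baseline (not the team's method) is to fit a Gaussian to in-distribution features and flag samples whose Mahalanobis distance exceeds a threshold; the features, shift, and 95th-percentile threshold below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
in_feats = rng.normal(0.0, 1.0, size=(500, 8))      # in-distribution features
mu = in_feats.mean(axis=0)
cov = np.cov(in_feats, rowvar=False) + 1e-6 * np.eye(8)  # regularized covariance
inv = np.linalg.inv(cov)

def score(x):
    # Squared Mahalanobis distance from the in-distribution mean.
    d = x - mu
    return float(d @ inv @ d)

# Threshold at the 95th percentile of in-distribution scores.
threshold = np.quantile([score(f) for f in in_feats], 0.95)

ood_sample = rng.normal(6.0, 1.0, size=8)           # clearly shifted sample
print(score(ood_sample) > threshold)
```

A sample drawn far from the in-distribution mean scores well above the threshold and is flagged as OOD, while most in-distribution samples fall below it by construction.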
ORNL researchers developed a novel nonlinear level set learning method to reduce dimensionality in high-dimensional function approximation.
The team conducted numerical studies to demonstrate the connection between the parameters of neural networks and the stochastic stability of DMMs.