Tracing throughlines: A brief history of artificial intelligence at Oak Ridge National Laboratory

A researcher plays checkers against an AI-powered robotic arm in 1984. Credit: ORNL, U.S. Dept. of Energy

Can machines think? Computer scientists have grappled with this question for more than 70 years as they’ve attempted to instill the ability to reason into inanimate systems. Despite its futuristic essence, artificial intelligence has a history that stretches back several decades.

The Department of Energy’s Oak Ridge National Laboratory has played a major role throughout this time. During the field’s early days, AI research focused on expert systems, while more recent advances have come in deep learning. Scientists from around the world have achieved AI-enabled breakthroughs with ORNL’s resources, including supercomputers housed at the Oak Ridge Leadership Computing Facility, or OLCF.

Although science fiction narratives about AI blur the line between man and machine, reality is more concrete. “Computers do not ‘think’ in the same manner that humans do,” said Ramakrishnan Kannan, leader of ORNL’s Discrete Algorithms group. “Simply put, they are powerful calculators that can solve mathematical equations efficiently. In light of this, AI is best understood as a tool that revolutionizes data analysis.”

“Exactly for this reason, computers can be used to design rockets and guide them to other planets but cannot easily identify different cat breeds in an image with high accuracy,” said Prasanna Balaprakash, director of AI programs at ORNL.

Pictured here, from left, in 1981, key contributors to ORNL's early AI efforts included John Allen, a consultant from ProPhysica Inc.; Michelle Buchanan, an ORNL spectroscopy expert; and Sara Jordan, a University of Tennessee professor and ORNL researcher. Credit: ORNL, U.S. Dept. of Energy.

An age of expert systems

In October 1979, ORNL launched the Oak Ridge Applied Artificial Intelligence Project, which evaluated AI’s potential to advance scientific research. Computer scientists, mathematicians and domain scientists collaborated to develop technologies that supported four key areas: spectroscopy, environmental management, nuclear fuel reprocessing and programming assistance.

In a story published in the Spring 1981 issue of ORNL Review, Carroll Johnson, an AI pioneer at ORNL, wrote, “The ORNL effort in AI is an interdisciplinary exploratory project designed to evaluate the potential usefulness of AI methodology for problems at the laboratory. The preliminary results all seem very encouraging and one cannot help but predict that eventually AI approaches will have considerable impact on computer programming at ORNL.”

Early AI efforts concentrated on expert systems, which codified human expertise into knowledge databases. Machines stored information in libraries with rules in the form of “if-then” statements that outlined sets of conditions alongside recommended actions.
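
To make the idea concrete, here is a minimal sketch of what such if-then reasoning looks like in modern code. The rules and facts are hypothetical, not drawn from ORNL’s actual knowledge libraries; the loop simply fires any rule whose conditions are already known until no new conclusions appear.

```python
# Minimal sketch of a rule-based expert system. The rules and facts
# below are hypothetical, loosely inspired by spectroscopy; they are
# not taken from ORNL's actual knowledge libraries.

RULES = [
    # (if all of these conditions are known, then conclude this)
    ({"strong absorption near 1700 cm-1"}, "carbonyl group present"),
    ({"carbonyl group present", "broad O-H stretch"}, "carboxylic acid likely"),
]

def infer(facts):
    """Forward-chain over the rules, adding conclusions as new facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"strong absorption near 1700 cm-1", "broad O-H stretch"}))
# adds "carbonyl group present" and then "carboxylic acid likely"
```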

“The influence of AI on medical diagnosis, oil exploration and a broad spectrum of other activities is beginning to emerge throughout the country,” Johnson wrote. “With AI techniques, such as the use of heuristic reasoning and rule-based knowledge, programmers can approach unusual and complex problems which previously seemed intractable.”

Carroll Johnson’s experiences as an AI pioneer at ORNL spanned a wide array of disciplines, from neutron diffraction to AI concepts and applications. Credit: ORNL, U.S. Dept. of Energy.

During this era, ORNL housed the DECSystem-10, a computer that ran AI frameworks, including Rutgers University’s EXPERT. Originally designed for healthcare consulting, EXPERT used rule sets written by medical professionals and programmed by computer scientists to form chains of reasoning.

EXPERT provided a framework for the development of other applications at ORNL, including a spectroscopy rule set. Using EXPERT, computers could identify functional groups — the parts of a molecule that drive reactions — on organic molecules with up to 15 carbon atoms. 

The knowledge library also provided useful programming assistance. For example, researchers designed a rule set based on EXPERT to aid computer users as they established and debugged their job control language, which increased the efficiency of their codes. 

Smart simulations

A few years later, researchers at ORNL developed the Simulation Analysis Module, or SAM, which used AI to model, analyze and answer questions about real-world systems. Equipped with a knowledge library on the characteristics of energy flow, SAM estimated the cost to heat and cool houses as part of the lab’s Thermal Energy Storage Program.

“Though after a while most of this information is fairly predictable, occasionally a difference in behavior of, say, the surface temperature of a wall will be noted,” wrote ORNL mathematician Alan Solomon in ORNL Review in 1984. “Hence, a key aim of SAM is to restrict the output to novel, unexpected or significant information.”

SAM used components common to AI codes to perform statistical analyses and make logical inferences, automating the flow of information. Over time, SAM learned and adapted strategies to address novel situations. 

Up in (robotic) arms

Since the early 1980s, ORNL researchers have integrated AI capabilities into robotic instruments. Early developments centered on basic component technologies such as sensors and robotic arms. 

A computer is typing on its own keyboard using a programmed robotic arm. Credit: ORNL, U.S. Dept. of Energy.

“The goal of creating intelligent machines has occupied mankind since the dawn of civilization,” Solomon wrote. “The 20th century development of electronic computers has sparked great interest and has resulted in some real achievements, making it possible for machines to play board games such as chess and checkers, prove mathematical theorems and diagnose diseases.”          

For a robot to be considered intelligent, it must adapt its behavior by combining its existing knowledge base with new data acquired through its sensors. ORNL worked to make such machines move smoothly and avoid obstacles by using “fuzzy logic,” which admits degrees of truth between the traditional true and false. These capabilities were then deployed in safety-critical areas, such as nuclear power plants.
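
As an illustration of the idea only, the short sketch below maps an obstacle’s distance to a degree of “nearness” between 0 and 1 and lets a robot’s speed vary smoothly with that degree; the thresholds and formula are assumptions made for the example, not ORNL’s actual controller logic.

```python
# Fuzzy-logic sketch with illustrative values (not ORNL's controllers).
# Instead of a hard true/false "obstacle is near" test, distance maps to
# a degree of membership between 0 and 1, and the speed command varies
# smoothly with that degree.

def membership_near(distance_m, near=0.5, far=3.0):
    """Degree (0 to 1) to which an obstacle counts as 'near'."""
    if distance_m <= near:
        return 1.0
    if distance_m >= far:
        return 0.0
    return (far - distance_m) / (far - near)

def speed_command(distance_m, max_speed=1.0):
    """Blend the 'slow down' and 'full speed' rules by membership degree."""
    return (1.0 - membership_near(distance_m)) * max_speed

for d in (0.3, 1.0, 2.0, 4.0):
    print(f"obstacle at {d} m -> speed {speed_command(d):.2f} m/s")
```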

“[These machines are being developed] to extend human capabilities into work environments that are hazardous to people; to perform work that people cannot or should not do; and to transcend the limitations of human sensory, manipulatory and control capabilities,” wrote Reinhold Mann in a 1993 issue of ORNL Review. Mann was director of ORNL’s Center for Engineering Systems Advanced Research.

In 1991, several automated machines focused on obstacle avoidance problems that laid the foundations for AI-enabled vehicles. Credit: ORNL, U.S. Dept. of Energy

ORNL researchers also integrated these AI capabilities into vehicles. The prototypes could process incoming sensory data and reference knowledge libraries to make informed recommendations to drivers. Smart cars on the market today offer more advanced versions of capabilities that can be traced back to these early designs.

Despite laying the groundwork for AI applications through intelligent simulations and robotics, expert systems had limitations that kept them from truly revolutionizing research. Their knowledge databases were not sustainable because they required humans to write the rules, and those hand-written rules were often imprecise and ill-suited to uncertainty and novel problems. These constraints hindered progress, contributing to a period of waning interest known as an AI winter.

From the shoulders of Titan

In the era of big data, researchers adapted their approach to AI research. Many began to focus on machine-learning tools like neural networks, which mimic the human brain’s methods of processing information. These models demonstrated advantages over traditional expert systems, namely through an added capacity for adaptive learning, fault tolerance and parallel processing.  

“Under the covers, you can trace the basis of neural networks back to the 1940s. However, they didn’t have the computing power necessary for simulating mathematical models,” said Tom Potok, the founder of ORNL’s Computational Data Analytics group. “The development of neural networks was limited by the speed and capability of computers, which had relied solely on CPUs that completed calculations in sequence.”

In 2012, a new generation of supercomputers enabling faster research began with the launch of Titan at the OLCF. Titan was among the first leadership-class supercomputers to integrate GPUs into its hardware, which was considered a risky design choice at the time.

Although GPUs were originally designed to render graphics for televisions and video game consoles, researchers realized that they could work in tandem with CPUs to bolster computing power. The hybrid architecture made Titan a remarkably productive machine, enabling rapid prototyping that led to breakthroughs in AI research. 

A technician installs GPUs on the Titan supercomputer. Credit: Curtis Boles/ORNL, U.S. Dept. of Energy

A new era of neural networks

“When Titan was built, researchers in the fields of high-performance computing and AI realized that GPUs were a holy grail,” said Robert Patton, a data scientist at ORNL. “My team thought that a marriage between the two would be revolutionary.”

To take advantage of this resource, Patton’s team developed an algorithm known as Multinode Evolutionary Neural Networks for Deep Learning, or MENNDL, in 2014. The algorithm was designed to auto-generate neural networks for a variety of domain applications, and the performance of these networks exceeded that of networks created by human experts.
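
MENNDL’s own code is far more sophisticated, but the general shape of an evolutionary search over network designs can be sketched as follows. The candidate parameters and the stand-in fitness function here are hypothetical; the real algorithm trains and scores many candidate networks in parallel on supercomputer GPUs.

```python
# Toy sketch of evolutionary search over neural network designs, in the
# spirit of MENNDL. The fitness function is a stand-in: in practice each
# candidate network would be trained and scored on validation data.
import random

def random_candidate():
    return {"layers": random.randint(1, 8),
            "filters": random.choice([16, 32, 64, 128]),
            "learning_rate": 10 ** random.uniform(-4, -1)}

def fitness(c):
    # Placeholder score that simply prefers mid-sized networks.
    return -abs(c["layers"] - 4) - abs(c["filters"] - 64) / 64

def mutate(c):
    child = dict(c)
    key = random.choice(list(child))
    if key == "layers":
        child["layers"] = max(1, child["layers"] + random.choice([-1, 1]))
    elif key == "filters":
        child["filters"] = random.choice([16, 32, 64, 128])
    else:
        child["learning_rate"] = 10 ** random.uniform(-4, -1)
    return child

population = [random_candidate() for _ in range(20)]
for generation in range(10):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                      # keep the best designs
    population = parents + [mutate(random.choice(parents)) for _ in range(15)]

print("best design found:", max(population, key=fitness))
```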

MENNDL is still creating neural networks today. One early application involved a partnership with Fermi National Accelerator Laboratory that looked for neutrino collisions in images produced by a detector. Another one of its early neural networks was trained on images of tumors in a dataset provided by Stony Brook University. It identified cancerous tissue 16 times faster than the previous network while maintaining similar accuracy. 

ORNL’s Summit boasts 4,608 nodes, each with six NVIDIA GPUs. Credit: Carlos Jones/ORNL, U.S. Dept. of Energy

Seeing the Summit

Advances in HPC have further expanded the realms of AI and scientific discovery. Enter Summit, an IBM supercomputer optimized for AI with more than 27,000 GPUs. It was the fastest supercomputer in the world when it debuted and remains a world-leading system to this day.

In 2018, the same year as Summit’s debut, AI researchers at ORNL ran a MENNDL-generated algorithm on the machine. For this project, MENNDL developed a neural network to analyze images collected by scanning transmission electron microscopes, or STEMs. Previously, reviewing these images was a laborious task that required human expertise. Using AI to sort through massive amounts of data would be useful for science domains that utilize STEMs, including fundamental materials discovery. The team behind this effort was a 2018 finalist for the Association for Computing Machinery’s, or ACM’s, Gordon Bell Prize, which recognizes outstanding achievement in HPC.

That same year, ORNL launched a lab-wide AI initiative to centralize its endeavors in the field and facilitate faster applications of AI to enhance large-scale research efforts in materials science, medicine and cybersecurity both at the lab and nationwide. The initiative’s priorities are organized into three thrust areas — AI for Scientific Discovery and Complex Systems, AI for Experimental Facilities and AI for National Security — which make the most of the lab’s AI resources and expertise to manage complex engineered systems, automate science workflows at experimental facilities and develop technologies required for advanced threat detection, decision-making and planning. 

During the height of the COVID-19 pandemic in 2020, researchers used Summit to run the Distributed Accelerating Semiring All-Pairs Shortest Path algorithm, or DSNAPSHOT. This algorithm revealed previously unknown links among millions of medical papers, connecting decades’ worth of research to find remedies that could combat the coronavirus. The team was also named an ACM Gordon Bell finalist for this work in 2020 and for related work on ORNL’s Frontier, the world’s first exascale supercomputer, in 2022.
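
DSNAPSHOT itself is a distributed, GPU-accelerated code, but the semiring idea at its core fits in a few lines: treating “min” as addition and “+” as multiplication turns all-pairs shortest path into repeated matrix-style updates. The tiny graph below is hypothetical; in the literature-mining application, nodes might represent concepts extracted from papers and edge weights how directly they are linked.

```python
# Serial sketch of all-pairs shortest path over the (min, +) semiring,
# the idea DSNAPSHOT scales up across Summit's GPUs. The graph is a
# hypothetical 4-node example.
INF = float("inf")

# dist[i][j] = weight of a direct link between nodes i and j
dist = [
    [0,   1,   INF, 5],
    [1,   0,   2,   INF],
    [INF, 2,   0,   1],
    [5,   INF, 1,   0],
]

n = len(dist)
for k in range(n):            # relax every path that passes through node k
    for i in range(n):
        for j in range(n):
            # semiring view: "addition" is min, "multiplication" is +
            dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j])

print(dist[0][3])  # shortest connection from node 0 to node 3 -> 4, via 0-1-2-3
```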

“AI is already an integral part of research at ORNL, and we see the impact of HPC-enabled AI in areas such as materials science, advanced manufacturing, climate modeling and biology,” said Gina Tourassi, associate laboratory director for Computing and Computational Sciences, which manages the OLCF. “With Frontier, we are deploying collaborative frameworks and infrastructure to train large-scale AI models with scientific data from scratch. These models would have a transformative impact in scientific discovery and translational applications.”

AI’s expanding Frontier 

Today, ORNL’s AI initiative ensures safe, trustworthy and energy-efficient AI for scientific research. “With the development and strengthening of the foundations of the three pillars of AI — security, trustworthiness and energy efficiency — the AI initiative seeks to ensure that any progress made in this promising field remains ethically and environmentally responsible,” said Balaprakash. “Building upon its rich history, the lab continues to develop and apply responsible AI technologies that drive transformation and scientific discovery.”

ORNL is developing the science underpinning AI security through its newly launched Center for AI Security Research, or CAISER. CAISER provides objective scientific analysis of the vulnerabilities, threats and risks — from individual privacy to international security — related to emerging and advanced AI. “ORNL has a proud history of leadership and excellence in cybersecurity, biometrics, geospatial intelligence and nuclear nonproliferation,” said Edmon Begoli, director of CAISER. “With this center, we want to build upon that legacy by applying nonproliferation methodological expertise and experience to AI safety and AI security threats, risks and concerns.”

Geospatial artificial intelligence, or GeoAI, weaves together geographic knowledge with HPC resources and novel AI algorithms. ORNL’s groundbreaking GeoAI capabilities lay the foundations for research about infrastructure resilience, including disaster preparedness and recovery. These capabilities help authorities make informed choices and enable rapid responses to crises. Most recently, ORNL’s GeoAI capabilities have been used to assess building damage in regions affected by global conflicts, including Ukraine, as well as Israel and Gaza.

ORNL’s AI programs also support a variety of scientific disciplines, including biomedical and physical sciences research. One AI-driven technique tracks medical data in real time and has already been used to gauge the impact of COVID-19 on cancer diagnoses. This approach makes data available within a few months of a diagnosis, a process that previously took more than two years. ORNL also developed programs for materials research, including AtomAI and HydraGNN, two machine-learning software packages that quickly derive statistically meaningful information from complex datasets. These datasets often include images of thousands of atoms and any abnormalities in a molecular structure. The streamlined, intelligent analysis allows for deeper insights into the materials’ physical and chemical qualities.

Overall, from the early days of expert systems to the evolutionary neural networks that run today, researchers at ORNL have created generations of code and applications that power AI. These systems have changed the nature of discovery and created solutions to previously intractable problems. 

“Historically, scientific research has had three pillars: theory, experiment and simulation,” said Kannan. “Now, artificial intelligence has become a necessary pillar for science. It is a tool that efficiently analyzes massive amounts of data, enabling research that would have been impossible before its development.” 

To learn more about ORNL’s ongoing AI research and resources, visit the lab’s AI webpage.

UT-Battelle manages ORNL for DOE’s Office of Science, the single largest supporter of basic research in the physical sciences in the United States. DOE’s Office of Science is working to address some of the most pressing challenges of our time. For more information, visit energy.gov/science. — Reece Brown

Learn more about AI at ORNL by watching: Artificial intelligence for science at ORNL