Supercomputing and Computation


Primary Systems


Titan

Titan is a Cray XK7 system consisting of 18,688 AMD sixteen-core Opteron™ processors providing a peak performance of more than 3.3 petaflops (PF) and 600 terabytes (TB) of memory. A total of 512 service input/output (I/O) nodes provide access to the 10-petabyte (PB) “Spider” Lustre parallel file system at more than 240 gigabytes per second (GB/s). External login nodes (decoupled from the XK7 system) provide a powerful compilation and interactive environment using dual-socket, twelve-core AMD Opteron processors and 256 GB of memory. Each of the 18,688 Titan compute nodes is paired with an NVIDIA Kepler graphics processing unit (GPU) designed to accelerate calculations. With a peak performance of more than 1 TF per Kepler accelerator, the aggregate performance of Titan exceeds 20 PF. Titan is the Department of Energy’s most powerful open science computer system and is available to the international science community through the INCITE program, jointly managed by DOE’s Leadership Computing Facilities at Argonne and Oak Ridge National Laboratories.
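
As a quick back-of-the-envelope check using only the figures quoted above (the per-Kepler value is the stated lower bound of 1 TF, so the result is a floor), the aggregate peak follows from the node count:

    # Rough consistency check of Titan's aggregate peak, using figures from the text above.
    nodes = 18688
    cpu_peak_pf = 3.3                  # CPU partition peak, petaflops
    gpu_peak_tf_each = 1.0             # per-Kepler peak (stated lower bound), teraflops
    gpu_peak_pf = nodes * gpu_peak_tf_each / 1000.0   # ~18.7 PF from the accelerators alone
    print(f"aggregate peak > {cpu_peak_pf + gpu_peak_pf:.1f} PF")   # > 22.0 PF, i.e. "exceeds 20 PF"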

The Spider disk subsystem will be upgraded in 2013 to provide up to 1 TB/s of disk bandwidth and up to 30 PB of storage.

Gaea

Gaea consists of a pair of Cray XE6 systems. The smaller partition contains 2,624 socket G34 AMD 16-core Opteron processors, providing 41,984 compute cores, 84 TB of double data rate 3 (DDR3) memory, and a peak performance of 386 teraflops (TF). The larger partition contains 4,896 socket G34 AMD 16‑core Interlagos Opteron processors, providing 78,336 compute cores, 156.7 TB of DDR3 memory, and a peak performance of 721 TF.
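
For reference, the core counts and aggregate peak quoted here follow directly from the socket counts; a small sketch using only the numbers in this section:

    # Gaea partition arithmetic, using figures from the text above.
    small_sockets, large_sockets, cores_per_socket = 2624, 4896, 16
    print(small_sockets * cores_per_socket)   # 41,984 cores (smaller partition)
    print(large_sockets * cores_per_socket)   # 78,336 cores (larger partition)
    print(386 + 721)                          # 1,107 TF, roughly the ~1.1 PF aggregate quoted below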

The aggregate system provides 1.106 PF of computing capability and 248 TB of memory. The Gaea compute partitions are supported by a series of external login nodes and two separate file systems. The FS file system is based on more than 2,000 SAS drives and provides more than 1 PB of formatted space for fast scratch to all compute partitions. The LTFS file system is based on more than 2,000 SATA drives and provides 4 PB of formatted capacity as a staging and archive file system. Gaea is the NOAA climate community’s most powerful computer system and is available to the climate research community through the Department of Commerce/NOAA.

The ORNL Institutional Cluster

The ORNL Institutional Cluster (OIC) consists of two phases. The original OIC consists of a bladed architecture from Ciara Technologies called VXRACK. Each VXRACK contains two login nodes, three storage nodes, and 80 compute nodes. Each compute node has dual Intel 3.4 GHz Xeon EM64T processors, 4 GB of memory, and dual gigabit Ethernet interconnects. Each VXRACK and its associated login and storage nodes are called a block; there are nine blocks of this type. Phase 2 blocks, SGI Altix machines, were acquired and brought online in 2008. There are two types of blocks in this family, described below; a rough capacity tally for both phases follows the list.

  • Thin nodes (3 blocks). Each Altix contains 1 login node, 1 storage node, and 28 compute nodes within 14 chassis. Each node has eight cores and 16 GB of memory. The login and storage nodes are XE240 boxes from SGI. The compute nodes are XE310 boxes from SGI.
  • Fat nodes (2 blocks). Each Altix contains 1 login node, 1 storage node, and 20 compute nodes within 20 separate chassis. Each node has eight cores and 16 GB of memory. These SGI XE240 nodes provide larger node-local scratch space and much higher I/O bandwidth to that space because it is a volume built from four disks.
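
A rough tally of the two phases, using only the block counts above (the per-socket core count of the Phase 1 Xeon EM64T processors is not stated, so only nodes are tallied for that phase), might look like this:

    # Rough OIC capacity tally from the block descriptions above.
    phase1_nodes = 9 * 80         # 9 VXRACK blocks x 80 compute nodes = 720 nodes
    thin_cores   = 3 * 28 * 8     # 3 thin blocks x 28 nodes x 8 cores = 672 cores
    fat_cores    = 2 * 20 * 8     # 2 fat blocks x 20 nodes x 8 cores = 320 cores
    print(phase1_nodes, thin_cores + fat_cores)   # 720 Phase 1 nodes, 992 Phase 2 cores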

Frost (SGI Altix ICE 8200) consists of three racks totaling 128 compute nodes, 5 service nodes (1 batch node and 4 login nodes), 2 rack leader nodes, and 1 administration node. Each compute node has two quad-core Intel Xeon X5560 (Nehalem) processors running at 2.8 GHz, 24 GB of memory, a 1 Gb Ethernet connection, and two 4x DDR Infiniband connections. Each rack of compute nodes contains eight Infiniband switches (Mellanox InfiniScale III MT47396, 24 Infiniband 4X ports at 10 Gb/s) that serve as the primary interconnect between compute nodes and as the connection to the Lustre file system. The center-wide Lustre file system is the main storage available to the compute nodes. The Frost cluster is available to ORNL staff and collaborators.
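
From the per-node figures above, Frost’s aggregate capacity works out as follows (a sketch based only on the stated counts):

    # Frost aggregate, from the per-node figures above.
    nodes = 128
    cores = nodes * 2 * 4          # two quad-core Xeon X5560s per node -> 1,024 cores
    memory_gb = nodes * 24         # 3,072 GB (~3 TB) of memory
    print(cores, memory_gb)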

The University of Tennessee

Kraken is a Cray XT5 system consisting of 18,816 AMD six-core Opteron processors providing a peak performance of 1.17 PF and 147 TB of memory. It is connected to more than 3 PB of scratch disk space. Originally deployed in 2010, it remains one of the fastest academic computers in the world and a significant resource on the NSF XSEDE network.
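
As a quick sanity check on these figures (core count derived from the six-core socket count; nothing beyond the quoted numbers is assumed), the stated peak implies roughly 10 GF per core:

    # Kraken arithmetic from the figures above.
    cores = 18816 * 6                        # 112,896 cores
    print(cores)
    print(round(1.17e15 / cores / 1e9, 1))   # ~10.4 GF peak per core implied by 1.17 PF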

 
