Welcome to the RSPH HPC documentation.

The RSPH HPC cluster consists of 25 compute nodes: 24 of them have 32 compute cores and 196 GB of RAM each, and the remaining node is a "large-memory" node with 1.5 TB of RAM. The nodes are connected by a 25-gigabit Ethernet network, and all of them have access to a shared 1-petabyte Panasas parallel file system.

In addition to the hardware, the cluster runs the CentOS Linux operating system (currently version 8), a "white-box" rebuild of Red Hat Enterprise Linux that aims to be 100% binary compatible with the commercial version.

Job scheduling is handled by the Slurm workload manager, which is currently used on the majority of the Top 500 supercomputing systems in the world.
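To give a feel for how work is submitted under Slurm, the sketch below shows a minimal batch script. The resource values (cores, memory, wall time) and the job name are placeholders to adapt to your workload, and any partition or account settings specific to the RSPH cluster would need to be added per local policy; only standard sbatch directives are used here.

```shell
#!/bin/bash
#SBATCH --job-name=example-job      # name shown in the queue
#SBATCH --ntasks=1                  # run a single task
#SBATCH --cpus-per-task=4           # request four cores on one node
#SBATCH --mem=8G                    # request 8 GB of memory
#SBATCH --time=01:00:00             # wall-clock limit (HH:MM:SS)
#SBATCH --output=example-%j.out     # log file; %j expands to the job ID

# Everything below runs on the compute node Slurm allocates.
echo "Running on $(hostname)"
```

You would submit the script with `sbatch example.sh` and monitor it with `squeue -u $USER`.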

To request help, send an email to help@sph.emory.edu and include "HPC cluster" in the subject line.