Purpose
High-performance computing aggregates computing power to deliver far higher performance than any single personal computing system, in order to address research questions requiring large-scale computation. The Rollins School of Public Health (RSPH) recognizes the current and ongoing need for this infrastructure among its researchers and has therefore invested considerably in the development of a state-of-the-art high-performance computing cluster. To ensure broad adoption of the system, optimal performance of the cluster, and equitable access for all interested RSPH researchers, the RSPH Computation and Data Science Advisory Group (CDAG), together with the Office of Research, has developed this set of use policies based on best practices in the academic research computing community and on the needs and resources of RSPH researchers. These policies provide the overall framework for the management and use of the cluster. As research needs change and develop, the policies may be amended and expanded. All users of the cluster should be familiar with these policies and abide by them, to ensure continued open access for all RSPH users.
Enabled vs Supported Cluster Activities
The two distinguishing features of the HPC cluster are the aggregation of a large number of computers over a fast, dedicated network and a job-scheduled runtime environment. Together, these features allow very large numbers of computations to run unsupervised by human users.
The main focus of HPC cluster support is to provide access to and stability of these features. Access is provided securely via supported clients on the major desktop platforms (Windows, macOS, and Linux), for which users can count on regular security and feature updates from the operating system vendors. All major desktop operating systems currently provide a supported, industry-standard SSH client through their native command-line terminal.
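As a minimal illustration, connecting from any of these built-in terminals looks like the following; the username and hostname are placeholders, and the actual address is provided by the HPC administrators.

    # Connect to the cluster login node over ssh.
    # "netid" and the hostname are placeholders; substitute the values
    # provided by the HPC administrators.
    ssh netid@rsph-hpc.example.edu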
While popular for interactive use, graphical ("GUI") displays present a more difficult support challenge, as remote clients for GUI displays do not receive the same scrutiny from developers for security or bug fixes as the standard command-line tools. In addition, technical support for remote displays is very difficult without access to the remote display itself, such as in a remote-work environment. Remote displays are also of limited value in an HPC environment, since one cannot individually observe hundreds of jobs running across several compute nodes, and exporting graphical displays generates additional network traffic that can affect cluster performance. For these reasons, the standard practice in HPC is to develop an interactive version of one's computation on a local computer and, when ready to scale up to larger computations, migrate the program in batch form to the HPC environment.
In our HPC cluster environment, we endeavor to allow as wide a range of uses as is reasonably possible while providing sufficient technical support. To that end, several system features may be enabled that experienced users can take advantage of without those features being explicitly supported by the technical staff. In such cases, users are not barred from these practices (as long as they do not affect cluster performance for others), but the technical staff makes no promise of support for those activities. Some examples are:
– remote export of X Window System (X11) displays, such as the GUIs for MATLAB, SAS, or R
– creation of ssh tunnels for notebooks, except in cases that would require security exceptions, which are prohibited
– GUI file-copying programs or ssh clients (e.g., PuTTY or WinSCP)
– document creation programs, such as printing or formatting tools.
In general, users interested in using additional clients or methods should engage peers or other experienced users for advice on their setup and use; one such setup is sketched below.
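For example, a notebook tunnel of the kind mentioned above might be set up roughly as follows. This is an illustrative, unsupported sketch only; the port, compute node name, and login hostname are placeholders, and availability of the notebook software depends on what is installed on the cluster.

    # 1. On a compute node (for example, inside an interactive SLURM job),
    #    start a notebook server without a browser; the port is arbitrary.
    jupyter notebook --no-browser --port=8888

    # 2. On the local machine, forward a local port through the login node
    #    to that compute node ("node42" and the hostname are placeholders).
    ssh -N -L 8888:node42:8888 netid@rsph-hpc.example.edu

    # 3. Open http://localhost:8888 in a local browser.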
Storage
- A tiered storage system will be implemented. The LITS file system will be mounted on the cluster head node for archival purposes. A /home file system will be created with a 25 GB/user quota to store compiled programs, input, and configuration files, with no purge policy. A /project file system will be created with a 1 TB/group quota to store intermediate files related to longer projects, with a 1-year purge policy. A /scratch file system with a 100 GB/user quota will be used for high-performance I/O, with a 2-week purge policy. A summary is listed below.
- Users will be notified before any file purge takes place.
- For large file transfers, users are encouraged to initiate the transfer from within SLURM jobs; a sketch of such a job appears after the table below.
- Backups of data are the responsibility of the user. The Panasas storage is robust but is not backed up; currently, two days of daily snapshots are retained.
- LITS (archival) storage is covered under a separate policy.
| Type | Quota | Purge Policy |
| --- | --- | --- |
| /home | 25 GB/user | No purge |
| /project | 1 TB/group | 1 year |
| /scratch (created upon request) | 100 GB/user | 2 weeks |
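As an illustrative sketch of a large transfer initiated within a SLURM job, the script below pulls a dataset into /scratch; the partition (one of those listed under Computation), time limit, remote host, and paths are placeholders, rsync is only one possible tool, and the sketch assumes the compute nodes can reach the remote host.

    #!/bin/bash
    #SBATCH --job-name=data-transfer
    #SBATCH --partition=day-long-cpu
    #SBATCH --time=08:00:00
    #SBATCH --ntasks=1

    # Pull a large dataset into /scratch for high-performance I/O.
    # The remote host and paths are placeholders.
    rsync -av netid@data.source.example:/studyA/ /scratch/netid/studyA/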
Computation
- The Simple Linux Utility for Resource Management (SLURM) job scheduler will be adopted to manage job submissions. All interactions with the cluster worker nodes must go through the job scheduler. This allows users to gain experience with standard HPC environments used elsewhere, including national laboratories and XSEDE. A sample job script is sketched at the end of this section.
- The login node is shared by all users and must be used only for submitting jobs and other lightweight activities. No CPU- or memory-intensive programs are allowed on the login node, and an abuser's processes may be killed without notice. Abusive behaviors include occupying 50% or more of the CPU capacity, keeping physical RAM fully occupied for long periods, and large data operations or transfers (including SFTP connections) that have a visible adverse impact on the responsiveness of the login node.
- Processes that generate very large numbers of garbage files will be killed. Using cluster resources for non-research storage or computation (such as cryptocurrency mining), or using the platform to run hacking procedures or other persistent services that are illegal or against University policies, is prohibited.
- A module system will be used to manage third-party software installations. The module system dynamically changes a user's environment to provide access to different software stacks; it can maintain different versions of the same software and avoid package conflicts. Example commands are shown at the end of this section.
- All proprietary software must have an appropriate and current license. Users may need to purchase their own license and maintain a license server if one is not provided by the University.
- There are five partitions on the cluster, with runtime limits ranging from 30 minutes to 1 month: short-cpu (30 minutes), day-long-cpu (1 day), week-long-cpu (7 days), month-long-cpu (31 days), and interactive-cpu (2 days). The runtime limit can be removed on owned partitions.
- Guest jobs are allowed to run on owned partitions when those resources are idle, but they are preemptable (suspended and automatically re-queued) when an owner's job arrives.
- A fairshare mechanism is implemented in the SLURM job scheduler to ensure that all cluster users receive an equitable share of the available computing resources. In short, the more a group uses, the lower the priority its subsequent jobs receive. The fairshare factor is calculated on a group basis by the job scheduler and cannot be micromanaged.
- Backfill scheduling is turned on so that small jobs can run on resources the scheduler is holding for larger jobs, provided they can finish before the larger jobs are due to start.
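Typical module-system commands look like the following; the module names and versions are placeholders, and actual availability depends on what the HPC staff have installed.

    module avail               # list available software stacks
    module load python/3.11    # add a particular version to the environment
    module list                # show currently loaded modules
    module unload python/3.11  # remove a single module
    module purge               # remove all loaded modules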
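As a minimal sketch of a batch submission through SLURM, the script below requests resources on one of the partitions listed above and runs an analysis in batch (non-GUI) mode; the job name, resource values, module version, and analysis script are placeholders.

    #!/bin/bash
    #SBATCH --job-name=example-analysis
    #SBATCH --partition=week-long-cpu
    #SBATCH --time=3-00:00:00
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=4
    #SBATCH --mem=16G
    #SBATCH --output=example-analysis_%j.out

    # Load a software stack via the module system (version is a placeholder).
    module load R/4.3.0

    # Run the analysis in batch mode.
    Rscript analysis.R

From the login node, such a script would typically be submitted and monitored with commands such as:

    sbatch example-analysis.sbatch
    squeue -u $USER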
Future Expansion
- Compute Nodes:
- The HPC administrators and the CDAG have established a recommended specification for a standard compute node at a pre-negotiated price.
- Compute nodes must become part of the cluster, i.e., have the same OS, scheduler, storage, networking and login.
- Researchers receive priority and exclusive use of their purchased compute nodes when requested. This use is preemptive and takes priority over non-reserved use.
- Storage:
- Storage can be purchased for a reserved quota on Panasas storage.
- Reserved/purchased quota has no time limit.
- Purchased storage is integrated into the cluster and cannot be extracted when a researcher leaves.
Miscellaneous
- RSPH HPC is freely available to all RSPH faculty, staff, and students, and to external collaborators who have been granted access. Only RSPH faculty members can be group owners. New accounts must be approved by the corresponding group owner.
- The Emory VPN is required for off-campus access to the cluster.
- Users who used RSPH HPC in their work are encouraged to acknowledge it in the resulting outputs, e.g., news items, posters, journal papers, conference papers, and theses.
- One account per user. No account sharing is allowed.