About the RCF Computing Cluster

The RCF now hosts a Beowulf-style computing cluster, accessible to all members of the Department of Mathematics & Statistics. There is (and will always be) no cost to use the cluster. Resources are currently allocated on a first come, first served basis, although we expect this to change as demand increases. At the moment there are 752 cores distributed across 26 nodes, with an average of roughly 1 GB of memory per core, and we are adding more nodes as they become available.

The cluster currently runs CentOS 7, uses Slurm for workload management, and uses OpenMPI for message passing. The user home directories available over SSH are also mounted on the cluster. The development environment includes the GNU autotools and compiler toolchain, Python, R, Pari/GP, and many other common packages and libraries. If there is additional software that would be useful for your research, please don't hesitate to send a request to rcfsupport@groups.umass.edu.
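
As an example, a C program that uses MPI can be compiled with the OpenMPI wrapper around the GNU toolchain once you are logged in. This is only a sketch: the file name hello.c is a placeholder, and the resulting binary should be run through Slurm as described below.

    # Compile an MPI program with the OpenMPI wrapper compiler (hello.c is a placeholder)
    mpicc -O2 -o hello hello.c

    # Check which compiler and MPI versions are on your PATH
    gcc --version
    mpirun --version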

We will keep adding to this documentation as the project grows!

Hardware Configuration

The specs of the individual nodes are as follows:

Node        | CPU Model Name                           | CPU cores | Sockets:Cores:Threads | Memory (MB)
compute-0   | Six-Core AMD Opteron(tm) Processor 2435  | 12        | 2:6:1                 | 64487
compute-1   | Intel(R) Xeon(R) CPU X3440 @ 2.53GHz     | 8         | 1:4:2                 | 32160
compute-2   | Intel(R) Xeon(R) CPU X3440 @ 2.53GHz     | 8         | 1:4:2                 | 32160
compute-3   | Intel(R) Xeon(R) CPU X3440 @ 2.53GHz     | 8         | 1:4:2                 | 32160
compute-4   | Intel(R) Xeon(TM) CPU 3.00GHz            | 4         | 2:1:2                 | 7917
compute-6   | Intel(R) Xeon(TM) CPU 3.00GHz            | 4         | 2:1:2                 | 7917
compute-7   | Intel(R) Xeon(TM) CPU 3.00GHz            | 4         | 2:1:2                 | 5901
compute-8   | Intel(R) Xeon(TM) CPU 3.00GHz            | 4         | 2:1:2                 | 3885
compute-9   | Intel(R) Xeon(TM) CPU 3.00GHz            | 4         | 2:1:2                 | 3885
compute-10  | Dual-Core AMD Opteron(tm) Processor 2212 | 4         | 2:2:1                 | 5904
compute-11  | Dual-Core AMD Opteron(tm) Processor 2212 | 4         | 2:2:1                 | 3948
compute-12  | Dual-Core AMD Opteron(tm) Processor 2212 | 4         | 2:2:1                 | 7920
compute-13  | Dual-Core AMD Opteron(tm) Processor 2212 | 4         | 2:2:1                 | 7980
compute-14  | Dual-Core AMD Opteron(tm) Processor 2212 | 4         | 2:2:1                 | 7980
compute-15  | Dual-Core AMD Opteron(tm) Processor 2212 | 4         | 2:2:1                 | 7920
compute-16  | Dual-Core AMD Opteron(tm) Processor 2212 | 4         | 2:2:1                 | 3948
compute-17  | Dual-Core AMD Opteron(tm) Processor 2212 | 4         | 2:2:1                 | 7980
compute-18  | Dual-Core AMD Opteron(tm) Processor 2212 | 4         | 2:2:1                 | 7920
compute-19  | Dual-Core AMD Opteron(tm) Processor 8212 | 8         | 4:2:1                 | 7978
compute-20  | Dual-Core AMD Opteron(tm) Processor 8212 | 8         | 4:2:1                 | 16042
compute-21  | Dual-Core AMD Opteron(tm) Processor 2212 | 4         | 2:2:1                 | 7982
compute-22  | AMD EPYC 7542 32-Core Processor          | 128       | 2:32:2                | 128586
compute-23  | AMD EPYC 7542 32-Core Processor          | 128       | 2:32:2                | 128586
compute-24  | AMD EPYC 7542 32-Core Processor          | 128       | 2:32:2                | 128586
compute-25  | AMD EPYC 7542 32-Core Processor          | 128       | 2:32:2                | 128586
compute-26  | AMD EPYC 7542 32-Core Processor          | 128       | 2:32:2                | 128586
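
If you want to see how these resources appear to the scheduler, Slurm can report them directly. A minimal sketch, assuming you are logged in to the cluster; the node name compute-22 is just one example from the table above.

    # Summarize every node, including CPU and memory counts as seen by Slurm
    sinfo -N -l

    # Show the full configuration of one node
    scontrol show node compute-22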

Scheduling Manager

The cluster uses a scheduling manager called Slurm to allocate CPUs and memory. If you want to run a job on the cluster, you must submit it through Slurm. Note that users without a Slurm account will NOT be able to submit jobs; to request an account, please contact the RCF at rcfsupport@groups.umass.edu. Documentation for using Slurm on our cluster can be found in the Submitting Jobs section. There are also many Slurm tutorials available online; in particular, we recommend the Slurm FAQ and this Quick Start Guide.
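
As an illustration, a Slurm job is typically described by a short batch script and handed to the scheduler with sbatch. The sketch below is generic rather than specific to our cluster: the job name, resource requests, time limit, and program name are all placeholders, and the options that apply here are documented in the Submitting Jobs section.

    #!/bin/bash
    # Job name shown in the queue (placeholder)
    #SBATCH --job-name=example
    # Number of tasks/cores requested (placeholder)
    #SBATCH --ntasks=4
    # Memory per core (placeholder)
    #SBATCH --mem-per-cpu=1G
    # Wall-clock limit of one hour (placeholder)
    #SBATCH --time=01:00:00

    # Launch the program under Slurm; ./hello stands in for your own executable
    srun ./hello

Once saved (for example as example.sbatch), the job would be submitted and monitored with:

    sbatch example.sbatch
    squeue -u $USER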

Software

A sample of the most frequently used software on the rcfcluster is listed in the table below. For a complete list of all available software, type module avail on the command line after logging in to the cluster. An example of loading a module follows the table.

Name        | Version(s)              | Notes
boost       | 1.71.0                  |
conda       | 4.10.1                  |
fftw        | 3.3.8                   |
gcc         | 5.4.0; 8.3.0 (default)  |
jags        | 4.3.0                   |
julia       | 1.5.3; 1.7.2 (default)  |
make        | 3.82                    | compute-[22-26] only
magma       | 2.26-11                 | compute-[22-26] only
openblas    | 0.3.7                   |
OpenMPI     | 3.1.4                   |
Pari/GP     | 2.14.0 (pthread engine) | compute-[22-26] only
python2     | 2.7.5                   |
python3     | 3.6.8                   |
R           | 3.6.1; 4.2.0 (default)  |
sagemath    | 9.1                     |
singularity | 3.4.1                   |
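
These packages are exposed through environment modules, which is what module avail lists. The sketch below assumes a module named gcc/8.3.0 exists, as the table suggests; the names and versions reported by module avail on the cluster itself are authoritative.

    # List every module available from the current node
    module avail

    # Load a specific version and confirm it is on your PATH (the version string is a placeholder)
    module load gcc/8.3.0
    gcc --version

    # Unload it when finished
    module unload gcc/8.3.0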

Sending Feedback

Please remember that this cluster is still in beta testing, and we welcome all questions, suggestions, and general feedback. Direct all comments to rcfsupport@groups.umass.edu. Thank you!