SCARF Configuration

SCARF hardware is purchased on a yearly basis, with each new addition joining the same set of computational job queues so that users get a seamless experience. This means no large outlay of money is needed in any single year, and it lets us get the best quality hardware for our money on each occasion.

The original cluster was a 256-core, 128-node cluster purchased in 2004, which has since been retired from service and completely replaced by the incremental expansions listed below.

Current Hardware Configuration

The SCARF computational cluster comprises a number of host groups, which are listed in the table below.

| Host Group | Slurm Feature | CPU Type | Nodes | Cores/Node | Total Cores | Infiniband | Memory/Node | Total Memory | Disks |
|---|---|---|---|---|---|---|---|---|---|
| SCARF 23 | scarf23 | AMD Epyc 7502 | 132 | 32 | 4224 | HDR | 256 GB | 33792 GB | 1x 500 GB |
| SCARF 22 | scarf22 | AMD Epyc 7502 | 32 | 32 | 1024 | HDR | 256 GB | 8192 GB | 1x 500 GB |
| SCARF 21 | scarf21 | AMD Epyc 7502 | 168 | 32 | 5376 | HDR | 256 GB | 43008 GB | 1x 500 GB |
| SCARF 20 | scarf20 | AMD Epyc 7502 | 78 | 32 | 2560 | EDR | 256 GB | 21504 GB | 1x 500 GB |
| SCARF 19 | scarf19 | Intel Gold 6126 | 16 | 24 | 384 | EDR | 192 GB | 3072 GB | 1x 1 TB |
| SCARF 18 | scarf18 | Intel Gold 6126 | 148 | 24 | 3552 | EDR | 192 GB | 28416 GB | 1x 1 TB |
| SCARF 17 | scarf17 | Intel E5-2650 v4 | 201 | 24 | 4824 | EDR | 128 GB | 25728 GB | 1x 1 TB |
| SCARF 16 | scarf16 | Intel E5-2650 v3 | 56 | 20 | 1120 | FDR | 128 GB | 7168 GB | 1x 500 GB |
| SCARF Compute Total | | | 831 | | 23064 | | | 170880 GB | |
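
The Slurm feature names in the table can be used to restrict a job to a particular host group via a constraint. The following is a minimal sketch of a batch script targeting the scarf18 nodes; the program name is a placeholder, and any partition, account, or module settings are deliberately omitted since they are not given on this page (check the SCARF user documentation for those).

```bash
#!/bin/bash
# Example only: run a 24-task job on SCARF 18 nodes, selected by Slurm feature.
#SBATCH --job-name=feature-demo
#SBATCH --constraint=scarf18      # Slurm feature from the table above
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=24      # SCARF 18 nodes have 24 cores each
#SBATCH --time=01:00:00

srun ./my_mpi_program             # placeholder binary for illustration
```

Submit with `sbatch`; `sinfo -o "%N %f"` lists which feature tags are attached to which nodes.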

SCARF also has a number of nodes equipped with GPUs.

| Host Group | Slurm Feature | CPU Type | GPU Type/Count | Nodes | Cores/Node | Total Cores | Infiniband | Memory/Node | Total Memory | Disks |
|---|---|---|---|---|---|---|---|---|---|---|
| SCARF 23 | scarf23 | AMD 7302 | NVIDIA A100 x4 | 6 | 32 | 192 | HDR | 256 GB | 1536 GB | 1x 480 GB |
| SCARF 21 | scarf21 | AMD 7302 | NVIDIA A100 x4 | 6 | 32 | 192 | HDR | 256 GB | 1536 GB | 1x 480 GB |
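
GPUs on these nodes are requested through Slurm's generic resource (GRES) mechanism alongside the same feature constraints. The sketch below assumes the GPUs are exposed under the conventional gres name `gpu` and that no dedicated GPU partition needs to be named; both assumptions should be checked against the SCARF user documentation before use.

```bash
#!/bin/bash
# Example only: request one A100 on a scarf23 GPU node.
#SBATCH --job-name=gpu-demo
#SBATCH --constraint=scarf23      # Slurm feature from the GPU table above
#SBATCH --gres=gpu:1              # up to 4 A100s are listed per node
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
#SBATCH --time=01:00:00

srun nvidia-smi                   # report the GPU(s) allocated to the job
```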