LCC system overview

LCC is a traditional batch-processing cluster with high-speed interconnects and a shared filesystem.

It consists of:

  • 2 Admin nodes (Motherships)

| Intel Processor Number | Processor Class | Cores per node | Nodes In Cluster | Memory per node (GB) | Network | Node Names |
|---|---|---|---|---|---|---|
| Intel(R) Xeon(R) CPU E5-2670 v3 @ 2.30GHz | Haswell | 24 | 2 | 128 | InfiniBand EDR (100 Gbps) | mothership[1-2] |

  • 6 Login nodes

| Intel Processor Number | Processor Class | Cores per node | Nodes In Cluster | Memory per node (GB) | Network | Node Names |
|---|---|---|---|---|---|---|
| E5-2670 v3 | Haswell | 24 | 6 | 128 | InfiniBand EDR (100 Gbps) | login[001-006] |

  • 204 Compute nodes

| Node Type | Intel Processor Number | Processor Class | Cores per node | Nodes In Cluster | Total Cores in Cluster | Memory per node (GB) | GPU Type | Total GPUs | GPU RAM | Network | Node Names |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Skylake nodes | 6130 | Skylake | 32 | 56 | 1,792 | 192 | — | — | — | InfiniBand EDR (100 Gbps) | skylake[001-056] |
| Skylake with NVIDIA P100 cards | 6130 | Skylake | 32 | 2 | 64 | 192 | P100 | 8 | 16 GB | InfiniBand EDR (100 Gbps) | gpdnode[001-002] |
| Skylake with NVIDIA P100 cards | 6130 | Skylake | 32 | 10 | 320 | 192 | P100 | 40 | 12 GB | InfiniBand EDR (100 Gbps) | gphnode[001-010] |
| Skylake with NVIDIA V100 cards | 6130 | Skylake | 32 | 6 | 192 | 192 | V100 | 24 | 32 GB | InfiniBand EDR (100 Gbps) | gvnode[001-006] |
| Cascade Lake nodes | 6252 | Cascade Lake | 48 | 52 | 2,496 | 192 | — | — | — | InfiniBand EDR/2 (50 Gbps) | cascade[001-052] |
| Cascade Lake nodes | 6252 | Cascade Lake | 48 | 60 | 2,880 | 192 | — | — | — | InfiniBand EDR (100 Gbps) | cascadeb[001-060] |
| Cascade Lake with NVIDIA V100 cards | 6230 | Cascade Lake | 40 | 12 | 480 | 192 | V100 | 48 | 32 GB | InfiniBand EDR (100 Gbps) | gvnodeb[001-012] |
| Ice Lake with NVIDIA A100 cards | 6330 | Ice Lake | 56 | 4 | 224 | 256 | A100 | 8 | 80 GB | InfiniBand EDR (100 Gbps) | ganode[001-004] |
| Sapphire Rapids with NVIDIA H200 cards | 8480 | Sapphire Rapids | 112 | 2 | 224 | 2,048 | H200 | 16 | 141 GB | InfiniBand EDR (100 Gbps) | ghnode[001-002] |
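The "Total Cores in Cluster" column is just Nodes In Cluster × Cores per node. A quick arithmetic check of the table's figures (note the gvnodeb row: 12 nodes × 40 cores gives 480 cores):

```python
# Sanity-check the compute-node table: total cores = nodes * cores per node.
# (nodes, cores_per_node, total_cores) copied from the table above.
rows = [
    ("skylake",  56,  32, 1792),
    ("gpdnode",   2,  32,   64),
    ("gphnode",  10,  32,  320),
    ("gvnode",    6,  32,  192),
    ("cascade",  52,  48, 2496),
    ("cascadeb", 60,  48, 2880),
    ("gvnodeb",  12,  40,  480),
    ("ganode",    4,  56,  224),
    ("ghnode",    2, 112,  224),
]
for name, nodes, cores, total in rows:
    assert nodes * cores == total, name

print("total nodes:", sum(r[1] for r in rows))  # 204
print("total cores:", sum(r[3] for r in rows))  # 8672
```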

  • 1 Data transfer node

| Node Type | Intel Processor Number | Processor Class | Cores per node | Nodes In Cluster | Total Cores in Cluster | Memory per node (GB) | Network | Node Names |
|---|---|---|---|---|---|---|---|---|
| DTN node | 6152 | Skylake | 44 | 1 | 44 | 192 | Ethernet (40 Gbps external), InfiniBand EDR (100 Gbps internal) | dtn |

 

  • Lenovo GPFS (DSS-G) parallel file system 1: 1.3 PB usable (1.9 PB raw)

  • Lenovo GPFS (DSS-G) parallel file system 2: 1.6 PB usable (2.2 PB raw)

 

For details on SLURM queue names see the SLURM Queues page.
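As an illustration only, a job targeting one of the GPU nodes above would be submitted through SLURM with a batch script along these lines. The partition name, memory request, and the idea that GPUs are requested via `--gres` are placeholders/assumptions, not LCC's actual configuration; take the real queue names from the SLURM Queues page.

```shell
#!/bin/bash
# Hypothetical job-script sketch -- partition name below is a placeholder,
# not an actual LCC queue; see the SLURM Queues page for the real names.
#SBATCH --job-name=gpu-test
#SBATCH --partition=V100        # placeholder queue name
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=8       # gvnodeb nodes have 40 cores each
#SBATCH --gres=gpu:1            # one of the node's V100 cards
#SBATCH --mem=32G               # nodes have 192 GB total
#SBATCH --time=01:00:00

nvidia-smi                      # confirm the allocated GPU is visible
```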


Center for Computational Sciences