GPU Cluster

Our GPU cluster consists of nine rack servers. Each server is equipped with four NVIDIA GeForce GTX Titan XP GPU cards and is connected to a shared SSD storage server. More details are listed below, followed by a minimal usage sketch:

1 Shared iPS-42-324-EXP “Headnode / 24-bay Storage”

CPU: 2 x Intel Xeon Silver 4110, 2.1GHz (8-Core, HT, 2400 MT/s, 85W) 14nm
RAM: 192GB (12 x 16GB DDR4-2666 ECC Registered 1R 1.2V RDIMMs) Operating at 2666 MT/s Max
Management: IPMI 2.0 & KVM with Dedicated LAN – Integrated
NIC: Intel dual-port X557 10GBase-T Ethernet Controller – Integrated
Controller: SAS3 via Broadcom HBA 3008 AOC, IT Mode – up to 122 Devices
Backplane: SAS3 12Gb/s expander backplane, with 24 x SAS3 3.5-in drive slots
SAS3 Expander: Provides connectivity to all drives and the expansion port
Expansion Port: External SAS3 Connector (SFF-8644) for JBOD Expansion
PCIe 3.0 x8 – 3: Mellanox ConnectX-3 Pro VPI FDR InfiniBand and 40/56GbE Network Adapter, Single-Port QSFP
Drive Set: 6 x Seagate 10TB Exos X10 HDD (12Gb/s, 7.2K RPM, 256MB Cache, 512e) 3.5-in SAS
Drive: 2 x Intel 240GB DC S4500 Series 3D TLC (6Gb/s, 1 DWPD) 2.5-in SATA

9 Rackform R353.v6 Servers

CPU: 2 x Intel Xeon E5-2650v4, 2.2GHz (12-Core, HT, 30MB Cache, 105W) 14nm
RAM: 128GB (8 x 16GB DDR4-2400 ECC Registered 1R 1.2V DIMMs) Operating at 2400 MT/s Max
NIC: Intel i350 Dual-Port RJ45 Gigabit Ethernet Controller – Integrated
Management: IPMI 2.0 & KVM with Dedicated LAN – Integrated
Drive Controller: 4 Ports 6Gb/s SATA3 via Intel C612 Chipset
PCIe 3.0 x16 – 1: NVIDIA GeForce GTX Titan XP Graphics Card, 12GB GDDR5X, 250W, PCIe 3.0 x16, DP 1.4/HDMI 2.0b – Active Cooling
PCIe 3.0 x16 – 2: NVIDIA GeForce GTX Titan XP Graphics Card, 12GB GDDR5X, 250W, PCIe 3.0 x16, DP 1.4/HDMI 2.0b – Active Cooling
PCIe 3.0 x16 – 3: NVIDIA GeForce GTX Titan XP Graphics Card, 12GB GDDR5X, 250W, PCIe 3.0 x16, DP 1.4/HDMI 2.0b – Active Cooling
PCIe 3.0 x16 – 4: NVIDIA GeForce GTX Titan XP Graphics Card, 12GB GDDR5X, 250W, PCIe 3.0 x16, DP 1.4/HDMI 2.0b – Active Cooling
PCIe 3.0 x8 (x16) – 1: Mellanox ConnectX-3 Pro VPI FDR InfiniBand and 40/56GbE Network Adapter, Single-Port QSFP, PCIe 3.0 x8
Hot-Swap Drives: 1 x Seagate 2TB Exos 7E2000 HDD (6Gb/s, 7.2K RPM, 128MB Cache, 512e) 2.5-in SATA
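
In practice, jobs on this cluster typically run as one process per GPU across the nine 4-GPU server nodes. The sketch below is a minimal, hypothetical example (not taken from the cluster documentation), assuming PyTorch with the NCCL backend and a launcher such as torchrun that sets the RANK, LOCAL_RANK, and WORLD_SIZE environment variables; only the node and GPU counts come from the configuration above.

    # Minimal multi-node DistributedDataParallel sketch (hypothetical example).
    # Assumed launch on each of the nine GPU nodes, e.g.:
    #   torchrun --nnodes=9 --nproc_per_node=4 --rdzv_backend=c10d \
    #            --rdzv_endpoint=<headnode>:29500 train.py
    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    def main():
        local_rank = int(os.environ["LOCAL_RANK"])   # set by torchrun
        torch.cuda.set_device(local_rank)            # one process per Titan XP card
        dist.init_process_group(backend="nccl")      # NCCL over the InfiniBand fabric

        model = torch.nn.Linear(1024, 1024).cuda(local_rank)  # placeholder model
        model = DDP(model, device_ids=[local_rank])
        # ... training loop would go here ...
        dist.destroy_process_group()

    if __name__ == "__main__":
        main()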

GPU Desktops

Besides the GPU cluster, our lab also has three Alienware Area-51 GPU desktops. Each desktop is equipped with two NVIDIA® GeForce® RTX 2080 Ti OC GPU cards with 11GB of GDDR6. The CPU is an Intel® Core™ i9-7980XE, and the DRAM is 64GB of quad-channel DDR4 at 2666MHz.
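
A quick way to confirm that both cards in a desktop are visible to a deep learning framework is to enumerate them from Python; the following is a minimal sketch, assuming PyTorch with CUDA support is installed (the expected count of 2 comes from the configuration above).

    # GPU visibility check (hypothetical example, assuming PyTorch with CUDA).
    import torch

    print(torch.cuda.device_count())                 # expected: 2 on these desktops
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(i, props.name, round(props.total_memory / 2**30, 1), "GiB")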

CASS Laboratory

The Computer Architecture and Storage System Research (CASS) laboratory was founded in 2002 at the University of Nebraska-Lincoln by PI Wang and has grown to seven student investigators at the University of Central Florida, in addition to seven Ph.D. alumni. The CASS lab is well equipped with state-of-the-art computing platforms for file storage and high-performance computing research. In 2018, we launched a new 10-node Silicon Mechanics Rackform R353.v6 GPU cluster sponsored by the 2017 Defense University Research Instrumentation Program. Its headnode is configured with two Intel Xeon Silver 4110 CPUs (2.1GHz, 8-core, HT, 2400 MT/s, 85W, 14nm) and 192GB of RAM (12 x 16GB DDR4-2666 ECC Registered 1R 1.2V RDIMMs). The nine server nodes are each configured with two Intel Xeon E5-2650v4 CPUs (2.2GHz, 12-core, HT, 30MB cache, 105W, 14nm), 128GB of RAM (8 x 16GB DDR4-2400 ECC Registered 1R 1.2V DIMMs), and four NVIDIA GeForce GTX Titan XP graphics cards with 12GB of GDDR5X each. In total, the CASS distributed GPU cluster is equipped with 36 NVIDIA GeForce GTX Titan XP graphics cards, 232 Intel Xeon CPU cores (16 on the headnode plus 24 x 9 on the server nodes), 1.344TB of RAM, 40TB of flash SSD direct-attached storage, and a Mellanox SX6005 SwitchX-2 FDR InfiniBand switch (12-port QSFP, dual power supplies, short depth, power-to-port airflow). In addition, the lab houses six Dell Precision series high-performance workstations, several 10-SCSI-disk RAID and all-flash/SSD array test beds, an OpenSSD FPGA development platform, three Dell Precision T3420 SFF high-performance workstations, two Dell OptiPlex GX series desktops, as well as an Agilent 34970A 20-channel data acquisition system.
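
The aggregate figures above follow directly from the per-node configuration; the short calculation below (not part of the original equipment documentation) reproduces them from the headnode and server-node specs.

    # Aggregate capacity of the CASS GPU cluster, derived from the per-node specs above.
    headnode_cores = 2 * 8      # 2 x Xeon Silver 4110, 8 cores each
    node_cores = 2 * 12         # 2 x Xeon E5-2650v4, 12 cores each
    num_nodes = 9               # GPU server nodes

    total_cores = headnode_cores + num_nodes * node_cores    # 16 + 9 * 24 = 232
    total_ram_gb = 192 + num_nodes * 128                     # 192 + 1152 = 1344 GB (1.344TB)
    total_gpus = num_nodes * 4                               # 36 Titan XP cards

    print(total_cores, total_ram_gb, total_gpus)             # 232 1344 36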

PRObE

The Parallel Reconfigurable Observational Environment (PRObE) is a collaboration between the National Science Foundation (NSF), the New Mexico Consortium (NMC), Los Alamos National Laboratory (LANL), Carnegie Mellon University (CMU), and the University of Utah. Starting in October 2010, computing facilities were constructed at NMC and computers were installed to make available a systems research facility that is unique in the world. We have access to two PRObE clusters for prototyping: Marmot, a 128-node, 256-core cluster, and Susitna, a 34-node, 2176-core cluster. As one of the key participants, the PI’s lab has access to all PRObE cluster resources for prototyping and development. See https://www.cass.eecs.ucf.edu/?page_id=114 for more details.

I2Lab at UCF

The I2 (Interdisciplinary Information Science and Technology) Laboratory at UCF aims to establish partnerships with universities and research institutions in the US and abroad to solve challenging scientific and societal problems. It consists of several medium-scale cluster machines, including a 64-node Dell 1950 server cluster and a 65-node Sun V20 cluster.

UCF HPC Computing Resources

The UCF Advanced Research Computing Center (ARCC) houses high-performance computing (HPC) resources that are subsidized by the UCF Provost and the Vice President of Research and Commercialization for use in research by faculty (and their students) across the campus. In addition, advanced network capabilities exist and additional ones are being installed (a so-called “Science DMZ”). Research supported by the ARCC covers a wide variety of areas, including Engineering, Modeling and Simulation, Optics, Nanoscience, Chemistry, Physics, Astronomy, and Biology. Qualified personnel maintain and upgrade the center’s resources and help researchers who want to use them. The university administration has committed to providing ongoing support for the ARCC as a central resource for the campus. Center personnel are actively engaged in seeking funding to expand the center’s capability to perform HPC research, in addition to providing “production” computing resources for UCF researchers. The ARCC is a full member of Florida’s Sunshine State Education and Research Computing Alliance. Currently, the ARCC includes an HPC system (known as Stokes) with over 3,000 cores and 240TB of storage. Stokes uses a 56Gb InfiniBand interconnect between all nodes and has a 20Gb connection to the UCF campus core network. A small visualization cluster with GPU and Intel Phi cards will also soon be available. In addition, the ARCC has 250kW of available power with an uninterruptible power supply and generator, all supported by 65 tons of dedicated cooling. The PI’s team already has access to these machines.