Computing Resources

The data center in the CMRR is a 750 sq. ft. room with redundant power (a 100 kW uninterruptible power supply and a diesel generator) and redundant cooling systems (chilled water and glycol). The CMRR has an all-fiber-optic Ethernet network throughout the building, with 10 Gb connections to each magnet room. There is also a 10 Gb uplink to the University of Minnesota backbone, which has a local point of presence for Internet2, enabling 10 Gb connectivity to all other participating universities. The data center houses over 100 servers to handle the post-processing, analysis, and storage of the large volume of data generated by the 9 MR scanners operating in the building.
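
As a rough illustration of what the 10 Gb links mean in practice (a sketch using only line-rate arithmetic; the 1 TB dataset size is hypothetical, and real transfers will be slower due to protocol and storage overhead):

    # Illustrative arithmetic only: the 1 TB dataset size is a hypothetical
    # example, and real throughput is below line rate.
    LINK_GBPS = 10                    # 10 Gb network link, as described above
    DATASET_TB = 1.0                  # hypothetical imaging dataset

    line_rate_bytes_per_s = LINK_GBPS * 1e9 / 8        # 10 Gb/s = 1.25 GB/s
    seconds = DATASET_TB * 1e12 / line_rate_bytes_per_s
    print(f"~{seconds / 60:.1f} minutes per {DATASET_TB:.0f} TB at line rate")
    # -> ~13.3 minutes per 1 TB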

Data Storage

Fifty (50) servers provide NFS file sharing with over 800 terabytes of RAID data storage; an additional 500 terabytes are used for XFS snapshot backups, for an aggregate of 1.3 petabytes of HDD data storage. The data center also houses a 500 TB storage rack used as a disaster recovery contingency for the Human Connectome Project. An Overland Storage NEO XL 80 LTO-6 SAS tape library is used for automated backups and is currently configured with 80 tape slots and 4 tape drives for LTO-6 tapes, each with a native capacity of 2.5 TB.
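
The quoted capacities are internally consistent, as the quick check below shows (a sketch using only the figures above; the 200 TB tape total assumes every slot holds an LTO-6 cartridge, which is an illustrative assumption):

    # Sanity check of the storage figures quoted above.
    nfs_raid_tb = 800            # NFS/RAID primary storage
    snapshot_tb = 500            # XFS snapshot backup storage
    print(f"Aggregate HDD storage: {(nfs_raid_tb + snapshot_tb) / 1000:.1f} PB")  # 1.3 PB

    # Tape library: 80 slots at 2.5 TB native per LTO-6 cartridge
    # (assumes a fully populated library).
    tape_slots, lto6_native_tb = 80, 2.5
    print(f"Tape library native capacity: {tape_slots * lto6_native_tb:.0f} TB")  # 200 TB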

Compute Nodes

Thirty (30) of the CMRR servers are compute nodes, with an aggregate of over 1,500 CPU cores and 8 TB of memory. The three largest compute nodes each have 72 cores and 512 GB of memory, and 28 additional high-performance compute nodes each have at least 128 GB of memory. Many of these HPC nodes are equipped with modern NVIDIA GPU accelerator cards for off-loading compute-intensive workloads. Four of these systems feature the latest NVIDIA A100 GPUs with NVLink GPU-to-GPU interconnects, providing double-precision performance of up to 9 teraflops, which is indispensable for RF simulations, multichannel reconstruction, and deep learning workflows.
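
A minimal sketch of how a deep learning or reconstruction job might confirm it is running on one of the GPU-equipped nodes before off-loading work (PyTorch is assumed here purely for illustration; this section does not name a specific framework):

    # Minimal sketch, assuming PyTorch is installed on the compute node;
    # the framework choice is an illustration, not part of the facility spec.
    import torch

    if torch.cuda.is_available():
        for i in range(torch.cuda.device_count()):
            props = torch.cuda.get_device_properties(i)
            print(f"GPU {i}: {props.name}, {props.total_memory / 1e9:.0f} GB memory")
        device = torch.device("cuda:0")   # off-load compute-intensive work here
    else:
        device = torch.device("cpu")      # fall back to the node's CPU cores

    # Example off-load: a double-precision matrix product on the selected device,
    # the kind of operation that benefits from the A100s' FP64 throughput.
    a = torch.randn(4096, 4096, dtype=torch.float64, device=device)
    b = torch.randn(4096, 4096, dtype=torch.float64, device=device)
    c = a @ b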