Holland Computing Center

The Holland Computing Center, often abbreviated HCC, is the high-performance computing core for the University of Nebraska System. HCC has locations in the June and Paul Schorr III Center for Computer Science & Engineering at the University of Nebraska-Lincoln and the Peter Kiewit Institute at the University of Nebraska Omaha.[1] The center was named after Omaha businessman Richard Holland, who donated considerably to the university for the project.[2]

Both locations provide various research computing services and hardware. The retrofitted facilities at the PKI location house the Crane supercomputer,[3] which "is used by scientists and engineers to study topics such as nanoscale chemistry, subatomic physics, meteorology, crashworthiness, artificial intelligence and bioinformatics",[4] and Anvil, the Holland Computing Center's cloud resource based on the OpenStack architecture. Other resources include "Rhino" for shared-memory processing and "Red" for LHC grid computing.

Active resources

Crane

The Crane supercomputer is HCC's most powerful system and serves as the primary computational resource for many researchers across a variety of disciplines within the University of Nebraska system. When it entered service in 2013, Crane was ranked 474th on the TOP500 list.[5] As of May 2019, Crane is composed of 548 nodes offering a total of 12,236 cores, 68,000 GB[clarification needed] of memory, and 57 Nvidia GPUs. Crane has 1.5 PB of available Lustre storage (1 PB = 1 million gigabytes).

In 2017, Crane received a major upgrade, adding nodes that use the Intel Omni-Path Architecture interconnect.
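
Workloads on a cluster like Crane are typically distributed across many cores and nodes at once. The sketch below illustrates one common pattern for such jobs, a parallel reduction written with MPI through the mpi4py package; the article does not describe Crane's software environment, so the library and the toy computation are assumptions chosen purely for illustration.

    # Minimal sketch of a distributed job of the kind a multi-node cluster runs.
    # Assumes MPI and mpi4py are available; not a description of Crane's actual stack.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD        # communicator spanning every process in the job
    rank = comm.Get_rank()       # this process's index (0 .. size-1)
    size = comm.Get_size()       # total number of processes launched

    # Each rank computes a partial sum; rank 0 collects the global result.
    local_sum = sum(range(rank * 1_000_000, (rank + 1) * 1_000_000))
    total = comm.reduce(local_sum, op=MPI.SUM, root=0)

    if rank == 0:
        print(f"{size} ranks computed a total of {total}")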

Rhino

Rhino is the latest addition to HCC's resources, taking the place of the former Tusker supercomputer and reusing nodes from both Tusker and Sandhills. At its creation in June 2019, Rhino was composed of 112 nodes offering a total of 7,168 cores and 25,856 GB of memory. The cluster has 360 TB of Lustre storage available.

Red

Red is the resource for the University of Nebraska-Lincoln's US CMS Tier-2 site. When it was created in August 2005, the cluster contained 111 nodes with 444 AMD Opteron 275 or AMD Opteron 2216 processors and 100 TB of storage. Over time, Red has grown to 344 nodes with 7,280 cores, mixed between Intel Xeon and AMD Opteron processors, and 7 PB of storage using the Hadoop Distributed File System.

Red's primary focus is the CMS experiment in Switzerland, although it has also supported other work, including analysis related to LIGO's gravitational wave discovery.
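
Because Red's 7 PB of storage is exposed through the Hadoop Distributed File System, datasets on it are addressed as HDFS paths. The sketch below shows how a client might list such a namespace using the pyarrow library; the host, port, and path are hypothetical placeholders rather than actual HCC endpoints, and pyarrow itself is only an assumed choice of client.

    # Minimal sketch of browsing an HDFS namespace like the one backing Red's storage.
    # The namenode address and path are placeholders; requires pyarrow with libhdfs
    # configured on the client machine.
    from pyarrow import fs

    hdfs = fs.HadoopFileSystem(host="namenode.example.edu", port=8020)

    # List entries under a hypothetical dataset directory and report their sizes.
    selector = fs.FileSelector("/store/user/example", recursive=False)
    for info in hdfs.get_file_info(selector):
        size_gb = (info.size or 0) / 1e9
        print(f"{info.path}  {size_gb:.2f} GB")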

Attic

Attic is HCC's near-line data archival system, which researchers can use either in conjunction with the computing resources offered or independently. Attic currently has 1 PB of available data storage, backed up daily at both the Omaha and Lincoln locations.

Anvil

Anvil is HCC's cloud computing resource, based on the OpenStack software. Anvil allows researchers to create virtual machines for research or for testing concepts that are not well suited to a cluster environment or that require root access. Anvil currently has 1,500 cores, 19,400 GB of memory, and 500 TB of available storage for use by researchers.
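
Since Anvil is built on OpenStack, researchers typically provision virtual machines through OpenStack's standard interfaces. The sketch below shows one way this could look using the openstacksdk Python library; the cloud name, image, flavor, network, and keypair names are hypothetical placeholders and do not reflect HCC's actual configuration.

    # Minimal sketch of launching a VM on an OpenStack cloud such as Anvil using
    # openstacksdk. All names below are placeholders, not HCC-specific values.
    import openstack

    # Credentials are read from a clouds.yaml entry, here hypothetically named "anvil".
    conn = openstack.connect(cloud="anvil")

    image = conn.compute.find_image("ubuntu-22.04")      # placeholder image name
    flavor = conn.compute.find_flavor("m1.medium")       # placeholder flavor name
    network = conn.network.find_network("research-net")  # placeholder network name

    server = conn.compute.create_server(
        name="research-vm",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
        key_name="my-keypair",                           # placeholder SSH keypair
    )

    # Block until the instance reaches the ACTIVE state, then report its status.
    server = conn.compute.wait_for_server(server)
    print(server.name, server.status)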

Retired resources

RCF

RCF, the Research Core Foundation, was in use from March 1999 to January 2004. It was an SGI Origin 2000 machine with 8 CPUs, 24 GB of memory, 108 GB of storage, and a 155 Mbit/s connection to Internet2.

Homestead

Homestead was the successor to RCF, running from January 2004 to September 2008. Its name refers to the Homestead Act of 1862, under which much of Nebraska was settled. The cluster consisted of 16 nodes, each with 2 MIPS R10k CPUs, 256 MB of memory, and 6 GB of storage.

Bugeater

Bugeater was the first cluster at the Holland Computing Center, running from October 2000 to 2005. Its namesake is the original University of Nebraska mascot, the Bugeaters. The cluster was a prototype Beowulf cluster consisting of 8 nodes, each with 2 Pentium III CPUs and 20 GB of storage.

Sandhills

Sandhills was originally created in February 2002, and the original hardware was retired in March 2007. It consisted of 24 nodes, each with 2 Athlon MP CPUs, 1 GB of memory, and 20 GB of storage.

In 2013, Sandhills received a large upgrade to a mix of 8-, 12-, and 16-core AMD Opteron processors. The cluster had 108 nodes with 5,472 cores, 18,000 GB of memory, and 175 TB of storage in total. This revision was retired in November 2018 and is now part of the Rhino cluster.

Prairiefire

Prairiefire was the first notable cluster at the Holland Computing Center, ranking in the TOP500 for three consecutive years,[6] placing 107th in 2002, 188th in 2003, and 292nd in 2004. Prairiefire takes its name from the Nebraska prairies. The cluster ran from August 2002 to 2006. At the time of its 2002 TOP500 placement, it had 128 nodes, each with 2 AMD Athlon MP CPUs and 2 GB of memory. Prairiefire was retired in 2012, when it was merged into the newer Sandhills cluster.

Merritt

Named after the Merritt Reservoir, Merritt ran from August 2007 to June 2012. Merritt was an SGI Altix 3700 with 64 Itanium 2 processors, 512 GB of memory, and 8 TB of storage.

Firefly

Firefly was another notable cluster, placing 43rd in the TOP500[7] at its creation in 2007. Before its retirement in July 2013, Firefly consisted of 1,151 nodes, each with 2 dual-core AMD Opteron processors and 8 GB of memory, with a total of 150 TB of storage. During its service, 140 nodes were upgraded to dual quad-core engineering-sample processors from AMD.

Tusker

Tusker was the Holland Computing Center's high-memory cluster, designed so that researchers could run jobs requiring large quantities of memory, with nodes ranging from 256 GB to 1 TB of memory each. In total, Tusker had 5,200 cores, 22 TB of memory, and 500 TB of Lustre storage space. Tusker was retired in April 2019 and is now part of the Rhino cluster.

References