Purdue IT maintains many different resources for computation. Below is a brief introduction to the current computational resources; more information and detailed documentation are available for each resource listed.
Rossmann is a Community Cluster optimized for communities running applications subject to heightened security requirements, such as data subject to the NIH Genomic Data Sharing (GDS) policy, licensed data, or healthcare data. Rossmann consists of Dell PowerEdge compute nodes with either a CPU or GPU focus. CPU nodes feature two 96-core AMD Epyc Genoa processors (192 cores per node) and 1.5 TB of memory. GPU nodes feature two 64-core AMD Epyc Genoa processors (128 cores per node), 1.5 TB of memory, and two NVIDIA H100 GPUs.
Gautschi is a Community Cluster optimized for communities running traditional, tightly-coupled science and engineering applications. Gautschi is being built through a partnership with Dell and AMD in the fall of 2024. Gautschi consists of Dell PowerEdge compute nodes with two 96-core AMD Epyc "Genoa" processors (192 cores per node) and either 384 GB or 1.5 TB of memory. All nodes have 200 Gbps NDR Infiniband interconnects and a 6-year warranty. Gautschi also includes a small number of nodes featuring two NVIDIA L40 GPUs each.
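Tightly-coupled applications of the kind these clusters target keep many processes in constant communication across nodes, which is why a low-latency Infiniband interconnect matters. Below is a minimal sketch of such a workload using MPI; the mpi4py package and the mpirun launch line are illustrative assumptions, not items taken from Gautschi's documentation.

# Illustrative MPI job: each rank computes a partial sum, then the ranks
# combine their results with a reduction (the inter-process communication
# that benefits from a fast interconnect). Assumes mpi4py and NumPy.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Each rank handles a disjoint slice of the problem.
local_sum = np.arange(rank, 10_000_000, size, dtype=np.float64).sum()

# Combine the partial results on rank 0.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)

if rank == 0:
    print(f"Sum computed across {size} ranks: {total}")

Launched with something like "mpirun -n 192 python sum.py", the ranks would be spread across the cores of one or more nodes.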
Rowdy is a technology testbed intended for evaluation and benchmarking of new computing, networking and storage architectures, including accelerators and CPUs.
Negishi is a Community Cluster optimized for communities running traditional, tightly-coupled science and engineering applications. Negishi was built through a partnership with Dell and AMD over the summer of 2022. Negishi consists of Dell compute nodes with two 64-core AMD Epyc "Milan" processors (128 cores per node) and 256 GB of memory. All nodes have 100 Gbps HDR Infiniband interconnect and a 6-year warranty.
Geddes is a Community Composable Platform optimized for composable, cloud-like workflows that are complementary to the batch applications run on Community Clusters. Funded by the National Science Foundation under grant OAC-2018926, Geddes consists of Dell compute nodes with two 64-core AMD Epyc "Rome" processors (128 cores per node).
Bell is a Community Cluster optimized for communities running traditional, tightly-coupled science and engineering applications. Bell was built through a partnership with Dell and AMD over the summer of 2020. Bell consists of Dell compute nodes with two 64-core AMD Epyc "Rome" processors (128 cores per node) and 256 GB of memory. All nodes have 100 Gbps HDR Infiniband interconnect and a 6-year warranty.
Anvil is a powerful supercomputer deployed at Purdue in 2021 through a $10M NSF grant, providing advanced computing capabilities to support a wide range of computational and data-intensive research, from traditional high-performance computing to modern artificial intelligence applications.
Deployed in 2019, Weber is Purdue's specialty high-performance computing cluster for data, applications, and research covered by export control regulations such as EAR or ITAR, or requiring compliance with NIST SP 800-171. Weber consists of Dell compute nodes with two 64-core AMD EPYC 7713 processors, and Dell GPU nodes with two 8-core Intel Xeon 4110 processors and a Tesla V100 GPU. All nodes have 56 Gbps EDR Infiniband interconnect.
Gilbreth is a new type of addition to Purdue's Community Clusters, designed specifically for applications that can take advantage of GPU accelerators. While applications must be specially crafted to use GPUs, a GPU-enabled application can often run many times faster than the same application on general-purpose CPUs. Because GPU-equipped nodes cost more, Gilbreth is offered with new purchase options that allow shared access at a lower price point than the full cost of a node.
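As a rough illustration of what "specially crafted to use GPUs" means, the sketch below moves an array to GPU memory, runs a matrix multiply there, and copies the result back. It assumes the CuPy package, which is used here only for illustration and is not necessarily part of Gilbreth's software stack.

# Illustrative GPU offload with CuPy (an assumption for this sketch):
# the data is copied to the GPU, the heavy computation runs there, and
# the result is copied back to host memory.
import numpy as np
import cupy as cp

x_host = np.random.rand(4096, 4096).astype(np.float32)

x_gpu = cp.asarray(x_host)   # transfer the array to GPU memory
y_gpu = x_gpu @ x_gpu        # the matrix multiply executes on the GPU
y_host = cp.asnumpy(y_gpu)   # transfer the result back to the host

print(y_host.shape)

The same matrix multiply written against NumPy alone would run entirely on the CPU; the explicit transfers and the GPU-aware library are what make the code GPU-enabled.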
The Scholar cluster is open to Purdue instructors from any field whose classes include assignments that could make use of supercomputing: from high-end graphics rendering and weather modeling, to simulating millions of molecules and exploring masses of data, to understanding the dynamics of social networks.
Hammer is optimized for Purdue communities running loosely-coupled, high-throughput computing workloads. Hammer was initially built through a partnership with HP and Intel in April 2015 and was expanded in late 2016. Hammer will be expanded annually, with each year's purchase of nodes remaining in production for 5 years from their initial purchase.
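Loosely-coupled, high-throughput work consists of many independent tasks that never communicate with each other, so it scales by simply running more tasks at once. The sketch below illustrates that pattern with Python's standard library; the task function is a placeholder, not an application from Hammer's documentation.

# Illustrative high-throughput pattern: a pool of workers runs many
# independent tasks with no communication between them.
from concurrent.futures import ProcessPoolExecutor
import math

def independent_task(n):
    # Placeholder for one unit of work, e.g. processing one input file.
    return sum(math.sqrt(i) for i in range(n))

if __name__ == "__main__":
    inputs = [100_000 + i for i in range(64)]
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(independent_task, inputs))
    print(f"Finished {len(results)} independent tasks")

On a cluster, each task would more typically be submitted as its own batch job, but the structure is the same: independent work units and no inter-task communication.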