Gilbreth

  • Unscheduled Gilbreth outage

    Gilbreth is experiencing scheduling issues and jobs have been paused while RCAC works to resolve this issue. Running jobs have also been impacted, so you will need to resubmit your job if it was running when the job scheduling resumes. We will provid...

  • Multiple clusters outage

    Multiple clusters have been powered off in MATH G109 datacenter due to a water issue in the building. Affected systems are Bell, Brown, Geddes, Gilbreth and Negishi. We will provide an update by 5:00 PM today.

  • Update to Coffee-Hour Consultation Schedule

    A few announcements for the RCAC community regarding our regular Coffee-Hour Consultation services. Support staff will be unavailable both today (Monday) and tomorrow (Tuesday) because of other scheduled events. Notably, tomorrow is our Annual Cyberi...

  • Data Depot degraded performance on RCAC clusters

    Users of Data Depot on RCAC clusters are currently experiencing significant performance degradation. The symptoms manifest as delays in listing or accessing files in /depot, significant lags in terminal sessions (especially if you have Data Depot in...

  • RCAC Support Updates

    Hello, PurdueIT is replacing our ticketing system, Footprints, with a new product called Team Dynamix (TDX). Email to rcac-help@purdue.edu will begin to forward to the TDX environment effective Wednesday, July 12, 2023. You may notice a difference in...

  • BoilerKey Transition to Purdue Login

    Overnight on June 26-27th (Monday-Tuesday), all Purdue systems which use BoilerKey, including RCAC clusters and other systems, will switch to the new Purdue Login. For more information about this change, please see the following documentation: https:...

  • Gilbreth Cluster Maintenance

    The Gilbreth cluster will be unavailable Wednesday, June 7, 2023 from 7:30am - 5:00pm EDT for scheduled maintenance. The cluster will return to full production by Wednesday, June 7th, 2023 at 5:00pm EDT. During this time, Gilbreth will have the opera...

  • Unscheduled Gilbreth outage

    The Gilbreth cluster began experiencing issues with its scheduler spool filesystem around 10:30pm EDT on Saturday, March 18th, 2023. The problem manifests as an I/O error during new batch job submissions and in Open OnDemand gateway applications. Int...

  • Research Computing Holiday Break

    Research Computing personnel will observe the university winter break beginning at 5:00pm EST on Thursday, December 22nd, 2022, and will resume normal business hours on Tuesday, January 3rd, 2023. During this time, Research Computing services will continue...

  • Scheduled Gilbreth Upgrade

    Gilbreth will be unavailable due to maintenance from November 30, 2022 8:00am until December 1, 2022 5:00pm EST to allow for an expansion to Gilbreth’s resources. This maintenance will complete the remaining work previously announced in the Gilbreth...

  • Gilbreth Queue Changes

    To support the expansion of Gilbreth and related pricing changes, there will be several changes to queues on Gilbreth. These changes are designed to increase the availability of GPUs on Gilbreth and reduce wait time: Each lab/PI will have a named q...

  • Scheduling paused on multiple clusters

    The Bell, Brown, Gilbreth, Halstead, and Scholar clusters began experiencing issues with their Data Depot mounts around 9:50am EST. Engineers are currently diagnosing the issue and are working to identify a fix. Job scheduling has been paused while...

  • Scheduling Paused on Brown, Gilbreth, Halstead, and Hammer

    As of 11:30am EDT, the Brown, Gilbreth, Halstead, and Hammer clusters began experiencing issues with their filesystems which may cause login failures. Engineers are currently investigating the root cause, and in the interim, job scheduling has been p...

  • MATH data center cooling outage

    The Math building data center began experiencing issues with its cooling system around 1:40pm EDT. To minimize thermal load on the cooling infrastructure, job scheduling has been paused and all idle compute nodes on Anvil, Bell, Geddes, Gilbreth, and...

  • [REVISED] Scheduled Gilbreth Upgrade

    Gilbreth will be unavailable due to maintenance on July 20th 8:00am-5:00pm to allow for an expansion to Gilbreth’s resources. In response to the growing demand for hardware which can facilitate GPU-accelerated workloads, Gilbreth is being expanded to...

  • Replacement of "notebook.gilbreth" by Open OnDemand

    As part of the July 20th [REVISED] Scheduled Gilbreth Upgrade, the Jupyter notebook service at notebook.gilbreth.rcac.purdue.edu will be retired. The old notebook service shares a single GPU on the Gilbreth front-ends among many users, leading to res...

  • Gilbreth New Home Directories

    As part of the July 20th Gilbreth Maintenance, home directories on Gilbreth will be separated from the legacy home filesystem shared with several other clusters. This move will allow Gilbreth to grow and maintain home directory storage for years to c...

  • Gilbreth upgrade and revised pricing structure effective July 2022

    Research Computing is excited to now offer GPUs under the same arrangement as CPU Community Clusters in an expanded Gilbreth cluster thanks to a significant investment by ITaP and EVPRP! All the benefits of Community Clusters and more: Cheaper than...

  • Gilbreth scratch degraded performance

    Following last night's scratch outage, the Gilbreth scratch filesystem is currently functional but operating with partially degraded performance. Engineers have opened a support ticket with the vendor and are monitoring the state of the filesystem continuou...

  • Unscheduled Gilbreth cluster outage

    The Gilbreth cluster began experiencing issues with its scratch filesystem around 7:00pm EDT. Engineers are currently diagnosing the issue and are working to identify a fix. Job scheduling has been paused while this issue is being addressed. We will...