
Halstead

  • Halstead Cluster Maintenance Upgrade

    The Halstead cluster will be taken down on Monday, May 14th, 2018 at 8:00am EDT for a planned upgrade to CentOS7. This process is expected to take two days, and Halstead will not return to service until 5:00pm on %enddate%. This upgrade may affect ap...

  • Job Scheduling Issue on Clusters

    As of Monday, April 16th, 2018 at 10:00am EDT, Halstead, HalsteadGPU, and Hammer are not properly scheduling new jobs due to a problem with the Moab scheduler. Existing jobs are unaffected. We are working with the vendor to address this and expect...

  • Common CentOS7 Upgrade Questions

    The Halstead, Rice, and Snyder clusters will be upgraded to the newer CentOS7 operating system in May 2018 (detailed announcements in: Rice Upgrade to CentOS7, Halstead Upgrade to CentOS7, Snyder Upgrade to CentOS7). Along with the operating system upgrade,...

  • New Halstead Scratch Storage

    The Halstead and HalsteadGPU scratch storage will be moving to a new storage system over the course of Thursday, April 12, 2018. There will not be any automatic transfer of files from your old scratch space to your new scratch space. You will find there are two ne...

  • Halstead Upgrade to CentOS7

    In order to continue to offer a current computational platform for research at Purdue, Halstead is going to receive a complete upgrade to CentOS7 - the Community Development Platform for the Red Hat family of Linux distributions. CentOS7 is just one...

  • All Clusters Outage

    All Research Computing systems suffered an unplanned outage Saturday, March 24th, 2018 at 8:15pm EDT due to a widespread power failure in the area. Thanks to diligent efforts all night and today by many teams across ITaP, all computational clusters h...

  • Halstead Scratch Upgrade

    Halstead and HalsteadGPU will have a new scratch filesystem installed on Tuesday, March 20th, 2018. All access to these systems will be stopped at 5:00am EDT to allow engineers to install the new hardware. Any jobs whose requested...

  • New Windows Network Drive (SMB) Access

    All Windows network drives (SMB/CIFS access) for scratch filesystems and home directories on all clusters have moved! You should change your mapped network drives to connect to: \\scratch.my_cluster_name_here.rcac.purdue.edu\my_cluster_name_here Or f...
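    For illustration only, the sketch below shows one way to remap a drive letter to the new scratch path from a Windows machine using Python; the cluster name "halstead" and the drive letter Z: are placeholder examples, not part of the announcement, so substitute your own.

        # Hypothetical example: remap a Windows drive letter to the new scratch
        # SMB path. Cluster name and drive letter are placeholders.
        import subprocess

        CLUSTER = "halstead"   # example cluster; use your own cluster name
        DRIVE = "Z:"           # example drive letter
        NEW_PATH = rf"\\scratch.{CLUSTER}.rcac.purdue.edu\{CLUSTER}"

        # Drop any stale mapping for this letter, then map the new path so it
        # persists across reboots.
        subprocess.run(["net", "use", DRIVE, "/delete"], check=False)
        subprocess.run(["net", "use", DRIVE, NEW_PATH, "/persistent:yes"], check=True)
        print(f"{DRIVE} now mapped to {NEW_PATH}")

    The same remapping can also be done through the Windows Explorer "Map network drive" dialog; the script simply records the new UNC path in a repeatable form.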

  • New Thinlinc Servers

    Thinlinc access to all clusters is moving! Effective immediately, please direct your Thinlinc client or browser-based Thinlinc session to: desktop.my_cluster_name_here.rcac.purdue.edu. The old Thinlinc service on thinlinc.rcac.purdue.edu will be retired at...

  • Unscheduled Depot Outage on Compute Clusters

    The servers providing access to Data Depot from Brown, Conte, Halstead, HalsteadGPU, Radon, Rice, Scholar, and Snyder suffered a partial failure. Many nodes in these clusters temporarily lost access to Depot. Jobs accessing files on Depot may have pa...

  • Holiday Break

    Purdue University will be observing a holiday break from December 23 through January 2. During this time, Research Computing services will continue to be available, but all staff will be on leave. Critical system outages will be dealt with should they occ...

  • Halstead Cluster Maintenance

    The Halstead cluster will be unavailable beginning on Thursday, September 14th, 2017 at 8:00am EDT, for scheduled maintenance. The cluster will return to full production by %enddatetime%. During this time, Halstead will have critical security patches...

  • Unscheduled Depot Outage

    Access to Data Depot from the Halstead, HalsteadGPU, Hathi, Rice, Scholar, and Snyder clusters began hanging around Thursday, September 7th, 2017 at 1:30pm EDT. Engineers are currently working to restore service to these systems. Job scheduling h...

  • Unscheduled Depot Outage

    A failure has occurred in the systems which serve Data Depot to the various research clusters. Engineers are currently diagnosing the issue and are working to identify a fix. Job scheduling has been paused on all systems while this issue is being add...

  • Scratch Purge Policy Change

    After a long review, Research Computing has determined it is necessary to alter the scratch storage purge policy on all systems. Effective August 28, 2017, all scratch storage systems will begin purging files which have not been accessed (for either...
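    As a purely illustrative aid (not an official RCAC tool), the sketch below lists files under a directory whose last access time is older than a chosen cutoff; the scratch path and the 60-day window are placeholders, since the exact purge window is truncated in the announcement above.

        # Hypothetical helper: list files whose access time (atime) is older
        # than a cutoff, i.e. likely candidates for a scratch purge. The path
        # and day count are placeholders; the real window is set by policy.
        import os
        import time

        SCRATCH_DIR = os.path.expandvars("/scratch/halstead/$USER")  # example path
        DAYS = 60                                                     # placeholder window
        cutoff = time.time() - DAYS * 86400

        for root, _dirs, files in os.walk(SCRATCH_DIR):
            for name in files:
                path = os.path.join(root, name)
                try:
                    if os.stat(path).st_atime < cutoff:
                        print(path)
                except OSError:
                    pass  # file vanished or is unreadable; skip it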

  • Unscheduled outages on portions of clusters

    Conte, Halstead, HalsteadGPU, and Hammer are back in full production. Job scheduling has resumed on all clusters. Please let us know if you see any lingering issues at rcac-help@purdue.edu. UPDATE July 20, 2017 2:54pm: Power has been restored to...

  • Removal of netcdf and hdf5 system-level library installations

    Research Computing has begun removing several libraries installed at the system level that should instead be provided by the module command. These libraries include netcdf, hdf5, and several related packages. This change should have limited impact as module...

  • Halstead Scheduling Outage

    Nodes have continued to gradually reboot into the new image as jobs complete. At this point, more than 80% of Halstead has completed this process, and we have not seen any issues with nodes doing so. This outage is closed. Update: May 25, 2017 5:00pm...

  • Data Depot Outage

    Engineers have restored the failed core servers to a functional state. Data Depot is up and running as normal and job scheduling has resumed. Should you encounter any lingering issues, please let us know at rcac-help@purdue.edu. Original Message: Some core...

  • Clusters to complete transition to hierarchy modules

    The transition to hierarchy modules was completed today, May 9th. If any of your job scripts still reference old module names, those module loads will no longer work, so be sure to double-check your PBS job scripts and output. We have...
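    As an illustrative sketch only, the snippet below prints every "module load" or "module add" line found in a directory of job scripts so that retired module names are easy to spot; the directory and file extensions are placeholder assumptions, not part of the announcement.

        # Hypothetical audit script: report every "module load"/"module add"
        # line in PBS job scripts. Directory and extensions are placeholders.
        import glob
        import os
        import re

        SCRIPT_DIR = os.path.expanduser("~/jobs")      # example location
        PATTERNS = ("*.pbs", "*.sub", "*.sh")          # example extensions
        MODULE_RE = re.compile(r"^\s*module\s+(load|add)\s+(\S.*)$")

        for pattern in PATTERNS:
            for script in sorted(glob.glob(os.path.join(SCRIPT_DIR, pattern))):
                with open(script, errors="replace") as fh:
                    for lineno, line in enumerate(fh, start=1):
                        match = MODULE_RE.match(line)
                        if match:
                            print(f"{script}:{lineno}: module {match.group(1)} {match.group(2).strip()}")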