<?xml version="1.0" encoding="UTF-8" ?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:media="http://search.yahoo.com/mrss/">
	<channel>
		<title>RCAC - Announcements, Outages and Maintenance, Outages, Maintenance, Science Highlights</title>
		<link>https://rcac.purdue.edu/index.php/news/rss/2,1,6,7,3,Gilbreth</link>
		<description><![CDATA[News and announcements from Purdue's Rosen Center for Advanced Computing (RCAC)]]></description>
		<atom:link href="https://rcac.purdue.edu/index.php/news/rss/2,1,6,7,3,Gilbreth" rel="self" type="application/rss+xml" />
		<language>en</language>
		<lastBuildDate>Tue, 07 Apr 2026 08:52:25 EDT</lastBuildDate>
					<item>
				<title><![CDATA[Scheduled RCAC Maintenance – April 22–23 (All Systems and Research Network Unavailable)]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2637</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2637</guid>
				<description><![CDATA[<p>Dear Research Computing Community,</p>
<p>As part of the ongoing effort to upgrade data center capacity for future computing needs, maintenance is scheduled from Wednesday, April 22 at 6 AM through Thursday, April 23 at 5 PM. This will affect all RCAC systems and the Research Network.</p>
<p>During this window, <strong>all RCAC systems and research network services will be unavailable</strong>, including:</p>
<ul>
<li>All computing clusters, including Bell, Negishi, Gautschi, Scholar, Rowdy, Gilbreth, Hammer, and Anvil</li>
<li>All data storage systems, including Data Depot, Fortress, Anvil Ceph storage, and scratch and home storage on clusters</li>
<li>The Research Network and ScienceDMZ</li>
<li>Gateway services including Hubzero, GenAI Studio, Anvil GPT</li>
<li>
<a href="http://www.rcac.purdue.edu">www.rcac.purdue.edu</a>
</li>
<li>Geddes</li>
<li>Globus</li>
</ul>
<p><strong>How does this maintenance impact you?</strong></p>
<ul>
<li>Any Slurm jobs requesting a walltime that would extend past the start of the maintenance will not start and will remain in the queue until after maintenance is complete.</li>
<li>All active sessions and jobs running on affected systems will be preempted at the start of the outage; any queued jobs will not begin until services are restored.</li>
<li>Access to login nodes, storage systems, and web portals will be unavailable throughout the downtime.</li>
<li>Automated data workflows that rely on affected systems (e.g., rsync, data pipelines, archive processes) will not function until systems are back online.</li>
<li>Globus transfers may time out during the maintenance window.</li>
</ul>
<p><strong>To prepare for this maintenance, we suggest that you:</strong></p>
<ul>
<li>Download any needed data or scripts before 6:00 AM on April 22.</li>
<li>Prepare instrumentation devices for the Data Depot to be unavailable.</li>
</ul>
<p>We appreciate your understanding and cooperation as we complete these necessary upgrades to improve reliability and performance across RCAC infrastructure. For assistance, questions, or concerns contact <a href="mailto:rcac-help@purdue.edu">rcac-help@purdue.edu</a>.</p>
<p>Best regards,
Rosen Center for Advanced Computing (RCAC) / Purdue IT</p>
]]></description>
				<pubDate>Wed, 22 Apr 2026 06:00:00 -0400</pubDate>
									<category>Announcements</category>
							</item>
					<item>
				<title><![CDATA[Power Outage Impacting Multiple Clusters — Recovery Underway]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2617</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2617</guid>
				<description><![CDATA[<p>At approximately 6:00 AM EDT, a power outage impacted systems in the Math Data Center. Most services have now been restored.</p>
<p>Due to the outage, some jobs on Gilbreth did not requeue automatically. Users should check the status of any jobs that were running early this morning and resubmit them if needed.</p>
]]></description>
				<pubDate>Wed, 18 Mar 2026 06:00:00 -0400</pubDate>
									<category>Outages</category>
							</item>
					<item>
				<title><![CDATA[Data Depot Filesystem issue: Scheduling Resumed]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2592</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2592</guid>
				<description><![CDATA[<p>An internal portion of the Data Depot filesystem is currently offline; as a result, all job scheduling has been paused until the issue is resolved.</p>
<p><strong>Impact to you:</strong>
Attempts to read files on the affected storage may result in error messages.</p>
<p>Our IT team is actively working with the vendor to restore service as quickly as possible. We will send an update as soon as more information is available.</p>
]]></description>
				<pubDate>Wed, 11 Feb 2026 14:30:00 -0500</pubDate>
									<category>Outages</category>
							</item>
					<item>
				<title><![CDATA[Gilbreth Remains Unavailable – Update Expected by 10:00 AM]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2589</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2589</guid>
				<description><![CDATA[<p>The scheduled Math Data Center maintenance is complete. The <strong>Gilbreth cluster remains unavailable</strong> as we continue post‑maintenance work. <strong>The next update will be provided by 10:00 AM Friday, or sooner if available.</strong></p>
<p>We appreciate your patience and understanding as we work to restore Gilbreth service.</p>
]]></description>
				<pubDate>Fri, 06 Feb 2026 00:15:00 -0500</pubDate>
									<category>Outages</category>
							</item>
					<item>
				<title><![CDATA[February 5 Maintenance – Math Data Center Upgrades and Service Impact]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2533</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2533</guid>
				<description><![CDATA[<p>On Thursday, February 5, RCAC will perform planned maintenance in the MATH data center to support cooling upgrades and capacity improvements as part of the ongoing MATH data center renovation project.</p>
<p>During this maintenance window, several clusters will experience a temporary outage so that hardware can be safely powered down while facility work is performed:</p>
<ul>
<li>
<p>Gautschi, Gilbreth, Negishi, Bell, and Anvil cluster nodes will be powered down.</p>
</li>
<li>
<p>Gilbreth’s legacy V100 GPUs, which are well past their expected lifetime, will be decommissioned.</p>
</li>
<li>
<p>Hammer (Math nodes) and Geddes: a subset of nodes will be powered down, but services will remain available unless communicated separately.</p>
</li>
</ul>
<h3>How does this maintenance impact you?</h3>
<ul>
<li>
<p>Clusters listed in this message won’t be available to run jobs during the maintenance.</p>
</li>
<li>
<p>Any jobs requesting a walltime which would take them past the start of the maintenance will not start and will remain in the queue until after the maintenance is completed.</p>
</li>
<li>
<p>Users can continue to access their data.</p>
</li>
<li>
<p>GenAI Studio will remain available. This maintenance will position Purdue to support growing computational needs, and users should see long‑term benefits in system reliability and in our ability to support future computing and AI resources.</p>
</li>
</ul>
<p>If you have questions about how this outage will affect your work or need support, please contact <a href="mailto:rcac-help@purdue.edu">rcac-help@purdue.edu</a>.</p>
]]></description>
				<pubDate>Thu, 05 Feb 2026 07:00:00 -0500</pubDate>
									<category>Maintenance</category>
							</item>
					<item>
				<title><![CDATA[Network Slowness Notice]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2551</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2551</guid>
				<description><![CDATA[<p>We are currently investigating network performance issues affecting network traffic.</p>
<p><strong>Impact to you:</strong>
At this time, you may notice latency or brief disruptions when accessing certain on-campus or external resources, especially during peak usage periods.</p>
<p>We appreciate your patience while we work to fully resolve the underlying problem and restore normal network performance. We will provide an update by 5:00 PM EST today, or sooner.</p>
]]></description>
				<pubDate>Mon, 02 Feb 2026 15:00:00 -0500</pubDate>
									<category>Outages and Maintenance</category>
							</item>
					<item>
				<title><![CDATA[Globus access to Depot degraded; slow Depot logins and Depot access on clusters]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2578</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2578</guid>
				<description><![CDATA[<p>Users of Data Depot on RCAC clusters are currently experiencing degraded performance, and some Globus transfers to and from Depot are failing or running slowly. In addition, some users may see slow Globus logins or be temporarily unable to log in to Globus when accessing Depot collections.</p>
<p>System monitoring has identified an issue where heavy job activity was overloading the Data Depot filesystem used by the clusters and Globus.</p>
<p>You may see the following impacts:</p>
<ul>
<li>Globus transfers to and from Depot collections may fail, stall, or run much more slowly than usual.</li>
<li>Globus logins may be slow or occasionally fail when accessing Depot endpoints.</li>
<li>Jobs on RCAC clusters that read from or write to Depot may experience slow file access, delayed directory listings, or timeouts.</li>
</ul>
<p>Our engineers are investigating the high load from a large number of concurrent jobs and are working to reduce the impact on Depot, Globus, and cluster workloads. Existing jobs will continue to run, but any that are heavily Depot‑I/O‑bound may run more slowly or see I/O errors until performance improves. We will provide another update by 5:00 PM EST, or sooner if the issue is resolved.</p>
]]></description>
				<pubDate>Fri, 30 Jan 2026 15:00:00 -0500</pubDate>
									<category>Outages</category>
							</item>
					<item>
				<title><![CDATA[Gilbreth Scheduling paused due to Scratch filesystem issue]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2545</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2545</guid>
				<description><![CDATA[<p>We’re currently addressing an issue with the Scratch filesystem on Gilbreth, and job scheduling has been temporarily paused while we work on a fix.</p>
<p>Our team is actively investigating and will post updates as soon as more information is available, or by 4:00 PM EST at the latest. Thank you for your patience and understanding!</p>
]]></description>
				<pubDate>Thu, 22 Jan 2026 14:00:00 -0500</pubDate>
									<category>Outages</category>
							</item>
					<item>
				<title><![CDATA[Unscheduled Cluster & Data Depot Outage]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2541</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2541</guid>
				<description><![CDATA[<p>Data Depot and clusters began experiencing issues around 8:00 AM EST. Engineers are currently diagnosing the issue and working to identify a fix. Job scheduling has been paused while this issue is being addressed.</p>
<p>We will provide an update by 2:00 PM EST today.</p>
]]></description>
				<pubDate>Tue, 20 Jan 2026 08:00:00 -0500</pubDate>
									<category>Outages</category>
							</item>
					<item>
				<title><![CDATA[Upcoming February 5 Maintenance – Math Data Center Upgrades and Service Impact]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2525</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2525</guid>
				<description><![CDATA[<img width="400" style="padding:10px;" class="float-right" alt="MATH data center renovation" src="https://www.rcac.purdue.edu/files/images/mathrenno.png" />
<p>On Thursday, February 5, RCAC will perform planned maintenance in the MATH data center to support cooling upgrades and capacity improvements as part of the ongoing MATH data center renovation project. This renovation will allow Purdue to better support growing AI, data‑intensive, and HPC workloads for research. When completed, MATH will see a 32% increase in floor space, a 60% increase in usable power, and a two-fold increase in cooling capacity.</p>
<p>During this maintenance window, several clusters will experience a temporary outage so that hardware can be safely powered down while facility work is performed:</p>
<ul>
<li>
<p>Gautschi, Gilbreth, Negishi, Bell, and Anvil cluster nodes will be powered down.</p>
</li>
<li>
<p>Gilbreth’s legacy V100 GPUs, which are well past their expected lifetime, will be decommissioned.</p>
</li>
<li>
<p>Hammer (Math nodes) and Geddes: a subset of nodes will be powered down, but services will remain available unless communicated separately.</p>
</li>
</ul>
<h3>How does this maintenance impact you?</h3>
<ul>
<li>
<p>Clusters listed in this message won’t be available to run jobs during the maintenance.</p>
</li>
<li>
<p>Any jobs requesting a walltime which would take them past the start of the maintenance will not start and will remain in the queue until after the maintenance is completed.</p>
</li>
<li>
<p>Users can continue to access their data.</p>
</li>
<li>
<p>GenAI Studio will remain available. This maintenance will position Purdue to support growing computational needs, and users should see long‑term benefits in system reliability and in our ability to support future computing and AI resources.</p>
</li>
</ul>
<p>If you have questions about how this outage will affect your work or need support, please contact <a href="mailto:rcac-help@purdue.edu">rcac-help@purdue.edu</a>.</p>
]]></description>
				<pubDate>Mon, 12 Jan 2026 14:30:00 -0500</pubDate>
									<category>Announcements</category>
							</item>
					<item>
				<title><![CDATA[Research Computing Holiday Break]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2485</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2485</guid>
				<description><![CDATA[<p>Research Computing personnel will observe the university winter break beginning at 12:00 AM EST on 12/23/25 and will resume normal business hours on January 5th, 2026. During this time, Research Computing services will continue to be available, but all staff will be on leave.</p>
<p>Research Computing staff members will monitor the status of all computing and data resources in an effort to ensure continuous availability.</p>
<p>Research Computing staff members will monitor the ticketing system throughout the holiday period and answer critical issues and problems. Non-critical user issues and questions will be addressed beginning January 5th, 2026. There will also be no coffee hour consultations during this break.</p>
<p><strong>Scratch file purging (on community clusters with scratch space) will continue as normal during the break, so be sure to archive any files you need from scratch storage. This does not apply to Data Depot or home directories -- only scratch storage.</strong></p>
<p>Have a wonderful break, everyone, and we look forward to great things in the new year!</p>
]]></description>
				<pubDate>Tue, 16 Dec 2025 13:00:00 -0500</pubDate>
									<category>Announcements</category>
							</item>
					<item>
				<title><![CDATA[Unscheduled Gilbreth outage]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2477</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2477</guid>
				<description><![CDATA[<p>The Gilbreth cluster began experiencing issues around 9:00 AM EST. Engineers are currently working on a solution and expect to return the cluster to service by 11:30 AM EST.</p>
]]></description>
				<pubDate>Tue, 02 Dec 2025 09:30:00 -0500</pubDate>
									<category>Outages</category>
							</item>
					<item>
				<title><![CDATA[Unscheduled Data Depot Outage]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2447</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2447</guid>
				<description><![CDATA[<p>The Data Depot storage system began experiencing issues around 4:30 PM EDT today. Engineers are currently diagnosing the issue and working to identify a fix. Job scheduling has been paused while this issue is being addressed.</p>
<p>We will provide an update by 9:00 PM EDT.</p>
]]></description>
				<pubDate>Sat, 01 Nov 2025 16:30:00 -0400</pubDate>
									<category>Outages</category>
							</item>
					<item>
				<title><![CDATA[Unscheduled Data Depot outage]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2415</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2415</guid>
				<description><![CDATA[<p>Edit:</p>
<p>The Data Depot file system has returned to full service and scheduling has resumed on all clusters.</p>
<hr />
<p>The Data Depot storage system began experiencing issues around 9:00 AM EDT this morning. Engineers are currently diagnosing the issue and working to identify a fix. Job scheduling has been paused while this issue is being addressed.</p>
<p>We will provide an update by 12:00 PM (noon).</p>
]]></description>
				<pubDate>Fri, 17 Oct 2025 09:00:00 -0400</pubDate>
									<category>Outages</category>
							</item>
					<item>
				<title><![CDATA[Unscheduled Data Depot outage]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2407</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2407</guid>
				<description><![CDATA[<p>Edit:</p>
<p>Data Depot functionality has been restored.</p>
<hr />
<p>The Data Depot file system began experiencing issues with writes around 2:30 PM EDT. The data migration process currently ongoing from Data Depot 2 to Data Depot 3 ran into an unexpected problem. Engineers have identified the problem and are correcting it. Users may have seen &quot;no space left on device&quot; errors for approximately 30 minutes. Job scheduling has been paused while this issue is being addressed.</p>
<p>We will provide an update by 5 PM.</p>
]]></description>
				<pubDate>Wed, 15 Oct 2025 14:30:00 -0400</pubDate>
									<category>Outages</category>
							</item>
					<item>
				<title><![CDATA[Purdue professor in Indianapolis uses RCAC clusters to study materials, predict failures]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2380</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2380</guid>
				<description><![CDATA[<p>Shengfeng Yang, an assistant professor of mechanical engineering in Indianapolis, uses the Rosen Center for Advanced Computing (RCAC)’s Negishi community cluster supercomputer to help with his research simulating complex materials. To see how material failure happens at the atomic level, he and his research group simulate systems with more than a million atoms, which requires a great deal of computational power and wouldn’t be possible without a powerful computer like <a href="https://www.rcac.purdue.edu/compute/negishi">Negishi</a>.</p>
<p>Currently, Yang and his team focus on semiconductor materials and metals such as copper that are used in semiconductor packaging, studying how cracking and deformation happen at the atomic level in critical areas.</p>
<p>Yang and his research group also use the <a href="https://www.rcac.purdue.edu/compute/gilbreth">Gilbreth</a> cluster’s GPUs to train machine learning models to predict material behavior and properties. This means in the future they won’t have to run time-consuming simulations because the trained machine learning model will be able to make fast predictions about material behavior and failure.</p>
<p>Yang says there have been no obstacles to using the clusters as a faculty member at the Indianapolis campus, and he’s been able to access the clusters remotely without any difficulties.</p>
<p>He says tapping into RCAC resources has also connected him to faculty in West Lafayette he might not otherwise have met.</p>
<p>“I’ve gotten a lot of connection opportunities, and chances to collaborate with faculty in West Lafayette that are more focused on the experiment side, so we can have that connection between the computational simulation and the experimental science. So that’s been a big benefit as well.”</p>
<p>To learn more about Negishi, Gilbreth and other RCAC resources, contact <a href="mailto:rcac-help@purdue.edu">rcac-help@purdue.edu</a> and visit the <a href="https://www.rcac.purdue.edu/">RCAC website</a>.</p>
]]></description>
				<pubDate>Tue, 26 Aug 2025 00:00:00 -0400</pubDate>
									<category>Science Highlights</category>
							</item>
					<item>
				<title><![CDATA[Gilbreth Cluster Maintenance]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2318</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2318</guid>
				<description><![CDATA[<h3>When will it Happen?</h3>
<p>The Gilbreth cluster is scheduled for maintenance and will be unavailable <strong>from Tuesday, August 19th, 2025 at 8:00 AM EDT until 5:00 PM EDT the same day</strong>.</p>
<h3>What is being upgraded?</h3>
<p>During this maintenance, Gilbreth will have its <a href="https://www.rcac.purdue.edu/news/7275">scheduler's configurations updated</a> to allow for the use of more modern features within Slurm.</p>
<h3>How does this affect you?</h3>
<ul>
<li>The Gilbreth cluster will be unavailable during the maintenance window.</li>
<li>Slurm jobs that are still queued when this maintenance begins on Tuesday, August 19th, 2025 at 8:00 AM EDT will be deleted.</li>
<li>A reservation will prevent jobs from starting if their requested end time would extend past the start of maintenance; because pending jobs will be deleted, such jobs will never run.</li>
<li>
<strong>The Slurm options required for job submission will change during this maintenance. <a href="https://www.rcac.purdue.edu/news/7275">See our related news posting</a>.</strong>
</li>
<li>The output of <code>slist</code> and the default output of <code>squeue</code> will be modified to be more useful under the new design.</li>
<li>The available options for creating jobs through Open OnDemand will change to accommodate the new options.</li>
</ul>
<h3>How can you prepare for these changes?</h3>
<p>To minimize disruption to researcher workflows, we have updated <a href="https://www.rcac.purdue.edu/knowledge/gilbreth/run/slurm/queues">Gilbreth's User Guide page on Job Submission</a> to describe the new method of job submission; users should review it before the maintenance.</p>
<p>If you have questions about this upgrade or need help from our support staff, please reach out to us at <a href="mailto:rcac-help@purdue.edu">rcac-help@purdue.edu</a>.</p>
]]></description>
				<pubDate>Tue, 19 Aug 2025 08:00:00 -0400</pubDate>
									<category>Maintenance</category>
							</item>
					<item>
				<title><![CDATA[Gilbreth Cluster Modernization]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2319</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2319</guid>
				<description><![CDATA[<p>As part of an ongoing effort to utilize modern features in the Slurm scheduler and to streamline usage reporting for research groups (a frequent request from PIs), the scheduler configuration on the Gilbreth cluster will be modified <a href="https://www.rcac.purdue.edu/news/7274">in an upcoming maintenance</a>. <strong>Users will be required to update their job scripts</strong> to conform to the guidelines described below.</p>
<ul>
<li>All jobs on the cluster will be required to explicitly specify a partition and an account (i.e. your group's name) at submission time. You can find the names of the available partitions and accounts with the <code>showpartitions</code> and <code>slist</code> commands, respectively. Any job that does not specify both an account and a partition will be rejected at submission time.</li>
<li>All jobs on the cluster must include an explicit memory request through use of the <code>--mem</code> option. Jobs that do not make an explicit memory request will be rejected and a reasonable default will be suggested.</li>
<li>Accounts will be renamed to no longer include a suffix designating the type of resource the account contains. This means that groups whose resources were previously split across multiple accounts will be consolidated into a single account.</li>
<li>
<strong>GPU type will be requested by specifying a partition containing that GPU type, rather than by submitting to an account with a suffix designating it.</strong> For example, jobs previously submitted with <code>-A mylab-k</code> will now be specified as <code>-A mylab -p a100-40gb</code>.
</li>
<li>The output of <code>slist</code> and the default output of <code>squeue</code> will be modified to reflect the new scheduler design.</li>
<li>All &quot;shared accounts&quot; such as <code>standby</code> that represent resources outside of your typical &quot;group accounts&quot; will continue to exist, but will require a different request syntax.
<ul>
<li>Standby will become a Quality of Service (QoS): jobs that previously ran under the &quot;standby&quot; account will now be submitted to your &quot;group account&quot; and tagged with the standby QoS. For example, if your job previously used the <code>-A standby</code> option, you would now use <code>-A mylab -q standby</code>.
</li>
</ul>
</li>
</ul>
<table> <caption>Summary of Changes</caption>
<thead>
<tr>
<th scope="col">Use Case</th>
<th scope="col">Old Syntax</th>
<th scope="col">New Syntax</th>
<th scope="col">What Changed</th>
</tr>
</thead>
<tbody>
<tr>
<td>Submit a job to your group's account</td>
<td><code>sbatch -A mygroup</code></td>
<td><code>sbatch -A mygroup -p &lt;appropriate partition&gt; --mem=50G</code></td>
<td>The partition and memory must be specified.</td>
</tr>
<tr>
<td>Submit a standby job</td>
<td><code>sbatch -A standby</code></td>
<td><code>sbatch -A mygroup -q standby -p &lt;appropriate partition&gt; --mem=50G</code></td>
<td><code>standby</code> is now a QoS instead of an account.</td>
</tr>
</tbody>
</table>
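<p>Putting the table above into practice, a complete job script under the new scheme might look like the sketch below. The account <code>mylab</code>, the partition <code>a100-40gb</code>, and the memory value come from the examples above; the remaining values are placeholders, not recommendations. Use <code>slist</code> and <code>showpartitions</code> to find your own account and partition names.</p>

```shell
#!/bin/bash
# Hypothetical Gilbreth job script under the new scheduler configuration.
# "mylab" and "a100-40gb" are placeholder names; substitute your own
# account (from `slist`) and partition (from `showpartitions`).
#SBATCH -A mylab           # account (your group's name) -- now required
#SBATCH -p a100-40gb       # partition selects the GPU type -- now required
#SBATCH -q standby         # optional: standby is now a QoS, not an account
#SBATCH --mem=50G          # explicit memory request -- now required
#SBATCH -t 4:00:00         # walltime (placeholder value)

# Placeholder workload; replace with your application.
echo "Job started on $(hostname)"
```

<p>Because <code>#SBATCH</code> lines are comments to the shell, only <code>sbatch</code> interprets the directives; the script body itself is ordinary bash.</p>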
<p><strong>How will this affect you?</strong></p>
<ol>
<li>You will need to change your jobscripts and your method of invocation to include the required options outlined above.</li>
<li>If you have any scripts or tooling that rely on the current output of <code>slist</code> or <code>squeue</code>, those scripts will need to be modified to use the new formatted output.</li>
</ol>
<p>You can prepare for this maintenance by reviewing the new Slurm organization in our user guide's <a href="https://www.rcac.purdue.edu/knowledge/gilbreth/run/slurm/queues">Queues Page</a>.</p>
<p>If you have any questions about these upcoming changes, please reach out to us at <a href="mailto:rcac-help@purdue.edu">rcac-help@purdue.edu</a>.</p>
]]></description>
				<pubDate>Tue, 19 Aug 2025 08:00:00 -0400</pubDate>
									<category>Announcements</category>
							</item>
					<item>
				<title><![CDATA[Purdue professor leverages community clusters to analyze stresses on city infrastructure and better predict problems before they happen]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2327</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2327</guid>
				<description><![CDATA[<p>A Purdue professor who uses data analysis to study city infrastructure such as roads, buildings, sewer pipelines and nuclear reactors and predict stresses before they happen, avoiding infrastructure failures, is using RCAC’s <a href="https://www.rcac.purdue.edu/compute/gilbreth">Gilbreth</a> community cluster supercomputer to deliver the huge computational power needed for his research.</p>
<p><a href="https://engineering.purdue.edu/CCE/People/ptProfile?resource_id=113437">Mohammad Reza Jahanshahi</a>, an associate professor of civil engineering, and his research team rely heavily on deep learning models to perform their data analysis of city infrastructure, which requires powerful GPUs like those available on Gilbreth.</p>
<p>In one project, Jahanshahi and his team have collected data from camera sensors on roads to identify the stresses and rate the quality of the roads. Using Gilbreth, they’ve trained a deep learning AI system to identify and classify road stresses.</p>
<p>Another project detects defects in pipelines around the points where they’re welded together. Using radiography data, Jahanshahi and his team again develop and train deep neural networks to identify and detect where the disturbances are in the welds. Gilbreth’s GPUs are essential for processing the large amounts of data needed to train the deep learning models.</p>
<p>In a third project, Jahanshahi and his team are developing a tool to help guide a robot to detect cracks on the surface of a nuclear reactor. Rather than scanning the data and sending it home to be processed later, the goal is for the robot to inspect the reactor surface like a human would – essentially processing the data on the go, stopping to take a closer look at areas that look suspicious.</p>
<p>Before turning to RCAC’s clusters, Jahanshahi’s lab operated their own computers, which he says was a time-consuming and frustrating endeavor that often involved devoting a lot of graduate student time to fixing IT issues.</p>
<p>“Since moving our work to RCAC, we can do things much faster,” he says. “I don’t need to worry about devoting student time to server maintenance.”</p>
<p>“I’ve been very happy with not only the clusters, but also the staff support and storage available to us through RCAC,” adds Jahanshahi. “RCAC staff are available to troubleshoot anything that goes wrong with the clusters, and integrated storage means we don’t have to worry about setting up a separate server to archive our data.”</p>
<p>To learn more about Gilbreth and other RCAC resources, contact <a href="mailto:rcac-help@purdue.edu">rcac-help@purdue.edu</a> and visit the RCAC website.</p>
]]></description>
				<pubDate>Tue, 19 Aug 2025 00:00:00 -0400</pubDate>
									<category>Science Highlights</category>
							</item>
					<item>
				<title><![CDATA[Incorrect Account Email]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2305</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2305</guid>
				<description><![CDATA[<p>Today, RCAC user management systems sent incorrect email messages to many faculty partners and their resource managers. Please ignore any recent email about expirations or removals. You may verify who has access to your resources at any time through our site <a href="http://www.rcac.purdue.edu">www.rcac.purdue.edu</a>, or email <a href="mailto:rcac-help@purdue.edu">rcac-help@purdue.edu</a> if you have concerns.</p>
<p>Thank you!</p>
]]></description>
				<pubDate>Tue, 15 Jul 2025 14:30:00 -0400</pubDate>
									<category>Outages</category>
							</item>
			</channel>
</rss>