<?xml version="1.0" encoding="UTF-8" ?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:media="http://search.yahoo.com/mrss/">
	<channel>
		<title>RCAC - Announcements, Outages and Maintenance, Outages, Maintenance, Science Highlights</title>
		<link>https://rcac.purdue.edu/index.php/news/rss/2,1,6,7,3,Gautschi</link>
		<description><![CDATA[Announcements, outage and maintenance notices, and science highlights from the Rosen Center for Advanced Computing (RCAC) at Purdue University]]></description>
		<atom:link href="https://rcac.purdue.edu/index.php/news/rss/2,1,6,7,3,Gautschi" rel="self" type="application/rss+xml" />
		<language>en</language>
		<lastBuildDate>Tue, 07 Apr 2026 08:54:55 EDT</lastBuildDate>
					<item>
				<title><![CDATA[Scheduled RCAC Maintenance – April 22–23 (All Systems and Research Network Unavailable)]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2635</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2635</guid>
				<description><![CDATA[<p>Dear Research Computing Community,</p>
<p>As part of the ongoing effort to upgrade data center capacity for future computing needs, maintenance is scheduled from Wednesday, April 22 at 6 AM through Thursday, April 23 at 5 PM. This will affect all RCAC systems and the Research Network.</p>
<p>During this window, <strong>all RCAC systems and research network services will be unavailable</strong>, including:</p>
<ul>
<li>All computing clusters, including Bell, Negishi, Gautschi, Scholar, Rowdy, Gilbreth, Hammer, and Anvil</li>
<li>All data storage systems, including Data Depot, Fortress, Anvil Ceph storage, and scratch and home storage on clusters, as well as the Research Network and ScienceDMZ</li>
<li>Gateway services including Hubzero, GenAI Studio, Anvil GPT</li>
<li>
<a href="http://www.rcac.purdue.edu">www.rcac.purdue.edu</a>
</li>
<li>Geddes</li>
<li>Globus</li>
</ul>
<p><strong>How does this maintenance impact you?</strong></p>
<ul>
<li>Any Slurm jobs requesting a walltime that would extend past the start of the maintenance will not start and will remain in the queue until after maintenance is complete.</li>
<li>All active sessions and jobs running on affected systems will be preempted at the start of the outage; any queued jobs will not begin until services are restored.</li>
<li>Access to login nodes, storage systems, and web portals will be unavailable throughout the downtime.</li>
<li>Automated data workflows that rely on affected systems (e.g., rsync, data pipelines, archive processes) will not function until systems are back online.</li>
<li>Globus transfers may time out during the maintenance window.</li>
</ul>
<p><strong>To prepare for this maintenance, we suggest that you:</strong></p>
<ul>
<li>Download any needed data or scripts before 6:00 AM on April 22.</li>
<li>Prepare instrumentation devices for the Data Depot to be unavailable.</li>
</ul>
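<p>As a sketch of the preparation steps above (the hostnames, paths, and script names here are hypothetical), data can be pulled off beforehand, and jobs can still be scheduled before the window opens if their requested walltime ends before 6:00 AM on April 22:</p>

```shell
# Hypothetical example: hostnames, paths, and script names are illustrative only.

# Copy data you will need during the outage to a local machine beforehand:
rsync -av mylab@gautschi.rcac.purdue.edu:/scratch/mylab/results/ ./results-backup/

# Request a walltime short enough to finish before the maintenance starts,
# so the scheduler can still run the job (here, a 4-hour limit):
sbatch --time=04:00:00 myjob.sh
```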
<p>We appreciate your understanding and cooperation as we complete these necessary upgrades to improve reliability and performance across RCAC infrastructure. For assistance, questions, or concerns, contact <a href="mailto:rcac-help@purdue.edu">rcac-help@purdue.edu</a>.</p>
<p>Best regards,<br />
Rosen Center for Advanced Computing (RCAC) / Purdue IT</p>
]]></description>
				<pubDate>Wed, 22 Apr 2026 06:00:00 -0400</pubDate>
									<category>Announcements</category>
							</item>
					<item>
				<title><![CDATA[Power Outage Impacting Multiple Clusters — Recovery Underway]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2615</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2615</guid>
				<description><![CDATA[<p>At approximately 6:00 AM EDT, a power outage impacted systems in the Math Data Center. Most services have now been restored.</p>
<p>Due to the outage, some jobs on Gilbreth did not requeue automatically. Users should check the status of any jobs that were running early this morning and resubmit them if needed.</p>
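<p>One way to find jobs affected this morning (the time filter and state list below are illustrative, not a prescribed procedure) is Slurm's accounting tool:</p>

```shell
# Hypothetical sketch: list your jobs that ended abnormally since early morning.
sacct -u "$USER" --starttime=06:00 \
      --state=FAILED,NODE_FAIL,CANCELLED \
      --format=JobID,JobName,State,Start,End

# Resubmit any job whose batch script you still have:
sbatch myjob.sh
```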
]]></description>
				<pubDate>Wed, 18 Mar 2026 06:00:00 -0400</pubDate>
									<category>Outages</category>
							</item>
					<item>
				<title><![CDATA[Scheduling paused on Gautschi CPU cluster due to cooling leak]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2608</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2608</guid>
				<description><![CDATA[<p>We have identified a cooling leak in the Gautschi cluster that impacts CPU nodes and are actively working to resolve the issue.</p>
<p><strong>How does this impact you?</strong></p>
<p>We have paused job scheduling on the CPU nodes. You can continue to submit jobs to Gautschi nodes; however, the jobs will remain queued until after the maintenance is complete.</p>
<p>The cooling leak does not affect Gautschi AI nodes, which continue to operate normally.</p>
<p>We will provide additional updates by Thursday, March 5 at 12 pm. Please reach out to <a href="mailto:rcac-help@purdue.edu">rcac-help@purdue.edu</a> if you have any questions or need support.</p>
]]></description>
				<pubDate>Wed, 04 Mar 2026 17:00:00 -0500</pubDate>
									<category>Maintenance</category>
							</item>
					<item>
				<title><![CDATA[MATH Datacenter Cooling issue - Job scheduling paused on Anvil/Gautschi]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2606</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2606</guid>
				<description><![CDATA[<p>The MATH datacenter started experiencing issues with cooling systems around 12pm. Job scheduling on the Anvil and Gautschi clusters was paused shortly after and scheduling resumed at 1:30pm.</p>
]]></description>
				<pubDate>Wed, 04 Mar 2026 12:00:00 -0500</pubDate>
									<category>Outages</category>
							</item>
					<item>
				<title><![CDATA[Data Depot Filesystem issue: Scheduling Resumed]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2591</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2591</guid>
				<description><![CDATA[<p>An internal portion of the Data Depot filesystem is currently offline; as a result, all scheduling has been paused until this issue is resolved.</p>
<p><strong>Impact to you:</strong>
Attempts to read files that are on the affected storage may result in error messages.</p>
<p>Our IT team is actively working with the vendor to restore service as quickly as possible. We will send an update as soon as more information is available.</p>
]]></description>
				<pubDate>Wed, 11 Feb 2026 14:30:00 -0500</pubDate>
									<category>Outages</category>
							</item>
					<item>
				<title><![CDATA[February 5 Maintenance – Math Data Center Upgrades and Service Impact]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2531</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2531</guid>
				<description><![CDATA[<p>On Thursday, February 5, RCAC will perform planned maintenance in the MATH data center to support cooling upgrades and capacity improvements as part of the ongoing MATH datacenter renovation project.</p>
<p>During this maintenance window, several clusters will experience a temporary outage so that hardware can be safely powered down while facility work is performed:</p>
<ul>
<li>
<p>Gautschi, Gilbreth, Negishi, Bell, and Anvil cluster nodes will be powered down.</p>
</li>
<li>
<p>Gilbreth’s legacy V100 GPUs, which are well past their expected lifetime, will be decommissioned.</p>
</li>
<li>
<p>Hammer (Math nodes) and Geddes: A subset of nodes will be powered down, but the services will remain available unless communicated separately.</p>
</li>
</ul>
<h3>How does this maintenance impact you?</h3>
<ul>
<li>
<p>Clusters listed in this message won’t be available to run jobs during the maintenance.</p>
</li>
<li>
<p>Any jobs requesting a walltime which would take them past the start of the maintenance will not start and will remain in the queue until after the maintenance is completed.</p>
</li>
<li>
<p>Users can continue to access their data.</p>
</li>
<li>
<p>GenAI Studio will remain available. This maintenance will position Purdue to support growing computational needs. Users should see long‑term benefits in system reliability and our ability to support future computing and AI resources.</p>
</li>
</ul>
<p>If you have questions about how this outage will affect your work or need support, please contact <a href="mailto:rcac-help@purdue.edu">rcac-help@purdue.edu</a>.</p>
]]></description>
				<pubDate>Thu, 05 Feb 2026 07:00:00 -0500</pubDate>
									<category>Maintenance</category>
							</item>
					<item>
				<title><![CDATA[Network Slowness Notice]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2549</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2549</guid>
				<description><![CDATA[<p>We are currently investigating issues affecting network performance.</p>
<p><strong>Impact to you:</strong>
At this time, you may notice latency or brief disruptions when accessing certain on-campus or external resources, especially during peak usage periods.</p>
<p>We appreciate your patience while we work to fully resolve the underlying problem and restore normal network performance. We will provide an update by 5:00PM EST today or sooner.</p>
]]></description>
				<pubDate>Mon, 02 Feb 2026 15:00:00 -0500</pubDate>
									<category>Outages and Maintenance</category>
							</item>
					<item>
				<title><![CDATA[Globus access to Depot degraded; slow Depot logins and Depot access on clusters]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2576</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2576</guid>
				<description><![CDATA[<p>Users of Data Depot on RCAC clusters are currently experiencing degraded performance, and some Globus transfers to and from Depot are failing or running slowly.  In addition, some users may see slow Globus logins or be temporarily unable to log in to Globus when accessing Depot collections.</p>
<p>System monitoring has identified an issue where heavy job activity was overloading the Data Depot filesystem used by the clusters and Globus.</p>
<p>You may see the following impacts:</p>
<ul>
<li>Globus transfers to and from Depot collections may fail, stall, or run much more slowly than usual.</li>
<li>Globus logins may be slow or occasionally fail when accessing Depot endpoints.</li>
<li>Jobs on RCAC clusters that read from or write to Depot may experience slow file access, delayed directory listings, or timeouts.</li>
</ul>
<p>Our engineers are investigating the high load from a large number of concurrent jobs and are working to reduce the impact on Depot, Globus, and cluster workloads.  Existing jobs will continue to run, but any that are heavily Depot‑I/O‑bound may run more slowly or see I/O errors until performance improves.  We will provide another update by 5:00PM EST or sooner if the issue is resolved.</p>
]]></description>
				<pubDate>Fri, 30 Jan 2026 15:00:00 -0500</pubDate>
									<category>Outages</category>
							</item>
					<item>
				<title><![CDATA[Unscheduled Cluster & Data Depot Outage]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2540</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2540</guid>
				<description><![CDATA[<p>Data Depot and clusters began experiencing issues around 8:00AM EST. Engineers are currently diagnosing the issue and are working to identify a fix. Job scheduling has been paused while this issue is being addressed.</p>
<p>We will provide an update by 2:00PM EST today.</p>
]]></description>
				<pubDate>Tue, 20 Jan 2026 08:00:00 -0500</pubDate>
									<category>Outages</category>
							</item>
					<item>
				<title><![CDATA[Upcoming February 5 Maintenance – Math Data Center Upgrades and Service Impact]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2523</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2523</guid>
				<description><![CDATA[<img width="400" style="padding:10px;" class="float-right" alt="Image description" src="https://www.rcac.purdue.edu/files/images/mathrenno.png" />
On Thursday, February 5, RCAC will perform planned maintenance in the MATH data center to support cooling upgrades and capacity improvements as part of the ongoing MATH data center renovation project. This renovation will allow Purdue to better support growing AI, data‑intensive, and HPC workloads for research. When completed, MATH will see a 32% increase in floor space, a 60% increase in usable power, and a two-fold increase in cooling capacity. 
<p>During this maintenance window, several clusters will experience a temporary outage so that hardware can be safely powered down while facility work is performed:</p>
<ul>
<li>
<p>Gautschi, Gilbreth, Negishi, Bell, and Anvil cluster nodes will be powered down.</p>
</li>
<li>
<p>Gilbreth’s legacy V100 GPUs, which are well past their expected lifetime, will be decommissioned.</p>
</li>
<li>
<p>Hammer (Math nodes) and Geddes: A subset of nodes will be powered down, but the services will remain available unless communicated separately.</p>
</li>
</ul>
<h3>How does this maintenance impact you?</h3>
<ul>
<li>
<p>Clusters listed in this message won’t be available to run jobs during the maintenance.</p>
</li>
<li>
<p>Any jobs requesting a walltime which would take them past the start of the maintenance will not start and will remain in the queue until after the maintenance is completed.</p>
</li>
<li>
<p>Users can continue to access their data.</p>
</li>
<li>
<p>GenAI Studio will remain available. This maintenance will position Purdue to support growing computational needs. Users should see long‑term benefits in system reliability and our ability to support future computing and AI resources.</p>
</li>
</ul>
<p>If you have questions about how this outage will affect your work or need support, please contact <a href="mailto:rcac-help@purdue.edu">rcac-help@purdue.edu</a>.</p>
]]></description>
				<pubDate>Mon, 12 Jan 2026 14:30:00 -0500</pubDate>
									<category>Announcements</category>
							</item>
					<item>
				<title><![CDATA[Gautschi Community Cluster ranks high in international benchmark competitions]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2504</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2504</guid>
				<description><![CDATA[<p>Gautschi, Purdue University’s most powerful supercomputer, was recently ranked among the top high-performance computing (HPC) systems on two separate, international benchmarks. The community cluster ranked 20th on the IO500 benchmark in the “10 Node Production” category and 27th on the HPL-MxP benchmark. Both lists were released at this year’s international supercomputing conference, <a href="https://www.rcac.purdue.edu/news/7561">SC25</a>.</p>
<p>In late 2024, <img width="500" style="padding:10px;" class="float-right" alt="Image description" src="https://www.rcac.purdue.edu/files/Gautschi-Ribbon-Cutting/1W5A2763-Enhanced-NR.jpg" />Purdue unveiled its newest and most powerful supercomputer to date, <a href="https://www.rcac.purdue.edu/compute/gautschi">Gautschi</a>. The Gautschi cluster was designed to provide Purdue researchers with a world-class computing resource capable of driving the university toward its next giant leap. Eponymously named in honor of Walter Gautschi, Professor Emeritus of Computer Science and Professor Emeritus of Mathematics at Purdue University, the Gautschi supercomputer consists of two partitions—a traditional HPC resource focused on providing next-generation CPUs and a dedicated AI partition containing Nvidia H100 SXM GPUs.</p>
<p>Each year, supercomputers may be entered into benchmark competitions to test different aspects of their performance. For example, when Gautschi debuted in the Fall of 2024, it ranked number 157 on the <a href="https://top500.org/">Top500</a> list of the world’s most powerful supercomputers and number 43 on the <a href="https://top500.org/lists/green500/2024/11/">Green500</a> list of the most energy-efficient supercomputers. This year, staff at the Rosen Center for Advanced Computing (RCAC) decided to measure Gautschi’s performance on the IO500 and HPL-MxP benchmarks.</p>
<p>The <a href="https://io500.org/about">IO500</a> benchmark has become the standard for measuring HPC storage performance. It consists of five separate workloads to identify performance boundaries for HPC applications. On this year’s list, Gautschi was ranked 20th in the “10 Node Production” category, with a peak read performance of 186.54 GB/second. Only three other US university machines bested it on the benchmark. Gautschi also ranked 194th overall on the full list of IO500 submissions. Gautschi’s storage system is built with <a href="https://www.ddn.com/products/lustre-file-system-exascaler/">DDN’s EXAScaler</a> filesystem.</p>
<p>The <a href="https://hpl-mxp.org">HPL-MxP</a> mixed-precision benchmark is tailored for testing a machine’s ability to handle artificial intelligence (AI) workloads. It combines testing parameters from the traditional HPL framework, one of the most popular benchmarks in the world, with AI-specific constraints. The HPL-MxP results are released biannually. Gautschi was ranked 27th overall on the November 2025 list, and was the top US university machine listed.</p>
<p>“We are excited to see how Gautschi stacks up against the world’s most powerful systems”, says Preston Smith, Executive Director of the Rosen Center for Advanced Computing in Purdue IT.  “I/O performance is critical for AI computing to keep GPUs fed with data, and this benchmark score directly reflects the benefits Purdue researchers will get from using Gautschi. Using mixed-precision arithmetic allows HPC applications to leverage AI-optimized accelerators like GPUs, which use lower precision. Mixed-precision computing uses less power and allows for significant speed-ups, reducing time to science. Gautschi’s HPL-MxP score shows a more than 4x speedup on the same hardware simply by using lower precision arithmetic.”</p>
<p>The Gautschi cluster was built through a partnership with Dell, AMD, DDN, and Nvidia, thanks to support from Purdue Computes and the Institute for Physical AI (IPAI). Purdue researchers may obtain access to the Gautschi system through RCAC’s <a href="https://www.rcac.purdue.edu/services/communityclusters">Community Cluster Program</a>. For more information or to purchase access to Gautschi, please visit our <a href="https://www.rcac.purdue.edu/purchase">Purchase Page</a>.</p>
<p>RCAC operates the centrally-maintained research computing resources at Purdue University, providing access to leading-edge computational and data storage systems as well as expertise and support to Purdue faculty, staff, and student researchers. To learn more about HPC and how RCAC can help you, please visit: <a href="https://www.rcac.purdue.edu/">https://www.rcac.purdue.edu/</a></p>
<p><em>Written by: Jonathan Poole, poole43@purdue.edu</em></p>
]]></description>
				<pubDate>Wed, 17 Dec 2025 00:00:00 -0500</pubDate>
									<category>Science Highlights</category>
							</item>
					<item>
				<title><![CDATA[Research Computing Holiday Break]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2483</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2483</guid>
				<description><![CDATA[<p>Research Computing personnel will observe the university winter break beginning at 12:00am EST on December 23, 2025, and will resume normal business hours on January 5, 2026. During this time, Research Computing services will continue to be available, but all staff will be on leave.</p>
<p>Research Computing staff members will monitor the status of all computing and data resources in an effort to ensure continuous availability.</p>
<p>Research Computing staff members will monitor the ticketing system throughout the holiday period and answer critical issues and problems. Non-critical user issues and questions will be addressed beginning January 5th, 2026. There will also be no coffee hour consultations during this break.</p>
<p><strong>Scratch file purging (on community clusters with scratch space) will continue as normal during the break, so be sure to archive any files you need from scratch storage. This does not apply to Data Depot or home directories -- only scratch storage.</strong></p>
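<p>A minimal sketch of archiving scratch files before the break (the paths and project names here are hypothetical; adjust them to your own group's directories):</p>

```shell
# Hypothetical paths for illustration only.
# Bundle the scratch files you need into a dated archive:
tar -czf myproject-$(date +%Y%m%d).tar.gz /scratch/gautschi/$USER/myproject

# Copy the archive somewhere not subject to purging, such as Data Depot:
cp myproject-*.tar.gz /depot/mylab/data/
```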
<p>Have a wonderful break, everyone, and we look forward to great things in the new year!</p>
]]></description>
				<pubDate>Tue, 16 Dec 2025 13:00:00 -0500</pubDate>
									<category>Announcements</category>
							</item>
					<item>
				<title><![CDATA[Unscheduled Data Depot Outage]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2446</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2446</guid>
				<description><![CDATA[<p>The Data Depot storage system began experiencing issues starting around 4:30pm EDT today. Engineers are currently diagnosing the issue and are working to identify a fix. Job scheduling has been paused while this issue is being addressed.</p>
<p>We will provide an update by 9pm.</p>
]]></description>
				<pubDate>Sat, 01 Nov 2025 16:30:00 -0400</pubDate>
									<category>Outages</category>
							</item>
					<item>
				<title><![CDATA[Scheduled Gautschi Cluster Downtime]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2412</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2412</guid>
				<description><![CDATA[<p>The Gautschi cluster will be unavailable on Wednesday, October 29, from 8:00 AM EDT until 5:00 PM EDT.</p>
<p>During this time, we will be performing a battery of performance benchmarks against the Gautschi filesystems and the AI partition.</p>
<p>Any Slurm jobs requesting a walltime which would take them past Wednesday, October 29, 2025 at 8:00am EDT will not start and will remain in the queue until after the maintenance is completed.</p>
]]></description>
				<pubDate>Wed, 29 Oct 2025 08:00:00 -0400</pubDate>
									<category>Maintenance</category>
							</item>
					<item>
				<title><![CDATA[Unscheduled Data Depot outage]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2414</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2414</guid>
				<description><![CDATA[<p>Edit:</p>
<p>The Data Depot file system has returned to full service and scheduling has resumed on all clusters.</p>
<hr />
<p>The Data Depot storage system began experiencing issues starting around 9am EDT this morning. Engineers are currently diagnosing the issue and are working to identify a fix. Job scheduling has been paused while this issue is being addressed.</p>
<p>We will provide an update by 12pm (noon).</p>
]]></description>
				<pubDate>Fri, 17 Oct 2025 09:00:00 -0400</pubDate>
									<category>Outages</category>
							</item>
					<item>
				<title><![CDATA[Unscheduled Data Depot outage]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2406</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2406</guid>
				<description><![CDATA[<p>Edit:</p>
<p>Data Depot functionality has been restored.</p>
<hr />
<p>The Data Depot file system began experiencing issues with writes around 2:30pm EDT. The data migration process currently ongoing from Data Depot 2 to Data Depot 3 ran into an unexpected problem. Engineers have identified the problem and are correcting it. Users may have seen &quot;no space left on device&quot; for approximately 30 minutes. Job scheduling has been paused while this issue is being addressed.</p>
<p>We will provide an update by 5 PM.</p>
]]></description>
				<pubDate>Wed, 15 Oct 2025 14:30:00 -0400</pubDate>
									<category>Outages</category>
							</item>
					<item>
				<title><![CDATA[Gautschi Cluster Scheduled Maintenance]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2289</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2289</guid>
				<description><![CDATA[<p>The Gautschi cluster will be unavailable Wednesday, July 16, 2025 from 8:00am to 5:00pm EDT for scheduled maintenance. The cluster will return to full production by Wednesday, July 16, 2025 at 5:00pm EDT.</p>
<p>During this time, Gautschi will have its operating system patched and a maintenance upgrade performed on the storage filesystems.</p>
<p>Any Slurm jobs requesting a walltime which would take them past Wednesday, July 16, 2025 at 8:00am EDT will not start and will remain in the queue until after the maintenance is completed.</p>
]]></description>
				<pubDate>Wed, 16 Jul 2025 08:00:00 -0400</pubDate>
									<category>Maintenance</category>
							</item>
					<item>
				<title><![CDATA[Incorrect Account Email]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2303</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2303</guid>
				<description><![CDATA[<p>Today, RCAC user management systems sent incorrect email messages to many faculty partners and their resource managers. Please ignore any recent email about expirations or removals. You may verify who has access to your resources at any time through our site <a href="http://www.rcac.purdue.edu">www.rcac.purdue.edu</a>, or email <a href="mailto:rcac-help@purdue.edu">rcac-help@purdue.edu</a> if you have concerns.</p>
<p>Thank you!</p>
]]></description>
				<pubDate>Tue, 15 Jul 2025 14:30:00 -0400</pubDate>
									<category>Outages</category>
							</item>
					<item>
				<title><![CDATA[Gautschi Cluster Open OnDemand Maintenance (June 30)]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2287</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2287</guid>
				<description><![CDATA[<h3>When will it happen?</h3>
<p>The Open OnDemand service for Gautschi will be unavailable <strong>from 9:00am to 5:00pm EDT on Monday, June 30, 2025</strong>.</p>
<h3>What is being upgraded?</h3>
<p>During the maintenance, the RCAC team will reconfigure the Open OnDemand dashboard for Gautschi, giving it a brand-new design with the new features listed below.</p>
<h3>What’s New on the dashboard?</h3>
<ul>
<li>
<strong>CPU/GPU Balance and Usage:</strong> Monitor your group usage and remaining balance on Gautschi.</li>
<li>
<strong>Disk Usage:</strong> Monitor your storage utilization across Gautschi’s file systems.</li>
<li>
<strong>Job Queue:</strong> View and manage your running and queued jobs on Gautschi.</li>
<li>
<strong>News Feed:</strong> Stay updated with the latest Gautschi news, outages, and announcements.</li>
<li>
<strong>Partition Status:</strong> Monitor the current state of partitions/queues on Gautschi.</li>
<li>
<strong>My Jobs Page:</strong> Re-designed page to show detailed job information for your jobs and jobs in your group(s) as well as job management.</li>
<li>
<strong>Performance Metrics Page:</strong> Analyze your job performance and resource utilization patterns over time.</li>
</ul>
<h3>How will this impact you?</h3>
<ul>
<li>All Slurm jobs on Gautschi (including jobs that were already submitted through Open OnDemand before this maintenance) will continue and will <strong>NOT</strong> be impacted.</li>
<li>All functions related to Open OnDemand including login will be unavailable during the maintenance.</li>
</ul>
<p>The Gautschi Open OnDemand service will return to full production by 5:00pm EDT on Monday, June 30, 2025.</p>
<p>Please submit a ticket through RCAC Help Desk (<a href="mailto:rcac-help@purdue.edu">rcac-help@purdue.edu</a>) if you have any questions or suggestions.</p>
]]></description>
				<pubDate>Mon, 30 Jun 2025 09:00:00 -0400</pubDate>
									<category>Maintenance</category>
							</item>
					<item>
				<title><![CDATA[Gautschi Support Hour now available for system users]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2286</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2286</guid>
				<description><![CDATA[<p>The Rosen Center for Advanced Computing (RCAC) is excited to announce special-purpose support and training times for Gautschi system users, the Gautschi Support Hour. Much like our Anvil Support Hour, the Gautschi Support Hour will be a designated, open-door meeting time specifically dedicated to helping Gautschi users with any issues they may be facing.</p>
<h3>Gautschi Support Hour:</h3>
<p><strong>When:</strong> Every <img width="375" style="padding:10px;" class="float-right" alt="Image description" src="https://www.rcac.purdue.edu/files/Gautschi-Ribbon-Cutting/1W5A2763-Enhanced-NR.jpg" />Friday from 2–3:30 p.m., beginning June 27 and running through August 22</p>
<p><strong>Where:</strong> Hall of Data Science and AI (DSAI), Room 1012</p>
<p><strong>Why:</strong> To provide Gautschi users with face-to-face support and training from our AI and HPC experts, helping users to take full advantage of everything the cutting-edge system has to offer.</p>
<p>The Gautschi supercomputer was designed to support artificial intelligence (AI) research at Purdue in addition to traditional HPC workloads. Taking advantage of Gautschi Support Hour will help ensure users maximize their workflow efficiency, learning best practices and technical skills that can be immediately implemented into their research. This is especially pertinent for Gautschi users, as this system has the largest proportion of “non-traditional HPC users” of any RCAC resource. Some examples of what Gautschi Support Hour can help with:</p>
<ul>
<li>Enable use of Gautschi's powerful Nvidia GPUs.</li>
<li>Understand changes to the Slurm scheduling configuration from previous clusters.</li>
<li>Understand workflow automation tooling for data-centric applications.</li>
<li>Understand interactive computing with our Gateway (OnDemand) portal.</li>
<li>Or any question related to running scientific workloads on Gautschi.</li>
</ul>
<p>Gautschi Support Hours not only give users the opportunity to tackle any issues they may have, but also allow RCAC to collect feedback about the system. This feedback will be indispensable for making adjustments to the cluster and improving overall user experience and satisfaction.</p>
<p>Gautschi Support Hour is open to all Gautschi users. Beginning on June 27, this office-hour style support session will take place every Friday from 2–3:30 p.m. in the Hall of Data Science and AI (DSAI), Room 1012. Users can come and speak directly with RCAC’s AI and HPC experts, who will be on-site and ready to offer assistance and guidance. For now, Gautschi Support Hour is scheduled to run for two months, through August 22. If demand is high and the support sessions prove to be a success, this timeline may be extended further. If you have any questions about Gautschi Support Hour, please contact <a href="mailto:rcac-help@purdue.edu">rcac-help@purdue.edu</a>. Otherwise, we look forward to seeing you on Fridays!</p>
<p><em>Written by: Jonathan Poole, poole43@purdue.edu</em></p>
]]></description>
				<pubDate>Fri, 20 Jun 2025 11:23:26 -0400</pubDate>
									<category>Science Highlights</category>
							</item>
			</channel>
</rss>