<?xml version="1.0" encoding="UTF-8" ?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:media="http://search.yahoo.com/mrss/">
	<channel>
		<title>RCAC - Announcements</title>
		<link>https://rcac.purdue.edu/index.php/news/rss/Announcements</link>
		<description><![CDATA[News and announcements from Purdue's Rosen Center for Advanced Computing (RCAC).]]></description>
		<atom:link href="https://rcac.purdue.edu/index.php/news/rss/Announcements" rel="self" type="application/rss+xml" />
		<language>en</language>
		<lastBuildDate>Tue, 07 Apr 2026 08:56:02 EDT</lastBuildDate>
					<item>
				<title><![CDATA[Scheduled RCAC Maintenance – April 22–23 (All Systems and Research Network Unavailable)]]></title>
				<link>https://rcac.purdue.edu/index.php/news/7655</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/7655</guid>
				<description><![CDATA[<p>Dear Research Computing Community,</p>
<p>As part of the ongoing effort to upgrade data center capacity for future computing needs, maintenance is scheduled from Wednesday, April 22 at 6 AM through Thursday, April 23 at 5 PM. This will affect all RCAC systems and the Research Network.</p>
<p>During this window, <strong>all RCAC systems and research network services will be unavailable</strong>, including:</p>
<ul>
<li>All computing clusters, including Bell, Negishi, Gautschi, Scholar, Rowdy, Gilbreth, Hammer, and Anvil</li>
<li>All data storage systems, including Data Depot, Fortress, Anvil Ceph storage, and scratch and home storage on clusters, as well as the Research Network and ScienceDMZ</li>
<li>Gateway services including Hubzero, GenAI Studio, Anvil GPT</li>
<li>
<a href="http://www.rcac.purdue.edu">www.rcac.purdue.edu</a>
</li>
<li>Geddes</li>
<li>Globus</li>
</ul>
<p><strong>How does this maintenance impact you?</strong></p>
<ul>
<li>Any Slurm jobs requesting a walltime that would extend past the start of the maintenance will not start and will remain in the queue until after maintenance is complete.</li>
<li>All active sessions and jobs running on affected systems will be preempted at the start of the outage;
any queued jobs will not begin until services are restored.</li>
<li>Access to login nodes, storage systems, and web portals will be unavailable throughout the downtime.</li>
<li>Automated data workflows that rely on affected systems (e.g., rsync, data pipelines, archive processes) will not function until systems are back online.</li>
<li>Globus transfers may time out during the maintenance window.</li>
</ul>
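<p>As a rough sketch of the first two points above, a job finishes before the maintenance only if its start time plus its requested walltime falls before the maintenance start; otherwise it is held in the queue. The helper function below is a hypothetical illustration (GNU <code>date</code> assumed):</p>

```shell
# Does a job starting at $1 (Unix epoch) with a $2-hour walltime
# end at or before the maintenance start at $3 (Unix epoch)?
fits_before_maintenance() {
  end_epoch=$(( $1 + $2 * 3600 ))
  if [ "$end_epoch" -le "$3" ]; then echo yes; else echo no; fi
}

maint=$(date -d "2026-04-22 06:00" +%s)   # maintenance start (GNU date assumed)
fits_before_maintenance $(( maint - 48*3600 )) 24 "$maint"   # prints "yes"
fits_before_maintenance $(( maint - 48*3600 )) 72 "$maint"   # prints "no"
```

A job submitted 48 hours ahead with a 24-hour walltime can still run; the same submission with a 72-hour walltime would cross the window and stay queued.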
<p><strong>To prepare for this maintenance, we suggest that you:</strong></p>
<ul>
<li>Download any needed data or scripts before 6:00 AM on April 22.</li>
<li>Prepare instrumentation devices for the Data Depot being unavailable.</li>
</ul>
<p>We appreciate your understanding and cooperation as we complete these necessary upgrades to improve reliability and performance across RCAC infrastructure. For assistance, questions, or concerns contact <a href="mailto:rcac-help@purdue.edu">rcac-help@purdue.edu</a>.</p>
<p>Best regards,
Rosen Center for Advanced Computing (RCAC) / Purdue IT</p>
]]></description>
				<pubDate>Wed, 22 Apr 2026 06:00:00 -0400</pubDate>
									<category>Announcements</category>
							</item>
					<item>
				<title><![CDATA[Bioinformatics and genomics modules available for Anvil users]]></title>
				<link>https://rcac.purdue.edu/index.php/news/7643</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/7643</guid>
				<description><![CDATA[<p>Anvil users <img width="400" style="padding:10px;" class="float-right" alt="Bioinformatics image containing DNA strand and circuit board lines" src="https://www.rcac.purdue.edu/files/RCAC-Stories/2025-Bionformatics-Review/Bioinfo_resize.jpg" /> working in bioinformatics and genomics have access to over <strong>750 pre-built software modules</strong> through the biocontainers framework, covering tools for sequence alignment, genome assembly, variant calling, RNA-seq analysis, and more. To get started, load the biocontainers module and explore available software with <code>module load biocontainers</code> followed by <code>module avail</code>.</p>
<p>A full catalog of available modules with usage instructions is at the <a href="https://biocontainer-doc.readthedocs.io/">Biocontainers Documentation</a> site. For step-by-step guides on running common genomics workflows on HPC systems, visit the <a href="https://rcac-bioinformatics.github.io/">RCAC Bioinformatics Tutorials</a> site, which includes documentation for tools like HiFiasm, BRAKER3, Trinity, and Nextflow, along with best practices for job submission and resource optimization. Full workshop materials are also available for <a href="https://rcac-bioinformatics.github.io/rnaseq-analysis/">RNA-seq analysis</a>, <a href="https://rcac-bioinformatics.github.io/genome-assembly/">genome assembly</a>, and <a href="https://rcac-bioinformatics.github.io/genome-annotation/">genome annotation</a>. For upcoming training sessions and community programs, including the biweekly Genomics Exchange discussion series and the <a href="https://midwestbioinformatics.org/">Midwest Bioinformatics Showcase</a> seminar series, see the <a href="https://www.rcac.purdue.edu/services/cbs">Computational Biology Services</a> page. For questions or support with bioinformatics workflows on Anvil, contact <a href="mailto:rcac-help@purdue.edu">rcac-help@purdue.edu</a>.</p>
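<p>As a sketch, a batch job on Anvil using one of these modules might look like the following; the account, partition, walltime, and the <code>blast</code> tool module are hypothetical placeholders, so confirm actual names with <code>module avail</code> and your allocation details:</p>

```shell
#!/bin/bash
#SBATCH -A myalloc         # hypothetical Anvil allocation name
#SBATCH -p shared          # hypothetical partition
#SBATCH -t 01:00:00
#SBATCH --ntasks=1

module load biocontainers  # enable the pre-built bioinformatics tool modules
module load blast          # hypothetical example: one of the ~750 tool modules

blastn -version            # the containerized tool is now on PATH
```
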
]]></description>
				<pubDate>Mon, 23 Mar 2026 00:00:00 -0400</pubDate>
									<category>Announcements</category>
							</item>
					<item>
				<title><![CDATA[CMS Compute and Storage downtime]]></title>
				<link>https://rcac.purdue.edu/index.php/news/7623</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/7623</guid>
				<description><![CDATA[<p>On Wednesday afternoon, March 4th, 2026, the '2550' data center, which hosts the CMS Tier-2, will undergo a planned full power outage needed for upgrades.</p>
<p>This will affect the following services:</p>
<ul>
<li>CMS Front-End (aka 'Login') nodes;</li>
<li>EOS storage;</li>
<li>XCache;</li>
<li>CVMFS;</li>
<li>Hammer cluster SLURM jobs;</li>
<li>Analysis Facility will operate at reduced capacity:
<ul>
<li>fewer CPUs, RAM and GPUs will be available;</li>
<li>CVMFS and EOS mounts will not be available;</li>
<li>XCache will not be usable.</li>
</ul>
</li>
</ul>
<p>Given the scale of the event, we have scheduled a 'full downtime' for our computing resources.</p>
<p>Users will be able to do some interactive work in the AF and the Community Clusters (other than 'Hammer'), since home directories and Depot storage are not expected to be affected (they are hosted in the MATH data center). However, users are strongly advised not to depend on the reliability of any services during the power outage period.</p>
<p>The current plan is to have all machines powered down before the end of day (5pm) on Wed. March 4th, and then have them powered back up by start of business (9am) on Thu. March 5th.</p>
<p>Please plan your use of the resources accordingly.
CMS Support Team</p>
]]></description>
				<pubDate>Mon, 02 Mar 2026 08:00:00 -0500</pubDate>
									<category>Announcements</category>
							</item>
					<item>
				<title><![CDATA[Purdue AI Research Showcase Student Poster Session Callout]]></title>
				<link>https://rcac.purdue.edu/index.php/news/7618</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/7618</guid>
				<description><![CDATA[<p>The Institute for Physical AI, in partnership with the Rosen Center for Advanced Computing, is organizing the <a href="https://www.rcac.purdue.edu/news/7612">Purdue AI Research Showcase</a>. This campus-wide event is designed to bring together AI experts, practitioners, and learners to showcase the groundbreaking work emerging at Purdue, share innovations from industry partners, and explore new opportunities for collaboration to solve real-world challenges.</p>
<p>During the Purdue AI Research Showcase, IPAI and RCAC will be hosting a poster session. We are calling on all Purdue researchers to <a href="https://purdue.ca1.qualtrics.com/jfe/form/SV_9LUIiPI82uebGaW">submit your posters</a> today!</p>
<h3>Poster Session Callout:</h3>
<p><strong>Poster Topic:</strong> AI <img width="400" style="padding:10px;" class="float-right" alt="Woman scientist presenting research at poster session" src="https://www.rcac.purdue.edu/files/cisymposium/IPAI-Purdue-AI-Research-Showcase-April-2026/1W5A2806.jpg" />
research and applications across Purdue—new models, systems, and interdisciplinary breakthroughs solving real‑world challenges.  Submissions can include early-stage ideas, works in progress, group projects, or class assignments, not only completed research.</p>
<p><strong>Poster presentation date:</strong> April 14-15, 2026; Exact Date and Time: TBD</p>
<p><strong>Poster presentation location:</strong> Envision Center, Purdue University West Lafayette Campus</p>
<p><strong>Participation eligibility:</strong> Posters authored or co-authored by undergraduate or graduate students, faculty, and staff from any Purdue campus</p>
<p><strong>Awards:</strong> The best poster will receive travel expense coverage of up to $1,000 for a conference of the authors' choice!</p>
<p><strong>Parameters:</strong> Posters must be 30&quot; wide by 40&quot; long or smaller. Posters may be vertically or horizontally oriented, but the width must not exceed 30&quot;. More information about wide-format printing at the Purdue West Lafayette campus can be found here: <a href="https://it.purdue.edu/facilities/instructionallabs/printing/wide_format_printing.php">https://it.purdue.edu/facilities/instructionallabs/printing/wide_format_printing.php</a></p>
<p>Make sure you <a href="https://purdue.ca1.qualtrics.com/jfe/form/SV_9LUIiPI82uebGaW">submit this survey</a> to ensure your spot in this event! Once we receive your submission you will receive a follow up email with additional details for the event. Submissions are due no later than Monday, April 13th.</p>
<p>We look forward to your participation and learning more about your research in AI, and don't forget to save the date for the <a href="https://www.rcac.purdue.edu/news/7612">Purdue AI Research Showcase</a>!</p>
<div class="my-3 text-center"><img width="650" alt="Purdue AI Research Showcase Save the Date Infographic" src="https://www.rcac.purdue.edu/files/cisymposium/IPAI-Purdue-AI-Research-Showcase-April-2026/Save_the_date.pdf" /></div>
]]></description>
				<pubDate>Mon, 23 Feb 2026 00:00:00 -0500</pubDate>
									<category>Announcements</category>
							</item>
					<item>
				<title><![CDATA[Upcoming February 5 Maintenance – Math Data Center Upgrades and Service Impact]]></title>
				<link>https://rcac.purdue.edu/index.php/news/7588</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/7588</guid>
				<description><![CDATA[<img width="400" style="padding:10px;" class="float-right" alt="MATH data center renovation" src="https://www.rcac.purdue.edu/files/images/mathrenno.png" />
<p>On Thursday, February 5, RCAC will perform planned maintenance in the MATH data center to support cooling upgrades and capacity improvements as part of the ongoing MATH data center renovation project. This renovation will allow Purdue to better support growing AI, data-intensive, and HPC workloads for research. When completed, MATH will see a 32% increase in floor space, a 60% increase in usable power, and a two-fold increase in cooling capacity.</p>
<p>During this maintenance window, several clusters will experience a temporary outage so that hardware can be safely powered down while facility work is performed:</p>
<ul>
<li>
<p>Gautschi, Gilbreth, Negishi, Bell, and Anvil cluster nodes will be powered down.</p>
</li>
<li>
<p>Gilbreth's legacy V100 GPUs, which are well past their expected lifetime, will be decommissioned.</p>
</li>
<li>
<p>Hammer (MATH nodes) and Geddes: a subset of nodes will be powered down, but the services will remain available unless communicated separately.</p>
</li>
</ul>
<h3>How does this maintenance impact you?</h3>
<ul>
<li>
<p>Clusters listed in this message won’t be available to run jobs during the maintenance.</p>
</li>
<li>
<p>Any jobs requesting a walltime which would take them past the start of the maintenance will not start and will remain in the queue until after the maintenance is completed.</p>
</li>
<li>
<p>Users can continue to access their data.</p>
</li>
<li>
<p>GenAI Studio will remain available. This maintenance will position Purdue to support growing computational needs, and users should see long-term benefits in system reliability and in our ability to support future computing and AI resources.</p>
</li>
</ul>
<p>If you have questions about how this outage will affect your work or need support, please contact <a href="mailto:rcac-help@purdue.edu">rcac-help@purdue.edu</a>.</p>
]]></description>
				<pubDate>Mon, 12 Jan 2026 14:30:00 -0500</pubDate>
									<category>Announcements</category>
							</item>
					<item>
				<title><![CDATA[Research Computing Holiday Break]]></title>
				<link>https://rcac.purdue.edu/index.php/news/7572</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/7572</guid>
				<description><![CDATA[<p>Research Computing personnel will observe the university winter break beginning at 12:00 a.m. EST on December 23, 2025, and will resume normal business hours on January 5, 2026. During this time, Research Computing services will continue to be available, but all staff will be on leave.</p>
<p>Research Computing staff members will monitor the status of all computing and data resources in an effort to ensure continuous availability.</p>
<p>Research Computing staff members will monitor the ticketing system throughout the holiday period and answer critical issues and problems. Non-critical user issues and questions will be addressed beginning January 5th, 2026. There will also be no coffee hour consultations during this break.</p>
<p><strong>Scratch file purging (on community clusters with scratch space) will continue as normal during the break, so be sure to archive any files you need from scratch storage. This applies only to scratch storage, not to Data Depot or home directories.</strong></p>
<p>Have a wonderful break, everyone, and we look forward to great things in the new year!</p>
]]></description>
				<pubDate>Tue, 16 Dec 2025 13:00:00 -0500</pubDate>
									<category>Announcements</category>
							</item>
					<item>
				<title><![CDATA[Gilbreth Cluster Modernization]]></title>
				<link>https://rcac.purdue.edu/index.php/news/7275</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/7275</guid>
				<description><![CDATA[<p>As part of an ongoing effort to utilize modern features in the Slurm scheduler and to streamline usage reporting for research groups (something often requested by PIs), the scheduler configurations on the Gilbreth cluster will be modified <a href="https://www.rcac.purdue.edu/news/7274">in an upcoming maintenance</a>. <strong>Users will be required to update their job scripts</strong> to conform to the guidelines described below.</p>
<ul>
<li>All jobs on the cluster will be required to explicitly specify a partition and an account (i.e. your group's name) at submission time. You can find the names of the available partitions and accounts from the <code>showpartitions</code> and <code>slist</code> commands respectively. Any job that does not specify an account <em>and</em> a partition will be rejected at submission time.</li>
<li>All jobs on the cluster must include an explicit memory request through use of the <code>--mem</code> option. Jobs that do not make an explicit memory request will be rejected and a reasonable default will be suggested.</li>
<li>Accounts will be renamed to no longer include a suffix designating the type of resource the account contains. This means that groups with multiple types of resources broken into multiple accounts will be consolidated into a single group.</li>
<li>
<strong>GPU type will be requested by specifying a partition that contains the GPU type, rather than by submitting to an account with a suffix designating it.</strong> For example, jobs previously submitted with <code>-A mylab-k</code> will now be specified as <code>-A mylab -p a100-40gb</code>
</li>
<li>The output of <code>slist</code> and the default output of <code>squeue</code> will be modified to reflect the new scheduler design.</li>
<li>All &quot;shared accounts&quot;, such as <code>standby</code>, that represent resources outside of your typical &quot;group accounts&quot; will continue to exist but will require a different request syntax.
<ul>
<li>Standby will become a Quality of Service (QoS), and jobs that previously ran under the &quot;standby&quot; account will now be submitted to your &quot;group account&quot; and tagged with the standby QoS. For example, if your job previously used the <code>-A standby</code> option, you would now use <code>-A mylab -q standby</code>
</li>
</ul>
</li>
</ul>
<table> <caption>Summary of Changes</caption>
<thead>
<tr>
<th scope="col">Use Case</th>
<th scope="col">Old Syntax</th>
<th scope="col">New Syntax</th>
<th scope="col">What Changed</th>
</tr>
</thead>
<tbody>
<tr>
<td>Submit a job to your group's account</td>
<td><code>sbatch -A mygroup</code></td>
<td><code>sbatch -A mygroup -p &lt;appropriate partition&gt; --mem=50G</code></td>
<td>The partition and memory must be specified.</td>
</tr>
<tr>
<td>Submit a standby job</td>
<td><code>sbatch -A standby</code></td>
<td><code>sbatch -A mygroup -q standby -p &lt;appropriate partition&gt; --mem=50G</code></td>
<td><code>standby</code> is now a QoS instead of an account</td>
</tr>
</tbody>
</table>
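<p>Taken together, a Gilbreth job script under the new scheme might begin like this sketch; the <code>mylab</code> account, <code>a100-40gb</code> partition, <code>--mem</code> request, and standby QoS come from the examples above, while the other resource values and program name are hypothetical:</p>

```shell
#!/bin/bash
#SBATCH -A mylab          # group account (required; no GPU-type suffix anymore)
#SBATCH -p a100-40gb      # partition now selects the GPU type (required)
#SBATCH --mem=50G         # explicit memory request (required)
#SBATCH --gres=gpu:1      # hypothetical GPU count
#SBATCH -t 04:00:00
# Optional: run opportunistically under the standby QoS instead:
# #SBATCH -q standby

srun ./my_gpu_program     # hypothetical application
```
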
<p><strong>How will this affect you?</strong></p>
<ol>
<li>You will need to change your jobscripts and your method of invocation to include the required options outlined above.</li>
<li>If you have any scripts or tooling that rely on the current output of <code>slist</code> or <code>squeue</code>, those scripts will need to be modified to use the new formatted output.</li>
</ol>
<p>You can prepare for this maintenance by reviewing the new Slurm organization in our user guide's <a href="https://www.rcac.purdue.edu/knowledge/gilbreth/run/slurm/queues">Queues Page</a>.</p>
<p>If you have any questions about these upcoming changes, please reach out to us at <a href="mailto:rcac-help@purdue.edu">rcac-help@purdue.edu</a>.</p>
]]></description>
				<pubDate>Tue, 19 Aug 2025 08:00:00 -0400</pubDate>
									<category>Announcements</category>
							</item>
					<item>
				<title><![CDATA[Announcing the Fall 2025 AI/LLM Training Series]]></title>
				<link>https://rcac.purdue.edu/index.php/news/7286</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/7286</guid>
				<description><![CDATA[<h1>Fall 2025 AI/LLM Training Series Preview</h1>
<p>Get ready for a 16-week hands-on training series focused on artificial intelligence, machine learning, and large language models (LLMs). Led by expert instructors, each weekly session builds practical skills for research, development, and deployment.</p>
<p><strong>Weekly Topics &amp; Dates:</strong></p>
<pre><code>Aug 29 – Intro to Python, Numpy, and Pandas (Ashish)
Sep 5 – Python Data Visualization (Ashish)
Sep 12 – Intro to R: Tidyverse, ggplot2, dplyr (Mihir)
Sep 19 – AI Day: Prompt Engineering, RAG, Agents, Finetuning (All)
Sep 26 – ML Fundamentals I: Supervised Learning (Haniye)
Oct 3 – ML Fundamentals II: Unsupervised Learning (Haniye)
Oct 10 – Intro to PyTorch &amp; TensorFlow (Christina)
Oct 17 – Intro to NLP + Hugging Face Transformers (Mihir)
Oct 24 – Intro to LangChain for LLM Apps (Haniye)
Oct 31 – Real-Time LLM Apps with Streamlit &amp; Gradio (TBD)
Nov 7 – Evaluating LLMs: Benchmarks &amp; Metrics (TBD)
Nov 14 – Vector Databases 101: Weaviate, FAISS, Pinecone (Mihir)
Nov 18 – Using LLMs in Scientific Research (TBD)
Nov 22 – Building Research Chatbots (Non-RAG) (TBD)
Dec 5 – Ethics &amp; Governance in Generative AI (Ashish)
Dec 12 – From Data to Deployment: MLOps for LLMs (TBD)
</code></pre>
<p>Bonus Module (Pre-recorded):</p>
<pre><code>Intro to Purdue GenAI Studio UI &amp; API – Hands-on demo and custom RAG training
</code></pre>
<p>Registration and connection details will be announced soon.</p>
<p>Stay tuned for updates and secure your spot in this exciting series!</p>
]]></description>
				<pubDate>Mon, 11 Aug 2025 08:00:00 -0400</pubDate>
									<category>Announcements</category>
							</item>
					<item>
				<title><![CDATA[Negishi Scheduler Modernization]]></title>
				<link>https://rcac.purdue.edu/index.php/news/7245</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/7245</guid>
				<description><![CDATA[<p>As part of an ongoing effort to utilize modern features in the Slurm scheduler and to streamline usage reporting for research groups (something often requested by PIs), the scheduler configurations on the Negishi cluster will be modified <a href="https://www.rcac.purdue.edu/news/7231">in an upcoming maintenance</a>. <strong>Users will be required to update their job scripts</strong> to conform to the guidelines described below.</p>
<ul>
<li>All jobs on the cluster will be required to explicitly specify a partition and an account (i.e. your group's name) at submission time. You can find the names of the available partitions and accounts from the <code>showpartitions</code> and <code>slist</code> commands respectively. Any job that does not specify an account <em>and</em> a partition will be rejected at submission time.</li>
<li>The output of <code>slist</code> and the default output of <code>squeue</code> will be modified to reflect the new scheduler design.</li>
<li>All &quot;shared accounts&quot;, such as <code>standby</code> and <code>highmem</code>, that represent resources outside of your typical &quot;group accounts&quot; will continue to exist but will require a different request syntax.
<ul>
<li>Standby will become a Quality of Service (QoS), and jobs that previously ran under the &quot;standby&quot; account will now be submitted to your &quot;group account&quot; and tagged with the standby QoS. For example, if your job previously used the <code>-A standby</code> option, you would now use <code>-A mylab -q standby</code>
</li>
<li>The <code>highmem</code> and <code>gpu</code> shared accounts will become partitions, and jobs that previously ran under them will now be submitted to your &quot;group account&quot; and the appropriate partition; e.g., <code>-A highmem</code> becomes <code>-A mylab -p highmem</code>
</li>
<li>Groups with access to the <code>interactive</code> account will now submit to their &quot;group account&quot; and the <code>interactive</code> partition; e.g., <code>-A interactive</code> becomes <code>-A mylab -p interactive</code>
</li>
</ul>
</li>
</ul>
<table> <caption>Summary of Changes</caption>
<thead>
<tr>
<th scope="col">Use Case</th>
<th scope="col">Old Syntax</th>
<th scope="col">New Syntax</th>
<th scope="col">What Changed</th>
</tr>
</thead>
<tbody>
<tr>
<td>Submit a job to your group's account</td>
<td><code>sbatch -A mygroup</code></td>
<td><code>sbatch -A mygroup -p cpu</code></td>
<td>The <code>cpu</code> partition must be specified.</td>
</tr>
<tr>
<td>Submit a standby job</td>
<td><code>sbatch -A standby</code></td>
<td><code>sbatch -A mygroup -q standby -p cpu</code></td>
<td><code>standby</code> is now a QoS instead of an account</td>
</tr>
<tr>
<td>Submit a highmem job</td>
<td><code>sbatch -A highmem</code></td>
<td><code>sbatch -A mygroup -p highmem</code></td>
<td><code>highmem</code> is now a partition instead of an account</td>
</tr>
<tr>
<td>Submit a gpu job</td>
<td><code>sbatch -A gpu</code></td>
<td><code>sbatch -A mygroup -p gpu</code></td>
<td><code>gpu</code> is now a partition instead of an account</td>
</tr>
<tr>
<td>Submit an interactive job</td>
<td><code>sbatch -A interactive</code></td>
<td><code>sbatch -A mygroup -p interactive</code></td>
<td><code>interactive</code> is now a partition instead of an account</td>
</tr>
</tbody>
</table>
<p><strong>How will this affect you?</strong></p>
<ol>
<li>You will need to change your jobscripts and your method of invocation to include the required options outlined above.</li>
<li>If you have any scripts or tooling that rely on the current output of <code>slist</code> or <code>squeue</code>, those scripts will need to be modified to use the new formatted output.</li>
</ol>
<p>You can prepare for this maintenance by reviewing the new Slurm organization in our user guide's <a href="https://www.rcac.purdue.edu/knowledge/negishi/run/slurm/new-queues">Queues Page</a>.</p>
<p>If you have any questions about these upcoming changes, please reach out to us at <a href="mailto:rcac-help@purdue.edu">rcac-help@purdue.edu</a>.</p>
]]></description>
				<pubDate>Tue, 29 Jul 2025 08:00:00 -0400</pubDate>
									<category>Announcements</category>
							</item>
					<item>
				<title><![CDATA[Bell Scheduler Modernization]]></title>
				<link>https://rcac.purdue.edu/index.php/news/7228</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/7228</guid>
				<description><![CDATA[<p>As part of an ongoing effort to utilize modern features in the Slurm scheduler and to streamline usage reporting for research groups (something often requested by PIs), the scheduler configurations on the Bell cluster will be modified <a href="https://www.rcac.purdue.edu/news/7231">in an upcoming maintenance</a>. <strong>Users will be required to update their job scripts</strong> to conform to the guidelines described below.</p>
<ul>
<li>All jobs on the cluster will be required to explicitly specify a partition and an account (i.e. your group's name) at submission time. You can find the names of the available partitions and accounts from the <code>showpartitions</code> and <code>slist</code> commands respectively. Any job that does not specify an account <em>and</em> a partition will be rejected at submission time.</li>
<li>The output of <code>slist</code> and the default output of <code>squeue</code> will be modified to reflect the new scheduler design.</li>
<li>All &quot;shared accounts&quot;, such as <code>standby</code> and <code>highmem</code>, that represent resources outside of your typical &quot;group accounts&quot; will continue to exist but will require a different request syntax.
<ul>
<li>Standby will become a Quality of Service (QoS), and jobs that previously ran under the &quot;standby&quot; account will now be submitted to your &quot;group account&quot; and tagged with the standby QoS. For example, if your job previously used the <code>-A standby</code> option, you would now use <code>-A mylab -q standby</code>
</li>
<li>The <code>highmem</code> and <code>gpu</code> shared accounts will become partitions, and jobs that previously ran under them will now be submitted to your &quot;group account&quot; and the appropriate partition; e.g., <code>-A highmem</code> becomes <code>-A mylab -p highmem</code>
</li>
</ul>
</li>
</ul>
<table> <caption>Summary of Changes</caption>
<thead>
<tr>
<th scope="col">Use Case</th>
<th scope="col">Old Syntax</th>
<th scope="col">New Syntax</th>
<th scope="col">What Changed</th>
</tr>
</thead>
<tbody>
<tr>
<td>Submit a job to your group's account</td>
<td><code>sbatch -A mygroup</code></td>
<td><code>sbatch -A mygroup -p cpu</code></td>
<td>The <code>cpu</code> partition must be specified.</td>
</tr>
<tr>
<td>Submit a standby job</td>
<td><code>sbatch -A standby</code></td>
<td><code>sbatch -A mygroup -q standby -p cpu</code></td>
<td><code>standby</code> is now a QoS instead of an account</td>
</tr>
<tr>
<td>Submit a highmem job</td>
<td><code>sbatch -A highmem</code></td>
<td><code>sbatch -A mygroup -p highmem</code></td>
<td><code>highmem</code> is now a partition instead of an account</td>
</tr>
<tr>
<td>Submit a gpu job</td>
<td><code>sbatch -A gpu</code></td>
<td><code>sbatch -A mygroup -p gpu</code></td>
<td><code>gpu</code> is now a partition instead of an account</td>
</tr>
<tr>
<td>Submit a multigpu job</td>
<td><code>sbatch -A multigpu</code></td>
<td><code>sbatch -A mygroup -p multigpu</code></td>
<td><code>multigpu</code> is now a partition instead of an account</td>
</tr>
</tbody>
</table>
<p><strong>How will this affect you?</strong></p>
<ol>
<li>You will need to change your jobscripts and your method of invocation to include the required options outlined above.</li>
<li>If you have any scripts or tooling that rely on the current output of <code>slist</code> or <code>squeue</code>, those scripts will need to be modified to use the new formatted output.</li>
</ol>
<p>You can prepare for this maintenance by reviewing the new Slurm organization in our user guide's <a href="https://www.rcac.purdue.edu/knowledge/bell/run/slurm/new-queues">Queues Page</a>.</p>
<p>If you have any questions about these upcoming changes, please reach out to us at <a href="mailto:rcac-help@purdue.edu">rcac-help@purdue.edu</a>.</p>
]]></description>
				<pubDate>Tue, 29 Jul 2025 08:00:00 -0400</pubDate>
									<category>Announcements</category>
							</item>
					<item>
				<title><![CDATA[Gilbreth Maintenance Test Environment Available to Users]]></title>
				<link>https://rcac.purdue.edu/index.php/news/7056</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/7056</guid>
				<description><![CDATA[<p>As part of <a href="https://www.rcac.purdue.edu/news/6965">next week's Gilbreth maintenance</a>, we will be upgrading the operating system from CentOS 7 to Rocky Linux 9, which requires us to rebuild the applications hosted on Gilbreth. As part of this application rebuild, <a href="https://www.rcac.purdue.edu/news/7050">we will be removing old and unused versions of our applications and upgrading many of them to newer versions.</a></p>
<p>To aid in that transition, we have made a test environment available so users can begin rebuilding their applications under the new toolchains and exploring the new software stack. We expect this environment will not change much ahead of next week's maintenance, but we may make small changes to the application stack before then.</p>
<p>This test environment can be accessed with the command <code>ssh gilbreth-fe05</code> from any Gilbreth front-end node. The Slurm queues you are used to should be present and will submit to test nodes. Wait times may be longer for these test nodes because only a limited number of nodes are available for testing.</p>
<p>If you encounter issues or have questions about this maintenance, please reach out to us at <a href="mailto:rcac-help@purdue.edu">rcac-help@purdue.edu</a>.</p>
]]></description>
				<pubDate>Tue, 25 Feb 2025 08:00:00 -0500</pubDate>
									<category>Announcements</category>
							</item>
					<item>
				<title><![CDATA[Gilbreth Maintenance Application Version Changes]]></title>
				<link>https://rcac.purdue.edu/index.php/news/7050</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/7050</guid>
				<description><![CDATA[<p>As part of Gilbreth's <a href="https://www.rcac.purdue.edu/news/6965">upcoming maintenance</a>, we will be upgrading the operating system from CentOS 7 to Rocky Linux 9, which requires us to rebuild the applications hosted on Gilbreth. As part of this application rebuild, we will be removing old and unused versions of our applications and upgrading many of them to newer versions. We have compiled a list of the affected software below.</p>
<p>If you have any questions about the upcoming maintenance, please reach out to us at <a href="mailto:rcac-help@purdue.edu">rcac-help@purdue.edu</a>.</p>
<hr />
<table>
<thead>
<tr>
<th scope="col">Application</th>
<th scope="col">Current Versions</th>
<th scope="col">New Versions</th>
</tr>
</thead>
<tbody>
<tr>
<td>Abaqus</td>
<td>2017, 2018, 2019, 2020, 2021, 2022</td>
<td>2023, 2024</td>
</tr>
<tr>
<td>Alphafold</td>
<td>2.1.1, 2.2.0, 2.3.0, 2.3.1, 2.3.2</td>
<td>2.3.2</td>
</tr>
<tr>
<td>Amber</td>
<td>16, 24</td>
<td>24</td>
</tr>
<tr>
<td>Anaconda</td>
<td>2024.02-py311</td>
<td>Removed. Note: the anaconda module has been replaced with the <code>conda</code> module.</td>
</tr>
<tr>
<td>Ansys</td>
<td>19.2, 2019R3, 2020R1, 2021R2, 2022R1, 2023R1, 2023R2</td>
<td>2023R2, 2024R2</td>
</tr>
<tr>
<td>Archivemount</td>
<td>0.8.12</td>
<td>0.8.12, 0.9.1</td>
</tr>
<tr>
<td>AWS-CLI</td>
<td>2.4.15</td>
<td>2.24.5</td>
</tr>
<tr>
<td>Boost</td>
<td>1.79</td>
<td>1.79.0, 1.85.0</td>
</tr>
<tr>
<td>BZip2</td>
<td>1.0.8</td>
<td>1.0.8</td>
</tr>
<tr>
<td>Cactus</td>
<td>2.2.3-gpu, 2.4.0-gpu</td>
<td>2.2.3-gpu, 2.4.0-gpu</td>
</tr>
<tr>
<td>CDO</td>
<td>1.9.5</td>
<td>1.9.5, 2.3.0</td>
</tr>
<tr>
<td>Chroma</td>
<td>2018-cuda9.0-ubuntu16.04-volta-openmpi, 2020.06, 2021.04</td>
<td>2021.04</td>
</tr>
<tr>
<td>Cmake</td>
<td>3.15.4, 3.20.6</td>
<td>3.26.5, 3.30.2</td>
</tr>
<tr>
<td>Conda</td>
<td>2024.09</td>
<td>2024.09</td>
</tr>
<tr>
<td>Cuda</td>
<td>8.0.61, 9.0.176, 10.0.130, 10.2.89, 11.0.3, 11.2.0, 11.7.0, 12.1.1</td>
<td>12.1.1, 12.6.0</td>
</tr>
<tr>
<td>Cuda-quantum</td>
<td>0.4.0, 0.8.0</td>
<td>0.8.0, 0.9.1</td>
</tr>
<tr>
<td>Cudnn</td>
<td>cuda-8.0_6.0, cuda-8.0_7.1, cuda-9.0_7.3, cuda-9.0_7.4, cuda-10.0_7.5, cuda-10.2_8.0, cuda-11.0_8.0, cuda-11.2_8.1, cuda-11.7_8.6, cuda-12.1_8.9</td>
<td>cuda-12.1_8.9, cuda-12_9.2</td>
</tr>
<tr>
<td>Cuquantum-appliance</td>
<td>23.03</td>
<td>23.03, 24.08</td>
</tr>
<tr>
<td>Curl</td>
<td>7.79.0</td>
<td>8.11.1</td>
</tr>
<tr>
<td>Duckdb</td>
<td>1.0.0</td>
<td>1.1.3</td>
</tr>
<tr>
<td>Envi</td>
<td>5.5.2</td>
<td>Removed</td>
</tr>
<tr>
<td>ffmpeg</td>
<td>4.2.1</td>
<td>6.1.1</td>
</tr>
<tr>
<td>fftw</td>
<td>3.3.10</td>
<td>3.3.10</td>
</tr>
<tr>
<td>Gamess</td>
<td>17.09-r2-libcchem</td>
<td>17.09-r2-libcchem</td>
</tr>
<tr>
<td>Gaussian16</td>
<td>A.03, B.01-gpu</td>
<td>Removed</td>
</tr>
<tr>
<td>Gaussview</td>
<td>6.0.16</td>
<td>Removed</td>
</tr>
<tr>
<td>gcc</td>
<td>4.8.5,6.3.0,9.3.0,12.3</td>
<td>11.5, 13.3.0</td>
</tr>
<tr>
<td>gdal</td>
<td>2.4.2, 3.5.3-grib, 3.5.3</td>
<td>3.9.2</td>
</tr>
<tr>
<td>geos</td>
<td>3.7.2, 3.9.4</td>
<td>3.13.0</td>
</tr>
<tr>
<td>gettext</td>
<td>0.20.1</td>
<td>0.23.1</td>
</tr>
<tr>
<td>gmake</td>
<td>4.2.1</td>
<td>4.3</td>
</tr>
<tr>
<td>gmp</td>
<td>6.1.2</td>
<td>6.1.2, 6.3.0</td>
</tr>
<tr>
<td>gmt</td>
<td>5.4.4</td>
<td>5.4.4, 6.5.0</td>
</tr>
<tr>
<td>gnuplot</td>
<td>5.2.7</td>
<td>6.0.2</td>
</tr>
<tr>
<td>grads</td>
<td>2.2.1</td>
<td>2.2.1</td>
</tr>
<tr>
<td>Gromacs</td>
<td>2018.2, 2020.2, 2021, 2021.3, 2024.1</td>
<td>2024.3, NGC:2023.02</td>
</tr>
<tr>
<td>gsl</td>
<td>2.4</td>
<td>2.7.1</td>
</tr>
<tr>
<td>Hadoop</td>
<td>2.7.7</td>
<td>Removed</td>
</tr>
<tr>
<td>hdf</td>
<td>4.2.14</td>
<td>4.2.15</td>
</tr>
<tr>
<td>hdf5</td>
<td>1.14.1</td>
<td>1.14.3</td>
</tr>
<tr>
<td>Hyper-shell</td>
<td>1.8.3, 2.0.2, 2.4.0, 2.5.1</td>
<td>2.5.2</td>
</tr>
<tr>
<td>idl</td>
<td>8.7</td>
<td>9.1</td>
</tr>
<tr>
<td>impi</td>
<td>2017.1.132, 2019.5.281</td>
<td>2021.12</td>
</tr>
<tr>
<td>intel</td>
<td>17.0.1.132, 19.0.5.281</td>
<td>2024.2.1</td>
</tr>
<tr>
<td>Jax</td>
<td>0.4.31</td>
<td>0.5.0</td>
</tr>
<tr>
<td>Julia</td>
<td>v1.5.0, v2.4.2, 1.7.1, 1.8.5, 1.9.3</td>
<td>1.11.1, v2.4.2</td>
</tr>
<tr>
<td>Jupyterhub</td>
<td>2.0.0</td>
<td>5.2.1</td>
</tr>
<tr>
<td>Lammps</td>
<td>10Feb2021, 15Jun2020, 24Oct2018, 29Oct2020</td>
<td>patch_15Jun2023, Aug292024</td>
</tr>
<tr>
<td>libiconv</td>
<td>1.16</td>
<td>1.17</td>
</tr>
<tr>
<td>libtiff</td>
<td>4.0.10</td>
<td>4.6.0</td>
</tr>
<tr>
<td>libxml2</td>
<td>2.9.9</td>
<td>2.10.3</td>
</tr>
<tr>
<td>Mathematica</td>
<td>11.3, 12.1, 12.3, 13.1, 14.1</td>
<td>14.1, 14.2</td>
</tr>
<tr>
<td>Matlab</td>
<td>R2017a, R2018a, R2019a, R2020a, R2022a, R2023a</td>
<td>R2024a</td>
</tr>
<tr>
<td>milc</td>
<td>quda0.8-patch4Oct2017</td>
<td>quda1.1.0-November2022</td>
</tr>
<tr>
<td>mpc</td>
<td>1.1.0</td>
<td>1.3.1</td>
</tr>
<tr>
<td>mpfr</td>
<td>3.1.6</td>
<td>4.2.1</td>
</tr>
<tr>
<td>namd</td>
<td>2.13-multinode, 2.13-singlenode, 2.13, 3.0-alpha3-singlenode</td>
<td>3.0</td>
</tr>
<tr>
<td>ncl</td>
<td>6.4.0</td>
<td>6.6.2</td>
</tr>
<tr>
<td>nco</td>
<td>4.6.7</td>
<td>5.2.4</td>
</tr>
<tr>
<td>ncurses</td>
<td>6.1</td>
<td>6.5</td>
</tr>
<tr>
<td>netcdf-c</td>
<td>4.5.0, 4.7.0, 4.9.2</td>
<td>4.9.2</td>
</tr>
<tr>
<td>netcdf-cxx4</td>
<td>4.3.0, 4.3.1</td>
<td>4.3.1</td>
</tr>
<tr>
<td>netcdf-fortran</td>
<td>4.4.4, 4.5.2, 4.6.1</td>
<td>4.6.1</td>
</tr>
<tr>
<td>netlib-lapack</td>
<td>3.6.0, 3.8.0</td>
<td>3.11.0</td>
</tr>
<tr>
<td>nvhpc</td>
<td>20.7, 20.9, 20.11, 21.5, 21.9, 22.7, 23.5</td>
<td>23.5, 24.7</td>
</tr>
<tr>
<td>Openblas</td>
<td>0.2.20, 0.3.7, 0.3.21</td>
<td>0.3.21, 0.3.27</td>
</tr>
<tr>
<td>Openmpi</td>
<td>3.1.5-gpu-cuda10, 2.1.6-gpu-cuda11, 4.1.5-gpu-cuda11, 4.1.5-gpu-cuda12</td>
<td>4.1.6, 5.0.5</td>
</tr>
<tr>
<td>Panoply</td>
<td>4.11.0</td>
<td>5.0.5</td>
</tr>
<tr>
<td>Parabricks</td>
<td>4.0.0-1</td>
<td>4.4.0-1</td>
</tr>
<tr>
<td>Paraview</td>
<td>5.9.0</td>
<td>5.11.0</td>
</tr>
<tr>
<td>proj</td>
<td>5.2.0, 8.2.1</td>
<td>9.4.1</td>
</tr>
<tr>
<td>protobuf</td>
<td>3.0.2</td>
<td>25.6</td>
</tr>
<tr>
<td>PyTorch</td>
<td>20.02-py3, 20.03-py3, 20.06-py3, 20.11-py3, 20.12-py3, 21.06-py3, 21.09-py3</td>
<td>25.01</td>
</tr>
<tr>
<td>qemu</td>
<td>2.10.1</td>
<td>Removed</td>
</tr>
<tr>
<td>qmcpack</td>
<td>3.5.0</td>
<td>3.16.0</td>
</tr>
<tr>
<td>qt</td>
<td>5.12.5</td>
<td>5.15.15</td>
</tr>
<tr>
<td>Quantum-Espresso</td>
<td>v6.6a1, v6.7, v7.1</td>
<td>7.3.1</td>
</tr>
<tr>
<td>R</td>
<td>3.6.1, 3.6.3, 4.0.0, 4.1.2, 4.2.2, 4.3.1</td>
<td>4.4.1</td>
</tr>
<tr>
<td>Rapidsai</td>
<td>0.12, 0.13, 0.14, 0.15, 0.16, 0.17, 21.06, 21.10</td>
<td>23.06</td>
</tr>
<tr>
<td>Relion</td>
<td>2.1.b1, 3.0.8, 3.1.0, 3.1.2, 3.1.3, 4.0.1, 5.0.0</td>
<td>5.0.0</td>
</tr>
<tr>
<td>RStudio</td>
<td>1.2.1335, 1.3.959, 2021.09, 2022.07, 2023.06</td>
<td>2024.12</td>
</tr>
<tr>
<td>SAS</td>
<td>9.4</td>
<td>9.4</td>
</tr>
<tr>
<td>segalign</td>
<td>0.1.2</td>
<td>0.1.2</td>
</tr>
<tr>
<td>spark</td>
<td>2.4.4</td>
<td>Removed</td>
</tr>
<tr>
<td>sqlite</td>
<td>3.30.1</td>
<td>3.46.0</td>
</tr>
<tr>
<td>tar</td>
<td>1.3.2</td>
<td>1.34</td>
</tr>
<tr>
<td>tcl</td>
<td>8.6.8</td>
<td>8.6.12</td>
</tr>
<tr>
<td>TensorFlow</td>
<td>20.02-tf1-py3, 20.02-tf2-py3, 20.03-tf1-py3, 20.03-tf2-py3, 20.06-tf1-py3, 20.06-tf2-py3, 20.11-tf1-py3, 20.11-tf2-py3, 20.12-tf1-py3, 20.12-tf2-py3, 21.06-tf1-py3, 21.06-tf2-py3, 21.09-tf1-py3, 21.09-tf2-py3</td>
<td>24.10-tf2-py3, 24.11-tf2-py3, 24.12-tf2-py3, 25.01-tf2-py3</td>
</tr>
<tr>
<td>tecplot</td>
<td>360-2017-R3</td>
<td>360-2017-R3, 360-2024-R1</td>
</tr>
<tr>
<td>texlive</td>
<td>20200406</td>
<td>20220321</td>
</tr>
<tr>
<td>tk</td>
<td>8.6.8</td>
<td>8.6.11</td>
</tr>
<tr>
<td>torchani</td>
<td>2021.04</td>
<td>2021.04</td>
</tr>
<tr>
<td>totalview</td>
<td>2017.0.12, 2018.2.6, 2019.1.4, 2021.4.10</td>
<td>2024.4</td>
</tr>
<tr>
<td>ucx</td>
<td>1.13.0</td>
<td>1.17.0</td>
</tr>
<tr>
<td>udunits</td>
<td>2.2.28</td>
<td>2.2.28</td>
</tr>
<tr>
<td>valgrind</td>
<td>3.13.0</td>
<td>3.23.0</td>
</tr>
<tr>
<td>vlc</td>
<td>3.0.9.2</td>
<td>3.0.21</td>
</tr>
<tr>
<td>vmd</td>
<td>1.9.3</td>
<td>1.9.3</td>
</tr>
<tr>
<td>vscode</td>
<td>1.56, 1.59</td>
<td>1.97.2</td>
</tr>
<tr>
<td>xalt</td>
<td>1.1.2</td>
<td>3.1.1</td>
</tr>
<tr>
<td>xz</td>
<td>5.2.4</td>
<td>5.6.4</td>
</tr>
<tr>
<td>zlib</td>
<td>1.2.11</td>
<td>1.2.11, 1.3.1</td>
</tr>
</tbody>
</table>
]]></description>
				<pubDate>Tue, 25 Feb 2025 08:00:00 -0500</pubDate>
									<category>Announcements</category>
							</item>
					<item>
				<title><![CDATA[Bell Maintenance Test Environment Available to Users]]></title>
				<link>https://rcac.purdue.edu/index.php/news/7060</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/7060</guid>
				<description><![CDATA[<p>As part of <a href="https://www.rcac.purdue.edu/news/6966">Bell's maintenance on March 4-5</a>, we will be upgrading the operating system from CentOS 7 to Rocky Linux 8, which requires us to rebuild the applications hosted on Bell. As part of this application rebuild, <a href="https://www.rcac.purdue.edu/news/7059">we will be removing old and unused versions of our applications and upgrading many of them to newer versions.</a></p>
<p>To aid in that transition, we have made a test environment available so users can begin rebuilding their applications under the new toolchains and exploring the new software stack. We expect that this environment will not change much ahead of next week's maintenance, but we may make small changes to the application stack before then.</p>
<p>This test environment can be accessed by adding the Slurm options <code>-A debug --reservation=rocky8test</code> to your current job submission with <code>sbatch</code> or <code>sinteractive</code> from any Bell front-end node. This ensures your job is submitted to the test nodes. Wait times may be longer on these test nodes because only a limited number of nodes are available for testing. The limits on the <code>debug</code> queue are: <strong>1 concurrently running job and 4 job submissions per user, with a maximum walltime of 30 minutes.</strong></p>
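<p>For example, a test session might be started as follows. The walltime and script name are placeholders for this sketch; adjust them to your needs within the <code>debug</code> queue limits:</p>
<pre><code># From a Bell front-end node, request an interactive session on the test nodes
sinteractive -A debug --reservation=rocky8test -t 00:30:00

# Or submit a batch job to the test nodes (replace my_job.sh with your script)
sbatch -A debug --reservation=rocky8test -t 00:30:00 my_job.sh
</code></pre>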
<p>If you encounter issues or have questions about this maintenance, please reach out to us at <a href="mailto:rcac-help@purdue.edu">rcac-help@purdue.edu</a>.</p>
]]></description>
				<pubDate>Fri, 21 Feb 2025 10:00:00 -0500</pubDate>
									<category>Announcements</category>
							</item>
					<item>
				<title><![CDATA[Bell Maintenance Application Version Announcement]]></title>
				<link>https://rcac.purdue.edu/index.php/news/7059</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/7059</guid>
				<description><![CDATA[<p>As part of Bell's <a href="https://www.rcac.purdue.edu/news/6966">upcoming maintenance</a>, we will be upgrading the operating system from CentOS 7 to Rocky Linux 8, which requires us to rebuild the applications hosted on Bell. As part of this application rebuild, we will be removing old and unused versions of our applications and upgrading many of them to newer versions. We have compiled a list of the affected software below.</p>
<p>If you have any questions about the upcoming maintenance, please reach out to us at <a href="mailto:rcac-help@purdue.edu">rcac-help@purdue.edu</a>.</p>
<hr />
<table>
<thead>
<tr>
<th scope="col">Application</th>
<th scope="col">Current Versions (Ref)</th>
<th scope="col">New Versions</th>
</tr>
</thead>
<tbody>
<tr>
<td>abaqus</td>
<td>2019;2020;2021;2022;2023;</td>
<td>2024</td>
</tr>
<tr>
<td>anaconda</td>
<td>2019.10-py27;2020.02-py37;2020.11-py38;2024.02-py311;</td>
<td>Removed. Note: the anaconda module has been replaced with the conda module.</td>
</tr>
<tr>
<td>ansys</td>
<td>2019R3;2020R1;2021R2;2022R1;2022R2;2023R1;2023R2;</td>
<td>2023R1, 2024R2</td>
</tr>
<tr>
<td>ansysem</td>
<td>2020r1;2021r2;</td>
<td>Removed.</td>
</tr>
<tr>
<td>aocc</td>
<td>2.1.0;2.1;</td>
<td>Removed.</td>
</tr>
<tr>
<td>aws-cli</td>
<td>2.4.15;</td>
<td>2.9.7</td>
</tr>
<tr>
<td>bbftp</td>
<td>3.2.1;</td>
<td>Removed.</td>
</tr>
<tr>
<td>biocontainers</td>
<td>default;</td>
<td>No change.</td>
</tr>
<tr>
<td>boost</td>
<td>1.68.0;1.70.0;</td>
<td>1.85.0</td>
</tr>
<tr>
<td>bzip2</td>
<td>1.0.8;</td>
<td>1.0.8</td>
</tr>
<tr>
<td>cdo</td>
<td>1.9.5;</td>
<td>2.4.4</td>
</tr>
<tr>
<td>cmake</td>
<td>3.18.2;3.20.6;3.30.1;</td>
<td>3.30.5</td>
</tr>
<tr>
<td>comsol</td>
<td>5.3a;5.4;5.5_b359;5.6;6.0;6.1;6.2;</td>
<td>6.2</td>
</tr>
<tr>
<td>conda</td>
<td>2024.09;</td>
<td>2025.02</td>
</tr>
<tr>
<td>cplex</td>
<td>12.8.0;</td>
<td>Removed.</td>
</tr>
<tr>
<td>curl</td>
<td>7.63.0;7.79.0;8.7.1;</td>
<td>8.10.1</td>
</tr>
<tr>
<td>duckdb</td>
<td>1.0.0;</td>
<td>1.1.2</td>
</tr>
<tr>
<td>envi</td>
<td>5.5.2;</td>
<td>Removed.</td>
</tr>
<tr>
<td>ffmpeg</td>
<td>4.2.2;</td>
<td>7.0.2</td>
</tr>
<tr>
<td>fftw</td>
<td>3.3.8;</td>
<td>3.3.10</td>
</tr>
<tr>
<td>gamess</td>
<td>18.Aug.2016.R1;30.Jun.2019.R1;</td>
<td>Removed.</td>
</tr>
<tr>
<td>gaussian</td>
<td>(09)E.01;(16)B.01;</td>
<td>gaussian16/B.01</td>
</tr>
<tr>
<td>gaussview</td>
<td>5.0.8;6.0.16;</td>
<td>6.0.16</td>
</tr>
<tr>
<td>gcc</td>
<td>4.8.5;6.3.0;9.3.0;10.2.0;12.3.0;</td>
<td>9.3.0, 11.1.0, 14.2.0</td>
</tr>
<tr>
<td>gdal</td>
<td>2.4.2;3.4.2;3.5.3;3.5.3_sqlite3;</td>
<td>3.10.0</td>
</tr>
<tr>
<td>gdb</td>
<td>11.1;</td>
<td>15.2</td>
</tr>
<tr>
<td>geos</td>
<td>3.8.1;3.9.4;</td>
<td>3.13.0</td>
</tr>
<tr>
<td>gmp</td>
<td>6.1.2;6.2.1;6.3.0;</td>
<td>6.3.0</td>
</tr>
<tr>
<td>gmt</td>
<td>5.4.4;</td>
<td>6.4.0</td>
</tr>
<tr>
<td>gnuplot</td>
<td>5.2.8;</td>
<td>6.0.0</td>
</tr>
<tr>
<td>grads</td>
<td>2.2.1;</td>
<td>2.2.3</td>
</tr>
<tr>
<td>gsl</td>
<td>2.4;</td>
<td>2.8</td>
</tr>
<tr>
<td>gurobi</td>
<td>10.0.1;9.0.1;9.5.1;</td>
<td>11.0.3</td>
</tr>
<tr>
<td>hadoop</td>
<td>2.7.7;</td>
<td>3.4.0</td>
</tr>
<tr>
<td>hdf</td>
<td>4.2.15;</td>
<td>4.2.15</td>
</tr>
<tr>
<td>hdf5</td>
<td>1.10.6;1.8.21;</td>
<td>1.14.5</td>
</tr>
<tr>
<td>hspice</td>
<td>2017.12;2019.06;2020.12;</td>
<td>2020.12</td>
</tr>
<tr>
<td>hyper-shell</td>
<td>1.8.3;2.0.2;2.4.0;2.5.1;</td>
<td>2.6.5</td>
</tr>
<tr>
<td>idl</td>
<td>8.7;</td>
<td>Removed.</td>
</tr>
<tr>
<td>intel</td>
<td>17.0.1.132;19.0.5.281;2024.1;</td>
<td>2024.2.0</td>
</tr>
<tr>
<td>intel-mkl</td>
<td>2017.1.132;2019.5.281;2024.1;</td>
<td>2024.2.2</td>
</tr>
<tr>
<td>impi</td>
<td>2017.1.132;2019.5.281;2021.12</td>
<td>2021.14</td>
</tr>
<tr>
<td>intel-rt</td>
<td>2024.1.0;</td>
<td>Removed.</td>
</tr>
<tr>
<td>jdk</td>
<td>12.0.2_10;</td>
<td>Removed.</td>
</tr>
<tr>
<td>julia</td>
<td>1.7.1;1.8.1;1.9.3;</td>
<td>1.9.3</td>
</tr>
<tr>
<td>jupyterhub</td>
<td>2.0.0;</td>
<td>Removed. Note: use jupyter instead.</td>
</tr>
<tr>
<td>learning</td>
<td>conda-2020.11-py38-cpu;</td>
<td>Removed.</td>
</tr>
<tr>
<td>libiconv</td>
<td>1.16;1.17;</td>
<td>1.17</td>
</tr>
<tr>
<td>libszip</td>
<td>2.1.1;</td>
<td>2.1.1</td>
</tr>
<tr>
<td>libtiff</td>
<td>4.0.10;4.6.0;</td>
<td>4.7.0</td>
</tr>
<tr>
<td>libxml2</td>
<td>2.10.3;2.9.9;</td>
<td>2.13.4</td>
</tr>
<tr>
<td>mathematica</td>
<td>11.3;12.1;12.3;13.1;14.1;</td>
<td>14.1</td>
</tr>
<tr>
<td>matlab</td>
<td>R2019a;R2020a;R2020b;R2021b;R2022a;R2023a;</td>
<td>R2024b</td>
</tr>
<tr>
<td>mpc</td>
<td>1.1.0;</td>
<td>1.1.0, 1.3.1</td>
</tr>
<tr>
<td>mpfr</td>
<td>3.1.6;4.2.1;</td>
<td>3.1.6, 4.2.1</td>
</tr>
<tr>
<td>ncl</td>
<td>6.4.0;</td>
<td>6.6.2</td>
</tr>
<tr>
<td>nco</td>
<td>4.6.7;</td>
<td>5.2.4</td>
</tr>
<tr>
<td>ncview</td>
<td>2.1.7;</td>
<td>2.1.9</td>
</tr>
<tr>
<td>netcdf</td>
<td>4.5.0;4.7.4;</td>
<td>Removed.</td>
</tr>
<tr>
<td>netcdf-cxx4</td>
<td>4.3.0;4.3.1;</td>
<td>4.3.1</td>
</tr>
<tr>
<td>netcdf-fortran</td>
<td>4.4.4;4.5.3;</td>
<td>4.6.1</td>
</tr>
<tr>
<td>netlib-lapack</td>
<td>3.8.0;</td>
<td>3.11.0</td>
</tr>
<tr>
<td>nextflow</td>
<td>22.10.4;</td>
<td>24.10.0</td>
</tr>
<tr>
<td>nf-core</td>
<td>2.8;</td>
<td>2.11.1</td>
</tr>
<tr>
<td>oclfpga</td>
<td>2024.1.0;</td>
<td>Removed.</td>
</tr>
<tr>
<td>octave</td>
<td>4.4.1;</td>
<td>9.1.0</td>
</tr>
<tr>
<td>openblas</td>
<td>0.3.21;0.3.8;</td>
<td>0.3.27</td>
</tr>
<tr>
<td>panoply</td>
<td>4.11.6;</td>
<td>Removed.</td>
</tr>
<tr>
<td>parallel</td>
<td>20220522;</td>
<td>20240822</td>
</tr>
<tr>
<td>proj</td>
<td>5.2.0;8.1.0;8.2.1;</td>
<td>9.4.1</td>
</tr>
<tr>
<td>protobuf</td>
<td>3.11.4;</td>
<td>3.28.2</td>
</tr>
<tr>
<td>qemu</td>
<td>2.10.1;4.1.0;</td>
<td>9.1.0</td>
</tr>
<tr>
<td>qt</td>
<td>5.12.5;</td>
<td>5.15.15</td>
</tr>
<tr>
<td>quantumatk</td>
<td>2020.09;</td>
<td>2020.09</td>
</tr>
<tr>
<td>r</td>
<td>3.6.3;4.0.0;4.1.2;4.2.2;4.3.1;4.4.1;</td>
<td>4.4.1</td>
</tr>
<tr>
<td>rocm</td>
<td>5.2.0;</td>
<td>6.2.2</td>
</tr>
<tr>
<td>rocmcontainers</td>
<td>default;</td>
<td>No change.</td>
</tr>
<tr>
<td>rstudio</td>
<td>1.3.1073;1.3.959;2021.09;2022.07;2023.06;2023.12;</td>
<td>2024.12</td>
</tr>
<tr>
<td>sas</td>
<td>9.4;</td>
<td>9.4</td>
</tr>
<tr>
<td>sentaurus</td>
<td>2017.09;2019.03;</td>
<td>2022.03</td>
</tr>
<tr>
<td>spark</td>
<td>2.4.4;</td>
<td>3.5.1</td>
</tr>
<tr>
<td>sqlite</td>
<td>3.30.1;3.46.0;</td>
<td>3.46.0</td>
</tr>
<tr>
<td>stata</td>
<td>18;</td>
<td>18</td>
</tr>
<tr>
<td>stata-mp</td>
<td>17;18;</td>
<td>18</td>
</tr>
<tr>
<td>subversion</td>
<td>1.12.2;</td>
<td>Removed.</td>
</tr>
<tr>
<td>tbb</td>
<td>2021.12;</td>
<td>Removed. Note: use intel-oneapi-tbb instead.</td>
</tr>
<tr>
<td>tcl</td>
<td>8.6.12;8.6.8;</td>
<td>8.6.12</td>
</tr>
<tr>
<td>tecplot</td>
<td>360-2017-R3;360-2021-R1;</td>
<td>360-2024-R1</td>
</tr>
<tr>
<td>texinfo</td>
<td>6.7;</td>
<td>7.1</td>
</tr>
<tr>
<td>texlive</td>
<td>20220321;</td>
<td>Removed.</td>
</tr>
<tr>
<td>thermocalc</td>
<td>2019b;2020a;2021b;</td>
<td>2022b</td>
</tr>
<tr>
<td>tk</td>
<td>8.6.11;</td>
<td>8.6.11</td>
</tr>
<tr>
<td>totalview</td>
<td>2020.2.6;2021.4.10;</td>
<td>2024.3</td>
</tr>
<tr>
<td>udunits</td>
<td>2.2.24;</td>
<td>2.2.28</td>
</tr>
<tr>
<td>valgrind</td>
<td>3.15.0;</td>
<td>3.23.0</td>
</tr>
<tr>
<td>vim</td>
<td>8.1.2141;</td>
<td>9.1.0437</td>
</tr>
<tr>
<td>vmd</td>
<td>1.9.3;</td>
<td>Removed.</td>
</tr>
<tr>
<td>vscode</td>
<td>1.56;1.59;</td>
<td>1.79.2</td>
</tr>
<tr>
<td>xalt</td>
<td>1.1.2;</td>
<td>3.0.2</td>
</tr>
<tr>
<td>zlib</td>
<td>1.2.11;1.2.13;</td>
<td>1.3.1</td>
</tr>
</tbody>
</table>
]]></description>
				<pubDate>Fri, 21 Feb 2025 08:00:00 -0500</pubDate>
									<category>Announcements</category>
							</item>
					<item>
				<title><![CDATA[Gautschi EUP Scheduled Maintenance]]></title>
				<link>https://rcac.purdue.edu/index.php/news/7048</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/7048</guid>
				<description><![CDATA[<p>The Gautschi cluster will be unavailable on Tuesday, February 11th between 8:00am and 5:00pm EST for the early user period's scheduled Tuesday maintenance. During this time, we will be performing networking work on the cluster.</p>
]]></description>
				<pubDate>Tue, 11 Feb 2025 08:00:00 -0500</pubDate>
									<category>Announcements</category>
							</item>
					<item>
				<title><![CDATA[Gautschi EUP Scheduled Maintenance]]></title>
				<link>https://rcac.purdue.edu/index.php/news/7040</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/7040</guid>
				<description><![CDATA[<p>The Gautschi cluster will be unavailable on Tuesday, February 4th between 8:00am and 5:00pm EST for the early user period's scheduled Tuesday maintenance. During this time, we will be updating the scratch file system; however, this will not affect user files.</p>
]]></description>
				<pubDate>Tue, 04 Feb 2025 08:00:00 -0500</pubDate>
									<category>Announcements</category>
							</item>
					<item>
				<title><![CDATA[New Resource "Rossmann", to Support New NIH Genomic Data Sharing (GDS) policy and other restricted data]]></title>
				<link>https://rcac.purdue.edu/index.php/news/6970</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/6970</guid>
				<description><![CDATA[<h1>Overview</h1>
<p>NIH has issued an implementation update for data management and access practices under the Genomic Data Sharing (GDS) Policy (see Guide Notice <a href="https://grants.nih.gov/grants/guide/notice-files/NOT-OD-24-157.html">NOT-OD-24-157</a>). This introduces updated security standards for Approved Users of controlled-access data shared under the NIH GDS Policy (<a href="https://grants.nih.gov/grants/guide/notice-files/not-od-14-124.html">NOT-OD-14-124</a>) and for repositories and/or systems storing or providing access to these data.  (FAQs)  The updates will take effect on January 25, 2025, and apply only to new projects started after that date and recompetes of continuing projects.</p>
<p>A list of data repositories can be found <a href="https://sharing.nih.gov/accessing-data/NIH-security-best-practices#:~:text=Effective%20on%20January%2025%2C%202025,Use%20Certifications%20or%20similar%20agreements">HERE</a>.</p>
<p>The updates require PIs working with data in these repositories to attest to NIH that the system storing their project’s human genomic data is compliant with NIST SP 800-171.</p>
<p>Today, NIH genomic data from these repositories is stored in a variety of IT resources at Purdue. To provide a single centrally managed and appropriately secured resource to support faculty working with these data, the Office of Research and Purdue IT are working together to deploy “Rossmann,” a new computing and storage resource to support these and other restricted data uses, fully implementing the NIST 800-171 standard.</p>
<p>New DUAs with requests for NIH GDS data will be assigned to work within the Rossmann system.</p>
<h1>Frequently Asked Questions</h1>
<p><strong>Q:</strong> I have an existing NIH dbGaP dataset that I work with; am I subject to these new requirements? Do I need to move my project into “Rossmann”?
<strong>A:</strong> No, only new or renewal requests will be expected to secure data according to the updated NIH Security Best Practices.</p>
<p><strong>Q:</strong> I work with my NIH GDS data on my laptop, does that mean that my laptop is required to be aligned with the updated NIH Security Best Practices?
<strong>A:</strong> Yes. However, NIST 800-171 endpoints are not currently supported by Purdue IT.</p>
<p><strong>Q:</strong> I have a 3rd party system that I use to work with NIH GDS data, can I continue to do so?
<strong>A:</strong> Yes, but the PI is responsible for ensuring and attesting to the compliance of whatever IT system or cloud provider is utilized. To make it easier for faculty and to lower risk, we recommend the use of Purdue-managed resources.</p>
<p><strong>Q:</strong> Do the updated NIH Security Best Practices mean that NIH data now requires CMMC certification? Is genomic data now Controlled Unclassified Information (CUI)?
<strong>A:</strong> No, while they all use the same NIST 800-171 cybersecurity standard, NIH Security Best Practices are not subject to CMMC, nor are genomic data considered CUI.</p>
<p><strong>Q:</strong> Who can I talk to for more information?
<strong>A:</strong> For cybersecurity questions, please contact Purdue IT Information Assurance (<a href="mailto:pss-ia@purdue.edu">pss-ia@purdue.edu</a>). For high-performance computing and workflow questions, contact the Rosen Center for Advanced Computing (<a href="mailto:rcac-help@purdue.edu">rcac-help@purdue.edu</a>).</p>
]]></description>
				<pubDate>Mon, 27 Jan 2025 00:00:00 -0500</pubDate>
									<category>Announcements</category>
							</item>
					<item>
				<title><![CDATA[Research Computing Holiday Break]]></title>
				<link>https://rcac.purdue.edu/index.php/news/6935</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/6935</guid>
				<description><![CDATA[<p>Research Computing personnel will observe the university winter break beginning at 12:00am EST on Monday, December 23rd, 2024, and will resume normal business hours on Thursday, January 2nd, 2025. During this time, Research Computing services will continue to be available, but all staff will be on leave.</p>
<p>Research Computing staff members will monitor the status of all computing and data resources in an effort to ensure continuous availability.</p>
<p>Research Computing staff members will monitor the ticketing system throughout the holiday period and answer critical issues and problems. Non-critical user issues and questions will be addressed beginning Thursday, January 2nd, 2025. There will also be no coffee hour consultations during this break.</p>
<p><strong>Scratch file purging (on community clusters with scratch space) will continue as normal during the break, so be sure to archive your files in scratch storage or check for any upcoming purge warning emails during the break. This does not apply to Data Depot or home directories -- only scratch storage.</strong></p>
<p>Have a wonderful break, everyone, and we look forward to great things in the new year!</p>
]]></description>
				<pubDate>Mon, 23 Dec 2024 00:00:00 -0500</pubDate>
									<category>Announcements</category>
							</item>
					<item>
				<title><![CDATA[Staff Availability and Coffee-Hour Schedule]]></title>
				<link>https://rcac.purdue.edu/index.php/news/6925</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/6925</guid>
				<description><![CDATA[<p>Research Computing personnel, including a significant number of the support staff, will be attending the SC24 conference the week of November 18th. There are no planned reductions in service, and staff will continue to monitor the ticketing system throughout this period.</p>
<p>However, due to reduced staff availability during this period, we will be canceling our regularly scheduled <em>Coffee-Hour</em> consultation sessions for that week. This includes both the Monday in-person consulting hours at the Convergence Center and the Tuesday virtual sessions.</p>
]]></description>
				<pubDate>Mon, 18 Nov 2024 00:00:00 -0500</pubDate>
									<category>Announcements</category>
							</item>
					<item>
				<title><![CDATA[In-person Coffee-Hour Consultation Location Change]]></title>
				<link>https://rcac.purdue.edu/index.php/news/6447</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/6447</guid>
				<description><![CDATA[<p>Our in-person consulting hours (a.k.a. &quot;coffee hour&quot;) location will be changing effective immediately.</p>
<p>RCAC staff have offered in-person consulting at the Convergence Center on campus downstairs in the lobby (and/or cafe area) for more than a year now. We will still be located at the Convergence Center on Mondays, but will hold the session upstairs in <strong>CONV 3301</strong>, a conference room on the third floor in our office area just off the elevators.</p>
<p>Why the change? Convenience, consistency, privacy, and noise, among other reasons. The lobby has nice tables that offer a public space for this sort of informal help session, in line with the coffee-shop meeting locations around campus in the years before the pandemic. Unfortunately, with limited space available at those locations during the semester, we moved to Convergence, but we have had difficulty on occasion with the lobby, as other events sometimes interfere and building staff often direct visitors to our third-floor offices anyway. Moving forward, we think this will offer more consistency and resources to our visitors.</p>
]]></description>
				<pubDate>Mon, 10 Jun 2024 00:00:00 -0400</pubDate>
									<category>Announcements</category>
							</item>
			</channel>
</rss>