<?xml version="1.0" encoding="UTF-8" ?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:media="http://search.yahoo.com/mrss/">
	<channel>
		<title>RCAC - Announcements, Outages and Maintenance, Outages, Maintenance, Science Highlights</title>
		<link>https://rcac.purdue.edu/index.php/news/rss/2,1,6,7,3,Anvil</link>
		<description><![CDATA[News and announcements from the Rosen Center for Advanced Computing (RCAC) at Purdue University]]></description>
		<atom:link href="https://rcac.purdue.edu/index.php/news/rss/2,1,6,7,3,Anvil" rel="self" type="application/rss+xml" />
		<language>en</language>
		<lastBuildDate>Tue, 07 Apr 2026 08:50:42 EDT</lastBuildDate>
					<item>
				<title><![CDATA[Scheduled RCAC Maintenance – April 22–23 (All Systems and Research Network Unavailable)]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2633</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2633</guid>
				<description><![CDATA[<p>Dear Research Computing Community,</p>
<p>As part of the ongoing effort to upgrade data center capacity for future computing needs, maintenance is scheduled from Wednesday, April 22 at 6 AM through Thursday, April 23 at 5 PM. This will affect all RCAC systems and the Research Network.</p>
<p>During this window, <strong>all RCAC systems and research network services will be unavailable</strong>, including:</p>
<ul>
<li>All computing clusters, including Bell, Negishi, Gautschi, Scholar, Rowdy, Gilbreth, Hammer, and Anvil</li>
<li>All data storage systems, including Data Depot, Fortress, Anvil Ceph storage, and scratch and home storage on clusters, as well as the Research Network and ScienceDMZ</li>
<li>Gateway services including Hubzero, GenAI Studio, Anvil GPT</li>
<li>
<a href="http://www.rcac.purdue.edu">www.rcac.purdue.edu</a>
</li>
<li>Geddes</li>
<li>Globus</li>
</ul>
<p><strong>How does this maintenance impact you?</strong></p>
<ul>
<li>Any Slurm jobs requesting a walltime that would extend past the start of the maintenance will not start and will remain in the queue until after maintenance is complete.</li>
<li>All active sessions and jobs running on affected systems will be preempted at the start of the outage, and any queued jobs will not begin until services are restored.</li>
<li>Access to login nodes, storage systems, and web portals will be unavailable throughout the downtime.</li>
<li>Automated data workflows that rely on affected systems (e.g., rsync, data pipelines, archive processes) will not function until systems are back online.</li>
<li>Globus transfers may time out during the maintenance window.</li>
</ul>
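<p>As an illustration of the queueing rule above, here is a minimal sketch in Python (a hypothetical helper, not Slurm itself) of how a requested walltime is weighed against the maintenance start:</p>

```python
from datetime import datetime, timedelta

def job_can_start(now, walltime_hours, maintenance_start):
    """Return True if a job starting now with the requested walltime
    would finish before the maintenance window begins."""
    return now + timedelta(hours=walltime_hours) <= maintenance_start

# Maintenance begins Wednesday, April 22 at 6:00 AM (from the announcement).
maintenance_start = datetime(2026, 4, 22, 6, 0)

# A 48-hour job submitted on April 21 at noon would cross into the window,
# so the scheduler holds it in the queue.
print(job_can_start(datetime(2026, 4, 21, 12, 0), 48, maintenance_start))  # False

# A 4-hour job submitted at the same time finishes well before the window.
print(job_can_start(datetime(2026, 4, 21, 12, 0), 4, maintenance_start))   # True
```

<p>In practice, requesting a walltime short enough to finish before 6:00 AM on April 22 lets a job start; anything longer waits in the queue until the maintenance is complete.</p>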
<p><strong>To prepare for this maintenance, we suggest that you:</strong></p>
<ul>
<li>Download any needed data or scripts before 6:00 AM on April 22.</li>
<li>Prepare instrumentation devices for the Data Depot to be unavailable.</li>
</ul>
<p>We appreciate your understanding and cooperation as we complete these necessary upgrades to improve reliability and performance across RCAC infrastructure. For assistance, questions, or concerns, contact <a href="mailto:rcac-help@purdue.edu">rcac-help@purdue.edu</a>.</p>
<p>Best regards,<br />
Rosen Center for Advanced Computing (RCAC) / Purdue IT</p>
]]></description>
				<pubDate>Wed, 22 Apr 2026 06:00:00 -0400</pubDate>
									<category>Announcements</category>
							</item>
					<item>
				<title><![CDATA[RCAC and IPAI faculty seminar series on AI proves successful]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2628</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2628</guid>
				<description><![CDATA[<p>The Rosen Center for Advanced Computing (RCAC), in collaboration with <a href="https://ipai.research.purdue.edu">Purdue’s Institute for Physical Artificial Intelligence (IPAI)</a>, has successfully introduced a new seminar series dedicated to advancing AI at the university. The new series, titled “AI Hubs Faculty Seminar Series: AI Across Purdue,” is open to anyone at Purdue and takes place in the Hall of Data Science and AI.</p>
<p>The “AI Across <img width="400" style="padding:10px;" class="float-right" alt="Faculty presenting at seminar to room full of attendees" src="https://www.rcac.purdue.edu/files/RCAC-Stories/AI-Across-Purdue-Seminar-Series/IMG_0495.jpg" />Purdue” seminar series seeks to bring together Purdue faculty and researchers to explore AI opportunities and build connections across campus and NSF-supported programs. Each session features two 20-minute faculty talks on how they are using AI at Purdue, followed by 20 minutes of Q&amp;A and networking. Since the first meeting in early Spring of 2026, the series has connected more than 100 Purdue faculty and researchers, highlighting the importance of dedicated discussions that explore all aspects of AI in research and education.</p>
<p>Unlike traditional training and workshops, this series focuses on advancing knowledge through the sharing of lived experiences and open discussion amongst those at the forefront of AI. Thus far, the talks have covered a wide variety of subjects: materials, mechanical, and nano/quantum engineering, agriculture, biology, and language and culture, as well as some of the AI infrastructure that the researchers used, such as the <a href="https://www.rcac.purdue.edu/compute/gautschi">Gautschi Community Cluster</a> and the <a href="https://www.rcac.purdue.edu/anvil">Anvil supercomputer</a>. By participating in the “AI Across Purdue” seminar series, attendees not only develop their own AI skillset but also help shape the future of AI at Purdue.</p>
<p>While the “AI Across Purdue” faculty seminar series has been tremendously successful thus far, there’s still time this semester for you to join the conversation. The past and remaining schedule for the series is as follows:</p>
<ul>
<li>
<strong>February 10th:</strong> Professor Nikhilesh Chawla (Materials Engineering), Professor Alexandra Boltasseva (Electrical and Computer Engineering)</li>
<li>
<strong>March 3rd:</strong> Professor Yan Gu (Mechanical Engineering), Professor Jinha Jung (Civil and Construction Engineering)</li>
<li>
<strong>March 10th:</strong> Professor D. Marshall Porterfield (Agricultural and Biological Engineering), Professor Yaguang Zhang (Agricultural and Biological Engineering)</li>
<li>
<strong>March 24th:</strong> Professor Vanesa Cañete Jurado (College of Liberal Arts), Professor Maurice Tetne (College of Liberal Arts)</li>
<li>
<strong>April 7th:</strong> Professor Edwin Garcia (Materials Engineering), Professor Romit Maulik (Mechanical Engineering)</li>
<li>
<strong>April 21st:</strong> Professor Prianka Baloni (Health Sciences), Professor Mia Liu (Physics and Astronomy)</li>
<li>
<strong>April 28th:</strong> Professor Mustafa Abdallah (Computer and Information Technology), Professor Rua M. Williams (Applied and Creative Computing)</li>
</ul>
<p>Each session is attended by RCAC and IPAI organizers in order to answer questions and assist with any AI challenges faced by the attendees. They also acquaint participants with the numerous AI services available through Purdue and the NSF, including the <a href="https://nairrpilot.org/">NAIRR Pilot</a>, <a href="https://access-ci.org/">ACCESS Program</a>, <a href="https://genesis.energy.gov/">Genesis Mission</a>, <a href="https://www.rcac.purdue.edu/rse">Purdue Center for Research Software Engineering</a>, and <a href="https://www.rcac.purdue.edu/services/datascience">RCAC AI Support and Expertise</a>.</p>
<p>More information about the “AI Hubs Faculty Seminar Series: AI Across Purdue,” can be found on our <a href="https://www.rcac.purdue.edu/news/7644">Seminar Series Event page</a>.</p>
<p>To stay apprised of upcoming AI-related news and training, please <a href="https://mailimages.purdue.edu/Subscribe/Form.ashx?l=1007143&amp;p=a2944715-aeb3-428a-8027-624f47f870ee">subscribe to our RCAC newsletter</a> and our <a href="https://lists.purdue.edu/scripts/wa.exe?SUBED1=IPAI-CONTACT&amp;A=1">IPAI newsletter</a>.</p>
<p>RCAC operates the centrally-maintained research computing resources at Purdue University, providing access to leading-edge computational and data storage systems as well as expertise and support to Purdue faculty, staff, and student researchers. To learn more about HPC and how RCAC can help you, please visit <a href="https://www.rcac.purdue.edu/">https://www.rcac.purdue.edu/</a> or reach out to <a href="mailto:rcac-help@purdue.edu">rcac-help@purdue.edu</a> to request consultation.</p>
<p>The Institute for Physical Artificial Intelligence (IPAI) is Purdue’s hub for faculty collaboration at the intersection of AI and the physical world. IPAI connects researchers across disciplines to develop secure, robust, and deployable AI systems. Spanning models, hardware, robotics, autonomy, and ethics, IPAI helps apply these capabilities to shared research challenges and emerging opportunities across industry and government domains. To learn more, please visit <a href="https://ipai.research.purdue.edu/">https://ipai.research.purdue.edu/</a> or email <a href="mailto:ipai@purdue.edu">ipai@purdue.edu</a>.</p>
<p>Anvil is funded under NSF award No. 2005632.</p>
<p><em>Written by: Jonathan Poole, poole43@purdue.edu</em></p>
]]></description>
				<pubDate>Mon, 30 Mar 2026 00:00:00 -0400</pubDate>
									<category>Science Highlights</category>
							</item>
					<item>
				<title><![CDATA[Advancing AI at Purdue: 2025 in Review]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2626</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2626</guid>
				<description><![CDATA[<p>Throughout 2025, the Rosen Center for Advanced Computing (RCAC) made a concerted effort to expand the artificial intelligence (AI) resources available to Purdue at large. By enhancing the infrastructure, support, and training for AI and generative AI applications, RCAC is pushing the university to the forefront of innovation in AI and helping Purdue in its persistent pursuit of the next giant leap.</p>
<p>Artificial intelligence is changing the world. Whether it be in research, education, business, or pleasure, there’s hardly a facet of life that AI hasn’t touched. In research, AI and machine learning are making an impact in almost every scientific field, reducing time-to-science and enabling breakthroughs at a rate previously unimagined. Those who harness the power of AI will be positioned to make world-changing discoveries—which is precisely why RCAC set out to provide Purdue with an AI ecosystem designed to enable excellence at scale. To achieve this, RCAC focused its efforts throughout 2025 on three crucial areas: hardware, software, and expertise.</p>
<h3>Next-generation GPUs for AI research</h3>
<p>AI and machine learning tools and methodologies can revolutionize the research and education landscape, but require the appropriate hardware in order to work; i.e., advanced GPUs. Step one for RCAC was to build and deploy a supercomputer laden with top-tier GPUs and designed to enhance AI workflows in research. Enter the <a href="https://www.rcac.purdue.edu/compute/gautschi">Gautschi Community Cluster</a>.</p>
<p>Gautschi is a <img width="400" style="padding:10px;" class="float-right" alt="Gautschi Worldwide Rankings Graphic" src="https://www.rcac.purdue.edu/files/RCAC-Stories/2025-AI-Review/Gaitschi_AI%202.png" />state-of-the-art community cluster named in honor of Walter Gautschi, Professor Emeritus of Computer Science and Professor Emeritus of Mathematics at Purdue University. The supercomputer is a dual-partition system: Gautschi CPU, optimized for traditional, tightly-coupled science and engineering applications; and <a href="https://www.rcac.purdue.edu/news/6937">Gautschi-AI</a>, designed specifically to enhance AI research at Purdue. Thanks to support from <a href="https://www.purdue.edu/computes/">Purdue Computes</a> and the <a href="https://ipai.research.purdue.edu">Institute for Physical AI (IPAI)</a>, Gautschi-AI was built with cutting-edge H100 GPUs, which use NVIDIA’s Hopper architecture and a Transformer Engine to provide training speeds up to four times faster than previous-generation models. Gautschi-AI has eight total H100 GPUs, each with 80 GB of memory, providing Purdue researchers with a whopping 10.7 PetaFLOPS of peak performance—ample processing power for even the largest AI jobs.</p>
<p>RCAC’s deployment of the Gautschi Community Cluster quickly proved successful—the system’s impact on AI research at Purdue has been astounding:</p>
<ul>
<li>45 Principal Investigators (PIs) from 18 departments have purchased allocations (11 new PIs added in Q4, including 2 new departments)</li>
<li>18 IPAI affiliate PIs applied for matching allocations</li>
<li>700k GPU hours delivered in 2025 (would cost on average $8.7M to procure at commercial cloud prices)</li>
<li>Gautschi’s storage capabilities ranked #20 worldwide on the IO500 benchmark, measured on a 10-node production system</li>
<li>Completed a mixed-precision LINPACK benchmark, ranking Gautschi #27 in the world</li>
</ul>
<p>While Gautschi is Purdue’s most powerful supercomputer to date—and the major hardware focus for 2025—it isn’t the only one helping researchers in their AI efforts. Gilbreth is another community cluster available at Purdue. The system received its third expansion in 2024 and now has a total of 411 GPUs, nearly four times its original capacity. Gilbreth is designed as a high-performance computing (HPC) resource specifically optimized for “throughput” applications, supporting general-purpose, medium-scale simulations as well as large numbers of smaller AI jobs. Even though Gilbreth’s resources are primarily composed of previous-generation NVIDIA GPUs, the system has had a major impact on AI research at Purdue. Between Gautschi and Gilbreth, Purdue AI supercomputers provided 3 million GPU hours in 2025, which would have cost Purdue faculty over $15M to procure on commercial clouds.</p>
<p>To purchase access to the Gautschi Community Cluster today, please visit RCAC’s <a href="https://www.rcac.purdue.edu/purchase">Cluster Access Purchase</a> page. IPAI is currently offering a matching program for Gautschi-AI, with IPAI matching a one-year allocation of one GPU for each GPU purchased (up to 8 GPUs). To take advantage of the matching program, all researchers need to do is provide a <a href="https://purdue.ca1.qualtrics.com/jfe/form/SV_e4Mrn5wJKpxNwiO">written description of their project</a> and how it relates to physical AI alongside their purchase order.</p>
<h3>Software infrastructure for AI workflows</h3>
<p>With the AI-specific hardware needs being met by Gautschi and Gilbreth, RCAC’s next step was to ensure the underlying software infrastructure was ready to support AI workloads and researchers. Accomplishing this task required a myriad of software developments, implementations, and solutions, but one key project took center stage for the organization—Purdue GenAI Studio.</p>
<p>Purdue GenAI Studio is an LLM service that makes open-source LLM models accessible to anyone at Purdue. Developed in collaboration with IPAI, the intent behind the tool was to provide an LLM service that researchers could easily access and use, and which negates the concern of leaking intellectual property or proprietary data. Unlike other LLM services, Purdue GenAI Studio is hosted entirely on-premises using resources within Purdue’s supercomputers, providing researchers with more democratized access to LLMs, as well as more control. No documents or contexts are uploaded into commercial cloud-hosted AI services, nor are any chats, documents, or models shared between users or used for training.</p>
<p>The team at RCAC worked hard throughout 2025 to iteratively develop and improve Purdue GenAI Studio. The LLM service currently has 30+ language models spanning multiple model families (Llama, Mistral, Phi, Gemma, and specialized domain models), and has been integrated with advanced features, including retrieval-augmented generation (RAG), web search capabilities via SearXNG, and document parsing with Docling. Unsurprisingly, given the need for such a tool within research contexts, Purdue GenAI Studio has been met with great success:</p>
<ul>
<li>2500+ active users across research, instruction, and administrative units at Purdue</li>
<li>Over 22,000 messages delivered via the chat UI</li>
<li>More than 2 million messages and 1.7 billion tokens delivered via the LLM API in January 2026 alone</li>
</ul>
<p>Purdue GenAI Studio is helping users across every discipline to take the next giant leap in their research. The LLM service has been so successful that RCAC has been solicited by peer institutions to provide consultation on establishing similar infrastructure at their respective locations.</p>
<p>To learn more about Purdue GenAI Studio or access the tool, please visit: <a href="https://www.rcac.purdue.edu/knowledge/genaistudio?all=true">https://www.rcac.purdue.edu/knowledge/genaistudio?all=true</a></p>
<h3>Expertise for Supporting AI Applications</h3>
<p>The final step in RCAC’s plan to thrust Purdue into the vanguard of AI innovation is vitally important, yet easy to overlook. The hardware is in the data center. The software is under the hood. RCAC provided all the tools researchers would need for their AI endeavors. Now the center had to ensure the researchers would know how to use them.</p>
<p>Throughout 2025, RCAC—already known for its world-class HPC support and expertise—set out on the ambitious task of supplying Purdue with as much knowledge on AI for research as possible. The plan was multi-pronged, involving collaborative engagement across multiple teams within the organization. From training and outreach events to consultations and collaborations with its Research Software Engineering Center, RCAC made AI a key focus for its support services.</p>
<p>Training was arguably one of the largest pieces of the AI support services puzzle, with the overarching strategy needing to account for both breadth and depth in order to help Purdue researchers across the board. To advance AI research, knowledge dissemination was crucial. Training efforts focused on ensuring researchers, educators, and students could confidently use AI tools while understanding their limitations, ethical implications, and best practices. The cumulative impact of these efforts was impressive. In the fall of 2025 alone, RCAC delivered 30 training sessions, partly arranged as four thematic series, with more than 1000 people participating in the sessions. RCAC also led multiple signature AI training events throughout the year, including AI Day and summer camps, which saw an additional 150+ participants in extended thematic workshops. A full list of the 2025 trainings, as well as upcoming training sessions in 2026, can be found here: <a href="https://www.rcac.purdue.edu/training">https://www.rcac.purdue.edu/training</a></p>
<div class="my-3 text-center"><img width="650" alt="AI Training Statistics Graphic" src="https://www.rcac.purdue.edu/files/RCAC-Stories/2025-AI-Review/2025_training_impact.png" /></div> 
<p>Aside from training events, RCAC enabled AI research and education through its <a href="https://www.rcac.purdue.edu/rse">Research Software Engineering (RSE) Center</a>. The RSE Center provides end-to-end research software support, from early proposal design and data engineering to development, deployment, visualization, and long-term sustainability. In 2025, the RSE Center provided AI consultation and project support to over 15 faculty and research groups across colleges, including Engineering, Science, Liberal Arts, Health &amp; Human Sciences, Pharmacy, and the Mitch Daniels School of Business. Furthermore, the center contributed to multiple funded and pending grant proposals advancing AI applications in education, research infrastructure, and scientific discovery. The RSE Center is a dedicated AI and research computing resource, ready to help Purdue researchers take their next giant leap. To leverage the RSE Center’s team of AI research scientists and collaborate on a project, please visit: <a href="https://www.rcac.purdue.edu/rse">Purdue Center for Research Software Engineering</a></p>
<p>RCAC support services for AI extend beyond training events and RSE partnerships. Other AI support efforts conducted in 2025 include:</p>
<ul>
<li>Hosted <a href="https://datasetdocs.readthedocs.io/en/latest/">hundreds of terabytes of datasets</a> locally to accelerate research on Purdue infrastructure.</li>
<li>Provided various self-guided AI training materials, detailed notebooks, example code, datasets, and deployment guides, all publicly available through RCAC repositories for asynchronous learning.</li>
<li>Created extensive documentation for Purdue GenAI Studio covering AI model selection, API access, responsible use guidelines, fine-tuning workflows, and deployment best practices.</li>
<li>Integrated AI into Purdue curriculum for multiple courses through custom applications. Support for the classroom included generating AI-assisted testing, AI-enhanced simulation, AI-assisted evaluation, creation of practice materials, and AI-enhanced VR networking training systems.</li>
</ul>
<h3>National AI Resource Provider</h3>
<p>RCAC’s main AI focus in 2025 was to provide Purdue with a computing ecosystem that would allow the university to become a leader in AI innovation; however, the center also stepped up to fulfill a role in doing the same for the nation.</p>
<p>Anvil, Purdue’s <img width="400" style="padding:10px;" class="float-right" alt="Picture of Anvil supercomputer with Anvil AI partition" src="https://www.rcac.purdue.edu/files/RCAC-Stories/2025-AI-Review/Anvil_AI.jpg" />powerful National Science Foundation (NSF)-funded supercomputer, was expanded to support the National Artificial Intelligence Research Resource (NAIRR) Pilot and ACCESS programs. The supercomputer received $5M in supplemental funding to secure advanced GPUs and provide AI support to researchers throughout the United States. Anvil-specific AI upgrades and support efforts in 2025 include:</p>
<ul>
<li>Procured and deployed Anvil AI, a new partition housing 84 NVIDIA H100 SXM GPUs</li>
<li>Created a 1 PB all-flash object store for Anvil to support emerging AI storage needs</li>
<li>Deployed the AI-powered Anvil Notebook service, an interactive notebook portal supporting instructional and lightweight research needs</li>
<li>Continued development of AnvilGPT, the “Purdue GenAI Studio” available to Anvil users nationwide</li>
</ul>
<p>Anvil and Anvil AI are available to researchers via two pathways: 1) the <a href="https://www.rcac.purdue.edu/knowledge/anvil/access/anvil_through_access">ACCESS allocations process</a>, and 2) the <a href="https://www.rcac.purdue.edu/anvil/anvilnairr">NAIRR allocations process</a>. More information about Anvil is available on Purdue’s <a href="https://www.rcac.purdue.edu/anvil">Anvil website</a>. Anyone with questions should contact <a href="mailto:anvil@purdue.edu">anvil@purdue.edu</a>.</p>
<h3>Looking Ahead</h3>
<p>While 2025 was an extraordinarily productive year for AI and generative AI initiatives at RCAC, there is no intent to slow down. The Spring 2026 semester will feature the continuation of monthly AI training workshops running January through May. Also, multiple major grant proposals are pending that would significantly expand AI research infrastructure, educational applications, and cross-institutional collaborations.</p>
<p>Planned initiatives for 2026 include expanding GenAI Studio's model catalog, deploying advanced agentic AI capabilities, launching the SciAgents research program for AI-assisted scientific discovery, scaling course-integrated AI applications to thousands of additional students, and continuing to build Purdue's national leadership in responsible, institution-hosted generative AI infrastructure. Beyond these initiatives, RCAC will also continue to educate students and researchers by delivering numerous AI-focused trainings and workshops, including the <a href="https://www.rcac.purdue.edu/workshop">NSF NAIRR Regional AI Workshop</a>, happening May 20-22 in Indianapolis, Indiana, as well as the <a href="https://www.rcac.purdue.edu/news/7612">Purdue AI Research Showcase</a>, hosted in conjunction with Purdue’s Institute for Physical AI and taking place April 14–15, 2026 on the Purdue University West Lafayette campus.</p>
<p>To stay apprised of upcoming AI-related news and training, please <a href="https://mailimages.purdue.edu/Subscribe/Form.ashx?l=1007143&amp;p=a2944715-aeb3-428a-8027-624f47f870ee">subscribe to our RCAC newsletter</a>. For more information on our AI and data science services, please visit our <a href="https://www.rcac.purdue.edu/services/datascience">AI and Data Science</a> page.</p>
<p>RCAC operates the centrally-maintained research computing resources at Purdue University, providing access to leading-edge computational and data storage systems as well as expertise and support to Purdue faculty, staff, and student researchers. To learn more about HPC and how RCAC can help you, please visit: <a href="https://www.rcac.purdue.edu/">https://www.rcac.purdue.edu/</a> or reach out to <a href="mailto:rcac-help@purdue.edu">rcac-help@purdue.edu</a> to request consultation.</p>
<p>Anvil is funded under NSF award No. 2005632.</p>
<p><em>Written by: Jonathan Poole, poole43@purdue.edu</em></p>
]]></description>
				<pubDate>Thu, 26 Mar 2026 00:00:00 -0400</pubDate>
									<category>Science Highlights</category>
							</item>
					<item>
				<title><![CDATA[Bioinformatics and genomics modules available for Anvil users]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2625</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2625</guid>
				<description><![CDATA[<p>Anvil users <img width="400" style="padding:10px;" class="float-right" alt="Bioinformatics image containing DNA strand and circuit board lines" src="https://www.rcac.purdue.edu/files/RCAC-Stories/2025-Bionformatics-Review/Bioinfo_resize.jpg" />working in bioinformatics and genomics have access to over <strong>750 pre-built software modules</strong> through the biocontainers framework, covering tools for sequence alignment, genome assembly, variant calling, RNA-seq analysis, and more. To get started, load the biocontainers module and explore available software with <code>module load biocontainers</code> followed by <code>module avail</code>.</p>
<p>A full catalog of available modules with usage instructions is at the <a href="https://biocontainer-doc.readthedocs.io/">Biocontainers Documentation</a> site. For step-by-step guides on running common genomics workflows on HPC systems, visit the <a href="https://rcac-bioinformatics.github.io/">RCAC Bioinformatics Tutorials</a> site, which includes documentation for tools like HiFiasm, BRAKER3, Trinity, and Nextflow, along with best practices for job submission and resource optimization. Full workshop materials are also available for <a href="https://rcac-bioinformatics.github.io/rnaseq-analysis/">RNA-seq analysis</a>, <a href="https://rcac-bioinformatics.github.io/genome-assembly/">genome assembly</a>, and <a href="https://rcac-bioinformatics.github.io/genome-annotation/">genome annotation</a>. For upcoming training sessions and community programs, including the biweekly Genomics Exchange discussion series and the <a href="https://midwestbioinformatics.org/">Midwest Bioinformatics Showcase</a> seminar series, see the <a href="https://www.rcac.purdue.edu/services/cbs">Computational Biology Services</a> page. For questions or support with bioinformatics workflows on Anvil, contact <a href="mailto:rcac-help@purdue.edu">rcac-help@purdue.edu</a>.</p>
]]></description>
				<pubDate>Mon, 23 Mar 2026 00:00:00 -0400</pubDate>
									<category>Announcements</category>
							</item>
					<item>
				<title><![CDATA[Power Outage Impacting Multiple Clusters — Recovery Underway]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2613</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2613</guid>
				<description><![CDATA[<p>At approximately 6:00 AM EDT, a power outage impacted systems in the Math Data Center. Most services have now been restored.</p>
<p>Due to the outage, some jobs on Gilbreth did not requeue automatically. Users should check the status of any jobs that were running early this morning and resubmit them if needed.</p>
]]></description>
				<pubDate>Wed, 18 Mar 2026 06:00:00 -0400</pubDate>
									<category>Outages</category>
							</item>
					<item>
				<title><![CDATA[Scientific workflow management system, Pegasus, available on Anvil]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2609</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2609</guid>
				<description><![CDATA[<p>Pegasus, an NSF-funded scientific workflow management system, is now available for use on Purdue's Anvil supercomputer. With the addition of Pegasus, Anvil users can define, manage, and execute complex, multi-step computational tasks with ease through a web-based interface, reducing researcher workload and enabling faster time-to-discovery.</p>
<p>Pegasus is a <img width="400" style="padding:10px;" class="float-right" alt="Pegasus Software Logo" src="https://www.rcac.purdue.edu/files/anvil/Pegasus-Announcement/pegasusfront-black-reduced.png" />tool to help workflow-based applications function in various environments, including desktops, cloud, and high-performance computing (HPC) systems. It was designed to allow scientists to construct workflows in abstract terms and remove the need to understand the underlying execution environment. Pegasus has been used successfully in a number of scientific fields: astronomy, bioinformatics, earthquake science, gravitational-wave physics, ecology, and cryo-EM, amongst others. A workflow in Pegasus consists of multiple tasks with defined dependencies, and Pegasus handles job submission, data staging, execution ordering, and failure recovery. Some beneficial features of Pegasus include:</p>
<ul>
<li>Data Management: Pegasus handles data transfers, input data selection, and output registration.</li>
<li>Automated Error Recovery and Reliability: Errors are automatically addressed by retrying tasks, workflow-level checkpointing, re-mapping, and trying alternative data sources for data staging.</li>
<li>Adaptability and Reuse: Pegasus works in a variety of distributed computing environments, and workflows can easily be run in different environments without alteration.</li>
<li>Scalability: Pegasus can scale both the size of the workflow and the resources the workflow is distributed over without impacting performance.</li>
</ul>
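<p>The dependency-driven ordering described above can be sketched in a few lines of plain Python (hypothetical task names, standard library only; this illustrates the concept, not the Pegasus API):</p>

```python
from graphlib import TopologicalSorter  # Python 3.9+ standard library

# A toy four-task workflow: each task maps to the set of tasks it depends on.
# "preprocess" must finish before the two alignment tasks, which feed "merge".
workflow = {
    "preprocess": set(),
    "align_a": {"preprocess"},
    "align_b": {"preprocess"},
    "merge": {"align_a", "align_b"},
}

# A workflow manager derives a valid execution order from the declared
# dependencies, then layers on submission, data staging, and retries.
order = list(TopologicalSorter(workflow).static_order())
print(order)  # "preprocess" first, "merge" last
```

<p>Pegasus applies the same idea at scale, adding the data management, error recovery, and portability features listed above.</p>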
<p>Pegasus is deployed on Anvil through the <a href="https://notebook.anvilcloud.rcac.purdue.edu/hub/oauth_login?next=">Anvil Notebook Service</a>, which provides browser-based access to Jupyter Notebooks running on Anvil infrastructure. The Pegasus Notebook environment includes the Pegasus workflow management system, HTCondor for workflow execution management, and preconfigured integration with Anvil’s SLURM scheduler. This environment allows users to develop and debug workflows interactively using the Pegasus Python API or command-line tools, submit workflows to Anvil’s batch system using their allocations, and monitor workflow execution and logs directly from the notebook interface. No additional Pegasus installation or configuration is required by the user.</p>
<p>To learn more about Pegasus and how to access it on Anvil, please visit: <a href="https://www.rcac.purdue.edu/knowledge/anvil/anvil-notebook-service/pegasus">Pegasus on Anvil</a></p>
<p>Anvil is one of Purdue University’s most powerful supercomputers, providing researchers from diverse backgrounds with advanced computing capabilities. Built through a $10 million system acquisition grant from the <a href="https://nsf.gov/">National Science Foundation (NSF)</a>, Anvil supports scientific discovery by providing resources through the NSF’s <a href="https://access-ci.org/">Advanced Cyberinfrastructure Coordination Ecosystem: Services &amp; Support (ACCESS)</a>, a program that serves tens of thousands of researchers across the United States. Anvil also supports advanced artificial intelligence research as an official resource provider of the <a href="https://nairrpilot.org">National Artificial Intelligence Research Resource (NAIRR) Pilot</a>.</p>
<p>Researchers may request access to Anvil via the <a href="https://www.rcac.purdue.edu/knowledge/anvil/access/anvil_through_access">ACCESS allocations process</a> or through the <a href="https://www.rcac.purdue.edu/anvil/anvilnairr">NAIRR allocations process</a>. More information about Anvil is available on Purdue’s <a href="https://www.rcac.purdue.edu/anvil">Anvil website</a>. Anyone with questions should contact <a href="mailto:anvil@purdue.edu">anvil@purdue.edu</a>. Anvil is funded under NSF award No. 2005632.</p>
<p><em>Written by: Jonathan Poole, poole43@purdue.edu</em></p>
]]></description>
				<pubDate>Tue, 10 Mar 2026 00:00:00 -0400</pubDate>
									<category>Science Highlights</category>
							</item>
					<item>
				<title><![CDATA[MATH Datacenter Cooling issue - Job scheduling paused on Anvil/Gautschi]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2605</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2605</guid>
				<description><![CDATA[<p>The MATH datacenter started experiencing issues with cooling systems around 12pm. Job scheduling on the Anvil and Gautschi clusters was paused shortly after and scheduling resumed at 1:30pm.</p>
]]></description>
				<pubDate>Wed, 04 Mar 2026 12:00:00 -0500</pubDate>
									<category>Outages</category>
							</item>
					<item>
				<title><![CDATA[Purdue research team uses Anvil to secure position as finalist in NASA competition]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2600</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2600</guid>
				<description><![CDATA[<p>A research group from Purdue University used the Anvil supercomputer to compete in NASA's <em>Beyond the Algorithm Challenge</em>, a nationwide competition aimed at improving flood analysis with emerging technologies. The team from the <a href="https://secquoia.github.io">SECQUOIA</a> (Systems Engineering via Classical and Quantum Optimization for Industrial Applications) research group was recognized as one of nine finalists in the competition, thanks to their innovative framework that combines artificial intelligence (AI) techniques with quantum computing technologies.</p>
<p>The <em>Beyond the Algorithm Challenge</em> was designed by the NASA Earth Science Technology Office (ESTO) to propel scientific discovery for complex Earth Science problems—in this case, rapid flood analysis—by encouraging the exploration of unconventional and innovative computing methods. Specifically, the ESTO wanted participants to utilize technologies such as quantum computing, quantum machine learning, neuromorphic computing, or in-memory computing, which have all shown promise in overcoming limitations of conventional computing methods. By testing these novel computing methods, the <em>Beyond the Algorithm Challenge</em> paves the way for transforming how Earth Science problems are solved, potentially improving the lives and safety of the American people.</p>
<p>The SECQUOIA group is <img width="400" style="padding:10px;" class="float-right" alt="Group photo of research team at NASA competition" src="https://www.rcac.purdue.edu/files/anvil/Anvil-Stories/QUAFFLE-David-Bernal/SECQUOIA.png" />a Purdue University research organization within the <a href="https://engineering.purdue.edu/ChE">Davidson School of Chemical Engineering</a>. Led by Dr. David Bernal, Assistant Professor of Chemical Engineering, the SECQUOIA group focuses on designing and implementing optimization algorithms using hybrid and cutting-edge hardware technologies, including quantum computing technologies. Upon learning about the <em>Beyond the Algorithm Challenge</em>, Bernal felt that the competition aligned well with SECQUOIA’s work and immediately began assembling a team. Team members for the challenge included: Dr. Bernal, Yirang Park, PhD student in Chemical Engineering; Alan Yi, sophomore in Computer Science; and Daniel Anoruo, senior in Computer Science with a cybersecurity focus from Towson University.</p>
<p>Over the course of 10 weeks, the group designed and refined QUAFFLE (Quantum U-Net Assisted Federated Flood Learning and Estimation). QUAFFLE is a hybrid modeling framework that combines Quantum U-Nets for image segmentation with federated learning, a machine learning approach that decentralizes the training process. Appreciating the reasoning behind QUAFFLE requires a rudimentary understanding of these architectures and techniques.</p>
<p>U-Net architecture is a tried-and-true convolutional neural network (CNN) used for pixel-level image segmentation. The name stems from the fact that when drawn, the architecture takes the shape of a “U.” U-Nets take images and identify specific objects within those images. The resulting accuracy of the U-Net model correlates with how well it was trained.</p>
<p>Federated learning is a technique in which a global model is collaboratively trained across multiple devices or servers, each of which has its own local model. One of the benefits of federated learning is that each local model can handle a specific type of data—ideal for tasks that involve analyzing dissimilar data. The performance of the global model is improved in this scenario by producing higher-quality training results on smaller, distributed datasets rather than relying on less robust results from one large, centralized dataset.</p>
<p>For the <em>Beyond the Algorithm Challenge</em>, the SECQUOIA group wanted to create a system that was capable of producing accurate flood maps. The group theorized that harnessing the power of quantum computing combined with federated learning would allow for this while improving speed, security, and efficiency, compared to traditional computing methods.</p>
<p>A major obstacle for the group was mismatched datasets. The flood maps would need to be based on all available imagery, which includes images of differing regions, sizes, and sources (LiDAR, drone, satellite, weather radar, etc.).</p>
<p>“One of the main challenges we had with this specific application was that there's a lot of heterogeneity in the data,” says Yirang Park. “To overcome this, we implemented federated learning under a heterogeneous-client setting, where each client trained locally on a random subset of the data and contributed model updates to a shared QUAFFLE model, improving speed and accuracy.”</p>
<p>Another issue <img width="500" style="padding:10px;" class="float-right" alt="Graphical illustration of QUAFFLE U-Net architecture" src="https://www.rcac.purdue.edu/files/anvil/Anvil-Stories/QUAFFLE-David-Bernal/Screenshot%202026-02-21%20122204.png" />
the group faced in this challenge was the computational intensity required for flood detection. The very large, heterogeneous datasets needed for the task mean that there is a significant number of training parameters, and more training parameters mean more computing power and longer computing times. To combat this, the group decided to replace the bottleneck layers in the U-Net architecture (the layers forming the bottom of the “U”) with quantum layers. The idea was that this would help reduce the number of training parameters required, thus reducing the training time and increasing learning efficiency.</p>
<p>“We theorized that if we needed fewer training parameters, we could speed up the training process,” says Daniel Anoruo. “Replacing the bottleneck with quantum-based architecture allowed us to do that while simultaneously improving feature extraction.”</p>
<p>The final challenge for the group was one of access and scarcity. For now, quantum computers are rare and few researchers are allocated computing time on the machines. The SECQUOIA group used the Anvil supercomputer to solve this problem by simulating two types of quantum computers: a gate-based system (with PennyLane software) and a photonic-based system (with ORCA-SDK software). The benefits of using a powerful supercomputer like Anvil to simulate a quantum computing system were manifold: the researchers tested and refined QUAFFLE on a computing system they had access to, validated their approach for potential future use on different types of quantum systems, and bypassed the long process of obtaining an allocation on a quantum computer just to test an unproven (at the time) software framework.</p>
<p>“Running these simulations on Anvil gave us an advantage in the sense that we know QUAFFLE is hardware agnostic,” says Park. “There are multiple types of quantum computers, and no one knows which one will be the system, but we do know that QUAFFLE can adapt to different hardware architectures.”</p>
<p>Park continues, “Having a working code that has been proven in simulations and can adapt to various quantum systems has also allowed us to de-risk the approach. We know that we haven’t built something only to find that we’ve wasted time and resources after implementing it on precious quantum resources.”</p>
<p>The SECQUOIA group was thrilled with Anvil’s performance.</p>
<p>“Anvil really saved us,” says Alan Yi. “We tried testing these simulations on our local computers, and they would run for two days and not be done. But with Anvil GPUs, the simulation would finish really quickly, sometimes even less than an hour.”</p>
<p>After completing their work, the group had demonstrated that QUAFFLE was a success—it required 6% fewer parameters and outperformed a centralized quantum U-Net in accuracy when combining different data sources. Their innovative approach led to them securing a position as a finalist in the <em>Beyond the Algorithm Challenge</em>. While they did not ultimately receive the grand prize in the competition, the team’s work stood out for its innovation and real-world potential. QUAFFLE earned recognition from the judges as a promising solution, and the project gained valuable support from industry leaders, including Rigetti, Orca UK Computing, Flower, and IBM. The team plans to continue expanding QUAFFLE, and hopes to someday test it on an actual quantum system.</p>
<p>For more information about the SECQUOIA group, please visit: <a href="https://secquoia.github.io">https://secquoia.github.io</a>. The group’s presentation given to NASA for the <em>Beyond the Algorithm Challenge</em> can be viewed here: <a href="https://www.nasa-beyond-challenge.org/project-gallery/secquoia">https://www.nasa-beyond-challenge.org/project-gallery/secquoia</a></p>
<p>To learn more about High-Performance Computing and how it can help you, please visit our “<a href="https://www.rcac.purdue.edu/anvil/why-hpc">Why HPC?</a>” page.</p>
<p>Anvil is one of Purdue University’s most powerful supercomputers, providing researchers from diverse backgrounds with advanced computing capabilities. Built through a $10 million system acquisition grant from the <a href="https://nsf.gov/">National Science Foundation (NSF)</a>, Anvil supports scientific discovery by providing resources through the NSF’s <a href="https://access-ci.org/">Advanced Cyberinfrastructure Coordination Ecosystem: Services &amp; Support (ACCESS)</a>, a program that serves tens of thousands of researchers across the United States.</p>
<p>Researchers may request access to Anvil via the <a href="https://www.rcac.purdue.edu/knowledge/anvil/access/anvil_through_access">ACCESS allocations process</a>. More information about Anvil is available on Purdue’s <a href="https://www.rcac.purdue.edu/anvil">Anvil website</a>. Anyone with questions should contact <a href="mailto:anvil@purdue.edu">anvil@purdue.edu</a>. Anvil is funded under NSF award No. 2005632.</p>
<p><em>Written by: Jonathan Poole, poole43@purdue.edu</em></p>
]]></description>
				<pubDate>Tue, 24 Feb 2026 00:00:00 -0500</pubDate>
									<category>Science Highlights</category>
							</item>
					<item>
				<title><![CDATA[February 5 Maintenance – Math Data Center Upgrades and Service Impact]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2537</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2537</guid>
				<description><![CDATA[<p>On Thursday, February 5, RCAC will perform planned maintenance in the MATH data center to support cooling upgrades and capacity improvements as part of the ongoing MATH datacenter renovation project.</p>
<p>During this maintenance window, several clusters will experience a temporary outage so that hardware can be safely powered down while facility work is performed:</p>
<ul>
<li>
<p>Gautschi, Gilbreth, Negishi, Bell, and Anvil cluster nodes will be powered down.</p>
</li>
<li>
<p>Gilbreth’s legacy V100 GPUs, which are well past their expected lifetime, will be decommissioned.</p>
</li>
<li>
<p>Hammer (MATH nodes) and Geddes: a subset of nodes will be powered down, but services will remain available unless communicated separately.</p>
</li>
</ul>
<h3>How does this maintenance impact you?</h3>
<ul>
<li>
<p>Clusters listed in this message won’t be available to run jobs during the maintenance.</p>
</li>
<li>
<p>Any jobs requesting a walltime which would take them past the start of the maintenance will not start and will remain in the queue until after the maintenance is completed.</p>
</li>
<li>
<p>Users can continue to access their data.</p>
</li>
<li>
<p>GenAI Studio will remain available. This maintenance will position Purdue to support growing computational needs. Users should see long‑term benefits in system reliability and our ability to support future computing and AI resources.</p>
</li>
</ul>
<p>If you have questions about how this outage will affect your work or need support, please contact <a href="mailto:rcac-help@purdue.edu">rcac-help@purdue.edu</a>.</p>
]]></description>
				<pubDate>Thu, 05 Feb 2026 07:00:00 -0500</pubDate>
									<category>Maintenance</category>
							</item>
					<item>
				<title><![CDATA[Network Slowness Notice]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2547</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2547</guid>
<description><![CDATA[<p>We are currently investigating performance issues affecting network traffic.</p>
<p><strong>Impact to you:</strong> At this time, you may notice latency or brief disruptions when accessing certain on-campus or external resources, especially during peak usage periods.</p>
<p>We appreciate your patience while we work to fully resolve the underlying problem and restore normal network performance. We will provide an update by 5:00PM EST today or sooner.</p>
]]></description>
				<pubDate>Mon, 02 Feb 2026 15:00:00 -0500</pubDate>
									<category>Outages and Maintenance</category>
							</item>
					<item>
				<title><![CDATA[Globus access to Depot degraded; slow Depot logins and Depot access on clusters]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2574</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2574</guid>
				<description><![CDATA[<p>Users of Data Depot on RCAC clusters are currently experiencing degraded performance, and some Globus transfers to and from Depot are failing or running slowly.  In addition, some users may see slow Globus logins or be temporarily unable to log in to Globus when accessing Depot collections.</p>
<p>System monitoring has identified an issue where heavy job activity was overloading the Data Depot filesystem used by the clusters and Globus.</p>
<p>You may see the following impacts:</p>
<ul>
<li>Globus transfers to and from Depot collections may fail, stall, or run much more slowly than usual.</li>
<li>Globus logins may be slow or occasionally fail when accessing Depot endpoints.</li>
<li>Jobs on RCAC clusters that read from or write to Depot may experience slow file access, delayed directory listings, or timeouts.</li>
</ul>
<p>Our engineers are investigating the high load from a large number of concurrent jobs and are working to reduce the impact on Depot, Globus, and cluster workloads.  Existing jobs will continue to run, but any that are heavily Depot‑I/O‑bound may run more slowly or see I/O errors until performance improves.  We will provide another update by 5:00PM EST or sooner if the issue is resolved.</p>
]]></description>
				<pubDate>Fri, 30 Jan 2026 15:00:00 -0500</pubDate>
									<category>Outages</category>
							</item>
					<item>
				<title><![CDATA[Anvil used to study dark matter and early universe formation]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2573</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2573</guid>
				<description><![CDATA[<p>Purdue University’s Anvil supercomputer was used by researchers from the University of California, Los Angeles (UCLA) to study the effects of dark matter on galaxy formation in the early universe. This research, part of the <a href="https://www.astro.ucla.edu/~snaoz/TheSupersonicProject/index.html">Supersonic Project</a>, aims to provide a more precise understanding of the galaxy formation process by accounting for a previously overlooked but important factor—the stream velocity.</p>
<p>Dark matter is elusive. We don’t know what it is or what it is composed of. This mysterious material scoffs at the adage “Seeing is believing”—it does not interact with the electromagnetic force, meaning it neither absorbs, reflects, nor emits light of any kind. We literally cannot see it, yet we know it is there. Dark matter has mass, thereby exerting the effects of gravity on visible matter. It is only by observing these gravitational effects that scientists know dark matter exists. In fact, dark matter accounts for roughly 85% of all matter in the universe, serving as a cosmic scaffolding that organizes galaxies at scale. Without it, galaxies would have long ago been torn asunder by their own rotational velocities, lacking the necessary gravitational pull required to hold together.</p>
<p>As one can imagine, studying a material that can’t be seen but whose effects must be observed through a telescope can be tricky. For decades, scientists have tackled this problem by running cosmological simulations that include dark matter and comparing them to what is actually seen in the universe. If the end result of a simulation matches the physical reality seen through the telescope, then that’s a good sign that the scientists are on the right track with their theories. If not, the theory must be altered or dismissed entirely. Recent technological advances have enabled scientists to study dark matter in greater depth than ever before. High-performance computing (HPC) systems provide an astonishing amount of computing power, while the new James Webb Space Telescope (JWST) gives astronomers an unprecedented view of the universe, enabling observations of the first stars and formation of the first galaxies after the Big Bang. This boost in data-gathering ability and computing performance lies at the heart of the dark matter research being conducted at UCLA.</p>
<div class="my-3 text-center"><img width="650" alt="AnvilPlot" src="https://www.rcac.purdue.edu/files/anvil/Anvil-Stories/Claire-Williams-Dark-Matter-Stream-Velocity/JWST%20MoM-z14.png" /></div> 
<p>Claire Williams is a PhD student in the <a href="http://www.astro.ucla.edu/">Astronomy and Astrophysics Division</a> of the <a href="https://www.pa.ucla.edu/">UCLA Department of Physics and Astronomy</a>. Williams’s focus is on theoretical astrophysics. She is part of the Supersonic Project, a collaboration that studies how stream velocity and dark matter affected galaxy formation in the early universe. In this instance, stream velocity refers to the relative velocity of baryons and dark matter during the early formation stages of the universe. The stream velocity has been largely neglected in traditional simulations of galaxy formation. However, recent findings show that the stream velocity was supersonic, which had major implications for how the baryons and dark matter were distributed. The goal of Williams and the rest of the research group is to improve our understanding of the galaxy formation process by including the stream velocity as a factor in their cosmological simulations.</p>
<p>“So our specific studies are trying to gain a more precise understanding of the process by including effects that previously nobody included,” says Williams. “People already had dark matter, they already had gas, but they were missing the stream velocity. It has been largely ignored because it is challenging to get right in simulations. But neglecting the fact that material was moving past the dark matter at five times the speed of sound inevitably leads to a different result. What our group has done is to run simulations that correctly include the relative motion of dark matter and ordinary matter at early times in the universe.”</p>
<p>Williams and her research group utilize the Anvil supercomputer to run high-resolution AREPO hydrodynamics simulations for a number of different studies. The common theme across these studies is that the group runs theoretical simulations both with and without stream velocity as a factor, and that the results are, or will soon be, compared with JWST observations. The size of the regions being simulated is, quite literally, astronomical, ranging upwards of two megaparsecs. This equates to a volume slightly larger than the Milky Way and Andromeda galaxies combined. The simulations are also incredibly detailed, with each individual particle representing roughly 200 times the mass of our sun. For comparison, that’s a single grain of sand on the beach. Simulations this large require a massive amount of computing power and would be impossible without HPC resources like Anvil.</p>
<p>“So we're simulating a region larger than the whole Milky Way, but our individual pieces that are moving around are only a couple 100 times bigger than our own sun,” says Williams. “This is why we need Anvil, because you couldn't run this on your laptop. This takes a couple of weeks to run on the cluster.”</p>
<p>Running the simulations is only the first part of the process; HPC resources are further needed to actually analyze the data. Williams continues:</p>
<p>“Then, at the end of the day, when you finish your simulation run, you basically have a bunch of imaginary particles in an imaginary box. But you have to figure out, ‘How would these particles translate to light that the telescope would see?’ So you need to post-process the simulations, which involves extensive data analysis and specialized algorithms to convert the resulting particles into light in space. We need Anvil for this data analysis as well.”</p>
<p>The end <img width="400" style="padding:10px;" class="float-right" alt="Image description" src="https://www.rcac.purdue.edu/files/anvil/Anvil-Stories/Claire-Williams-Dark-Matter-Stream-Velocity/JWST%20Dark%20Matter%20Map%203.png" />result of the group’s computational work is a theoretical picture of what the universe should look like to us today, as viewed through the JWST. Dark matter was dispersed throughout the universe soon after the Big Bang, unaffected by the forces of electricity and magnetism. The gravitational pull of dark matter led to clumps of particles, which eventually formed into galaxies. And the precise placement of these galaxies was likely influenced directly by the stream velocity. At least, that’s Williams’s hypothesis. Now, the research group must wait to see if it proves true.</p>
<p>“So one of the things that we have found with our studies,” says Williams, “is that the stream velocity should cause some very faint galaxies to shine very brightly for a brief period of time at the beginning of the early universe, because it causes them to form a bunch of stars all at once. Without the stream velocity factored in, you wouldn’t expect to see this happen. And now they're starting to make observations with the JWST that should show what we predict to see. So, hopefully, in the next few years, we can get confirmation from the telescope that this effect is happening.”</p>
<p>Williams continues, “One thing that's kind of cool is that if they don't see that effect, then it poses a big problem for dark matter in general, because all of our models so far are dependent on how we think dark matter should work. So if we make this prediction and the telescope doesn't see it, then we know we've messed up our collective understanding of dark matter along the way and may need to make changes to things we thought we had a grasp on in our cosmology.”</p>
<p>For more information about Williams’s research, please visit her <a href="https://www.astro.ucla.edu/~clairewilliams/">UCLA Bio Page</a>. More details on the Supersonic Project can be found here: <a href="https://www.astro.ucla.edu/~snaoz/TheSupersonicProject/index.html">The Supersonic Project</a></p>
<p>Interested in leveraging the latest advancements in computing to bolster your research? Please visit our “<a href="https://www.rcac.purdue.edu/anvil/why-hpc">Why HPC?</a>” page to learn more about High-Performance Computing and how it can help you.</p>
<p>Anvil is one of Purdue University’s most powerful supercomputers, providing researchers from diverse backgrounds with advanced computing capabilities. Built through a $10 million system acquisition grant from the <a href="https://nsf.gov/">National Science Foundation (NSF)</a>, Anvil supports scientific discovery by providing resources through the NSF’s <a href="https://access-ci.org/">Advanced Cyberinfrastructure Coordination Ecosystem: Services &amp; Support (ACCESS)</a>, a program that serves tens of thousands of researchers across the United States.</p>
<p>Researchers may request access to Anvil via the <a href="https://www.rcac.purdue.edu/knowledge/anvil/access/anvil_through_access">ACCESS allocations process</a>. More information about Anvil is available on Purdue’s <a href="https://www.rcac.purdue.edu/anvil">Anvil website</a>. Anyone with questions should contact <a href="mailto:anvil@purdue.edu">anvil@purdue.edu</a>. Anvil is funded under NSF award No. 2005632.</p>
<p><em>Written by: Jonathan Poole, poole43@purdue.edu</em></p>
]]></description>
				<pubDate>Thu, 29 Jan 2026 00:00:00 -0500</pubDate>
									<category>Science Highlights</category>
							</item>
					<item>
<title><![CDATA[RCAC Student Spotlight: Elian Rieza]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2538</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2538</guid>
				<description><![CDATA[<p><strong>Name:</strong> Elian Rieza <img width="400" style="padding:10px;" class="float-right" alt="Image description" src="https://www.rcac.purdue.edu/files/RCAC-Stories/Student-Spotlights/Elian%20Rieza.jpeg" /></p>
<p><strong>Year:</strong> Sophomore</p>
<p><strong>Major:</strong> Electrical Engineering</p>
<p><strong>Position:</strong> Assistant Computational Researcher</p>
<p><strong>Can you introduce yourself and share a little about who you are?</strong>
Hello! My name is Elian and I’m an Assistant Researcher!</p>
<p><strong>What are some of your main interests or passions?</strong>
Some of my interests include Linux, servers, and drinking lots of coffee.</p>
<p><strong>Can you tell us about your role at RCAC? What does your job entail?</strong>
I am an Assistant Researcher handling tickets from researchers and the entire user base of multiple RCAC clusters, including Anvil and Gautschi. I also help the Apps team to cover issues facing the clusters.</p>
<p><strong>What do you enjoy most about working at RCAC?</strong>
Working at RCAC allowed me to handle servers on a day-to-day basis and, other than the fact that I'm a huge server nerd, it allowed me to learn more about Linux systems in a very friendly work environment.</p>
<h3>Tell us more about your favorite project you like to show off!</h3>
<p><strong>Project title:</strong>  Handling Apps tickets at RCAC</p>
<p><strong>Project description:</strong> I handle tickets from the wide range of users that Purdue's multiple clusters cover and support. Whenever issues arise, I am often one of the first people to pick up the ticket, working through the user's issue and noting what happened in RCAC's database.</p>
<p><strong>What did you learn from this project?</strong>  A lot of patience, especially from (slightly, and understandably) upset researchers who thought they had lost their life's work (thankfully they hadn’t!).</p>
]]></description>
				<pubDate>Thu, 15 Jan 2026 00:00:00 -0500</pubDate>
									<category>Science Highlights</category>
							</item>
					<item>
				<title><![CDATA[Anvil used to study how trade can reduce volatilities in crop supply]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2521</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2521</guid>
				<description><![CDATA[<p>A researcher from Purdue University used the Anvil supercomputer to study climate-induced volatility in crop production and identify the role of potential adaptation strategies for reducing future risk. The results of this research, notably that international trade can reduce volatility, are crucial for global food security as well as regional resilience.</p>
<p>Dr. Iman Haqiqi <img width="400" style="padding:10px;" class="float-right" alt="Image description" src="https://www.rcac.purdue.edu/files/anvil/Anvil-Stories/Iman-Haqiqi/erfsad7d12f4_hr.jpg" />is Lead Research Economist in the Department of Agricultural Economics at Purdue University. His research leverages high-performance computing (HPC) resources to study international trade, environmental, and resource economics, with a focus on global change and sustainability. Recently, Haqiqi utilized Anvil, one of Purdue’s most powerful supercomputers, to explore how strategic trade partnerships can buffer the risk of crop market volatility stemming from increased heat stress.</p>
<p>Heat stress is a significant concern for crop production. When plants are exposed to excessive heat for prolonged periods, they can exhibit numerous negative health effects, including inhibited growth, reduced photosynthesis rates, sunscald, wilting, and even death. Different crops exhibit varying levels of sensitivity to heat stress, but corn—a staple crop for billions of people—is particularly vulnerable. As extreme heat stress events increase in frequency and intensity, national and global food security is put at risk. Facing this challenge and understanding the effectiveness of alternative strategies to overcome it is precisely what drove Haqiqi to pursue his research.</p>
<p>Climate impact on average crop production has been researched extensively. There are many studies that look at the effects of heat stress or other extreme weather events on average corn yield. The problem, according to Haqiqi, is that looking at the average can be misleading and neglects a large part of the risk.</p>
<p>“A lot of other studies have looked at this problem and determined that, on average, crop production will be a little bit lower,” says Haqiqi. “But I find that looking at the average is misleading. Mixing extreme highs with extreme lows, for example, means that, on average, everything might be fine. What we need to do is study the volatility, because that’s where the real risk is. If we want to prepare, we have to measure the volatility, not just the average.”</p>
<p>While a small decrease in average annual corn yields may not be considered problematic, increased volatility is. Volatility always has been and always will be a factor in crop production. Some years will be worse than others. But as the risk of extreme weather events decimating a crop supply increases, so too does the chance that any particular season will cause the global supply of food, not to mention the agricultural market, to implode. Haqiqi’s goal was to investigate future volatility and risk in corn production associated with increased heat stress, as well as evaluate the effectiveness of two different adaptation strategies—irrigation and market integration.</p>
<p>Irrigation is a tried-and-true method of reducing crop vulnerability during periods of extreme heat. Not only does it cool the temperature of the plant, it also maintains appropriate soil moisture levels, which improves nutrient uptake, photosynthesis rate, and biochemical efficiency. The problem is that wide-scale adoption of irrigation as an adaptation strategy would further deplete an already strained resource—water. This concern over groundwater depletion has led to a growing interest in trade as an alternative option for offsetting crop volatility risk.</p>
<p>International trade partnerships between regions with differing climate patterns could reduce the risk of substantial losses to the national corn supply, but trade as an adaptation strategy had only been discussed in theory. Haqiqi wanted to measure the strategy’s effectiveness quantitatively. To begin, he needed to predict how corn yields would be affected by potential changes to the climate patterns. Haqiqi used a statistical panel model to estimate corn yield response to heat stress and then combined those results with NEX-GDDP-CMIP6 climate data to project future production volatility and risks of substantial yield losses. To assess overall volatility, Haqiqi needed to calculate the extreme heat levels (i.e., not the average) of each day for millions of fields worldwide, aggregate this for each growing season in every region that produces corn, and then aggregate this to determine global corn supply. Haqiqi then converted these from daily to yearly calculations and determined year-on-year changes in volatility. These results were then used to determine the risk of substantial loss of production for each region. Haqiqi also assessed the relative volatility of each region compared to the global market. Once these baseline results were obtained, Haqiqi could simulate multiple scenarios to analyze irrigation and market integration for their ability to reduce these future risks.</p>
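<p>The aggregation logic described above can be sketched in a few lines. This is a toy illustration only: the temperatures, the heat threshold, and the linear yield-response function below are all invented, not taken from Haqiqi’s statistical panel model:</p>

```python
import statistics

# Toy daily maximum temperatures (degrees C) for one field across three
# growing seasons; real inputs would cover millions of fields worldwide.
seasons = {
    2021: [28, 30, 33, 36, 29, 31],
    2022: [29, 38, 40, 37, 35, 30],
    2023: [27, 29, 30, 28, 31, 32],
}

THRESHOLD = 34  # hypothetical extreme-heat threshold

def seasonal_heat_exposure(daily_tmax, threshold=THRESHOLD):
    """Sum degree-days above the extreme-heat threshold for one season."""
    return sum(max(t - threshold, 0) for t in daily_tmax)

def toy_yield(exposure, base=100.0, loss_per_degree_day=2.0):
    """Hypothetical linear yield response to extreme-heat exposure."""
    return base - loss_per_degree_day * exposure

yields = {yr: toy_yield(seasonal_heat_exposure(t)) for yr, t in seasons.items()}

# Year-on-year yield changes; their spread serves as a simple volatility proxy.
years = sorted(yields)
changes = [yields[b] - yields[a] for a, b in zip(years, years[1:])]
volatility = statistics.stdev(changes)
```

<p>The same idea, summing extreme-heat exposure per season, mapping it to yield, and measuring the spread of year-on-year changes, would be repeated per field and aggregated by region and globally in the actual study.</p>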
<p>The results of <img width="400" style="padding:10px;" class="float-right" alt="Image description" src="https://www.rcac.purdue.edu/files/anvil/Anvil-Stories/Iman-Haqiqi/erfsad7d12f8_hr-2.jpg" />Haqiqi’s research were clear: 1) corn yields will experience higher volatility due to increased heat stress; 2) irrigation expansion can offset this risk; 3) trade can also buffer the risk, but without depleting the groundwater supply. The third point is salient—according to the numbers, irrigation in the US would need to rise from 15% of farmland to 50–75% in order to maintain historical risk levels, which is unsustainable.</p>
<p>“So the whole idea of this paper was to show that, yes, there are some temporary solutions, like irrigation, but they are not sustainable,” says Haqiqi. “Something else, like international trade, which is a solution from an economic perspective, can have a similar effect in terms of reducing volatility and risk. But also, it has benefits because you don't need to have a lot of unsustainable use of resources.”</p>
<p>Haqiqi’s research required a massive amount of computing power, and for that, he relied on Anvil. The supercomputer was used for all computational tasks involving yield projection, variability analysis, and risk assessment.</p>
<p>“Without Anvil, this paper would be just a conceptual framework that, hey, you know, trade could be a good thing compared to irrigation,” says Haqiqi. “But we didn't have numerical evidence to support that claim. Now, thanks to having access to Anvil, we could provide that evidence.”</p>
<p>Haqiqi went on to note that the support he received from the Anvil team was exceptional and that because of the quick, comprehensive responses to his support tickets, he was able to rapidly move past any issues he had.</p>
<p>The results of Haqiqi’s research were published in <em>Environmental Research: Food Systems</em>. To view the publication and learn more about the study, please visit: <a href="https://iopscience.iop.org/article/10.1088/2976-601X/ad7d12">Trade can buffer climate-induced risks and volatilities in crop supply.</a></p>
<p>To learn more about High-Performance Computing and how it can help you, please visit our “<a href="https://www.rcac.purdue.edu/anvil/why-hpc">Why HPC?</a>” page.</p>
<p>Anvil is one of Purdue University’s most powerful supercomputers, providing researchers from diverse backgrounds with advanced computing capabilities. Built through a $10 million system acquisition grant from the National Science Foundation (NSF), Anvil supports scientific discovery by providing resources through the NSF’s Advanced Cyberinfrastructure Coordination Ecosystem: Services &amp; Support (ACCESS), a program that serves tens of thousands of researchers across the United States.</p>
<p>Researchers may request access to Anvil via the <a href="https://www.rcac.purdue.edu/knowledge/anvil/access/anvil_through_access">ACCESS allocations process</a>. More information about Anvil is available on Purdue’s <a href="https://www.rcac.purdue.edu/anvil">Anvil website</a>. Anyone with questions should contact <a href="mailto:anvil@purdue.edu">anvil@purdue.edu</a>. Anvil is funded under NSF award No. 2005632.</p>
<p><em>Written by: Jonathan Poole, poole43@purdue.edu</em></p>
]]></description>
				<pubDate>Tue, 30 Dec 2025 00:00:00 -0500</pubDate>
									<category>Science Highlights</category>
							</item>
					<item>
				<title><![CDATA[Research Computing Holiday Break]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2481</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2481</guid>
				<description><![CDATA[<p>Research Computing personnel will observe the university winter break from 12:00am EST on 12/23/25 and will resume normal business hours on January 5th, 2026  During this time, Research Computing services will continue to be available, but all staff will be on leave.</p>
<p>Research Computing staff members will monitor the status of all computing and data resources in an effort to ensure continuous availability.</p>
<p>Research Computing staff members will monitor the ticketing system throughout the holiday period and answer critical issues and problems. Non-critical user issues and questions will be addressed beginning January 5th, 2026. There will also be no coffee hour consultations during this break.</p>
<p><strong>Scratch file purging (on community clusters with scratch space) will continue as normal during the break, so be sure to archive any files you need from scratch storage. This does not apply to Data Depot or home directories -- only scratch storage.</strong></p>
<p>Have a wonderful break, everyone, and we look forward to great things in the new year!</p>
]]></description>
				<pubDate>Tue, 16 Dec 2025 13:00:00 -0500</pubDate>
									<category>Announcements</category>
							</item>
					<item>
				<title><![CDATA[Anvil Maintenance on December 11, 2025]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2476</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2476</guid>
				<description><![CDATA[<p><strong>UPDATE: December 11, 2025 05:51 PM ET</strong>
As of 5:51pm ET, the maintenance work on Anvil has been completed and job scheduling has been resumed. If you encounter any issues post maintenance, please contact <a href="http://support.access-ci.org/help-ticket">ACCESS Help Desk</a>.</p>
<p><strong>Original Post:</strong></p>
<p>The Anvil system will be unavailable on <strong>December 11th, 2025, from 7:00 AM to 6:00 PM ET</strong> for scheduled maintenance. During this maintenance, we will perform NVIDIA driver upgrades and rebuild Slurm with PMIx.</p>
<p>Any Slurm jobs requesting a walltime that would extend past Thursday, December 11th at 7:00 AM ET will not start and will remain in the queue until after maintenance is completed.</p>
<p>If you have any questions, please submit a ticket through the <a href="http://support.access-ci.org/help-ticket">ACCESS Help Desk</a>.</p>
]]></description>
				<pubDate>Thu, 11 Dec 2025 07:00:00 -0500</pubDate>
									<category>Maintenance</category>
							</item>
					<item>
				<title><![CDATA[Anvil and AI used to solve for best taxation strategies]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2480</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2480</guid>
				<description><![CDATA[<p>A researcher from the University of Nebraska-Omaha used Purdue’s Anvil supercomputer to develop a new artificial intelligence (AI) technique that can derive optimal taxation strategies for governments. This new method leveraged Anvil’s advanced GPUs to factor in household differences across families within a population in order to determine how taxes should be applied for the best possible outcome.</p>
<p>Dr. Zhigang Feng is a professor in the Department of Economics at the University of Nebraska-Omaha. He, along with his collaborators hailing from multiple institutions, combined machine learning techniques with economic theory to tackle everyone’s favorite economic subject—taxes.</p>
<p>Taxation is an <img width="400" style="padding:10px;" class="float-right" alt="Image description" src="https://www.rcac.purdue.edu/files/anvil/Anvil-Stories/Zhigang-Feng-Optimized-Taxation/AdobeStock_485082141.jpeg" />oft-debated subject for governments worldwide, with different opinions and theories as to what works best for individual countries or locales. How differing taxation strategies affect the economic choices of households in a population is an extraordinarily complex problem to solve. Many models have been developed to try to understand and predict the effects of taxes on the economy, with varying levels of success. Most models fail to account for household heterogeneity in the context of dynamic economic fluctuations. This shortcoming is precisely what Feng and his colleagues set out to remedy.</p>
<p>Research has long shown that household heterogeneity needs to be factored in to accurately model economic behavior and therefore design optimized fiscal policies. However, heterogeneity takes an already complex mathematical problem and adds in an infinite-dimensional object. Feng’s goal was to develop a novel machine learning-based approach that successfully factored in household differences. To do this required a massive amount of computing power due to the curse of dimensionality problem, which is why he and his collaborators turned to Anvil.</p>
<p>&quot;This problem isn’t something traditional numerical methods in the standard economist's toolbox can handle—even with a handful of CPUs using MPI, let alone an average computer,&quot; says Feng. &quot;We needed multiple GPUs running in parallel to harness the optimization power of modern AI techniques, and we needed them on demand. We also required a machine with massive memory to store the state of every simulated individual. Thankfully, Anvil was able to provide us with both.&quot;</p>
<p>The group utilized both CPUs and the advanced GPUs on Anvil to create a Markov decision process in Wasserstein space. They combined deep neural networks for equilibrium function approximation, a histogram-based distribution approximation, an analytically derived distribution transition kernel, and a modified value and policy iteration with an augmented Lagrangian method, all of which together allowed them to address the problem of infinite dimensions. After developing the new approach, the group also needed to run the model simulations for multiple scenarios, showing the cause-and-effect of different taxation strategies.</p>
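<p>While the team’s full method is far more involved, the core idea behind a histogram-based distribution approximation evolving under a transition kernel can be shown with a toy sketch (the wealth bins, initial shares, and kernel below are invented for illustration):</p>

```python
# Toy histogram of households over three wealth bins (low/middle/high).
dist = [0.5, 0.3, 0.2]

# Hypothetical transition kernel: kernel[i][j] is the probability that a
# household in bin i moves to bin j next period (each row sums to 1).
kernel = [
    [0.8, 0.2, 0.0],
    [0.1, 0.8, 0.1],
    [0.0, 0.2, 0.8],
]

def step(dist, kernel):
    """One period: push the histogram through the transition kernel."""
    return [sum(dist[i] * kernel[i][j] for i in range(len(dist)))
            for j in range(len(kernel[0]))]

for _ in range(100):
    dist = step(dist, kernel)
# dist has now converged to the kernel's stationary distribution.
```

<p>In the actual model, the transition kernel is derived analytically and the equilibrium functions are approximated with deep neural networks rather than fixed by hand.</p>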
<p>Overall, the group was very happy with Anvil’s performance. The queue for the GPUs was short, allowing them the access they needed to quickly conduct their research. Feng also noted that anytime the team hit any snags or had issues, they reached out to the Anvil support team and received help promptly. All of this combined enabled the group to efficiently proceed with a project that otherwise would not have been possible.</p>
<p>“To solve these models, we needed Anvil,” says Feng. “There’s no question—without it, this is not something we would have been able to achieve.”</p>
<p>Though the research publication is in its preliminary stages, it shows promising results and will have important implications for policymakers and researchers wanting to design effective fiscal policies. The novel machine learning method developed by Feng and his colleagues is also scalable and can be applied to a wide range of other economic models.</p>
<p>For more information about this project, as well as other research conducted by Dr. Feng, please visit his <a href="https://sites.google.com/site/zfeng202/research">Research Page</a>.</p>
<p>To learn more about High-Performance Computing and how it can help you, please visit our “<a href="https://www.rcac.purdue.edu/anvil/why-hpc">Why HPC?</a>” page.</p>
<p>Anvil is one of Purdue University’s most powerful supercomputers, providing researchers from diverse backgrounds with advanced computing capabilities. Built through a $10 million system acquisition grant from the National Science Foundation (NSF), Anvil supports scientific discovery by providing resources through the NSF’s Advanced Cyberinfrastructure Coordination Ecosystem: Services &amp; Support (ACCESS), a program that serves tens of thousands of researchers across the United States.</p>
<p>Researchers may request access to Anvil via the <a href="https://www.rcac.purdue.edu/knowledge/anvil/access/anvil_through_access">ACCESS allocations process</a>. More information about Anvil is available on Purdue’s <a href="https://www.rcac.purdue.edu/anvil">Anvil website</a>. Anyone with questions should contact <a href="mailto:anvil@purdue.edu">anvil@purdue.edu</a>. Anvil is funded under NSF award No. 2005632.</p>
<h4>Publications utilizing Anvil</h4>
<ol>
<li>Chen C, Feng Z, Gu J. Health, Health Insurance, and Inequality. <em>International Economic Review.</em> Published online July 4, 2024. doi: <a href="https://doi.org/10.1111/iere.12722">https://doi.org/10.1111/iere.12722</a></li>
<li>Feng, Zhigang and Han, Jiequn and Zhu, Shenghao, Optimal Taxation with Incomplete Markets–An Exploration Via Reinforcement Learning. Available at SSRN: <a href="http://dx.doi.org/10.2139/ssrn.4758552">http://dx.doi.org/10.2139/ssrn.4758552</a>
</li>
</ol>
<p><em>Written by: Jonathan Poole, poole43@purdue.edu</em></p>
]]></description>
				<pubDate>Mon, 08 Dec 2025 00:00:00 -0500</pubDate>
									<category>Science Highlights</category>
							</item>
					<item>
				<title><![CDATA[Purdue participates in prestigious international conference, SC25]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2478</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2478</guid>
				<description><![CDATA[<p>Purdue University made an impact at the 2025 International Conference for High Performance Computing, Networking, Storage and Analysis (SC25). For more than 20 years, Purdue has participated in SC by showcasing the people and computing resources that make Purdue a leader in HPC and research in higher education. This year saw the continuation of that legacy with captivating presentations at the Purdue exhibitor's booth, fun alumni networking events, workforce development opportunities, and more!</p>
<p>SC25 is an annual conference where the brightest minds in computing and technology from around the world gather in one location for a week of communication, collaboration, and innovation. The conference took place in St. Louis, Missouri, this year, with 16,500+ attendees and a record-breaking 559 exhibitors. Purdue’s exhibitor booth, hosted by the Rosen Center for Advanced Computing (RCAC), did not disappoint, engaging with a steady stream of attendees who dropped by to speak with our HPC experts, listen to presentations, and participate in demonstrations.</p>
<p>The central theme for the Purdue booth this year was to promote the <a href="https://www.purdue.edu/computes/">Purdue Computes</a> initiative. To help achieve this goal, Purdue provided the conference with <a href="https://www.rcac.purdue.edu/sc2025">booth presentations</a> throughout the week from experts within multiple departments. Purdue staff also participated in numerous workshops, Birds-of-a-feather sessions (BOFs), and panel discussions outside of the booth exhibits, all highlighting the university’s contributions to research computing and HPC in higher education. A full list of SC25 papers and presentations given by Purdue affiliates is as follows:</p>
<ul>
<li>
<strong>Haniye Kashgarani, LJ Lumas, Emma Zheng, and Brendan Swanson:</strong> <em>AnvilOps: Increasing Accessibility of Kubernetes with Automated Builds and Deployments</em>
</li>
<li>
<strong>Paul Jiang:</strong> <em>A Formal Characterization of Non-Monotonicity in Tensor Cores</em>
</li>
<li>
<strong>Richie Tan and Guangzhen Jin:</strong> <em>A Modular, Responsive, and Accessible HPC Dashboard Built upon Open OnDemand</em>
</li>
<li>
<strong>Mithuna Thottethodi, Sree Charan Gundabolu, and Vijaykumar T. N.:</strong> <em>BLAZE: Exploiting Hybrid Parallelism and Size-Customized Kernels to Accelerate BLASTP on GPUs</em>
</li>
<li>
<strong>Elham Sarbijan, FNU Ashish, Christina Joslin, and David Burns:</strong> <em>Generating Frequently Asked Questions from Technical Support Tickets using Large Language Models</em>
</li>
<li>
<strong>David F. Gleich:</strong> <em>KVMSR+UDWeave: Extreme-Scaling with Fine-grained Parallelism on the UpDown Graph Supercomputer</em>
</li>
<li>
<strong>Petros Drineas and Vasileios Georgiou:</strong> <em>Randomized Numerical Linear Algebra in HPC: Toward a Sustainable, Scalable Software Ecosystem</em>
</li>
</ul>
<p>Aside from hosting <img width="400" style="padding:10px;" class="float-right" alt="Image description" src="https://www.rcac.purdue.edu/files/RCAC-Stories/SC25-Post-Event/IMG_6076.jpg" />a booth and giving presentations, Purdue assisted directly with making SC25 a success. Purdue staff and affiliates volunteered for the SC25 Planning Committee (the organizing body for SC25), SCinet (the collaborative group that builds the infrastructure and network for the conference), and the Student Cluster Competition (a 48-hour HPC competition). Thanks to these volunteers, Purdue lent its expertise towards building and running the entire conference. Support for SC25 wasn’t limited to employees, however. Purdue also offered hardware for a training session to help ensure the best conference possible.</p>
<p>Anvil, one of Purdue’s most powerful supercomputers, was the main resource used to host an all-day, student-focused workshop at SC25. The workshop consisted of lectures combined with self-paced hands-on activities on HPC, AI, and quantum computing. Each student created their own ACCESS account in order to utilize Anvil, and as a bonus for participating, they will have continual access to the supercomputer for a full year. The exercises mainly focused on accelerated code (CUDA) with both C++ and PyTorch, for which the students used all 84 of Anvil’s cutting-edge H100 GPUs for the entirety of the day. In total, 100 students from multiple institutions took part in the workshop.</p>
<p>SC25 also provided <img width="400" style="padding:10px;" class="float-right" alt="Image description" src="https://www.rcac.purdue.edu/files/RCAC-Stories/SC25-Post-Event/SC25SCC-35.jpg" />an opportunity for Purdue students to shine. Throughout the week, graduate and undergraduate students from the university were involved in numerous workforce development activities, including giving presentations at the Purdue booth, conducting workshops, and taking part in poster sessions. Two Purdue students competed as part of the <a href="https://www.rcac.purdue.edu/news/7440">INpack team</a> in the 2025 IndySCC, a world-renowned supercomputing competition, while four of the eight <a href="https://www.rcac.purdue.edu/news/7449">Anvil REU students</a> were able to present on the work they conducted during the summer program. Outside of gaining presentation experience, the students were also able to attend different informational sessions and learn about the latest advances in HPC, as well as network and develop connections with people within the community. Providing students with opportunities such as these ties in directly with Purdue’s goal of developing the HPC workforce of the future.</p>
<p>To cap off the fantastic week for Purdue, the new HPL-MXP mixed-precision benchmark list and IO500 lists were released at SC25. Purdue University’s newest supercomputing community cluster, <a href="https://www.rcac.purdue.edu/compute/gautschi">Gautschi</a>, was ranked 27th on the HPL-MXP list and 20th on the IO500 list in the 10 Node Production category. This is an amazing achievement and a testament to the value of Purdue’s continued investment in HPC.</p>
<p>Overall, SC25 was a tremendous success for the university. If you or someone in your department would like to be involved with SC26, please contact  <a href="mailto:rcac-help@purdue.edu">rcac-help@purdue.edu</a>.</p>
<p>For more information regarding HPC and how it can help you, please visit our “<a href="https://www.rcac.purdue.edu/anvil/why-hpc">Why HPC?</a>” page.</p>
<p>Anvil is one of Purdue University’s most powerful supercomputers, providing researchers from diverse backgrounds with advanced computing capabilities. Built through a $10 million system acquisition grant from the <a href="https://nsf.gov/">National Science Foundation (NSF)</a>, Anvil supports scientific discovery by providing resources through the NSF’s <a href="https://access-ci.org/">Advanced Cyberinfrastructure Coordination Ecosystem: Services &amp; Support (ACCESS)</a>, a program that serves tens of thousands of researchers across the United States.</p>
<p>Researchers may request access to Anvil via the <a href="https://www.rcac.purdue.edu/knowledge/anvil/access/anvil_through_access">ACCESS allocations process</a>. More information about Anvil is available on Purdue’s <a href="https://www.rcac.purdue.edu/anvil">Anvil website</a>. Anyone with questions should contact <a href="mailto:anvil@purdue.edu">anvil@purdue.edu</a>. Anvil is funded under NSF award No. 2005632.</p>
<p>RCAC operates all centrally-maintained research computing resources at Purdue University, providing access to leading-edge computational and data storage systems as well as expertise and support to Purdue faculty, staff, and student researchers. To learn more about HPC and how RCAC can help you, please visit: <a href="https://www.rcac.purdue.edu/">https://www.rcac.purdue.edu/</a></p>
<p><em>Written by: Jonathan Poole, poole43@purdue.edu</em></p>
]]></description>
				<pubDate>Tue, 02 Dec 2025 00:00:00 -0500</pubDate>
									<category>Science Highlights</category>
							</item>
					<item>
				<title><![CDATA[RCAC hosts successful Anvil REU Summer 2025 program]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2461</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2461</guid>
				<description><![CDATA[<p>Over the summer, the Rosen Center for Advanced Computing (RCAC) hosted its annual 11-week hands-on internship, the Anvil Research Experience for Undergraduates (REU) Summer program.</p>
<p>Eight students from across the nation gathered at Purdue’s campus in West Lafayette, Indiana, for this year’s Anvil REU program. The students enrolled in this internship program to learn about high-performance computing (HPC) and to work on projects related to the operations of the NSF-funded Anvil supercomputer at Purdue. During the program, which is supported by the National Science Foundation (NSF), the students obtained the knowledge and skills necessary to build and support advanced research computing systems and scientific applications on these systems.</p>
<p>The Anvil REU program is a paid summer internship open to undergraduate students in the United States, regardless of their background. Due to a massive influx of applications—over 600 total—the application window closed early in mid-January of this year. This was a significant increase in applicants from 2024. The Anvil REU mentors—eight RCAC staff members who led the projects that the students would work on during the summer—along with the Anvil executive team, took this list of 600+ applicants and distilled it down to eight students. The eight participants of the Anvil REU program were:</p>
<ul>
<li>
<strong>Abigale Tucker</strong>, Computer Science major, Middle Tennessee State University</li>
<li>
<strong>Randy Alejo</strong>, Computer Science major, Stony Brook University</li>
<li>
<strong>Brendan Swanson</strong>, Computer Science major, North Carolina State University</li>
<li>
<strong>Emma Zheng</strong>, Computer Science major, Purdue University</li>
<li>
<strong>Abigail Lin</strong>, Computer Science major, University of Florida</li>
<li>
<strong>Sadra Williams</strong>, Computer Science major, North Carolina State University</li>
<li>
<strong>Christina Joslin</strong>, Data Science and Applied Statistics major, Purdue University</li>
<li>
<strong>David Burns</strong>, Computer Science major, University of Wisconsin–Madison</li>
</ul>
<div class="my-3 text-center"><img width="500" alt="AnvilPlot" src="https://www.rcac.purdue.edu/files/anvil/Anvil-REU/Anvil-REU-2025/REU-Photos/1W5A1588.jpg" /></div>
<p>The Anvil REU program consisted of four separate projects, with two students pairing together to tackle each one. These projects were chosen with real-world applicability in mind—the students would not only gain experience with HPC and learn new skill sets, but would simultaneously increase Anvil’s capabilities. Each project also had two mentors working with the students to help them achieve their goals.</p>
<h4>Project 1:</h4>
<p>The first Anvil REU project for 2025 focused on building a data warehouse to store and manage logs from data centers and compute systems, integrating data sources, and creating visual dashboards. Two students, Abigale Tucker and Randy Alejo, teamed up to take on this project under the supervision of their mentors, Sam Weekly and Patrick Finnegan, as well as Anvil Executive Team member Preston Smith. In this project, Tucker and Alejo built a data warehouse and several data pipelines that collect, transform, store, and enable the querying of data. The Anvil supercomputer supports over 12,000 users throughout the U.S. These users generate massive volumes of data encompassing a variety of scientific domains. Managing such a large amount of data is a difficult task, especially when it needs to be easily obtained at any point in the research process. Tucker and Alejo tackled this problem by designing a system to efficiently manage, process, and store this data, making it accessible, organized, and ready for analysis when it’s needed most. Their pipeline was developed using a tech stack that included Apache Kafka, ClickHouse, Apache Iceberg, Grafana, and Apache Superset. They tested their system by creating a testing environment that simulated the real architecture but used fake data, allowing them to validate and troubleshoot without risking the security of real researcher data or interrupting system processes that were already in place. Once they were pleased with the functionality and performance of their pipeline, they were able to connect it to real-world data on the Anvil system.</p>
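<p>As an illustration of the transform stage in a pipeline like this, the sketch below parses a made-up scheduler log line into a flat record ready for a columnar store. The log format and field names are hypothetical; the students’ real schema and the Kafka/ClickHouse plumbing are not shown:</p>

```python
import json
from datetime import datetime

# Hypothetical raw scheduler log line; the real format is not public.
RAW = "2025-06-12T14:03:55Z job=8841 user=alice partition=gpu state=COMPLETED elapsed=3600"

def parse_log_line(line):
    """Transform one raw log line into a flat record for a columnar store."""
    ts, *pairs = line.split()
    record = {"timestamp": datetime.fromisoformat(ts.replace("Z", "+00:00")).isoformat()}
    for pair in pairs:
        key, value = pair.split("=", 1)
        record[key] = int(value) if value.isdigit() else value  # type numeric columns
    return record

record = parse_log_line(RAW)
print(json.dumps(record))
```

<p>In a real deployment, records like this would be published to a message queue and batch-loaded into the warehouse rather than printed.</p>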
<div class="my-3 text-center"><img width="500" alt="AnvilPlot" src="https://www.rcac.purdue.edu/files/anvil/Anvil-REU/Anvil-REU-2025/REU-Photos/1W5A0901.jpg" /></div>
<h4>Project 2:</h4>
<p>The second Anvil REU project worked on developing a dynamic web interface for building and deploying container workloads on the Anvil Composable Subsystem. Brendan Swanson and Emma Zheng worked on this project under the supervision of their mentors, LJ Lumas and Haniye Kashgarani, and Anvil Executive Team member Erik Gough. The Anvil Composable Subsystem is a Kubernetes-based private cloud that provides a platform for creating composable infrastructure on demand. This cloud-style flexibility provides researchers the ability to self-deploy and manage persistent services to complement HPC workflows and run container-based data analysis tools and applications. The problem is that deploying applications to Kubernetes can be really difficult, especially for beginners. To combat this issue, Swanson and Zheng developed <em><a href="https://anvilops.rcac.purdue.edu">AnvilOps</a></em>, a user-friendly web interface that automates the deployment of applications to Anvil Composable without writing Kubernetes manifests. Thanks to their hard work throughout the summer, <em>AnvilOps</em> features seamless Git integration, the ability to monitor deployments and roll back to previous versions if needed, and support for a wide variety of languages and frameworks so users can connect their GitHub repository as-is. All of this allows Anvil users of any experience level to deploy applications at the click of a button.</p>
<div class="my-3 text-center"><img width="500" alt="AnvilPlot" src="https://www.rcac.purdue.edu/files/anvil/Anvil-REU/Anvil-REU-2025/REU-Photos/1W5A1118.jpg" /></div>
<h4>Project 3:</h4>
<p>The third Anvil REU project focused on creating easy-to-use bioinformatics workflow templates. Abigail Lin and Sadra Williams worked on this project with their mentors, Nannan Shan and Arun Seetharam, as well as Anvil Executive Team member Arman Pazouki. Genomics research that utilizes HPC resources has been accelerating in the past few years, which has been great for discovery in the field. However, biologists often lack a deeper understanding of computing and computational workflows, which can severely hinder (or altogether halt) their research projects. To address this issue, Lin and Williams developed four Bioinformatics Workflow Templates for genomics analyses, each tailored for Purdue’s Anvil HPC platform. The templates were: RNA-seq, variant calling, genome assembly, and general (a customizable option where users can easily create their own workflow). By completing their project, Lin and Williams have provided bioinformatics researchers with little programming knowledge a simple and easy way to conduct their science on Anvil.</p>
<div class="my-3 text-center"><img width="500" alt="AnvilPlot" src="https://www.rcac.purdue.edu/files/anvil/Anvil-REU/Anvil-REU-2025/REU-Photos/1W5A7651-Enhanced-NR.jpg" /></div> 
<h4>Project 4:</h4>
<p>For the fourth Anvil REU project, Christina Joslin and David Burns worked with their mentors, Elham Barezi, Ashish, and Anvil Executive Team member Carol Song, to add an automated document generation feature to TicketHub, a proprietary AI-enabled tool for user support staff. The scope of their project was to create a new feature that would proactively generate useful FAQs by using Natural Language Processing (NLP) and Large Language Models (LLMs) to identify and summarize common user issues from past user support requests. Maintaining accurate and up-to-date technical documentation is a time-consuming and heavily manual task for support staff at HPC facilities. By taking on this project, Joslin and Burns worked to remove some of the burden placed on Anvil’s support team. The students successfully developed this new TicketHub feature over the course of the internship and were able to test its performance by evaluating the generated FAQs in three key areas—clarity, accuracy, and relevance. The FAQs rated high in clarity but had room for improvement in accuracy and relevance; however, the new feature proved to be very promising, and work is ongoing to improve its performance and even extend its use beyond HPC.</p>
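<p>One simple way such a feature might group related tickets before summarization is by keyword overlap. The sketch below is purely illustrative: the ticket text, stopword list, and similarity threshold are invented, and TicketHub’s actual NLP/LLM approach is not described here:</p>

```python
STOPWORDS = {"how", "do", "i", "my", "is", "not", "on", "in", "for", "why", "a", "the"}

def keywords(text):
    """Lowercased content words of a ticket subject."""
    return {w.strip("?.,").lower() for w in text.split()} - STOPWORDS

def jaccard(a, b):
    """Overlap between two keyword sets."""
    return len(a & b) / len(a | b)

tickets = [
    "How do I reset my Anvil password?",
    "Password reset not working on Anvil",
    "Slurm job stuck in pending state",
    "Why is my Slurm job pending for hours?",
]

# Greedy grouping: a ticket joins the first group it overlaps with enough.
groups = []
for t in tickets:
    for g in groups:
        if jaccard(keywords(t), keywords(g[0])) >= 0.4:
            g.append(t)
            break
    else:
        groups.append([t])

# Groups containing more than one ticket are candidates for a generated FAQ.
faq_candidates = [g[0] for g in groups if len(g) > 1]
```

<p>Each multi-ticket group could then be handed to an LLM to draft a candidate FAQ entry for human review.</p>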
<div class="my-3 text-center"><img width="500" alt="AnvilPlot" src="https://www.rcac.purdue.edu/files/anvil/Anvil-REU/Anvil-REU-2025/REU-Photos/1W5A8912-Enhanced-NR.jpg" /></div> 
<h3>A comprehensive educational experience</h3>
<p>While the Anvil REU students worked day in and day out all summer, this program was more than a temporary job—it was a completely immersive learning experience. The REU students had access to their mentors and to the RCAC staff working on-site. On top of that, the students were able to take tours of the campus, attend presentations hosted by both RCAC and the SURF (Summer Undergraduate Research Fellowship) program, participate in technical workshops aimed at developing multiple skillsets (writing research abstracts, effective technical communication, etc.), and even attend the Practice and Experience in Advanced Research Computing (PEARC) 2025 Conference! The eight REU students also gave midpoint presentations on their projects to staff from the Pittsburgh Supercomputing Center (PSC), Lawrence Berkeley National Laboratory (LBNL), the National Energy Research Scientific Computing Center (NERSC), and Pacific Northwest National Laboratory (PNNL).</p>
<p>Aside from technical workshops and presentation experience, the REU students were able to take part in workshops dedicated to developing more intangible skills. One such workshop was led by Syd Moore, an Academic Advisor &amp; Gallup-Certified Strengths Coach, and guided the group through the myStrengths talent assessment. Another was led by Matt Jones, Certified Master Facilitator in LEGO® SERIOUS PLAY® methods (LSP). This session introduced the students to LSP, a facilitated thinking, communication, and problem-solving technique for use with organizations, teams, and individuals. Both of these workshops took place at the beginning of the summer and had a follow-up midway through to build on the lessons learned and further prepare the students for their future careers. Giving these students the opportunity to gain as much knowledge and experience as possible is a vital component of the Anvil REU program. In this way, RCAC can help to ensure that each one of the REU participants develops into a capable and competent cyberinfrastructure professional.</p>
<p>The Anvil REU program also scheduled ample time for socializing, fun, and relaxation. Thanks to RCAC’s partnership with the SURF program, the REU students were able to attend multiple SURF Socials throughout the summer. This allowed the students to hang out with other undergraduates who were at Purdue for non-HPC-specific research projects, leading to new friendships and expanding their professional networks. Of course, the REU participants also socialized outside of these programmed events, but teaching them—by example—the value of having a positive work-life balance is an essential part of professional development.</p>
<h3>Mission accomplished</h3>
<p>On the final day of the Anvil REU program, the students presented their work to the Anvil team. As they demonstrated the results of their projects, each student discussed their accomplishments, obstacles, failures, and what they learned throughout the summer. The students were then asked questions and given feedback on their presentations. To the Anvil team, it was wonderful to hear how this summer might help steer the future careers of these students, many of whom expressed a desire to continue within the field of HPC. Overall, these eight students made fantastic progress: they completed their projects, learned technical and interpersonal skills they will need in the workforce, and gained an in-depth understanding of the HPC world.</p>
<p>To learn more about the upcoming 2026 summer Anvil REU program, please visit our <a href="https://www.rcac.purdue.edu/anvil/reu">Research Experience for Undergraduates</a> webpage. Applications are now being accepted. The application deadline is February 16, 2026, but may close earlier based on the volume of submissions. Interviews for positions will begin in January of 2026.</p>
<p>For more information regarding HPC and how it can help you, please visit our “<a href="https://www.rcac.purdue.edu/anvil/why-hpc">Why HPC?</a>” page.</p>
<p>Anvil is one of Purdue University’s most powerful supercomputers, providing researchers from diverse backgrounds with advanced computing capabilities. Built through a $10 million system acquisition grant from the <a href="https://nsf.gov/">National Science Foundation (NSF)</a>, Anvil supports scientific discovery by providing resources through the <a href="https://access-ci.org/">NSF’s Advanced Cyberinfrastructure Coordination Ecosystem: Services &amp; Support (ACCESS)</a>, a program that serves tens of thousands of researchers across the United States. Researchers may request access to Anvil via the <a href="https://www.rcac.purdue.edu/knowledge/anvil/access/anvil_through_access">ACCESS allocations process</a>. More information about Anvil is available on Purdue’s <a href="https://www.rcac.purdue.edu/anvil">Anvil website</a>. Anyone with questions should contact <a href="mailto:anvil@purdue.edu">anvil@purdue.edu</a>. Anvil is funded under NSF award No. 2005632.</p>
<p><em>Written by: Jonathan Poole, poole43@purdue.edu</em></p>
]]></description>
				<pubDate>Thu, 13 Nov 2025 00:00:00 -0500</pubDate>
									<category>Science Highlights</category>
							</item>
					<item>
				<title><![CDATA[Anvil supports BigCARE 2025 Summer Workshop]]></title>
				<link>https://rcac.purdue.edu/index.php/news/2442</link>
				<guid isPermaLink="true">https://rcac.purdue.edu/index.php/news/2442</guid>
				<description><![CDATA[<p>Purdue University’s Anvil supercomputer recently supported the 2025 BigCARE Summer Workshop, a two-week course aimed at helping cancer researchers develop big data skills. This year’s workshop took place at the University of California, Irvine (UCI). Throughout the course, attendees learned to manage, visualize, analyze, and integrate a variety of omics data in cancer studies. Anvil was integral to the workshop, providing attendees with access to a high-performance computing (HPC) resource designed to have a low barrier of entry for newcomers, which is crucial for those who are inexperienced in big data science.</p>
<p>The Big Data Training for Cancer Research (BigCARE) workshop is a program funded by the <a href="https://www.cancer.gov">National Cancer Institute (NCI)</a>. It was founded in 2020 by Min Zhang, MD, PhD, a Professor of Epidemiology and Biostatistics at the University of California, Irvine’s <a href="https://publichealth.uci.edu/">Joe C. Wen School of Population &amp; Public Health</a> and the Biostatistics Shared Resources Director for the <a href="https://cancer.uci.edu">UCI Chao Family Comprehensive Cancer Center</a>, along with her collaborators Dr. Sean Davis, MD, PhD, Associate Director of Informatics and Data Science and Professor of Medicine at the University of Colorado Anschutz School of Medicine, and Dr. Dabao Zhang, PhD, Professor of Epidemiology and Biostatistics at the Joe C. Wen School of Population &amp; Public Health at the University of California, Irvine. The team recognized a need for specialized HPC and big data training for cancer researchers and designed BigCARE to meet that need. This year’s workshop focused on analyzing and interpreting genomic and genetic data, including microbiome analysis, metabolomics analysis, single-cell data analysis, epigenomic data analysis, mendelian randomization, and transcriptome-wide causal inference for directed gene regulations.</p>
<p>“Anvil has <img width="400" style="padding:10px;" class="float-right" alt="Image description" src="https://www.rcac.purdue.edu/files/anvil/Anvil-Stories/BigCare-2023-Summer-Workshop/BigCARE-2025/Eric_Ryan.png" />been extremely helpful during the previous BigCARE workshops,” says Zhang, “especially for our participants with limited computing skills. Anvil provides the essential infrastructure and computing support needed to navigate between command line and R packages for large-scale data. This year, Anvil made the implementation much smoother when we added some AI and machine learning tools for multi-omics data analysis. The Anvil platform, along with Jupyter Notebook, offered an all-in-one solution that helped participants easily and quickly switch from concept to interactive analysis of big data without obstacles.”</p>
<p>Anvil’s role in the BigCARE workshop was to provide HPC resources through Open OnDemand and Jupyter Notebooks, which limits the need for in-depth knowledge of command-line interfaces or HPC server environments. The course material was developed as Jupyter notebooks, and thanks to Open OnDemand, the researchers had direct web access to the notebooks. All of this amounted to a low barrier of entry for the workshop participants.</p>
<p>Aside from providing the hardware and software needed to run the workshop, Anvil added value to BigCARE through the user support provided by the RCAC (Rosen Center for Advanced Computing) team. Before the start of the workshop, the Anvil team modified the Open OnDemand-Jupyter deployment that was customized for last year’s event. This customized deployment automatically handled all course setup and environment creation, eliminating much of the typical HPC work required by participants in such classes. Eric Adams, the Lead Research Operations Administrator for Education, and Ryan DeRue, a Senior Computational Scientist, also attended the event at UCI to present on Anvil and HPC, as well as provide support throughout the week.</p>
<p>“Supporting the BigCARE workshops is a great reminder of why we do what we do,” says Adams. “Providing a platform like Anvil that lowers the barrier to high-performance computing allows cancer researchers to focus on their science, not the technology. Seeing them apply these tools to real-world cancer data in real time is incredibly fulfilling.”</p>
<p>This year’s workshop was a huge success. Dr. Zhang and the attendees were thrilled by what they were able to accomplish during the two-week intensive, as well as by how helpful both Anvil and the RCAC support team were. In a post-course survey, 18 of the participants stated that they were likely or very likely to apply for their own Anvil allocation in the future. Dr. Zhang also indicated that she intends to continue using Anvil to support future BigCARE workshops.</p>
<p>More information about the BigCARE 2025 Summer Workshop can be found on UCI’s “<a href="https://bigcare.uci.edu">Big Data Training for Cancer Research</a>” webpage. Information about the Anvil supercomputer can be found on Purdue’s <a href="https://www.rcac.purdue.edu/anvil">Anvil Website</a>.</p>
<p>For more information regarding HPC and how it can help you, please visit our “<a href="https://www.rcac.purdue.edu/anvil/why-hpc">Why HPC?</a>” page.
Anvil is funded under NSF award No. 2005632. Researchers may request access to Anvil via the <a href="https://www.rcac.purdue.edu/knowledge/anvil/access/anvil_through_access">ACCESS allocations process</a>.</p>
<div class="my-3 text-center"><img width="550" alt="AnvilPlot" src="https://www.rcac.purdue.edu/files/anvil/Anvil-Stories/Anvil-AI-on-ACCESS/1W5A7969-Enhanced-NR.jpg" /></div> 
<p><em>Written by: Jonathan Poole, poole43@purdue.edu</em></p>
]]></description>
				<pubDate>Mon, 27 Oct 2025 00:00:00 -0400</pubDate>
									<category>Science Highlights</category>
							</item>
			</channel>
</rss>