Research Services Announcements
-
How to submit Gaussian GPU jobs on the Andromeda 2 cluster:
EXAMPLE 1: A Large Molecule

Before executing any Gaussian calculations on a GPU, the Gaussian module that you load in your Slurm file MUST be compatible with the GPU that you are using. For example, if you are using A100 GPUs, you must load the 16.C02_AVX2.Linda module of Gaussian (which can ONLY be run on A100 nodes); if you are using V100 GPUs, you need to use 16.C01_AVX2.Linda. For a full list of modules and their compatibility, please refer to this reference: https://gaussian.com/gpu/. You can check which nodes on Andromeda are A100 or V100 nodes by ssh-ing into the terminal and typing […]
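As a rough sketch, a Slurm batch script for a Gaussian job on an A100 node might look like the following. The module name is the A100-compatible one mentioned above, but the partition name, GPU request syntax, resource amounts, and input file name are all placeholders — check the cluster's documentation and `module avail` for the exact values.

```bash
#!/bin/bash
#SBATCH --job-name=g16-gpu           # arbitrary job name
#SBATCH --partition=gpu              # placeholder: use the cluster's actual GPU partition
#SBATCH --gres=gpu:a100:1            # request one A100 GPU (exact syntax may vary by cluster)
#SBATCH --cpus-per-task=8            # CPU cores to pair with the GPU
#SBATCH --mem=64G                    # memory for the job
#SBATCH --time=24:00:00              # wall-time limit

# Load the A100-compatible Gaussian module named in the announcement.
# The exact module path may differ; verify with `module avail gaussian`.
module load gaussian/16.C02_AVX2.Linda

# Run Gaussian 16 on the input file (placeholder name).
g16 large_molecule.com
```

Note that Gaussian 16 also selects GPUs through Link 0 directives inside the input file (e.g. `%GPUCPU`); see https://gaussian.com/gpu/ for the details of pairing GPUs with control CPUs.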
-
A1->A2 Migration Assistance
With migrations to Andromeda 2 well underway for half the cluster community, we will begin hosting a series of working sessions starting March 25th, 2025, for those looking to learn more about the new cluster or in need of assistance with their migration. The Research Help page is still the best place to submit a request for assistance. These working sessions will serve as an opportunity to ask questions about the migration process and the general use of Andromeda 2. Andromeda 2 Migration Virtual Help Session schedule: please use this link to join the virtual session: https://meet.google.com/ypi-rekc-dts
-
Time Based Slurm Partitions
The Slurm job scheduler uses partitions to organize job submissions from researchers. Andromeda has partitions that are structured to separate jobs by the amount of time they require to complete, preferring shorter-running jobs to meet the fast-paced demands of BC’s research community. The “partition priority” for each partition listed below is one of many factors the Slurm job scheduler uses to determine the priority of a submitted job in the queue. Some partitions have a small number of nodes on which their jobs can run exclusively, while the majority of the compute nodes are accessible to all of […]
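In practice, this means choosing the partition whose time limit fits your job and requesting a wall time within it. A minimal sketch of the relevant batch directives follows; the partition name `short` and the executable are placeholders, not actual Andromeda partition names.

```bash
#!/bin/bash
#SBATCH --partition=short       # placeholder: pick a partition matching your job's runtime
#SBATCH --time=02:00:00         # requested wall time; must fit within the partition's limit

# Your job commands go here.
srun ./my_analysis              # placeholder executable
```

Before choosing, you can list each partition's time limit and node count with `sinfo --format="%P %l %D"`. Requesting only the time your job actually needs generally helps the scheduler place it sooner.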
-
Andromeda 2 Migrations
We’re working with multiple project owners to migrate their workloads from Andromeda 1 to Andromeda 2 at this time. Each group that migrates takes a bit of compute with them from Andromeda 1 to Andromeda 2, until eventually Andromeda 1 is no more and only Andromeda 2 remains. So, you may notice Andromeda 1 has fewer compute nodes. But don’t worry — there are also fewer workloads being run there! We’re being as conservative as possible with how many resources get migrated; we don’t want to create a bottleneck for researchers on either cluster. And we’ll be adding the next […]
-
January 2025 HPC Patch Cycle
Twice a year (January and July) the HPC cluster and other systems in the data center undergo a patch cycle. We’ll be patching Andromeda 1 & 2 on 1/23/25, with downtime for both clusters starting at 6am that morning. The patch cycle also serves as an opportunity to address hardware changes and other adjustments to the cluster that can’t be done while it’s online. An official email announcement will go out soon with details about what to expect and how long the clusters will be down.