Interactive Sessions on Andromeda

While Andromeda has 284 compute nodes available (as of June 2025), there are only two 48-core login nodes. A login node is the environment you enter when you first connect to Andromeda via ssh, Open OnDemand, or any other remote protocol. Since the login nodes are a shared resource, and the first point of contact with Andromeda for many users, it is important not to overwhelm them with tasks that require large amounts of compute power or file I/O. For this reason, no modules (software) other than certain administrative tools may be loaded on the login nodes. Any task beyond small shell scripts and filesystem navigation should be executed either in a Slurm batch job or in an interactive session.

An interactive session on a compute node can be requested from the command line with the interactive command. If resources are available, you will shortly be transferred to a compute node. You can check which node you are currently on by typing hostname; after the transfer, you will see that the hostname has changed to one of the compute nodes:

[johnchris@a002 ~]$ hostname
a002.m31.bc.edu

[johnchris@a002 ~]$ interactive
Executing: srun --pty -N1 -n1 -c4 --mem=16g -pinteractive -t0-04:00:00 /bin/bash
Press any key to continue or ctrl+c to abort.
(press any key)
cpu-bind=MASK - c020, task  0  0 [506380]: mask 0x880088000000 set
[johnchris@c020 ~]$ hostname
c020
[johnchris@c020 ~]$
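
When you are finished, exit the shell (with exit or Ctrl+D) to end the session and release its resources; your prompt returns to the login node:

[johnchris@c020 ~]$ exit
[johnchris@a002 ~]$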

You can request a larger or smaller amount of computational resources using the command line options for interactive. For a summary of the available options, run interactive -h.

To summarize:

  • -t: wall time (beyond which the session will automatically end); default is 4 hours.
  • -N: number of compute nodes; default is 1, which suffices for most purposes.
  • -m: memory per node (in GB); default is 16GB.
  • -n: number of tasks per node; default is 1.
  • -c: number of CPU cores per task; default is 4.
  • -p: partition requested; default is “interactive”.
  • -G: number of GPUs per node; default is 0.
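
These options can be combined in a single request. As a hypothetical example (the echoed srun line below is an approximation modeled on the default expansion shown earlier, so the exact form may differ), a two-hour session with 8 cores and 24GB of memory would look roughly like:

interactive -t 0-02:00:00 -c 8 -m 24
Executing: srun --pty -N1 -n1 -c8 --mem=24g -pinteractive -t0-02:00:00 /bin/bash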

Note: We no longer enable X11 forwarding to compute nodes. To use interactive GUI apps on compute nodes, use the Open OnDemand service instead.

For example, to run a smaller job interactively on 16 CPUs with 32GB of memory, you could start the session with:

interactive -c 16 -m 32
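
Once the session starts, you can confirm what Slurm actually allocated. These are standard Slurm environment variables and commands, though the exact output format depends on the Slurm version:

echo $SLURM_CPUS_PER_TASK                        # cores bound to this task (16 here)
scontrol show job $SLURM_JOB_ID | grep -i tres   # full allocation (CPUs, memory, nodes)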

If you need access to a GPU, pass the number of GPUs you need to the -G option:

interactive -G 1
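
Once the GPU session starts, you can check that a device is actually visible. Assuming the GPU nodes use NVIDIA hardware (an assumption; check with your administrators), this would be:

nvidia-smi                      # lists the GPU(s) allocated to this session
echo $CUDA_VISIBLE_DEVICES      # set by Slurm when GPUs are allocated via gres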

Note: Each user can run at most 2 interactive sessions simultaneously, with a combined limit of 16 cores and 64GB of memory across all sessions. For more information on resource constraints for interactive sessions, please refer to the “Andromeda Slurm Partitions” section on this page.
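
To see how much of the per-user limit you are currently using, list your own jobs in the interactive partition (the --me flag requires a reasonably recent Slurm release; -u $USER works on older versions):

squeue --me -p interactive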

In the rare case that multiple compute nodes are down for maintenance and the Slurm controller cannot place your interactive job on an available node, use sinfo -R to list the nodes that are out of service (and the reason), then pass the -w option to srun to request a specific healthy node for your interactive session:

[johnchris@a002 ~]$ sinfo -R
REASON               USER      TIMESTAMP           NODELIST
Not responding       slurm     2025-06-28T21:31:13 c057
maint_A1_mig         root      2025-06-24T18:15:46 c[148-153],g001
maint_grubcmdline    root      2025-06-28T04:36:41 c[001,005,009,019,022-024,028,030-032,035-036,038-039,043,048,055,059,065,068,070-071,075,079,085-086,091]
maint_pfault         root      2025-06-23T17:12:39 c106
maint_unreachable    root      2025-05-22T05:16:34 c[157-158]
maint_pfault         root      2025-06-28T04:25:02 c130
[johnchris@a002 ~]$ srun -N1 -n1 -c4 --mem=16g -p interactive --pty -t 0-04:00:00 -w c037 $SHELL
[johnchris@c037 ~]$
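
Before picking a node with -w, you can list the nodes in the partition that are currently idle (standard sinfo options; add -o to customize the columns if desired):

sinfo -p interactive -t idle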
