Hamilton schematic
On Linux laptops, use your package manager to install OpenSSH.
On Windows, there are many programs; a popular one is PuTTY.
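Once an SSH client is installed, you can log in to Hamilton like this (replace username with your own account name):
ssh username@hamilton.dur.ac.uk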
Files can be copied to and from Hamilton with scp:
# Copy a local file into your home directory on Hamilton
scp myfile username@hamilton.dur.ac.uk:
# Copy a remote file into the current local directory
scp username@hamilton.dur.ac.uk:/ddn/home/username/somedirectory/remotefile .
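Whole directories can be transferred as well; scp supports recursive copies with -r (mydirectory is just an example name):
scp -r mydirectory username@hamilton.dur.ac.uk: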
Most commands on Linux come with a man page (manual). To get information on how to use e.g. cd:
man cd
This will open a pager in the terminal (scroll up and down with the arrow keys, exit with q).
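If you do not know the name of the command you need, the man pages can also be searched by keyword with -k, for example:
man -k editor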
There are many ways to create and edit files on Hamilton.
There are some standard editors available that work in a terminal, for example vim, emacs and nano.
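To create or edit a file, pass a file name of your choice to one of them (assuming nano is available):
nano myfile.txt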
Another option is to mount the remote directory locally via sshfs. I do this sort of thing regularly on my laptop:
# Create a local mount point
mkdir /tmp/myremotedir
# Mount the remote directory there via SSH (requires sshfs to be installed)
sshfs username@hamilton.dur.ac.uk:/ddn/home/somedirectory /tmp/myremotedir
I then have access to the remote files in /ddn/home/somedirectory as if they were on my laptop in the path /tmp/myremotedir.
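For example, listing the mount point now shows the remote files:
ls /tmp/myremotedir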
To close the connection, use
fusermount -u /tmp/myremotedir
Editors such as Visual Studio Code allow you to create and edit files on remote machines, for example via the Remote-SSH extension.
Once the extension is installed, you can use the Remote Explorer to add an SSH target for Hamilton and connect to it.
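Remote-SSH picks up hosts from your local ~/.ssh/config, so a minimal entry there makes Hamilton appear as a target (a sketch; the host alias hamilton is arbitrary and username should be your own):
Host hamilton
    HostName hamilton.dur.ac.uk
    User username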
A lot of software is available through the module system:
module av                            # list all available modules (short form)
module available                     # list all available modules
module av gcc                        # list the available versions of gcc
module li                            # list currently loaded modules (short form)
module list                          # list currently loaded modules
module load gcc/9.3.0                # load gcc 9.3.0 into the environment
module unload gcc/9.3.0              # unload it again
module load gcc/9.3.0
module swap gcc/9.3.0 intel/2020.4   # replace the loaded gcc with intel/2020.4
module purge                         # unload all loaded modules
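A typical sequence when you want a specific compiler might look like this (a sketch):
module purge            # start from a clean environment
module load gcc/9.3.0   # load the compiler
gcc --version           # check which gcc version is now on the PATH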
SLURM submission works via job scripts, i.e. short text files that describe the resources a job needs (run time, queue, number of nodes, ...) and the commands to run.
Hamilton provides several queues that jobs can be submitted to (see sinfo for the full list). The example below uses the test.q queue.
This is the content of the file job.sh:
#!/bin/bash
#SBATCH -t 5 # Request 5 minutes of time
#SBATCH -p test.q # This selects machines from the queue test.q
#SBATCH -N 1 # We want just one compute node
#SBATCH --mail-type=END # We want an email notification at the end of the job
#SBATCH --mail-user=username@durham.ac.uk # The email notification goes to this address
echo "Hello world from ${SLURM_JOB_NODELIST} at ${PWD}"
echo
# Get rid of all currently loaded modules and only load gcc 9.3
module purge
module load gcc/9.3.0
# This is the command we run
lscpu
To submit the job, do:
sbatch job.sh
This will give you a message similar to
Submitted batch job 4489750
To check on the status of your job use squeue -u username or scontrol show job 4489750.
The stdout and stderr streams of the job are captured; by default both end up in the file slurm-4489750.out.
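If you prefer your own file names, you can request them in the job script with the --output (-o) and --error (-e) options; %j expands to the job id (a sketch, myjob is just an example name):
#SBATCH -o myjob-%j.out   # file for standard output
#SBATCH -e myjob-%j.err   # file for standard error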
By default, the job executes in the directory from which you called sbatch. This is particularly important when you deal with input/output files given by relative paths.
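If the job should always run in a fixed directory, independent of where sbatch was called, you can change into it at the top of the job script (a sketch, reusing the example path from above):
cd /ddn/home/username/somedirectory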
Software environments are inherited by default when using sbatch.
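If you instead want the job to start with a clean environment, sbatch provides the --export option, e.g.:
sbatch --export=NONE job.sh   # do not propagate the submitting shell's environment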
The compute nodes have slightly different hardware compared to the login nodes.
The image below shows how the individual nodes are connected to each other.