IFY Linux HPC cluster
How to log in
From Linux/macOS: ssh -X hemmer.phys.ntnu.no or ssh -X hpc-2.phys.ntnu.no
From Windows you may use PuTTY and Xming.
You have to be on the NTNU network to log in.
(The password is the same as for Innsida.)
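For example (myuser is only a placeholder; replace it with your own NTNU username):
    ssh -X myuser@hemmer.phys.ntnu.no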
Info about compute nodes and partitions
Run the commands:
scontrol show node
scontrol show partition
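To inspect a single node or partition, give its name as an argument, e.g. for the porelab partition mentioned below:
    scontrol show partition porelab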
Available Software and Modules
In addition to the standard Linux software, some "module software" is also installed.
To list this software, use the command: module spider
Most of this software is installed using EasyBuild and Lmod in
hierarchical mode.
To access this software, first load the module "eb" (command: module load eb), then try the command: ml av.
Read the module manual page (command: man module).
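A typical session might look like this sketch (gcc is used only as an illustration of a package name; check module spider for what is actually installed):
    module load eb       # enable the EasyBuild/Lmod module tree
    ml av                # list modules available in the current hierarchy
    module spider gcc    # search for a specific package
    module load gcc      # load it if it is available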
More software will be installed on request.
Running jobs, SLURM
The SLURM resource manager is used on the cluster. At the moment there are five partitions available.
The partition porelab is reserved for the PoreLab (SFF) group.
Use the command sinfo for info about the SLURM installation.
Read the manual pages for sinfo, sbatch, scontrol, scancel, salloc and srun (command: man sbatch).
See the file /share/doc/examples/slurm_ex7.sh for an example job script.
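A minimal job script could look like the sketch below (job name, partition, resource values and program name are placeholders; adapt them to your needs and compare with the example file above):
    #!/bin/bash
    #SBATCH --job-name=myjob        # job name shown by squeue
    #SBATCH --partition=porelab     # pick a partition listed by sinfo
    #SBATCH --nodes=1               # number of nodes
    #SBATCH --ntasks=4              # number of tasks (cores)
    #SBATCH --time=01:00:00         # walltime limit hh:mm:ss

    srun ./my_program               # run the program on the allocated resources
Submit it with sbatch myjob.sh, check its status with squeue -u $USER, and cancel it with scancel followed by the job id.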
Misc. info
Disk quotas are enabled on the server olsen, where some of you have your home directory; use the command quota to check yours.
(The main reason for the quota is to prevent someone from accidentally filling up the disk.)
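For example, the -s option prints your usage and limits in human-readable units:
    quota -s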
There is no quota on CPU usage or other resources.
Some info and examples can be found in folder /share/doc.
The OS on the cluster nodes is Debian GNU/Linux
version 11 (bullseye).
The provisioning of nodes is done using FAI.
Links to other sites:
From Aalto Scientific Computing:
Linux shell crash course
From HPC2N in Sweden:
Beginner's Guide to clusters
From Sigma2/NRIS:
Sigma2/NRIS documentation
Support
email: support-kongull@hpc.ntnu.no