Getting started

This tutorial is intended for users who are not familiar with the Slurm Workload Manager. All concepts are introduced gradually. However, intermediate knowledge of the Linux command line is required to fully understand the described concepts.

Logging in to the cluster

The cluster has a single point of entry: the entropy login node.

The server is accessible from the Faculty’s internal network and from the Internet.

Warning

As the login node is exposed to the open Internet, a restrictive firewall is configured. Additionally, excessive login attempts are blocked automatically by specialized software.

The only way to log in to the cluster is with SSH keys; password authentication is disabled.

ssh cluster_user@entropy.mimuw.edu.pl
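
If you do not have an SSH key pair yet, the sketch below shows one way to generate a key and add a convenience entry to ~/.ssh/config. The key type, file names, and host alias are only examples, and the public key still has to be registered with the cluster administrators before it can be used.

# Run on your own machine: generate a key pair (ed25519 is a reasonable default).
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_entropy

# Optional: add a shortcut so that "ssh entropy" works.
cat >> ~/.ssh/config <<'EOF'
Host entropy
    HostName entropy.mimuw.edu.pl
    User cluster_user
    IdentityFile ~/.ssh/id_ed25519_entropy
EOF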

Home directory and disk quota

Each user of the entropy cluster has a home directory located on the login node. The /home filesystem is stored on a RAID5 array and thus tolerates the failure of a single HDD. Currently, there is no backup of users' data.

Warning

Currently, we cannot provide any backup of the /home disk array. Thus, each user should back up important files and results outside of the cluster.
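
Since there is no backup, a simple way to copy results off the cluster is rsync run from your own machine. The directory names below are only placeholders:

# Run on your local machine: copy a results directory from your cluster home.
rsync -avz cluster_user@entropy.mimuw.edu.pl:results/ ~/entropy-backup/results/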

Each user has a limited disk quota on the cluster login node. The quota is enforced on the /home filesystem.

  • The quota value for students and other users is 60 GB.

  • The quota value for the Faculty's staff is 120 GB.

Each user can check their disk usage with the entropy_disk_quota command:

kmwil@asusgpu0:~$ entropy_disk_quota

 ____________
< Disk Quota >
 ------------
  \
   \   \_\_    _/_/
    \      \__/
           (oo)\_______
           (__)\       )\/\
               ||----w |
               ||     ||

# Blocks is the currently used space.

Disk quotas for User kmwil (2000)
Filesystem   Blocks  Quota  Limit Warn/Time    Mounted on
/dev/sda1       28K    60G    60G  00 [------] /home

---
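
Besides entropy_disk_quota, standard Linux tools can show what occupies the space in your home directory. A minimal sketch:

# Total size of your home directory.
du -sh ~

# Largest first-level subdirectories, sorted by size.
du -h --max-depth=1 ~ | sort -hr | head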

NFS

The /home partition is available on each compute node through an NFS share. All computation results should be written there or to one of the node-local filesystems.
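
As a quick sanity check, you can verify on a compute node that /home is indeed an NFS mount, and inside a job you can do heavy temporary I/O on node-local storage while keeping the final results under /home. The scratch path and program name below are only illustrative:

# On a compute node: confirm that /home is mounted over NFS.
findmnt -t nfs,nfs4 /home

# Inside a job script: use node-local scratch for temporary files.
SCRATCH=$(mktemp -d /tmp/"$USER"_scratch.XXXXXX)
my_program --tmpdir "$SCRATCH" --output "$HOME"/results/run_01.out   # my_program is a placeholder
rm -rf "$SCRATCH"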