Access
Access to the Linux cluster
Before you can use the Linux cluster, you must activate access yourself once. To do this, open Identity Management (IDM) and select the menu item "Request access". Then click on the "Manage access to the Linux cluster" tile. Only after this activation is it possible to connect to the cluster.
The connection to the Linux cluster is established via SSH. There are currently two head nodes: the original environment is reached via its-cs1.its.uni-kassel.de, while the nodes that have already been migrated to the new operating system are reached via its-cs132.its.uni-kassel.de. As soon as the migration is complete, its-cs132 will be renamed to its-cs1. Please note that your programs are not executed directly on the head node, but in batch mode via the SLURM job management system.
You will need a UniAccount (ukxxxxxx) and a workstation computer with SSH client software installed.
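Since programs run through SLURM rather than interactively on the head node, you typically write a short batch script and submit it with sbatch. The following script is only a minimal sketch: the job name, the resource requests and the program to run (./my_program) are placeholders that you need to adapt to your own job and to the partitions actually available on the cluster.

#!/bin/bash
#SBATCH --job-name=example       # name shown in the queue
#SBATCH --ntasks=1               # number of tasks (processes)
#SBATCH --time=00:10:00          # maximum runtime (hh:mm:ss)
#SBATCH --mem=1G                 # requested memory

./my_program                     # placeholder for your own program

Submit the script with sbatch jobscript.sh and check its status with squeue -u uk00123.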
Windows
To access the Linux cluster under Windows, we recommend the free MobaXterm software. After installation, start MobaXterm and open a new session of the type "SSH". Enter its-cs1.its.uni-kassel.de as the remote host and use your UniAccount <ukxxxxxx> with the domain <@uni-kassel.de> as the user name. A VPN connection is not required for access from the internet.
Please note that no visible feedback such as asterisks is displayed when you enter your password. Simply type in your password and confirm with Enter. Once you have successfully logged in, you will be in your home directory on the cluster.
To transfer data conveniently, you can also mount your home directory as a network drive on your local computer. For access from outside the university network, we recommend the VPN software of the University of Kassel. Alternatively, you can use the WinSCP software, which works without a VPN connection.
Linux
Under Linux, the SSH client software is usually already pre-installed. You can connect to the cluster with the following command:
ssh -Y uk00123@its-cs1.its.uni-kassel.de
You are now in your home directory. To transfer data, you can either mount your home directory on your local computer as well, or use one of the following commands:
scp:
scp file.txt uk00123@its-cs1.its.uni-kassel.de:/home/users/000/uk00123
or sftp:
sftp uk00123@its-cs1.its.uni-kassel.de
Connected to its-cs1.its.uni-kassel.de.
sftp>
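At the sftp> prompt, files are transferred with the standard sftp commands put and get; the file names below are only examples:

put file.txt      # upload file.txt from the local computer to the cluster
get results.dat   # download results.dat from the cluster to the local computer
quit              # end the sftp session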
To avoid typing the entire host address when logging in under Linux, you can use a config file in the .ssh folder of your local home directory:
cat ~/.ssh/config
Host cs1
    Hostname its-cs1.its.uni-kassel.de
    User uk00123
    ForwardX11 yes
Now a login with the shorter command ssh cs1 is possible.
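The host alias from the config file is not limited to ssh; scp and sftp resolve it as well, so a file transfer can be shortened accordingly (file name and target path are just examples):

scp file.txt cs1:/home/users/000/uk00123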
Storage space
During a job, all nodes can access the data in your home directory because it is available globally via the network. Accessing locally stored data (for example in the /local directory) is faster, but this data is only available on the respective compute node.
If there is not enough space in the home directory during a job, you can store data on /work during runtime. Please note that files in the /work and /local directories are automatically deleted after a certain retention period.
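A common pattern is therefore to stage input data to fast local storage at the start of a job and to copy the results back to the home directory at the end. The following job script is only a sketch: the directory layout (/local/$USER, $HOME/input, $HOME/results) and the program name are assumptions that have to be adapted to your own data and to the conventions on the cluster.

#!/bin/bash
#SBATCH --job-name=staging-example
#SBATCH --ntasks=1
#SBATCH --time=01:00:00

# assumed per-job scratch directory on the node-local disk
WORKDIR=/local/$USER/$SLURM_JOB_ID
mkdir -p "$WORKDIR"

# copy input data from the globally available home directory to local storage
cp "$HOME"/input/data.txt "$WORKDIR"/

# run the program on the local copy (placeholder program name)
cd "$WORKDIR"
"$HOME"/bin/my_program data.txt > result.txt

# copy results back before the local files are cleaned up
cp result.txt "$HOME"/results/
rm -rf "$WORKDIR"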
Here is an overview of the available directories:
Directory   Size     Availability         Backup     Retention
/home       10 GB    global (network)     regular    stored indefinitely
/work       8 TB     global (network)     none       deleted after 30 days of inactivity
/local      1.8 TB   local to the node    none       deleted after 30 days
/tmp        2 GB     local to the node    none       deleted after 2 days
Please note that you share the storage space on /work with other users. Before large copy operations, check the current usage with the command df -h /work to avoid running out of space.
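For example (the numbers in the df output depend on the current usage; du -sh is a general Linux command for checking your own consumption, not a cluster-specific quota tool):

df -h /work       # free and used space on the shared /work file system
du -sh $HOME      # how much of your own 10 GB home quota is in use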
For group resources, the storage size and retention period are set individually and depend on the resources requested.