Getting started with Puhti
This is a quick start guide for Puhti users. It assumes that you have previously used CSC cluster resources such as Taito or Sisu. If not, start by looking at the overview of CSC supercomputers.
Go to MyCSC to apply for access to Puhti, or to view your projects
and their project numbers if you already have access. You can also use the csc-projects command on Puhti to list your projects on the command line.
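For example, on a Puhti login node (csc-projects is a CSC helper command; if it is not available in your environment, use MyCSC instead):

```bash
csc-projects   # list your CSC projects and their project numbers
```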
Connecting to Puhti
Connect using a normal ssh client. yourcscusername is the username you get from CSC.
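For example, assuming the standard login address puhti.csc.fi:

```bash
ssh yourcscusername@puhti.csc.fi
```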
There is also a web interface available at www.puhti.csc.fi, where you can log in with your CSC user name. In this interface you can manage files, launch interactive applications, and list jobs, quotas and project statuses. You can also use it for graphical applications.
Module system
CSC uses the Lmod module system.
Modules are set up in a hierarchical fashion, meaning you need to load a compiler before MPI and other libraries appear.
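A short sketch of how the hierarchy behaves (module names are illustrative; run module avail on Puhti to see the exact ones):

```bash
module avail            # list modules visible right now
module load gcc         # load a compiler from the GCC family
module avail            # MPI and other libraries built with GCC now appear
module spider <name>    # search the whole hierarchy for a module
```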
The system comes with two compiler families installed: the Intel oneAPI and GCC compilers.
Several versions of the Intel compilers are installed; for GCC, versions 11.3, 9.4 and 8.5 are available.
High performance libraries
Puhti has several high-performance libraries installed; see more information about libraries.
Currently the system has two MPI implementations installed.
We recommend testing which implementation works best for your application.
You need to have the MPI module loaded when submitting your jobs.
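A minimal sketch of preparing the MPI environment before submitting (the module names here are assumptions for illustration; check module avail for what is actually installed):

```bash
module load gcc          # compiler first, due to the module hierarchy
module load openmpi      # then an MPI implementation (name is illustrative)
sbatch my_mpi_job.sh     # modules loaded at submission are inherited by the job
```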
More information about specific applications can be found here.
Python
System Python is available by default on both Puhti and Mahti without loading any module. Python 3
(version 3.6.8) is available as
python3. The default system Python does not include any optional Python
packages. Several other Python installations are available via the module system.
python-data is a good first choice; it includes many regularly used packages.
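For example (a sketch; the exact package set of python-data may vary, so numpy here is an assumption):

```bash
python3 --version          # system Python 3, no optional packages
module load python-data    # environment with commonly used packages
python3 -c "import numpy; print(numpy.__version__)"
```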
Running jobs
Puhti uses the Slurm batch job system.
A description of the different Slurm partitions can be found here. Note that the GPU partitions are accessible from the normal login nodes.
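To get a quick overview of the partitions from a login node (sinfo is a standard Slurm command):

```bash
sinfo -s    # summary of partitions, their time limits and node states
```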
Very important change!
You have to specify your billing project in your batch script with the
--account=<project> flag. Failing to do so will cause your job to be held.
Using srun directly also requires the flag.
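A minimal batch script sketch (the test partition and the project number format are assumptions; check the partition documentation for real names and limits):

```bash
#!/bin/bash
#SBATCH --account=<project>    # your billing project, e.g. project_2001234
#SBATCH --partition=test       # partition name; see the partition list
#SBATCH --time=00:05:00        # run time limit (hh:mm:ss)
#SBATCH --ntasks=1             # number of tasks (MPI ranks)

srun hostname                  # srun inherits --account inside the batch job
```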
Network access
- Login nodes can access the Internet
- Compute nodes can access the Internet
Storage
The project-based shared storage can be found under /scratch/<project>.
Note that this folder is shared by all users in a project. It is
not meant for long-term data storage; files that have not been used for 180
days will be automatically removed.
The default quota for this folder is 1 TB. There is also persistent
project-based storage with a default quota of 50 GB, located under
/projappl/<project>. Each user can store up to 10 GB of data in
their home directory ($HOME).
You can check your current disk usage with
csc-workspaces. More detailed information about storage can be found
here, and a guideline on managing data on the
/scratch disk here. See also the LUE tool
for efficiently querying how much data and how many files you have in a directory.
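An illustrative look at the storage areas (the project number is a made-up placeholder):

```bash
csc-workspaces                  # show quotas and current usage for your projects
ls /scratch/project_2001234     # shared scratch: 1 TB default quota, 180-day cleaning
ls /projappl/project_2001234    # persistent storage: 50 GB default quota
```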