Getting started with Puhti

This is a quick start guide for Puhti users. It is assumed that you have previously used CSC cluster resources like Taito/Sisu. If not, you can start by looking here.

Go to the CSC customer portal to apply for access to Puhti, or to view your projects and their project numbers if you already have access. On Puhti, you can also use the command csc-projects.
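For example, once you are logged in to Puhti (see below), you can list your projects directly from the command line:

$ csc-projects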

Connecting to Puhti

Connect via NoMachine or using a normal SSH client:

$ ssh <csc_username>@puhti.csc.fi

Module system

CSC uses the Lmod module system.

Modules are set up in a hierarchical fashion, meaning you need to load a compiler before MPI and other libraries appear.
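As a sketch of how the hierarchy behaves (the exact module names and versions are assumptions; check module avail on the system):

$ module load gcc          # load a compiler first
$ module avail             # MPI and library modules built with it now appear
$ module load hpcx-mpi     # then load an MPI implementation
$ module list              # show the currently loaded modules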

More information about modules here.


The system comes with two compiler families installed, Intel and GCC. Both the 18 and 19 versions of the Intel compiler are installed, and GCC versions 9.1, 8.3 and 7.4 are available. The PGI compiler 19.7 is available for building GPU applications.
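To select a specific compiler version, load it by name and version. The version strings below are only illustrative; module avail shows what is actually installed:

$ module load gcc/9.1.0
$ gcc --version
$ module swap gcc intel    # switch from the GCC to the Intel compiler family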

More information about compilers here.


Currently the system has a few different MPI implementations installed:

  • hpcx-mpi
  • mpich
  • intel-mpi

We recommend testing hpcx-mpi first; it is provided by the network vendor and is based on OpenMPI.

You will need to have the MPI module loaded when submitting your jobs.
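A minimal compile-and-run cycle with hpcx-mpi could look like the following sketch; the partition name and the hello_mpi.c source file are illustrative placeholders, and <project> is your billing project:

$ module load gcc hpcx-mpi
$ mpicc -O2 -o hello_mpi hello_mpi.c
$ srun --account=<project> --partition=test --ntasks=4 ./hello_mpi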

More information about building and running MPI applications.


More information about specific applications can be found here.

Default python

Python is available through the python-env module. Loading it replaces the system python command with Python 3.7. The Anaconda environment has many commonly used packages installed by default.
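For example, to check what the module gives you (the exact package selection may differ from this sketch):

$ module load python-env
$ python --version                                     # should now report Python 3.7.x
$ python -c "import numpy; print(numpy.__version__)"   # many common packages are preinstalled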

Running jobs

Puhti uses the Slurm batch job system.

A description of the different Slurm partitions can be found here. Note that the GPU partitions are available from the normal login nodes.
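You can also list the partitions and their limits yourself on a login node with standard Slurm commands (the partition name gpu below is an assumption):

$ sinfo --summarize                # one-line overview of each partition
$ scontrol show partition gpu      # details of a single partition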

Instructions on how to submit jobs can be found here, and example batch job scripts are found here.

Very important change!!

You have to specify your billing project in your batch script with the --account=<project> flag. Failing to do so will cause your job to be held with the reason “AssocMaxJobsLimit”. Running srun directly also requires the flag.
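As a minimal sketch, a batch script with the mandatory --account flag could look like the following; the partition, time, and resource values are placeholders to adapt:

#!/bin/bash
#SBATCH --account=<project>    # billing project, mandatory
#SBATCH --partition=test       # partition name is only an example
#SBATCH --ntasks=1
#SBATCH --time=00:10:00

srun hostname

Submit the script with sbatch and check its state with squeue; a job held for a missing account shows AssocMaxJobsLimit in the Reason column:

$ sbatch job.sh
$ squeue -u $USER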

More information about billing here and common queuing system error messages in the FAQ.


  • Login nodes can access the Internet
  • Compute nodes can access the Internet


Puhti has three main storage areas:

  • /scratch/<project>: project-based shared storage with a default quota of 1 TB. This folder is shared by all users in a project. It is not meant for long-term data storage; files that have not been used for 90 days are removed automatically.
  • /projappl/<project>: persistent project-based storage with a default quota of 50 GB.
  • $HOME: each user's home directory, with room for up to 10 GB of data.

You can check your current disk usage with csc-workspaces; more detailed information about storage can be found here.
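For example, run it on a login node to see the quotas and current usage of your home, scratch and projappl directories:

$ csc-workspaces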

Moving data from Taito to Puhti

The Taito cluster will finally be closed in June 2020. Salvage your important data.

The new Allas object storage service provides a platform where you can store the data that currently resides in Taito. The new Puhti server, which replaces Taito, does not provide permanent storage space for research data. Even if you continue your work immediately in Puhti, it is a good idea to make a more permanent copy of your data in Allas. This happens naturally if you do the data migration from Taito to Puhti through Allas.
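As a hedged sketch, copying a dataset from Puhti to Allas with CSC's Allas client tools could look like this (the allas module and the a-put command come from CSC's allas-cli-utils; the directory name is a placeholder):

$ module load allas
$ allas-conf               # authenticate; you will be asked for your password and project
$ a-put my_dataset/        # upload a directory from disk to your Allas project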

Linux basics Tutorial for CSC

If you are new to the Linux command line or to using supercomputers, please consult this tutorial section!