For an overview of the LUMI supercomputer, see the LUMI documentation.
Puhti and Mahti are CSC's supercomputers. Puhti has been available for CSC users since 2 September 2019 and Mahti has been available since 26 August 2020. LUMI is one of the pan-European pre-exascale supercomputers, located in CSC's data center in Kajaani. The CPU partition of LUMI (LUMI-C) has been available since early 2022, and the full system with the large LUMI-G partition has been available since early 2023.
Puhti contains CPU nodes with a range of memory sizes as well as a large GPU partition (Puhti AI), while Mahti contains homogeneous CPU nodes and is meant for larger jobs (minimum 128 CPU cores). Since 2021, Mahti has also contained a GPU partition (Mahti AI) with Nvidia Ampere GPUs. See the specifications for details on the systems and this page for an outline of the differences between LUMI-C and Mahti.
CSC supercomputers use the Linux operating system, and we recommend familiarizing yourself with the basics of Linux command-line usage before starting.
Accessing Puhti and Mahti
To be able to use CSC's supercomputers, you need to have a CSC user account that belongs to a computing project which has access to the respective supercomputers. CSC user accounts and projects are managed in the MyCSC portal. Further instructions are provided in the Accounts section of this user guide.
Connecting to the supercomputers
Connect using an ssh client:
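For example, assuming the default login addresses of the two systems (replace `yourcscusername` with your own CSC username):

```shell
ssh yourcscusername@puhti.csc.fi
ssh yourcscusername@mahti.csc.fi
```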
This will connect you to one of the login nodes. If you need to connect to a specific login node, use the command:
```shell
ssh yourcscusername@puhti-login<number 11-12,14-15>.csc.fi
ssh yourcscusername@mahti-login<number 11-12>.csc.fi
```
Here, `yourcscusername` is the username you get from CSC.
For more details, see the connecting page.
Don't allocate more resources to your job than it can use efficiently. Verify this for each new code and job type (different input) with a scaling test. The policy is that a job should run at least 1.5 times faster when you double the resources (cores). See the instructions for performing a scalability test, and please also consider other important factors related to performance.
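The 1.5x rule can be checked with simple arithmetic. This sketch uses hypothetical timings (3600 s on N cores, 2200 s on 2N cores); the numbers are illustrative, not from any real job:

```shell
# Illustrative scaling test check with hypothetical timings:
# the same job runs in 3600 s on 64 cores and 2200 s on 128 cores.
t_small=3600   # runtime with N cores (seconds)
t_large=2200   # runtime with 2N cores (seconds)

# Speedup obtained by doubling the core count
speedup=$(awk "BEGIN {printf \"%.2f\", $t_small / $t_large}")
echo "speedup when doubling cores: $speedup"

# Doubling the cores is justified only if the speedup is at least 1.5
awk "BEGIN {exit !($speedup >= 1.5)}" && echo "scales well enough" \
    || echo "does not scale: use fewer cores"
```

Here the speedup is about 1.64, so doubling the cores would be acceptable; a speedup below 1.5 would mean the job should use fewer cores.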
Projects and quotas
Working on CSC supercomputers is based on projects. Computing and storage resources are allocated to projects, and when you submit a batch job, you must always specify the project it belongs to.
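In a Slurm batch script, the project is given with the `--account` option. A minimal sketch, where `project_2012345` is a placeholder for your own project and the partition name is an assumption to be checked against the system documentation:

```shell
#!/bin/bash
#SBATCH --account=project_2012345  # project the job is billed to (placeholder)
#SBATCH --partition=test           # assumed partition name; check the docs
#SBATCH --time=00:05:00
#SBATCH --ntasks=1

srun hostname
```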
Projects are managed in the MyCSC portal, where you can add services, resources and users to your CSC projects.
In CSC supercomputers, you can check your currently active projects with the command:
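The tool is `csc-projects`, as seen in the example output further below; run it without arguments on a login node:

```shell
csc-projects
```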
This command shows information for all your CSC projects. You can select just one project to be reported with the `-p` option. For example:
```
[kkayttaj@puhti ~]$ csc-projects -p project_2012345
-----------------------------------------------------------------
Project: project_2012345   Owner: Kalle Käyttäjä
Title: "Ortotopology modeling"
Start: 2015-12-17   End: 2022-03-16   Status: open
Budget: 1174188   Used 1115284   Remain: 58904
Latest resource grant: 2019-03-04
-----------------------------------------------------------------
```
The command reports the owner of the project, its title, and its start and end dates. In addition, it prints the budgeting information for the project: how many billing units have been granted to your project, how many have been used, and how many still remain.
The disk areas of your projects can be checked with the command:
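On Puhti and Mahti this is, assuming the standard CSC tooling, the `csc-workspaces` command (verify with the Disk areas page if the name differs on your system):

```shell
csc-workspaces
```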
Using Puhti and Mahti
- Systems: What computational resources are available
- Usage policy: Usage policy of CSC supercomputers
- Connecting: How to connect to CSC supercomputers
- Puhti web interface: How to connect to Puhti using the web interface
- Disk areas: What places are there for storing data on CSC supercomputers
- Modules: How to find the programs you need
- Applications: Application-specific instructions
- Running jobs: How to run programs on the supercomputers
- Installing and compiling: How to install and compile your own applications
- Debugging applications: How to debug your applications
- Performance analysis: How to understand the performance of your applications