OpenFOAM (for "Open-source Field Operation And Manipulation") is a C++ toolbox for the development of customized numerical solvers and pre-/post-processing utilities for the solution of continuum mechanics problems, most prominently computational fluid dynamics (CFD) (OpenFOAM in Wikipedia). OpenFOAM has an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics (by OpenCFD Ltd).
OpenFOAM is freely available and open source, licensed under the GNU General Public Licence (GNU GPL or GPL), with the typical GPL terms for exploitation. OpenFOAM® is a registered trademark of OpenCFD Ltd, licensed to the OpenFOAM Foundation.
Several OpenFOAM versions from both the OpenFOAM Foundation and OpenCFD are installed on the Puhti and Mahti servers.
After logging in to the server, give the command
module spider openfoam
The command lists the OpenFOAM versions available on the server. To get more information about a specific version, for example the OpenFOAM Foundation's version 7, use the command
module spider openfoam/7
To launch a specific version, here version 7, give the command
module load openfoam/7
OpenCFD's versions are recognized by a version string starting with the letter v, i.e., to launch version v1906, give the command
module load openfoam/v1906
Example batch job script files are available on the servers. After giving the launch command (module load, see above), the example script is the file
where \<server> is either puhti or mahti. Copy it into your working directory for further modification.
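The example file itself is not reproduced here. As a rough, hypothetical sketch of what such a batch script typically contains (the account name, partition, resource values and solver below are placeholder assumptions, not CSC's actual defaults; treat the example file on the server as the authoritative template):

```shell
#!/bin/bash
# Hypothetical sketch of an OpenFOAM batch script for a CSC server.
# Account, partition, resources and solver are placeholder assumptions.
#SBATCH --job-name=openfoam
#SBATCH --account=project_xxxx      # your CSC project number (placeholder)
#SBATCH --partition=medium          # choose a partition suitable for the server
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=128
#SBATCH --time=01:00:00

module load openfoam/7
srun pimpleFoam -parallel
```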
Use the collated parallel IO method on CSC's servers
- on CSC's computing servers, use OpenFOAM's collated IO method whenever possible
- using the collated method is absolutely necessary when the model is large and a lot of disk IO is involved
The file system used on CSC's computing platforms Puhti and Mahti is Lustre, which is optimized for reading and writing a small number of large files. The number of output files written by OpenFOAM can easily become very large, even up to millions, if the mesh is large and all field variables are written to disk, possibly on every time step. If even a few heavy OpenFOAM users perform this kind of IO simultaneously, the file system becomes overloaded and the whole computing platform can get jammed.
In the latest versions of OpenFOAM, the traditional parallel IO method described above can be replaced with the collated parallel IO method, in which the output data is written into only a few files. For HPC platforms using parallel file systems, this has been a significant improvement to OpenFOAM.
In test runs on CSC's servers, no significant difference in throughput time has been observed between the collated and non-collated methods; in some tests the collated method has even been slightly faster. Our recommendation is to use the collated method whenever practicable.
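Besides passing the -fileHandler option on each command line, the collated handler can also be selected globally for a case. A sketch, assuming an OpenFOAM version that supports the collated handler (the same switch can alternatively be set through the FOAM_FILEHANDLER environment variable):

```
// system/controlDict (sketch): select the collated file handler for the case
OptimisationSwitches
{
    fileHandler collated;
}
```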
On Mahti, one node contains 128 cores. If the total number of subdomains in the model is 256, the user may allocate two full nodes for the solver run. Decomposition in the batch job is then done with the command
decomposePar -fileHandler collated -ioRanks '(0 128 256)'
and two directories, processors256_0-127 and processors256_128-255, are created. The command for the solver run is then
pimpleFoam -parallel -fileHandler collated -ioRanks '(0 128 256)'
In this way, 128 processes run on each node, and on each node one process is an IO master process. Reconstruction of the data is done in the batch job with the command
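The reconstruction command itself is not spelled out above. Assuming the same rank grouping as in the decomposition step, it would presumably look like:

```shell
# Hypothetical: reconstruct using the same collated rank grouping as above
reconstructPar -fileHandler collated -ioRanks '(0 128 256)'
```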
In this example, the pressure results of subdomains 0 to 127 are, per time step, written into a single file. With the non-collated method, the same results would have been written into 128 separate files. This tremendous decrease in the number of output files naturally applies to all the other field variables as well.
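The scale of the reduction can be sketched with a small count (the time-step and field counts below are assumed example values, not from the text):

```shell
# Illustrative file-count comparison; nsteps and nfields are assumptions.
nprocs=128     # subdomains in one IO group
nsteps=1000    # written time steps
nfields=5      # field variables written (e.g. p, U, k, epsilon, nut)

echo "non-collated: $(( nprocs * nsteps * nfields )) files"   # one file per rank, field and step
echo "collated:     $(( nsteps * nfields )) files"            # one file per field and step
```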
IO operations in the collated method are based on threaded sub-processes, and therefore do not cause any time overhead for the simulation compared to the non-collated method.
Example batch job scripts for using the collated method on Mahti
The example scripts for separate batch runs for decomposition, solver and reconstruction are in the folder
Notice that on Mahti, decomposition and reconstruction must be done in the interactive queue; see the documentation for more info on using the interactive queue.
In a problem situation, send an email to firstname.lastname@example.org.
Last edited Mon Feb 15 2021