ORCA

ORCA is an ab initio quantum chemistry program package that contains modern electronic structure methods, including density functional theory, many-body perturbation theory, coupled cluster and multireference methods, and semi-empirical quantum chemistry methods. Its main fields of application are larger molecules, transition metal complexes, and their spectroscopic properties. ORCA is developed in the research group of Frank Neese.

Available

  • Puhti: 5.0.4
  • Mahti: 5.0.4

Note that due to licensing issues every user has to install their own copy of the program.

License

ORCA users should register, agree to the EULA, and download and install a private copy of the program via the ORCA forum website. The free version is available only for academic use at academic institutions.

Usage

  • Download ORCA 5.0.4 (Linux, x86-64, shared version): orca_5_0_4_linux_x86-64_shared_openmpi411.tar.xz
  • Move the downloaded file to your computing project's application area (/projappl/<proj>) on Puhti
  • Unpack the package: tar xf orca_5_0_4_linux_x86-64_shared_openmpi411.tar.xz (see the sketch below)
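
Put together, the steps might look like this on a Puhti login node; <proj> and the download location are placeholders to adapt:

cd /projappl/<proj>
# assuming the tarball was downloaded to your home directory
mv ~/orca_5_0_4_linux_x86-64_shared_openmpi411.tar.xz .
tar xf orca_5_0_4_linux_x86-64_shared_openmpi411.tar.xz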

Note

Wave function-based correlation methods, both single- and multireference, often generate a substantial amount of disk I/O. To achieve maximal performance and to avoid excess load on the Lustre parallel file system, it is advisable to use the local disk.

Example batch script for Puhti using local disk

#!/bin/bash
#SBATCH --partition=small
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=40
#SBATCH --account=<your billing project>
#SBATCH --time=0:30:00 # time as `hh:mm:ss`
#SBATCH --gres=nvme:100  # requested local disk space in GB
module purge
module load gcc/11.3.0 openmpi/4.1.4 intel-oneapi-mkl/2022.1.0
export ORCADIR=<path to your ORCA directory>/orca_5_0_4_linux_x86-64_shared_openmpi411
export LD_LIBRARY_PATH=$ORCADIR:$LD_LIBRARY_PATH

# create an mpirun script that executes srun
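# ORCA internally calls "mpirun -np N ..."; the wrapper below rewrites the
# call to "srun -n N ..." so that Slurm places the MPI processes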
echo 'exec srun $(echo "${@}" | sed "s/^-np/-n/")' > ./mpirun
chmod +x ./mpirun
export PATH=${SLURM_SUBMIT_DIR}:${PATH}

# Set $ORCA_TMPDIR to point to the local disk
export ORCA_TMPDIR=$LOCAL_SCRATCH
# Copy only necessary files to $ORCA_TMPDIR
cp $SLURM_SUBMIT_DIR/*.inp $ORCA_TMPDIR/
# Move to $ORCA_TMPDIR
cd $ORCA_TMPDIR

$ORCADIR/orca orca_5.0.4.inp > ${SLURM_SUBMIT_DIR}/orca_5.0.4.out
rm -f ${SLURM_SUBMIT_DIR}/mpirun

# Copy all output to submit directory
cp -r $ORCA_TMPDIR/* $SLURM_SUBMIT_DIR/
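
The script is submitted from the directory that contains the ORCA input file; the script name here is a placeholder:

sbatch orca_job.sh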

Example batch script for Puhti (using the parallel file system, hence suitable for "standard" DFT calculations)

#!/bin/bash
#SBATCH --partition=small
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=40
#SBATCH --account=<your billing project>
#SBATCH --time=0:30:00 # time as `hh:mm:ss`
module purge
module load gcc/11.3.0 openmpi/4.1.4 intel-oneapi-mkl/2022.1.0
export ORCADIR=<path to your ORCA directory>/orca_5_0_4_linux_x86-64_shared_openmpi411
export LD_LIBRARY_PATH=$ORCADIR:$LD_LIBRARY_PATH

# create an mpirun script that executes srun
echo 'exec srun $(echo "${@}" | sed "s/^-np/-n/")' > ./mpirun
chmod +x ./mpirun
export PATH=${SLURM_SUBMIT_DIR}:${PATH}

$ORCADIR/orca orca_5.0.4.inp > orca_5.0.4.out
rm -f ${SLURM_SUBMIT_DIR}/mpirun

Example batch script for Mahti

#!/bin/bash
#SBATCH --partition=test
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=128
#SBATCH --account=<your billing project>
#SBATCH --time=0:30:00 # time as `hh:mm:ss`
#SBATCH --job-name=orca-5.0.4
#SBATCH --error=jobfile.err%J
#SBATCH --output=jobfile.out%J
module purge
module load gcc/11.2.0 openmpi/4.1.2 openblas/0.3.18-omp
export ORCADIR=<path to your ORCA directory>/orca_5_0_4_linux_x86-64_shared_openmpi411
export LD_LIBRARY_PATH=$ORCADIR:$LD_LIBRARY_PATH

# create an mpirun script that executes srun
echo 'exec srun $(echo "${@}" | sed "s/^-np/-n/")' > ./mpirun
chmod +x ./mpirun
export PATH=${SLURM_SUBMIT_DIR}:${PATH}

$ORCADIR/orca orca_5.0.4.inp > orca_5.0.4.out
rm -f ${SLURM_SUBMIT_DIR}/mpirun

Note

Please remember to adjust %pal nprocs in your input file according to the total number of requested MPI tasks (--nodes * --ntasks-per-node), as shown in the sketch below.
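
For example, a minimal input matching the 40-task Puhti scripts above might begin like this; the method, basis set, and geometry are placeholders, not recommendations:

! BP86 def2-SVP
%pal
  nprocs 40    # must equal --nodes * --ntasks-per-node
end
* xyz 0 1
O   0.000000   0.000000   0.000000
H   0.000000   0.757000   0.586000
H   0.000000  -0.757000   0.586000
*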

  • You can find a few additional example jobs in the directory:
/appl/soft/chem/orca

Note

If you come across an error related to oversubscribed resources, invoke your calculation as

$ORCADIR/orca file.inp --oversubscribe > file.out

References

Please use the following references when citing ORCA in your work:

The generic reference for ORCA is:

  • Neese, F. (2012) The ORCA program system, Wiley Interdiscip. Rev.: Comput. Mol. Sci., 2, 73-78.

Please do not cite only the generic reference above; in addition, cite the original papers that report the development and ORCA implementation of the methods you have used in your studies. The publications describing the functionality implemented in ORCA are given in the manual.

Last update: February 7, 2024