

Parallel Runs

Some of the TURBOMOLE modules are parallelized using the message passing interface (MPI) for distributed- and shared-memory machines, or with OpenMP and multi-threading techniques for shared-memory multi-core machines.

Generally, there are two hardware scenarios that determine which kind of parallelization can be used:

The list of parallelized programs presently includes:

Additional keywords needed for parallel runs with the MPI binaries are described in Chapter 17. However, these keywords do not have to be set by the user. When the parallel version of TURBOMOLE is used, scripts replace the binaries: these scripts take the usual input, carry out the necessary preparation steps, and automatically start the parallel programs. The user only has to set a few environment variables, see Sec. 3.2.1 below.
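As a minimal sketch, the environment setup for an MPI run might look as follows. The variable names PARA_ARCH and PARNODES follow the TURBOMOLE conventions described in Sec. 3.2.1; the process count of 8 is an arbitrary example value.

```shell
# Sketch: select the MPI parallel binaries and the number of
# processes before invoking a TURBOMOLE start script.
export PARA_ARCH=MPI   # choose the MPI-parallel version
export PARNODES=8      # example: run with 8 parallel processes
echo "PARA_ARCH=$PARA_ARCH PARNODES=$PARNODES"
```

With these variables set, the start scripts pick the parallel binaries and launch the requested number of processes automatically; no parallel-specific keywords need to be added to the input by hand.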

To use the OpenMP parallelization, only an environment variable needs to be set. To use it efficiently, however, a few additional points, e.g. memory usage, should be considered; these are described in Sec. 3.2.2.
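For a shared-memory run on a single node, the setup might look like the following sketch. Setting PARA_ARCH to SMP follows the TURBOMOLE conventions of Sec. 3.2.2; the thread count of 4 is an example value, and the explicit OMP_NUM_THREADS line is an assumption shown for illustration.

```shell
# Sketch: shared-memory (OpenMP/multi-threaded) run on one node.
export PARA_ARCH=SMP       # choose the shared-memory version
export PARNODES=4          # example: use 4 cores
export OMP_NUM_THREADS=4   # hypothetical explicit OpenMP thread count
echo "threads=$OMP_NUM_THREADS"
```

Note that in a shared-memory run all threads work on the same copy of the data, so the memory-related input settings discussed in Sec. 3.2.2 apply to the node as a whole rather than to each thread individually.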


