Differences between revisions 1 and 17 (spanning 16 versions)
Revision 1 as of 2009-12-10 13:30:11
Size: 799
Comment:
Revision 17 as of 2017-01-18 10:34:09
Size: 1641
Editor: NicoleThomas
Comment:
Deletions are marked like this. Additions are marked like this.
Line 1: Line 1:
= HowToParallel =
#acl ClamsUserGroup:read,write,delete,revert All:read
Line 3: Line 3:
It is possible to run CLaMS on multi-processor machines using MPI or OpenMP.
Line 5: Line 4:
On the Ubuntu machines the PGF compiler suite, which includes an MPICH implementation, is installed.
## page was renamed from HowToParallel
== ParallelHowTo ==
Line 7: Line 7:
In the following, a short how-to:
<<TableOfContents>>
Line 9: Line 9:
First it is necessary to make the path of the MPI libraries public by executing:
It is possible to run CLaMS on multi-processor machines using MPI.
Line 11: Line 11:
for 64-bit machines:
=== Intel Compiler 17.0.1 ===
Line 13: Line 13:
{{{
. /opt/pgi/linux86-64/8.0-2/mpi.sh
 The MPICH Library 3.2 is installed in directory ''/usr/nfs/local/local/mpich-3.2-ifort-17.0.1''.
<<BR>>
 Add the bin subdirectory of the installation directory of MPICH library to your path:
 {{{
PATH=/usr/nfs/local/local/mpich-3.2-ifort-17.0.1/bin:$PATH
export PATH
Line 17: Line 21:
for 32-bit machines:

{{{
. /opt/pgi/linux86/8.0-2/mpi.sh
 Remove serially compiled object files:
 {{{
make distclean
Line 23: Line 26:
Then compile the needed programs by executing:

{{{
make useMPI=true
 Compile with MPI:
{{{
make useMPI=true progname
Line 29: Line 31:
Before running the CLaMS script or a single program you have to execute:

{{{
ssh-add
 Execute on <n> CPUs:
 {{{
mpiexec|mpirun -np <n> <progname>
Line 35: Line 36:
This is necessary, because the processes communicate per ssh.
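 Putting the Intel Compiler 17.0.1 steps above together, a minimal end-to-end session could look like the sketch below; the program name ''traj_box'' and the CPU count of 4 are placeholders, not names taken from this page:
 {{{
# make the MPICH 3.2 wrappers the first ones found on the PATH (path as given above)
PATH=/usr/nfs/local/local/mpich-3.2-ifort-17.0.1/bin:$PATH
export PATH
# remove serially compiled objects, then rebuild with MPI support
make distclean
make useMPI=true traj_box    # traj_box is a placeholder program name
# run on 4 CPUs (an arbitrary example count)
mpiexec -np 4 traj_box
}}}
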
Line 37: Line 37:
The programs must be called as follows:
=== Intel Compiler 14.0.3 ===
Line 39: Line 39:
{{{
mpirun -np # program
 The MPICH Library is installed in directory ''/usr/nfs/local/local/mpich-intel''.
 Add the bin subdirectory of the installation directory of MPICH library to your path:
 {{{
PATH=/usr/nfs/local/local/mpich-intel/bin:$PATH
export PATH
}}}
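 As a quick, optional check (not required by the build) that the MPICH wrappers of this installation are now found first on the PATH; the wrapper names below assume a standard MPICH layout:
 {{{
which mpif90 mpiexec
mpiexec --version
}}}
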
Line 42: Line 46:
# - number of processors
 Remove serially compiled object files:
 {{{
make distclean
}}}
Line 44: Line 51:
 Compile with MPI:
 {{{
make useMPI=true [useComp=ifc] progname
Line 45: Line 55:

 Execute on <n> CPUs:
 {{{
mpiexec|mpirun -np <n> <progname>
}}}
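 Analogously to the 17.0.1 example, a minimal session for this compiler could look as follows; ''traj_box'' and the CPU count of 4 are again placeholders:
 {{{
PATH=/usr/nfs/local/local/mpich-intel/bin:$PATH
export PATH
make distclean
# useComp=ifc is the optional switch shown above; traj_box is a placeholder program name
make useMPI=true useComp=ifc traj_box
mpiexec -np 4 traj_box
}}}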

=== Portland Compiler 14.6 ===

 It is necessary to make the path of the MPI libraries (included in the portland compiler suite) public by executing:
 {{{
. /opt/pgi/linux86-64/14.6/mpi.sh
}}}

 Remove serially compiled object files:
 {{{
make distclean
}}}

 Compile with MPI:
 {{{
make useMPI=true [useComp=pgi] progname
}}}

 Execute on <n> CPUs:
 {{{
mpiexec|mpirun -np <n> <progname>
}}}
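 A corresponding sketch for the Portland setup; ''traj_box'' and the CPU count of 4 are placeholders:
 {{{
# make the MPI libraries shipped with the Portland compiler suite visible
. /opt/pgi/linux86-64/14.6/mpi.sh
make distclean
# useComp=pgi is the optional switch shown above; traj_box is a placeholder program name
make useMPI=true useComp=pgi traj_box
mpirun -np 4 traj_box
}}}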
