MPICH2, OpenMPI 1.0.2 and Amber 9 Setup
1) PGI 6.1 is installed at /home/local/pgi6.1 and the PGI compilers are located at /home/local/pgi6.1/linux86/6.1/bin, which is linked to /home/local/bin
2) MPICH2 is compiled with the GNU compilers and OpenMPI is compiled with the PGI 6.1 compilers
3) Amber9 (parallel version) is compiled with pgf90/MPICH, g95/MPICH and ifort/MPICH2
4) All MPICH2 commands are located at /home/local/MPICH2-g77/bin, which is linked to /home/local/bin
5) All OpenMPI commands are located at /usr/local/bin; OpenMPI 1.0.2 was compiled with ifort
6) All Amber9 executables are installed at /home/local/amber9/exe and linked to /home/local/bin
7) If you use the OpenMPI mpirun command, please alias it as: alias mpirun "/usr/local/bin/mpirun" in the .tcshrc or .cshrc file located in your $HOME directory
8) If you use the MPICH2 mpirun command, please alias it as: alias mpirun "/home/local/bin/mpiexec" in the .tcshrc or .cshrc file located in your $HOME directory
9) Please also add the following environment variables and paths to your .tcshrc or .cshrc file to use PGI 6.1/OpenMPI/MPICH2/sander/sander.MPI properly (a quick verification check follows the settings below)

setenv TMPDIR /var/tmp
setenv PGI /home/local/pgi6.1
setenv AMBERHOME  /home/local/amber9
setenv LD_LIBRARY_PATH /usr/local/lib:/usr/lib:/usr/lib/X11:/usr/bin/X11R6:/home/local/mpich2-1.0.3/lib:/usr/etc
setenv MPI_HOME /home/local/mpich2-g77
setenv MANPATH /usr/share/man:/usr/local/man:/home/local/man
set path = (/bin /usr/local/bin /home/local/bin /home/local/bin/PGI6.1 /sbin /usr/sbin /usr/bin /usr/X11R6/bin /usr/java/j2re1.4.1_01/bin /usr/lib/perl5)
set path=(/bin $LD_LIBRARY_PATH $AMBERHOME/exe $path)
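
After editing, a quick way to verify that the alias and variables took effect (a minimal check, assuming tcsh):

source ~/.tcshrc
which mpirun          # should report the mpirun alias defined above
echo $AMBERHOME       # should print /home/local/amber9
echo $MPI_HOME        # should print /home/local/mpich2-g77
echo $PGI             # should print /home/local/pgi6.1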


MPICH2 HOWTOS
1) README
2) FAQs
3) Web pages for MPI and MPE
4) MPICH2 Home
5) User manual
6) Commands to start, stop and check mpd on all nodes:
    Stop:  mpdallexit, or
           dsh -a "$MPI_HOME/bin/mpdallexit"
    Start: mpdboot -n 15
           to start mpd on all 15 nodes of our gibbs cluster
    Check if mpd is running on all nodes:
           mpdtrace | sort, or
           dsh -a "$MPI_HOME/bin/mpdtrace" | sort
    and you should see something like:

gibbs
node1
node10
node11
node12
node13
node14
node2
node3
node4
node5
node6
node7
node8
node9
or much more if dsh -a "$MPI_HOME/bin/mpdtrace" | sort is used.
PS: If a node has no mpd started, you will not be able to run MPI jobs on that node.
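
If the ring ever needs to be rebuilt, a minimal restart sequence (assuming a $HOME/mpd.hosts file that lists one node name per line, which is the default host file mpdboot looks for):
    mpdallexit                          # tear down the current ring
    mpdboot -n 15 -f $HOME/mpd.hosts    # start mpd locally plus on the hosts listed in mpd.hosts
    mpdtrace | sort                     # confirm every node rejoined the ring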

    Check how long it takes a message to circle the mpd ring a given number of times:
     mpdringtest n, where n can range from 1 to any number you like (but it will take a long time if you set it larger than 10000)
    Check if the ring can run multiprocess jobs: mpiexec -n 15 hostname | sort
   
Run a test MPI job:  mpiexec -n 8 /home/local/mpich2-1.0.3/examples/cpi
                or:  mpiexec -n 8 /home/local/MPICH2-g77/examples/cpi
   
    Notice that a smaller n will actually run faster, because it takes time to distribute the work around the ring while PI itself is very fast to compute; this example simply shows that the job is
    indeed distributed across the n processors computing PI. You will see on STDOUT:
Process 0 of 8 is on gibbs
Process 1 of 8 is on node1
Process 2 of 8 is on node14
Process 3 of 8 is on node13
Process 4 of 8 is on node2
Process 5 of 8 is on node12
Process 6 of 8 is on node9
Process 7 of 8 is on node10
pi is approximately 3.1415926544231247, Error is 0.0000000008333316
wall clock time = 0.017522

 
7) Using the machinefile option
     mpiexec -l -machinefile $HOME/mf -n 4 /home/local/MPICH2-g77/examples/cpi
    where mf is a text file that looks like:
node11:2
node12:2
Each line contains the name of a machine in the ring and the number of processors on it. If the machinefile option is provided, your job will run only on the machines
specified in your machine file (a sander.MPI example using the same machinefile follows the results below).
The results:
0: Process 0 of 4 is on node11
1: Process 1 of 4 is on node11
2: Process 2 of 4 is on node12
3: Process 3 of 4 is on node12
0: pi is approximately 3.1415926544231239, Error is 0.0000000008333307
0: wall clock time = 0.004442
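
The same machinefile can be used to drive a parallel Amber run. A minimal sketch (the mdin/mdout/prmtop/inpcrd/restrt names are placeholders for your own files, not files shipped with this setup):
    mpiexec -machinefile $HOME/mf -n 4 $AMBERHOME/exe/sander.MPI -O -i mdin -o mdout -p prmtop -c inpcrd -r restrt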

OpenMPI HOWTOS
1) README
2) FAQs
3) OpenMPI Home
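
A minimal OpenMPI launch sketch, for comparison with the MPICH2 examples above (the $HOME/ompi-hosts file name is just an illustration; note that OpenMPI hostfiles use the "name slots=N" form rather than MPICH2's "name:N" form):
    /usr/local/bin/mpirun -np 4 --hostfile $HOME/ompi-hosts hostname | sort
where ompi-hosts contains lines like:
node11 slots=2
node12 slots=2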

Amber9 testing
Both the parallel and serial builds of Amber9 have passed the tests provided by the Amber team. The parallel test results are located at /home/local/amber9/test, and a detailed analysis of the parallel tests is written to /home/local/amber9/test/TEST_FAILURES.diff
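
To re-run the parallel tests yourself, a rough sketch following the usual Amber convention (DO_PARALLEL is the command prefix the Amber test scripts prepend to sander.MPI; the exact make target on this install may differ):
    setenv DO_PARALLEL "mpiexec -n 4"
    cd $AMBERHOME/test
    make test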

Your login file: $HOME/.tcshrc

Amber9 benchmarks and scripts