MPI at MCSR

MPI Links | MCSR MPI FAQ

MCSR MPI & PBS Workshop Links

Exercise 1
Exercise 2
Exercise 3
ENGR 692 Circuits Exercise
Solutions
MPI Quick Reference
MPI on Sequoia

General

MPI (Message Passing Interface) is the preferred method for parallel programming at MCSR. MPI is a vendor-independent API for breaking a program into components that run concurrently, each on its own processor with its own memory, while coordinating their progress by passing messages among the processes. These messages are delivered by the library routines of the installed MPI implementation. Several MPI implementations exist; some are free and some are commercial. MCSR uses the free MPICH implementation of MPI, developed by Argonne National Laboratory and Mississippi State University, in all of our parallel computing environments: redwood and mimosa. On mimosa, PGI MPI is available as well; see the May 2005 Parallel-O-Gram article PGI Versions on Mimosa for tips on navigating the various PGI versions installed there. On redwood, SGI's implementation of MPI (MPT) is also available.
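The sketch below is our own illustration of the message-passing model, not MCSR-supplied code (the file name send_recv.c and the message value are made up). Rank 0 sends an integer to rank 1, which prints it. It should compile with any of the MPI-capable compilers listed in the FAQ below and run under mpirun.

/* send_recv.c: a minimal message-passing sketch (illustrative only).
   Rank 0 sends one integer to rank 1, which prints it.            */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size, value = 42;

    MPI_Init(&argc, &argv);                 /* start up the MPI environment */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which process am I?          */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many processes in total? */

    if (size >= 2) {
        if (rank == 0) {                    /* process 0 sends ...          */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {             /* ... process 1 receives       */
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("Rank 1 received %d from rank 0\n", value);
        }
    }

    MPI_Finalize();                         /* shut down MPI cleanly        */
    return 0;
}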

If you have a Fortran or C research application that could take advantage of the parallel processing capabilities of one parallel computer (such as sweetgum), and you would like it to port easily to another parallel environment (such as mimosa), MPI is the way to go. First, get your application working serially in C or Fortran 77. Then add the MPI library calls, restructuring the source code as necessary (see the sketch below), and recompile the program on the target platform with one of the MPICH or PGI compilers. If you later need to move it, say from mimosa to sweetgum, just recompile the same code on the next MPI-enabled platform and it should be ready to run. Submit the job to PBS, requesting the desired number of processors, and use mpirun to launch the executable. If you have problems porting your code to an MCSR MPI platform, please contact the MCSR consulting staff.
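As an illustration of the kind of restructuring involved, the sketch below is our own example (not MCSR code), assuming a trivially parallel loop whose body is just a placeholder: each process takes a share of the iterations, and the per-process results are combined with MPI_Reduce.

/* loop_split.c: a sketch of restructuring a serial loop for MPI
   (illustrative only; the "work" is a placeholder).              */
#include <mpi.h>
#include <stdio.h>

#define N 1000

int main(int argc, char *argv[])
{
    int rank, size, i;
    double local_sum = 0.0, total = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Serial version:   for (i = 0; i < N; i++) sum += work(i);
       Parallel version: each rank handles every size-th iteration. */
    for (i = rank; i < N; i += size)
        local_sum += (double)i;             /* stand-in for real work */

    /* Combine the per-process partial sums on rank 0. */
    MPI_Reduce(&local_sum, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Total = %f\n", total);

    MPI_Finalize();
    return 0;
}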

Back to Top

Other MPI Links and Sources

  • Serial and MPI C, C++, and FORTRAN compilers on mimosa
  • Message Passing Interface Forum
  • Using MPI (book and online examples)
  • Parallel Programming with MPI (Customer Reviews and Editorial Reviews)
  • Tutorial on MPI (Argonne National Laboratory)
  • Free MPICH implementation of MPI
  • WebCT-HPC's Introduction to MPI Online Tutorial
  • MPI: The Complete Reference

Back to Top

Frequently Asked Questions about MPI at MCSR

  1. What versions of MPI are installed on which MCSR platforms?
  2. How do I compile and run my MPI Fortran or C/C++ program on mimosa?
  3. What's the best way to get started writing my first MPI program?
  4. E-mail us your question....

Frequently Asked Questions and Answers about MPI at MCSR

1. What versions of MPI are installed on which MCSR platforms?

Platform | Version              | Path                                     | Fortran      | C/C++      | Load Syntax | Execution Syntax
redwood  | SGI MPI 4.8 MPT 1.13 | /usr/bin/                                | ifort        | icc        | -lmpi       | mpirun -np n a.out
sequoia  | SGI MPI 4.8 MPT 1.13 | /usr/bin/                                | ifort        | icc        | -lmpi       | mpirun -np n a.out
mimosa   | PGI MPI*             | /usr/local/apps/pgi-7.2/linux86/7.2/bin/ | pgf77, pgf90 | pgcc, pgCC | -Mmpi=mpich | mpirun -np n a.out
(*) Those using the PGI compilers will need to include the following link options: -lmpich, plus -lfmpich for Fortran. You will also need to ensure that the environment variable PGI is set to "/usr/local/apps/pgi-7.2/" (this should be set by default by the system login scripts unless you override it in your own login scripts, such as your .bashrc file). Failure to have this variable set correctly may result in error messages from PBS saying that mpi.args cannot be found (if compiling from PBS).

Back to Top

2. How do I compile and run an MPI Fortran or C/C++ program?

Step 1: Choose which platform (mimosa, sequoia, or redwood) you want to use.
Step 2: Choose which compiler you want to compile with (C, C++, Fortran 77, Fortran 90, or Fortran 95).
Step 3: Compile your program using the appropriate MPI-capable compiler and the appropriate MPI load syntax for that compiler, such as -lmpi (for the SGI MPT C library), -lmpi -lmpi++ (for the SGI MPT C++ library), or -Mmpi=mpich (for PGI). Debug any compiler errors and repeat until you have an executable a.out file.
Step 4: Write a PBS batch script to run your program on multiple processors by invoking the mpirun program, where the "-np" option specifies the number of processors to use. Use the nodes= PBS resource option to tell PBS how many nodes to allocate, and make sure that number matches the -np argument to mpirun:
mpirun -np 4 a.out
Step 5: Submit your script to PBS using qsub:
qsub yourfile.pbs

Examples with PBS on Mimosa:

Fortran (example PBS script that compiles a Fortran 90 source code file named "myfile.f" located in your home directory and runs it on 4 processors with the PGI MPI distribution on mimosa):
#PBS -S /bin/bash
#PBS -l nodes=4
/usr/local/apps/pgi-7.2/linux86/7.2/bin/pgf90 myfile.f -lmpich -lfmpich
/usr/local/apps/bin/mpirun -np 4 a.out

C/C++ (example PBS script to compile a C source code file named "myfile.c" in your home directory and run it on 2 processors with the PGI MPI distribution on mimosa):
#PBS -S /bin/bash
#PBS -l nodes=2
/usr/local/apps/pgi-7.2/linux86/7.2/bin/pgcc myfile.c -lmpich
/usr/local/apps/bin/mpirun -np 2 a.out

To submit your job:
qsub yourPBSscriptName

To track your job in the queue:
watch qstat -u yourAccountId

PBS will create error and output files in your working directory which you may examine after your job completes.

3. What's the best way to get started writing my first MPI program?

To schedule an MPI workshop for a small group, either on the UM campus or at your remote Mississippi campus, please e-mail us. In the meantime, you might try the Introduction to MPI online tutorial at WebCT-HPC, sponsored by the National Computational Science Alliance Partners for Advanced Computational Services.
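If you just want something small to type in first, the classic program below is our own sketch (not taken from the tutorial) exercising the calls every MPI program needs: MPI_Init, MPI_Comm_rank, MPI_Comm_size, and MPI_Finalize. Compile and submit it with the recipes from question 2 above.

/* hello.c: the usual first MPI program (illustrative sketch).
   Each process reports its rank and the total process count. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                 /* every MPI program starts here */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank           */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes     */

    printf("Hello from process %d of %d\n", rank, size);

    MPI_Finalize();                         /* ... and ends here             */
    return 0;
}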

Back to Top


Last Modified: February 13, 2011 21:32:10.   Copyright © 1997-2012 The Mississippi Center for Supercomputing Research. All Rights Reserved.   The University of Mississippi