MultiPingPong Example
The aim of this example is to present a simple multithreaded application. Users who want to write applications with more than one thread have a simple starting point here. The application can also be used to stress test WMPI's ability to cope with multiple threads making concurrent calls to MPI functions. It also shows how to pass different arguments to different processes.
Objective
Process zero (0) creates a thread for every process in the computation, including itself. Each thread starts a ping-pong sequence with one of the processes. A different number of ping-pong iterations and a different buffer size may be chosen for each process.
Note that this example is not designed for performance testing of WMPI: the time values presented at the end of the computation are influenced by the other threads and by the printf calls.
Files
Location/Files | Description
Examples\MultiPingPong\multipingpong.c | Example code file
Examples\MultiPingPong\MultiPingPong.dsp | VC++ project file
Examples\MultiPingPong\Release\MultiPingPong.exe | Release linked executable
Examples\MultiPingPong\Release\MultiPingPong.pg | Process Group file prototype
Note that to execute this example you must generate a Cluster Configuration file and a Process Group file.
How to run
This example takes two arguments for each process:
multipingpong [<number_of_iterations>
<size_of_buffer>]
If the user does not specify any arguments, the default values are used: 1000 iterations and a buffer size of 524288 bytes (512K).
In the Process Group file, users may specify different values for each process. Note that the arguments are not passed from the first process to the others; hence, if no arguments are given in the Process Group file, the processes run with the default values.
Code Comments
The code of this example is simple and straightforward. The only remark concerns the WaitForMultipleObjects call that process zero (0) makes at the end. It could be avoided if it were guaranteed that no other thread makes an MPI call after the main thread starts MPI_Finalize. Since this example gives no such guarantee, some kind of synchronization is necessary. Note that the MPI standard explicitly states that the thread that calls MPI_Init_thread is the only one that may call MPI_Finalize.