Investigation of MPICH Async IO

Problem Statement

As documented in Idea #5948, MPI performs some buffering behind the scenes and does not appear to be capable of truly "fire-and-forget" asynchronous communication.

MPI Throughput Tests

Ron created a simple test program: cluck:~ron/src/mpi_xfer2.c
It demonstrates MPI_Send vs. MPI_Isend and records various metrics on the data transfers. The program inserts random delays between the send calls, but we do not observe the receiver "catching up" during these pauses: an MPI call on the send side is required before a receive completes.
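
A minimal sketch of the asynchronous side of that pattern (not Ron's actual mpi_xfer2.c; the message count, message size, and delay bound used here are placeholders):

#include <mpi.h>
#include <stdlib.h>
#include <unistd.h>

#define NMSGS  100                 /* placeholder message count */
#define MSGLEN 65536               /* placeholder message size */

int main(int argc, char **argv)
{
    int rank;
    static char buf[MSGLEN];       /* zero-initialized payload */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {               /* sender */
        MPI_Request reqs[NMSGS];
        for (int i = 0; i < NMSGS; i++) {
            MPI_Isend(buf, MSGLEN, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &reqs[i]);
            usleep(rand() % 100000);   /* random pause between sends */
        }
        /* Observation from the tests: receives are not seen to complete
         * during the pauses above; the sender has to re-enter MPI, e.g.
         * via this MPI_Waitall, before the transfers finish. */
        MPI_Waitall(NMSGS, reqs, MPI_STATUSES_IGNORE);
    } else if (rank == 1) {        /* receiver */
        for (int i = 0; i < NMSGS; i++)
            MPI_Recv(buf, MSGLEN, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}

This sketch (and the threaded one below) assumes two ranks, e.g. mpirun -np 2.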

Threaded MPI

I have created a version of the test program that uses the asynchronous MPI calls and additionally runs a thread that simply issues MPI_Waitany calls as fast as it can. We have confirmed that on cluck, at least, the sender thread and this "status thread" run on the same processing core, so usleep(0) calls are used to make the threads yield gracefully to each other.
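
A rough sketch of the threaded variant, assuming MPI_THREAD_MULTIPLE is available; the message count and size are again placeholders, and the coordination around the shared request array is simplified compared to the real program:

#include <mpi.h>
#include <pthread.h>
#include <unistd.h>

#define NMSGS  100                 /* placeholder message count */
#define MSGLEN 65536               /* placeholder message size */

static MPI_Request reqs[NMSGS];    /* send requests shared with the status thread
                                      (synchronization simplified for the sketch) */

/* Status thread: loops on MPI_Waitany over the outstanding send requests,
 * yielding with usleep(0) so the sender thread, which shares the same core
 * on cluck, gets a chance to post more MPI_Isend calls. */
static void *status_thread(void *arg)
{
    int remaining = NMSGS;
    (void)arg;
    while (remaining > 0) {
        int idx;
        MPI_Waitany(NMSGS, reqs, &idx, MPI_STATUS_IGNORE);
        if (idx != MPI_UNDEFINED)  /* MPI_UNDEFINED: nothing active yet */
            remaining--;
        usleep(0);                 /* yield to the sender thread */
    }
    return NULL;
}

int main(int argc, char **argv)
{
    int rank, provided;
    static char buf[MSGLEN];

    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {               /* sender rank runs the status thread */
        pthread_t tid;
        for (int i = 0; i < NMSGS; i++)
            reqs[i] = MPI_REQUEST_NULL;     /* mark as not yet posted */
        pthread_create(&tid, NULL, status_thread, NULL);
        for (int i = 0; i < NMSGS; i++) {
            MPI_Isend(buf, MSGLEN, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &reqs[i]);
            usleep(0);             /* yield to the status thread */
        }
        pthread_join(tid, NULL);   /* all sends complete once the thread exits */
    } else if (rank == 1) {        /* receiver */
        for (int i = 0; i < NMSGS; i++)
            MPI_Recv(buf, MSGLEN, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}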

Performance Testing

I have run the test program in three configurations (Synchronous MPI calls (S), Async MPI (A), and Threaded Async MPI (T)) against four different "scenarios" (the delay injection is sketched after this list):
1. No delays (usleep calls) in either the sender or the receiver
2. A 50 ms delay in the receiver between the completion of each receive and the next MPI_Irecv call
3. A 100 ms delay in the sender after the MPI_*Send call, 10% of the time
4. Both 2. and 3.
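
The delay injection for scenarios 2-4 amounts to something like the following helpers (a sketch, not the actual mpi_xfer2.c code), called from the receive and send loops respectively:

/* receiver_delay() runs between a completed receive and the next MPI_Irecv
 * (scenario 2); sender_delay() runs after each MPI_*Send call (scenario 3);
 * scenario 4 enables both. */
#include <stdlib.h>
#include <unistd.h>

void receiver_delay(void)
{
    usleep(50000);                  /* 50 ms pause before the next MPI_Irecv */
}

void sender_delay(void)
{
    if (rand() % 10 == 0)           /* roughly 10% of the sends */
        usleep(100000);             /* 100 ms pause after the MPI_*Send call */
}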

Results

Config  Receive   Send
S1      1144.09    1154.96
S2        77.68      77.92
S3      1068.8     1078.52
S4       143.04     143.31
A1        34.76    1217.37
A2         2.65    2366.98
A3         4.62    1256.98
A4         1.15    2574.5
T1        54.75    3164.08
T2         3       2003.47
T3       564.44    8216.83
T4        67.79   25060.87

(See attached chart)