Example Compilations:

On an SGI machine at CASCV (clio):
> CC -o parallel_mat_vect parallel_mat_vect.c -lmpi

On an IBM machine with "wide" nodes at CASCV (control):
> mpcc -qarch=pwr2 -qtune=pwr2 -o parallel_mat_vect parallel_mat_vect.c

On an IBM machine with "silver" nodes at CASCV (control):
> mpcc -qarch=ppc -qtune=604 -o parallel_mat_vect parallel_mat_vect.c

==========================================================
clio:> mpirun -np 2 parallel_mat_vect
Enter the size of the matrix (m n) as long as both m,n are divisible by the number of processors
4 4
Reading the matrix...
The matrix is
 0.0  0.0  0.0  0.0
 1.0  1.0  1.0  1.0
 2.0  2.0  2.0  2.0
 3.0  3.0  3.0  3.0
Reading the vector...
The vector is
 1.0 1.0 1.0 1.0
The product is
 0.0 4.0 8.0 12.0
==========================================================
clio:> mpirun -np 2 parallel_mat_vect
Enter the size of the matrix (m n) as long as both m,n are divisible by the number of processors
16 4
Reading the matrix...
The matrix is
  0.0  0.0  0.0  0.0
  1.0  1.0  1.0  1.0
  2.0  2.0  2.0  2.0
  3.0  3.0  3.0  3.0
  4.0  4.0  4.0  4.0
  5.0  5.0  5.0  5.0
  6.0  6.0  6.0  6.0
  7.0  7.0  7.0  7.0
  8.0  8.0  8.0  8.0
  9.0  9.0  9.0  9.0
 10.0 10.0 10.0 10.0
 11.0 11.0 11.0 11.0
 12.0 12.0 12.0 12.0
 13.0 13.0 13.0 13.0
 14.0 14.0 14.0 14.0
 15.0 15.0 15.0 15.0
Reading the vector...
The vector is
 1.0 1.0 1.0 1.0
The product is
 0.0 4.0 8.0 12.0 16.0 20.0 24.0 28.0 32.0 36.0 40.0 44.0 48.0 52.0 56.0 60.0
==========================================================
clio:> mpirun -np 2 parallel_mat_vect
Enter the size of the matrix (m n) as long as both m,n are divisible by the number of processors
6 4
Reading the matrix...
The matrix is
 0.0 0.0 0.0 0.0
 1.0 1.0 1.0 1.0
 2.0 2.0 2.0 2.0
 3.0 3.0 3.0 3.0
 4.0 4.0 4.0 4.0
 5.0 5.0 5.0 5.0
Reading the vector...
The vector is
 1.0 1.0 1.0 1.0
The product is
 0.0 4.0 8.0 12.0 16.0 20.0
==========================================================