MPI_BLOCK
Posted: Fri Feb 13, 2009 10:03 am
Hello,
I have a question concerning the MPI_BLOCK variable. I am running vasp.4.6.35 on a cluster where each blade consists of two quad-core Opteron (Barcelona) CPUs, and the blades are connected with Infiniband. When running on 2 or 4 blades (16 or 32 cores), increasing the value of MPI_BLOCK seems to help; e.g. on 4 blades, using MPI_BLOCK=8000 increases performance by about 15%. Is it safe to use such values, or even higher ones, for MPI_BLOCK? Does anybody have experience with this?
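For context, I set MPI_BLOCK at compile time via the preprocessor line of the makefile, roughly as sketched below. Only the -DMPI_BLOCK=8000 part is the change in question; the other -D flags are just those of a typical Linux/Intel build and will differ from machine to machine:

    # excerpt from the makefile; flags other than -DMPI_BLOCK are illustrative
    CPP    = $(CPP_) -DMPI -DHOST=\"LinuxIFC\" -DIFC \
             -Dkind8 -DNGZhalf -DCACHE_SIZE=4000 -Davoidalloc \
             -DMPI_BLOCK=8000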
Many thanks in advance.
Best regards
Roman Martonak
--
Assoc. Prof. Roman Martonak
Department of Experimental Physics
Faculty of Mathematics, Physics and Informatics
Comenius University
Mlynska dolina F2
842 48 Bratislava, Slovakia
phone: +421 2 60295467
e-mail: martonak at fmph.uniba.sk