Hello all,
I have recently compiled VASP 5.2.2 and have begun testing it against VASP 4.6 results obtained earlier. I have successfully run the parallel version of VASP 5.2.2 for "small" systems, in particular a 4-atom wurtzite primitive unit cell. Moving on to a larger system (72 atoms), the program segfaults with the following error:
running on 4 nodes
distr: one band on 1 nodes, 4 groups
vasp.5.2.2 15Apr09 complex
POSCAR found : 2 types and 72 ions
LDA part: xc-table for Ceperly-Alder, standard interpolation
POSCAR, INCAR and KPOINTS ok, starting setup
WARNING: small aliasing (wrap around) errors must be expected
FFT: planning ...( 16 )
reading WAVECAR
WARNING: random wavefunctions but no delay for mixing, default for NELMDL
entering main loop
N E dE d eps ncg rms rms(c)
Segmentation fault
I am quite at a loss as to the problem. I have tried linking against other external libraries (e.g. netlib, Intel MKL) and other MPI implementations (e.g. OpenMPI, MPICH2), and have tried both the Intel and GNU compilers. Currently, I am using the following:
OpenMPI 1.2.6
Intel 10.1.X Fortran and C/C++ compilers
Intel MKL 10.2.X
I do not think this is an installation error; I suspect it has something to do with array allocation, though I may be mistaken.
I eagerly await your reply,
Tom
VASP 5.2.2 Segfault For "Large" Systems
Hi there,
I'm currently using OpenMPI 1.2.6, Intel 11.0.83 (and the now co-delivered MKL) on a SLES 10 SP2.
If I remember correctly, I once had issues with Intel 10. Intel 9.1.51 was used for a production executable before Intel 11 and showed high stability. Intel 11.0.83 also produces fine executables.
I also tried OpenMPI 1.3.2 with Intel 11.0.83 with no success.
Hth
alex
Alex,
Thank you for the suggestion. I have tried the Intel 11 compilers and recompiled our MPI; however, we are still having issues with our "large" systems. Do you have any other suggestions?
Thanks in advance,
Tom
Sounds like a stack-size problem. First, try compiling with the option "-heap-arrays" to see if the segfault disappears. If it helps, either keep the flag or increase the stack size as suggested in this previous thread:
http://cms.mpi.univie.ac.at/vasp-forum/ ... php?2.5776
The latter deals with the problem directly instead of just circumventing it by putting everything on the heap. Both alternatives, however, work just fine.
Hope this helps
Cheers,
/Dan