NEB Problem (is my system too large?)

Queries about input and output files, running specific calculations, etc.



tak
Newbie
Posts: 9
Joined: Mon Feb 18, 2008 9:14 pm
License Nr.: 828

NEB Problem (is my system too large?)

#1 Post by tak » Thu May 21, 2009 6:50 pm

I have a problem with an NEB calculation. My system is an Fe(100) surface, 5 layers, 20 Fe atoms, with HCOO on it (I also put an H atom on the back side to avoid an unpaired electron), 25 atoms in total. I optimized two Fe-HCOO geometries, one with the HCOO plane vertical to the Fe(100) surface and one with the HCOO plane tilted, and put them in the 00 and 03 subdirectories as POSCAR. I also made 2 intermediate images, which I am sure are about right, and put them in the 01 and 02 directories as POSCAR.
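To be concrete, this is roughly how the run directory is laid out (a sketch with illustrative paths, not my actual job script); VASP expects the two endpoints plus IMAGES intermediate images as consecutively numbered directories, each with its own POSCAR:

```shell
# Sketch of the NEB directory layout described above (paths illustrative).
# VASP expects IMAGES + 2 numbered directories: endpoints 00 and 03,
# intermediate images 01 and 02, each containing a POSCAR.
mkdir -p /tmp/neb-demo && cd /tmp/neb-demo
for d in 00 01 02 03; do
    mkdir -p "$d"
    touch "$d/POSCAR"      # in a real run, a full POSCAR geometry
done
IMAGES=2                   # must match IMAGES in INCAR
ndirs=$(ls -d 0? | wc -l)
echo "found $ndirs directories, expected $((IMAGES + 2))"
```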

This is my INCAR file:

SYSTEM = NEB for Fe20HCOO
IMAGES = 2
ISMEAR = 1
SIGMA = 0.2
ENCUT = 350

ISPIN = 2
NBANDS = 277

IBRION = 3
POTIM = 0.2
NSW = 50
EDIFF = 1.0E-04

LREAL = Auto
SPRING = -5

The problem is that after 1500 hours of CPU time (100 CPUs for 15 hours) it has not converged. In fact, it did not even finish the first ionic step. Here is the log file.

Running VASP 4.6

running on 4 nodes
each image running on 2 nodes
distr: one band on 1 nodes, 2 groups
vasp.4.6.31 08Feb07 complex
01/POSCAR found : 4 types and 25 ions
scaLAPACK will be used
LDA part: xc-table for Pade appr. of Perdew
00/POSCAR found : 4 types and 25 ions
03/POSCAR found : 4 types and 25 ions
POSCAR, INCAR and KPOINTS ok, starting setup
WARNING: wrap around errors must be expected
FFT: planning ... 9
reading WAVECAR
WARNING: random wavefunctions but no delay for mixing, default for NELMDL
User defined signal 2
MPI: daemon terminated: hawk-3 - job aborting

------------------------------------------------------------
Sender: LSF System <lsfaltix@hawk-3>
Subject: Job 335650: <vasptest.MPI> Exited

Job <vasptest.MPI> was submitted from host <hawk-0> by user <yamadat>.
Job was executed on host(s) <100*hawk-3>, in queue <standard>, as user <yamadat>.
</hafs12/yamadat> was used as the home directory.
</hafs12/yamadat/vasp/fehcoo/fe20hcooneb> was used as the working directory.
Started at Tue May 19 10:31:35 2009
Results reported at Wed May 20 01:32:01 2009

I wonder if anyone can find what is wrong with my calculation.

Thank you very much.

Tak

tracy
Newbie
Posts: 26
Joined: Fri Aug 22, 2008 5:48 pm

NEB Problem (is my system too large?)

#2 Post by tracy » Fri May 22, 2009 2:20 am

Hi, shouldn't you include EDIFFG and MAGMOM in your calculation? I think EDIFFG, at least, should definitely be included in the INCAR file for an NEB calculation. If I have said something stupid, please correct me.
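Something like the following, for example (the values here are only placeholders for illustration, not recommendations; the MAGMOM ordering assumes the 20 Fe atoms come first in the POSCAR):

```
EDIFFG = -0.05        ! negative value = stop when all forces are below 0.05 eV/Angstrom
MAGMOM = 20*3.0 5*0.6 ! illustrative initial moments: 20 Fe atoms, then HCOO + H (5 atoms)
```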

Danny
Full Member
Posts: 201
Joined: Thu Nov 02, 2006 4:35 pm
License Nr.: 5-532
Location: Ghent, Belgium
Contact:

NEB Problem (is my system too large?)

#3 Post by Danny » Sun May 24, 2009 4:45 pm

Hmm, are you sure you are running on 100 CPUs? VASP reports running on only 4:

running on 4 nodes 
(If that is normal, however, it could be that your job simply hung.)

Danny

panda

NEB Problem (is my system too large?)

#4 Post by panda » Tue May 26, 2009 7:01 pm

I agree with Danny. I would check how the job is being submitted to the queue, and make sure that parallelization was set up correctly when the source code was compiled. The message "MPI: daemon terminated: hawk-3 - job aborting" makes me think there is a problem with your parallelization.
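As a quick sanity check on the numbers (taken from the log above): with IMAGES = 2, VASP splits whatever MPI actually started evenly across the images, so 4 ranks gives 2 per image, exactly as the log reports, while the intended 100 would give 50 per image:

```shell
# Cores-per-image arithmetic for the NEB run above (numbers from the log).
images=2            # IMAGES in INCAR
ranks_reported=4    # "running on 4 nodes" in the VASP output
ranks_requested=100 # what the LSF job asked for (100*hawk-3)
echo "per image (reported):  $((ranks_reported / images))"   # prints 2
echo "per image (requested): $((ranks_requested / images))"  # prints 50
```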
