MD segmentation fault
Moderators: Global Moderator, Moderator
- Full Member
- Posts: 122
- Joined: Tue Mar 10, 2009 2:04 am
MD segmentation fault
Dear All,
These days I am trying to run some MD calculations on a system made up of Ti and H. I am new to MD, so I may be making a mistake, but I don't know where or why. I am using the parallel version 5.2.8.
What I want to do is anneal my system from 0 to 300 K and then thermalize it at 300 K.
To do this, I think it is a safe procedure to increase the temperature of the system very slowly: keeping SMASS = -1, I decided to raise it from 0 to 20 K, then from 20 to 40 K, and so on (see the sketch after the INCAR below).
Here is my INCAR:
SYSTEM = Ti54_2d_a_Td
PREC = Low
EDIFF = 1E-05
ISPIN = 2
NELMDL = 4
NELMIN = 8
!BMIX = 2.0 ! mixing parameter
!MAXMIX = 50 ! keep dielectric function between ionic movements
IWAVPRE=11
ISYM = 0
LREAL = A
ISMEAR = 2
SIGMA = 0.05
ISIF = 3
SMASS = -1.0
TEBEG = 0.
TEEND= 20.
POTIM = 3.0
IALGO = 48
NSW = 3000
IBRION = 0
LWAVE = .FALSE.
LCHARG = .FALSE.
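To make the stepwise ramp concrete: once this first 0 to 20 K stage is finished, my idea (just a sketch; the values are simply the next step of the ramp, with CONTCAR copied to POSCAR for the restart) is to change only the temperature window and keep everything else as above, e.g.
TEBEG = 20.
TEEND = 40.
! all other tags (SMASS = -1.0, POTIM = 3.0, NSW = 3000, IBRION = 0) unchanged
and so on up to 300 K, followed by a final run with TEBEG = TEEND = 300. for the thermalization.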
Can anyone confirm that this is a reasonable INCAR for my purpose?
Moreover, I read in the manual that changing POMASS in the POTCAR files is a safe way to help the calculation converge.
So I did that and set POMASS = 1 for both Ti and H. Is this correct?
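(To be explicit about what I did: instead of editing the POTCAR files by hand, I understand the same masses can also be given in the INCAR, one value per species in POSCAR order, e.g.
POMASS = 1.0 1.0 ! fictitious masses for Ti and H; if I read the manual correctly, this overrides the POTCAR values
Either way, the intention is the same.)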
My doubt stems from the fact that the same INCAR seems to give rise to different behaviour on different machines.
In particular, on this machine
vendor_id : GenuineIntel
cpu family : 6
model : 23
model name : Intel(R) Xeon(R) CPU E5430 @ 2.66GHz
stepping : 10
cpu MHz : 2666.840
cache size : 6144 KB
it works.
On this other machine
vendor_id : GenuineIntel
cpu family : 6
model : 15
model name : Intel(R) Xeon(R) CPU X5355 @ 2.66GHz
stepping : 7
cpu MHz : 2666.742
cache size : 4096 KB
I get a segmentation fault error and the job dies.
Is the problem related only to the cache size?
Thank you for your attention; I hope to receive a prompt reply.
My very best,
Giacomo
Last edited by giacomo giorgi on Fri Jan 20, 2012 4:39 am, edited 1 time in total.
- Global Moderator
- Posts: 1817
- Joined: Mon Nov 18, 2019 11:00 am
Re: MD segmentation fault
Hi,
We're sorry that we didn’t answer your question. This does not live up to the quality of support that we aim to provide. The team has since expanded. If we can still help with your problem, please ask again in a new post, linking to this one, and we will answer as quickly as possible.
Best wishes,
VASP