vasp/5.4.4-omc submission script

Queries about input and output files, running specific calculations, etc.



nicola_zagni
Newbie
Posts: 2
Joined: Mon Mar 09, 2020 8:15 am

vasp/5.4.4-omc submission script

#1 Post by nicola_zagni » Wed Mar 11, 2020 1:07 pm

Can you please help with a vasp/5.4.4-omc submission script? I tried to submit jobs with the same submission script as for vasp/5.4.4, but it doesn't work and I get the following error message:


Primary job terminated normally, but 1 process returned a non-zero exit code. Per user-direction, the job has been aborted.
mpirun detected that one or more processes exited with non-zero status, thus causing the job to be terminated.
The first process to do so was:
Process name: [[2821,1],21]
Exit code: 29


The submission script I use for vasp/5.4.4-omc is:
#$ -S /bin/bash
#$ -q parallel
#$ -l node_type=10Geth*
#$ -l nodes=2
source /etc/profile
module load vasp/5.4.4-omc
# Run in a separate OUTPUT directory, linking the input files
# from the submission directory.
mkdir OUTPUT
cd OUTPUT
for x in INCAR KPOINTS POTCAR POSCAR; do ln -s ../$x .; done
mpirun -np 16 vasp_std
# Remove the input-file symlinks after the run.
rm INCAR KPOINTS POTCAR POSCAR
echo "Job finished at"
date
################### Job Ended ###################
exit 0


The script is the same as the one for "plain" vasp/5.4.4 except for these two lines:

module load vasp/5.4.4-omc
mpirun -np 16 vasp_std

I googled it, and the suggestion was to pass the number of processes explicitly when running non-default modules, in this case -np 16, since we have 16 cores per node on our computer cluster. However, I am not sure whether -np expects the total number of processes or the number per node, as I came across different interpretations. The total would be 32, which matches the core count reported in OUTCAR when I was working with vasp/5.4.4; the per-node reading would instead give -np 16, since I request two nodes with -l nodes=2.
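To make the two readings concrete, these are the alternatives as I understand them (illustrative lines only; I have not confirmed which one our cluster expects):

# Reading 1: -np is the total process count across all nodes
mpirun -np 32 vasp_std   # 2 nodes x 16 cores per node

# Reading 2: -np is the process count per node
mpirun -np 16 vasp_std   # 16 cores on each of the 2 requested nodes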

merzuk.kaltak
Administrator
Posts: 282
Joined: Mon Sep 24, 2018 9:39 am

Re: vasp/5.4.4-omc submission script

#2 Post by merzuk.kaltak » Mon Mar 30, 2020 9:44 am

Typically, the total number of cores (nodes times cores per node) is passed to -np, but this may differ from system to system.
This is, therefore, a question for your system administrator.
The answer to your problem might also be found on the official man page of mpirun.
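For example, for a run on 2 nodes with 16 cores each, that convention would give the following (a minimal sketch; NSLOTS is the variable Grid Engine sets to the total number of slots granted to a job, assuming your cluster uses that scheduler):

mpirun -np 32 vasp_std
# or, letting the scheduler supply the total slot count:
mpirun -np ${NSLOTS} vasp_std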
