VASP on Intel quad-core cluster

Questions regarding the compilation of VASP on various platforms: hardware, compilers and libraries, etc.


abalone
Newbie
Posts: 15
Joined: Tue Jan 08, 2008 7:58 pm

VASP on Intel quad-core cluster

#1 Post by abalone » Sun May 18, 2008 12:17 am

We are currently using dual-processor, quad-core Intel machines (Dell PowerEdge 1950).

I compiled the program with mpif90 (Intel 10.1.015 / mpich-intel64) and Intel MKL 9 without any problem. If I just run it on one node, it works perfectly fine.

However, if I use mpirun, it shows

Error reading item 'IMAGES' from file INCAR.

It seems to be an MPI-related problem. Can anyone provide more information on this?

Thanks.

admin
Administrator
Posts: 2921
Joined: Tue Aug 03, 2004 8:18 am
License Nr.: 458

VASP on Intel quad-core cluster

#2 Post by admin » Mon May 19, 2008 1:22 pm

Please check whether INCAR and all the other input files have been copied to the master node successfully. This error usually indicates that INCAR is not found in the working directory.
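
One generic way to check this (assuming your mpirun can launch ordinary programs, as MPICH's can) is to run simple shell commands through mpirun and confirm that the working directory and INCAR are visible to the process it spawns:

mpirun -np 1 /bin/pwd
mpirun -np 1 /bin/ls -l INCAR

If the second command reports "No such file or directory", the parallel job is not being started in the directory that contains the input files.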

abalone
Newbie
Posts: 15
Joined: Tue Jan 08, 2008 7:58 pm

VASP on Intel quad-core cluster

#3 Post by abalone » Mon May 19, 2008 7:47 pm

Since each node has 2 processors and each processor has 4 cores, I tried to run VASP on just one node with 8 processes. It still won't work.
The program works if I use "./vasp", but it won't work if I use "mpirun -np 1 ./vasp".

What may be the problem then?
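
For what it's worth, a minimal MPI test program might show whether the MPI stack itself is healthy. A sketch (a generic MPI hello-world, nothing VASP-specific, assuming mpif90 and mpirun come from the same MPICH installation):

cat > hello.f90 <<'EOF'
program hello
  implicit none
  include 'mpif.h'
  integer :: ierr, rank, nprocs
  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)
  print *, 'rank', rank, 'of', nprocs
  call MPI_FINALIZE(ierr)
end program hello
EOF
mpif90 -o hello hello.f90
mpirun -np 8 ./hello

If this prints eight distinct ranks, the MPI installation is fine and the problem is specific to how vasp was built or launched (for instance, an mpirun that belongs to a different MPI installation than the mpif90 used to build vasp).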

admin
Administrator
Posts: 2921
Joined: Tue Aug 03, 2004 8:18 am
License Nr.: 458

VASP on Intel quad-core cluster

#4 Post by admin » Tue May 20, 2008 2:26 pm

IMAGES is the first item to be read from INCAR if you run the parallel executable. There are therefore 3 possible reasons for this error:
1) (most commonly) INCAR is not found in the working directory. Please ask your system administrator whether the input files have to be located on the master node only (as is usual) or whether they have to be copied to all nodes.
2) The value read for IMAGES has the wrong type: it has to be an integer if defined (please note that if IMAGES is not given at all in INCAR, this does not lead to an error).
3) IMAGES is defined, but there is a hidden character in the input string. This may (but should not) happen if you import INCAR from a computer running a different OS. Please test with 'strings INCAR'.
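
For example, with standard Linux tools (nothing VASP-specific; dos2unix may be packaged separately on some distributions), the following would expose DOS line endings or other stray bytes in INCAR:

strings INCAR        # prints only the printable strings
cat -A INCAR         # shows carriage returns as ^M, tabs as ^I, line ends as $
file INCAR           # reports 'with CRLF line terminators' for DOS-style files
dos2unix INCAR       # converts DOS line endings to Unix in place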

abalone
Newbie
Posts: 15
Joined: Tue Jan 08, 2008 7:58 pm

VASP on Intel quad-core cluster

#5 Post by abalone » Fri May 23, 2008 12:08 am

I guess the 2nd and 3rd will not be the problem, because I tested the input files on another Athlon cluster (with a different executable, of course, built from the same makefile) and the parallel job ran without any problem. Also, I did not define IMAGES in the INCAR file; when I tried defining IMAGES = 1 in INCAR, I got the same error message.

I also tried to run the job directly on one node, i.e. I logged on to a node, copied the working folder to the local disk and executed the job using mpirun; it still won't work (while it works if I just run ./vasp). Hopefully I can get some more information about this problem.

BTW, the CPU is from the Intel Xeon E5300 series.

I attached the Makefile in case you need it.

.SUFFIXES: .inc .f .f90 .F
#-----------------------------------------------------------------------
# Makefile for the Intel Fortran compiler on Opteron systems
#
# The makefile was tested only under Linux on Intel platforms
# (Suse 5.3- Suse 9.2)
# the following compiler versions have been tested
# 5.0, 6.0, 7.0 and 7.1 (some 8.0 versions seem to fail compiling the code,
# 8.1 is slower than 8.0)
# presently we recommend version 7.1 or 7.0, since these
# releases have been used to compile the present code versions
#
# it might be required to change some of the library paths, since
# Linux installations vary a lot
# Hence check ***ALL**** options in this makefile very carefully
#-----------------------------------------------------------------------
#
# BLAS must be installed on the machine
# there are several options:
# Kazushige Goto's BLAS is required
# http://www.cs.utexas.edu/users/kgoto/signup_first.html
# (see below libgoto comments)
#
# FFT:
# the fftw.3.0.1 must be available and installed, since
# the ifc compiler creates crap code if the built-in fft routines are used
# (see below fftw comments)
#-----------------------------------------------------------------------

# all CPP processed fortran files have the extension .f90
SUFFIX=.f90

#-----------------------------------------------------------------------
# fortran compiler and linker
#-----------------------------------------------------------------------
FC = mpif90
# fortran linker
FCL=$(FC)


#-----------------------------------------------------------------------
# whereis CPP ?? (I need CPP, can't use gcc with proper options)
# that's the location of gcc for SUSE 5.3
#
# CPP_ = /usr/lib/gcc-lib/i486-linux/2.7.2/cpp -P -C
#
# that's probably the right line for some Red Hat distribution:
#
# CPP_ = /usr/lib/gcc-lib/i386-redhat-linux/2.7.2.3/cpp -P -C
#
# SUSE X.X, maybe some Red Hat distributions:

CPP_ = ./preprocess <$*.F | /usr/bin/cpp -P -C -traditional >$*$(SUFFIX)

#-----------------------------------------------------------------------
# possible options for CPP:
# NGXhalf charge density reduced in X direction
# wNGXhalf gamma point only reduced in X direction
# avoidalloc avoid ALLOCATE if possible
# IFC work around some IFC bugs
# CACHE_SIZE 1000 for PII,PIII, 5000 for Athlon, 8000-12000 P4
# RPROMU_DGEMV use DGEMV instead of DGEMM in RPRO (depends on used BLAS)
# RACCMU_DGEMV use DGEMV instead of DGEMM in RACC (depends on used BLAS)
#-----------------------------------------------------------------------
# CPP = $(CPP_) -DMPI -DHOST=\"LinuxIFC\" -DIFC -Dkind8 -DNGZhalf -DNGXhalf -DwNGXhalf -DCACHE_SIZE=4000 -DPGF90 -Davoidalloc -DMPI_BLOCK=2000 -Duse_cray_ptr

# -DRPROMU_DGEMV -DRACCMU_DGEMV

#-----------------------------------------------------------------------
# general fortran flags (there must be a trailing blank on this line)
#-----------------------------------------------------------------------

FFLAGS = -FR -lowercase -xT -cm -w95

#-----------------------------------------------------------------------
# optimization
# we have tested whether higher optimisation improves performance
# -axK  SSE1 optimization, but also generates code executable on all machines;
#       -xK improves performance somewhat on XP, and the 'a' is required in
#       order to run the code on older Athlons as well
# -xW   SSE2 optimization
# -axW  SSE2 optimization, but also generates code executable on all machines
# -tpp6 P3 optimization
# -tpp7 P4 optimization
#-----------------------------------------------------------------------

OFLAG=-O2


OFLAG_HIGH = $(OFLAG)
OBJ_HIGH =

OBJ_NOOPT =
DEBUG = -FR -O0
INLINE = $(OFLAG)


#-----------------------------------------------------------------------
# the following lines specify the position of BLAS and LAPACK
# on Opteron, you really need the libgoto library
#-----------------------------------------------------------------------

#BLAS= $(HOME)/OPT/src/vasp.4.6/libgoto_opteron-32-r0.99.so -lpthread
BLAS = -L/apps/steele/intel/mkl/10.0.2.018/lib/em64t -lmkl_em64t -lguide -lpthread
#BLAS= /opt/libs/libgoto/libgoto_p4_512-r0.6.so


# LAPACK, simplest use vasp.4.lib/lapack_double
LAPACK= ../vasp.4.lib/lapack_double.o

#-----------------------------------------------------------------------

LIB = -L../vasp.4.lib -ldmy \
../vasp.4.lib/linpack_double.o $(LAPACK) \
$(BLAS)

# options for linking (for compiler version 6.X, 7.1) nothing is required
LINK = -L/apps/steele/intel/mkl/10.0.2.018/lib/em64t -lsvml
# compiler version 7.0 generates some vector statements which are located
# in the svml library; add the LIBPATH and the library (just in case)
#LINK = -L/opt/intel/compiler70/ia32/lib/ -lsvml

#-----------------------------------------------------------------------
# fft libraries:
# On Opteron you really have to use fftw.3.0.X (http://www.fftw.org);
# the ifc compiler gives suboptimal performance on the Opteron with
# the built-in fft routines
#
# fftw.3.0.1 was compiled using the following command lines:
# > export CC="gcc -m32"
# > export F77="f77 -m32"
# > ./configure --enable-sse2 --prefix=/home/kresse/ifc_opt/fftw-3.0.1/
# > make
# > make install
# PLEASE do not send queries related to fftw to the vasp site
#-----------------------------------------------------------------------

#FFT3D = fft3dfurth.o fft3dlib.o
#FFT3D = fftmpi.o fftmpi_map.o fft3dlib.o


#=======================================================================
# MPI section, uncomment the following lines
#
# one comment for users of mpich or lam:
# You must *not* compile mpi with g77/f77, because f77/g77
# appends *two* underscores to symbols that contain already an
# underscore (i.e. MPI_SEND becomes mpi_send__). The pgf90/ifc
# compilers however append only one underscore.
# Precompiled mpi versions will also not work !!!
#
# We found that mpich.1.2.1 and lam-6.5.X to lam-7.0.4 are stable
# mpich.1.2.1 was configured with
# ./configure -prefix=/usr/local/mpich_nodvdbg -fc="pgf77 -Mx,119,0x200000" \
# -f90="pgf90 " \
# --without-romio --without-mpe -opt=-O \
#
# lam was configured with the line
# ./configure -prefix /opt/libs/lam-7.0.4 --with-cflags=-O -with-fc=ifc \
# --with-f77flags=-O --without-romio
#
# please note that you might be able to use a lam or mpich version
# compiled with f77/g77, but then you need to add the following
# options: -Msecond_underscore (compilation) and -g77libs (linking)
#
# !!! Please do not send me any queries on how to install MPI, I will
# certainly not answer them !!!!
#=======================================================================
#-----------------------------------------------------------------------
# fortran linker for mpi: if you use LAM and compiled it with the options
# suggested above, you can use the following line
#-----------------------------------------------------------------------

#FC=mpif77
#FCL=$(FC)

#-----------------------------------------------------------------------
# additional options for CPP in parallel version (see also above):
# NGZhalf charge density reduced in Z direction
# wNGZhalf gamma point only reduced in Z direction
# scaLAPACK use scaLAPACK (usually slower on 100 Mbit Net)
#-----------------------------------------------------------------------

CPP = $(CPP_) -DMPI -DHOST=\"LinuxIFC\" -DIFC -DNGZhalf -DwNGZhalf\
-Dkind8 -DCACHE_SIZE=8000 -DPGF90 -Davoidalloc \
-DMPI_BLOCK=2000 -DPROC_GROUP=8\
-DRPROMU_DGEMV -DRACCMU_DGEMV

#-----------------------------------------------------------------------
# location of SCALAPACK
# if you do not use SCALAPACK simply uncomment the line SCA
#-----------------------------------------------------------------------

BLACS=$(HOME)/archives/SCALAPACK/BLACS/
#SCA_=$(HOME)/archives/SCALAPACK/SCALAPACK

#SCA= $(SCA_)/libscalapack.a \
#     $(BLACS)/LIB/blacsF77init_MPI-LINUX-0.a $(BLACS)/LIB/blacs_MPI-LINUX-0.a $(BLACS)/LIB/blacsF77init_MPI-LINUX-0.a

#SCA=

#-----------------------------------------------------------------------
# libraries for mpi
#-----------------------------------------------------------------------

#LIB = -L../vasp.4.lib -ldmy \
# ../vasp.4.lib/linpack_double.o $(LAPACK) \
# $(SCA) $(BLAS)

# FFT: fftmpi.o with fft3dlib of Juergen Furthmueller
FFT3D = fftmpi.o fftmpi_map.o fft3dlib.o

# fftw.3.0.1 is much faster on Opteron
#FFT3D = fftmpiw.o fftmpi_map.o fft3dlib.o /opt/libs/fftw-3.0.1/lib/libfftw3.a

#-----------------------------------------------------------------------
# general rules and compile lines
#-----------------------------------------------------------------------
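# note: every recipe (command) line in the rules below must begin with a
# real TAB character; make does not accept leading spaces in recipes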

BASIC= symmetry.o symlib.o lattlib.o random.o

SOURCE= base.o mpi.o smart_allocate.o xml.o \
constant.o jacobi.o main_mpi.o scala.o \
asa.o lattice.o poscar.o ini.o setex.o radial.o \
pseudo.o mgrid.o mkpoints.o wave.o wave_mpi.o $(BASIC) \
nonl.o nonlr.o dfast.o choleski2.o \
mix.o charge.o xcgrad.o xcspin.o potex1.o potex2.o \
metagga.o constrmag.o pot.o cl_shift.o force.o dos.o elf.o \
tet.o hamil.o steep.o \
chain.o dyna.o relativistic.o LDApU.o sphpro.o paw.o us.o \
ebs.o wavpre.o wavpre_noio.o broyden.o \
dynbr.o rmm-diis.o reader.o writer.o tutor.o xml_writer.o \
brent.o stufak.o fileio.o opergrid.o stepver.o \
dipol.o xclib.o chgloc.o subrot.o optreal.o davidson.o \
edtest.o electron.o shm.o pardens.o paircorrection.o \
optics.o constr_cell_relax.o stm.o finite_diff.o \
elpol.o setlocalpp.o aedens.o

INC=

vasp: $(SOURCE) $(FFT3D) $(INC) main.o
	rm -f vasp
	$(FCL) -o vasp $(LINK) main.o $(SOURCE) $(FFT3D) $(LIB)
makeparam: $(SOURCE) $(FFT3D) makeparam.o main.F $(INC)
	$(FCL) -o makeparam $(LINK) makeparam.o $(SOURCE) $(FFT3D) $(LIB)
zgemmtest: zgemmtest.o base.o random.o $(INC)
	$(FCL) -o zgemmtest $(LINK) zgemmtest.o random.o base.o $(LIB)
dgemmtest: dgemmtest.o base.o random.o $(INC)
	$(FCL) -o dgemmtest $(LINK) dgemmtest.o random.o base.o $(LIB)
ffttest: base.o smart_allocate.o mpi.o mgrid.o random.o ffttest.o $(FFT3D) $(INC)
	$(FCL) -o ffttest $(LINK) ffttest.o mpi.o mgrid.o random.o smart_allocate.o base.o $(FFT3D) $(LIB)
kpoints: $(SOURCE) $(FFT3D) makekpoints.o main.F $(INC)
	$(FCL) -o kpoints $(LINK) makekpoints.o $(SOURCE) $(FFT3D) $(LIB)

clean:
	-rm -f *.g *.f *.o *.L *.mod ; touch *.F

main.o: main$(SUFFIX)
	$(FC) $(FFLAGS)$(DEBUG) $(INCS) -c main$(SUFFIX)
xcgrad.o: xcgrad$(SUFFIX)
	$(FC) $(FFLAGS) $(INLINE) $(INCS) -c xcgrad$(SUFFIX)
xcspin.o: xcspin$(SUFFIX)
	$(FC) $(FFLAGS) $(INLINE) $(INCS) -c xcspin$(SUFFIX)

makeparam.o: makeparam$(SUFFIX)
	$(FC) $(FFLAGS)$(DEBUG) $(INCS) -c makeparam$(SUFFIX)

makeparam$(SUFFIX): makeparam.F main.F
#
# MIND: I do not have a full dependency list for the include
# and MODULES: here are only the minimal basic dependencies
# if one structure is changed then touch_dep must be called
# with the corresponding name of the structure
#
base.o: base.inc base.F
mgrid.o: mgrid.inc mgrid.F
constant.o: constant.inc constant.F
lattice.o: lattice.inc lattice.F
setex.o: setexm.inc setex.F
pseudo.o: pseudo.inc pseudo.F
poscar.o: poscar.inc poscar.F
mkpoints.o: mkpoints.inc mkpoints.F
wave.o: wave.inc wave.F
nonl.o: nonl.inc nonl.F
nonlr.o: nonlr.inc nonlr.F

$(OBJ_HIGH):
	$(CPP)
	$(FC) $(FFLAGS) $(OFLAG_HIGH) $(INCS) -c $*$(SUFFIX)
$(OBJ_NOOPT):
	$(CPP)
	$(FC) $(FFLAGS) $(INCS) -c $*$(SUFFIX)

fft3dlib_f77.o: fft3dlib_f77.F
	$(CPP)
	$(F77) $(FFLAGS_F77) -c $*$(SUFFIX)

.F.o:
	$(CPP)
	$(FC) $(FFLAGS) $(OFLAG) $(INCS) -c $*$(SUFFIX)
.F$(SUFFIX):
	$(CPP)
$(SUFFIX).o:
	$(FC) $(FFLAGS) $(OFLAG) $(INCS) -c $*$(SUFFIX)

# special rules
#-----------------------------------------------------------------------
# these special rules are cumulative (that is, once a file has failed
# with one compiler version, it stays in the list forever)
# -tpp5|6|7 P, PII-PIII, PIV
# -xW use SIMD (does not pay off on PII, since fft3d uses double prec)
# all other options do not affect the code performance since -O1 is used
#-----------------------------------------------------------------------

fft3dlib.o : fft3dlib.F
	$(CPP)
	$(FC) -FR -lowercase -O1 -tpp7 -xW -prev_div -unroll0 -e95 -vec_report3 -c $*$(SUFFIX)
fft3dfurth.o : fft3dfurth.F
	$(CPP)
	$(FC) -FR -lowercase -O1 -c $*$(SUFFIX)

radial.o : radial.F
	$(CPP)
	$(FC) -FR -lowercase -O1 -c $*$(SUFFIX)

symlib.o : symlib.F
	$(CPP)
	$(FC) -FR -lowercase -O1 -c $*$(SUFFIX)

symmetry.o : symmetry.F
	$(CPP)
	$(FC) -FR -lowercase -O1 -c $*$(SUFFIX)

dynbr.o : dynbr.F
	$(CPP)
	$(FC) -FR -lowercase -O1 -c $*$(SUFFIX)

broyden.o : broyden.F
	$(CPP)
	$(FC) -FR -lowercase -O2 -c $*$(SUFFIX)

us.o : us.F
	$(CPP)
	$(FC) -FR -lowercase -O1 -c $*$(SUFFIX)

wave.o : wave.F
	$(CPP)
	$(FC) -FR -lowercase -O0 -c $*$(SUFFIX)

LDApU.o : LDApU.F
	$(CPP)
	$(FC) -FR -lowercase -O2 -c $*$(SUFFIX)
