parallel installation (continued)

Questions regarding the compilation of VASP on various platforms: hardware, compilers and libraries, etc.


Moderators: Global Moderator, Moderator

maryam
Newbie
Posts: 19
Joined: Fri Jul 10, 2009 10:04 am

parallel installation (continued)

#1 Post by maryam » Wed Dec 02, 2009 12:07 pm

Dear friends,
Our parallel installation of VASP.4.6 has not been completed yet. We have modified our Makefile as shown below, and we think that some improvements have been achieved. Our new error is:

ld: /lib/for_main.o: No such file: No such file or directory
make: *** [vasp] Error 1


As you can see, the vasp executable is not produced. Our Makefile is:


.SUFFIXES: .inc .f .f90 .F
#-----------------------------------------------------------------------
# Makefile for Intel Fortran compiler for P4 systems
#
# The makefile was tested only under Linux on Intel platforms
# (Suse 5.3- Suse 9.0)
# the following compiler versions have been tested
# 5.0, 6.0, 7.0 and 7.1 (some 8.0 versions seem to fail compiling the code)
# presently we recommend version 7.1 or 7.0, since these
# releases have been used to compile the present code versions
#
# it might be required to change some of the library paths, since
# Linux installations vary a lot
# Hence check ***ALL**** options in this makefile very carefully
#-----------------------------------------------------------------------
#
# BLAS must be installed on the machine
# there are several options:
# 1) very slow but works:
# retrieve the lapackage from ftp.netlib.org
# and compile the blas routines (BLAS/SRC directory)
# please use g77 or f77 for the compilation. When I tried to
# use pgf77 or pgf90 for BLAS, VASP hung when calling
# ZHEEV (however this was with lapack 1.1; now I use lapack 2.0)
# 2) most desirable: get an optimized BLAS
#
# the two most reliable packages around are presently:
# 3a) Intels own optimised BLAS (PIII, P4, Itanium)
# http://developer.intel.com/software/products/mkl/
# this is really excellent when you use Intel CPU's
#
# 3b) or obtain the atlas based BLAS routines
# http://math-atlas.sourceforge.net/
# you certainly need atlas on the Athlon, since the mkl
# routines are not optimal on the Athlon.
# If you want to use atlas based BLAS, check the lines around LIB=
#
# 3c) mindblowing fast SSE2 (4 GFlops on P4, 2.53 GHz)
# Kazushige Goto's BLAS
# http://www.cs.utexas.edu/users/kgoto/signup_first.html
#
#-----------------------------------------------------------------------

# all CPP processed fortran files have the extension .f90
SUFFIX=.f90

#-----------------------------------------------------------------------
# fortran compiler and linker
#-----------------------------------------------------------------------
#FC=ifort
# fortran linker
#FCL=$(FC)


#-----------------------------------------------------------------------
# whereis CPP ?? (I need CPP, can't use gcc with proper options)
# that's the location of gcc for SUSE 5.3
#
# CPP_ = /usr/lib/gcc-lib/i486-linux/2.7.2/cpp -P -C
#
# that's probably the right line for some Red Hat distribution:
#
# CPP_ = /usr/lib/gcc-lib/i386-redhat-linux/2.7.2.3/cpp -P -C
#
# SUSE X.X, maybe some Red Hat distributions:

CPP_ = ./preprocess <$*.F | /usr/bin/cpp -P -C -traditional >$*$(SUFFIX)

#-----------------------------------------------------------------------
# possible options for CPP:
# NGXhalf charge density reduced in X direction
# wNGXhalf gamma point only reduced in X direction
# avoidalloc avoid ALLOCATE if possible
# IFC work around some IFC bugs
# CACHE_SIZE 1000 for PII,PIII, 5000 for Athlon, 8000-12000 P4
# RPROMU_DGEMV use DGEMV instead of DGEMM in RPRO (depends on used BLAS)
# RACCMU_DGEMV use DGEMV instead of DGEMM in RACC (depends on used BLAS)
#-----------------------------------------------------------------------


#CPP = $(CPP_) -DHOST=\"LinuxIFC\" \
# -Dkind8 -DNGXhalf -DCACHE_SIZE=12000 -DPGF90 -Davoidalloc \
# -DRPROMU_DGEMV -DRACCMU_DGEMV

#-----------------------------------------------------------------------
# general fortran flags (there must be a trailing blank on this line)
#-----------------------------------------------------------------------

FFLAGS = -FR -lowercase -assume byterecl

#-----------------------------------------------------------------------
# optimization
# we have tested whether higher optimisation improves performance
# -axK SSE1 optimization, but also generates code executable on all machines
# -xK improves performance somewhat on the Athlon XP, and the 'a' (in -axK) is required
# in order to run the code on older Athlons as well
# -xW SSE2 optimization
# -axW SSE2 optimization, but also generate code executable on all mach.
# -tpp6 P3 optimization
# -tpp7 P4 optimization
#-----------------------------------------------------------------------

OFLAG=-O3 -xW -tpp7

OFLAG_HIGH = $(OFLAG)
OBJ_HIGH =

OBJ_NOOPT =
DEBUG = -FR -O0
INLINE = $(OFLAG)


#-----------------------------------------------------------------------
# the following lines specify the position of BLAS and LAPACK
# on P4, VASP works fastest with the libgoto library
# so that's what I recommend
#-----------------------------------------------------------------------

# Atlas based libraries
#ATLASHOME= $(HOME)/archives/BLAS_OPT/ATLAS/lib/Linux_P4SSE2/
#BLAS= -L$(ATLASHOME) -lf77blas -latlas

# use specific libraries (default library path might point to other libraries)
#BLAS= $(ATLASHOME)/libf77blas.a $(ATLASHOME)/libatlas.a

# use the mkl Intel libraries for p4 (www.intel.com)
# mkl.5.1
# set -DRPROMU_DGEMV -DRACCMU_DGEMV in the CPP lines
# BLAS=-L/opt/intel/mkl/lib/32 -lmkl_p4 -lpthread

# mkl.5.2 requires also to -lguide library
# set -DRPROMU_DGEMV -DRACCMU_DGEMV in the CPP lines
#BLAS=-L/opt/intel/mkl/10.2.1.017/lib/em64t -lguide -lmkl_blacs_lp64 -lmkl_intel_thread -lmkl_core -lmkl_intel_lp64
#BLAS=-L/opt/intel/Compiler/11.1/056/mkl/lib/em64t -lmkl_solver_lp64 -lmkl_blacs_lp64
BLAS=/home/maryam/GotoBLAS/libgoto_core2-r1.26.a -lsvml
# /opt/intel/mkl/10.2.1.017/lib/em64t/libguide.a \
# /opt/intel/mkl/10.2.1.017/lib/em64t/libmkl_blacs_lp64.a \
# /opt/intel/mkl/10.2.1.017/lib/em64t/libmkl_intel_thread.a \
# /opt/intel/mkl/10.2.1.017/lib/em64t/libmkl_core.a \
# /opt/intel/mkl/10.2.1.017/lib/em64t/libmkl_intel_lp64.a \
# /opt/intel/mkl/10.2.1.017/lib/em64t/libmkl_scalapack_lp64.a \
# /opt/intel/mkl/10.2.1.017/lib/em64t/libmkl_sequential.a \
#BLAS =-L/opt/intel/mkl/10.2.1.017/lib/em64t -lmkl_core -lmkl_blacs_intelmpi_ilp64 -lguide
# /opt/intel/mkl/10.2.1.017/lib/em64t/libmkl_blacs_intelmpi_lp64.a

#BLACS=-L/opt/intel/mkl/10.2.1.017/lib/em64t -lmkl_blacs_lp64
# even faster Kazushige Goto's
# http://www.cs.utexas.edu/users/kgoto/signup_first.html
#BLAS=/home/maryam/libgoto_northwoodp-r1.26.so

# LAPACK, simplest: use vasp.4.lib/lapack_double
LAPACK= ../vasp.4.lib/lapack_double.o

# use atlas optimized part of lapack
#LAPACK= ../vasp.4.lib/lapack_atlas.o -llapack -lcblas

# use the mkl Intel lapack
#LAPACK=-L/opt/intel/mkl/10.2.1.017/lib/em64t -lmkl_scalapack_ilp64 -lmkl_lapack -lmkl_core -liomp5 -lmkl_blacs_intelmpi_ilp64 -lmkl_pgi_thread \
#/opt/intel/mkl/10.2.1.017/lib/em64t/libmkl_blacs_openmpi_ilp64.a
# /opt/intel/mkl/10.2.1.017/lib/em64t/libmkl_lapack.a \
# /opt/intel/mkl/10.2.1.017/lib/em64t/libmkl_em64t.a \
# /opt/intel/mkl/10.2.1.017/lib/em64t/libmkl_core.a \
# /opt/intel/mkl/10.2.1.017/lib/em64t/libguide.a
#LAPACK= ../vasp.4.lib/lapack_double.o
#-----------------------------------------------------------------------

LIB = -L../vasp.4.lib/libdmy.a \
../vasp.4.lib/linpack_double.o $(LAPACK) \
$(BLAS)

# options for linking (for compiler version 6.X, 7.1) nothing is required
#LINK =
# compiler version 7.0 generates some vector statements which are located
# in the svml library, add the LIBPATH and the library (just in case)
# LINK =/opt/intel/Compiler/11.1/056/lib/intel64/for_main.o


#-----------------------------------------------------------------------
# fft libraries:
# VASP.4.6 can use fftw.3.0.X (http://www.fftw.org)
# since this version is faster on P4 machines, we recommend using it
#-----------------------------------------------------------------------

#FFT3D = fft3dfurth.o fft3dlib.o
FFT3D = fftw3d.o fft3dlib.o /usr/local/lib/libfftw3.a


#=======================================================================
# MPI section, uncomment the following lines
#
# one comment for users of mpich or lam:
# You must *not* compile mpi with g77/f77, because f77/g77
# appends *two* underscores to symbols that already contain an
# underscore (i.e. MPI_SEND becomes mpi_send__). The pgf90/ifc
# compilers, however, append only one underscore.
# Precompiled mpi versions will also not work!
#
# We found that mpich.1.2.1 and lam-6.5.X to lam-7.0.4 are stable
# mpich.1.2.1 was configured with
# ./configure -prefix=/usr/local/mpich_nodvdbg -fc="pgf77 -Mx,119,0x200000" \
# -f90="pgf90 " \
# --without-romio --without-mpe -opt=-O \
#
# lam was configured with the line
# ./configure -prefix /opt/libs/lam-7.0.4 --with-cflags=-O -with-fc=ifc \
# --with-f77flags=-O --without-romio
#
# please note that you might be able to use a lam or mpich version
# compiled with f77/g77, but then you need to add the following
# options: -Msecond_underscore (compilation) and -g77libs (linking)
#
# !!! Please do not send me any queries on how to install MPI, I will
# certainly not answer them !!!!
#=======================================================================
#-----------------------------------------------------------------------
# fortran linker for mpi: if you use LAM and compiled it with the options
# suggested above, you can use the following line
#-----------------------------------------------------------------------

FC=/home/maryam/intel/impi/3.2.2.006/bin64/mpiifort
FCL=$(FC)

#-----------------------------------------------------------------------
# additional options for CPP in parallel version (see also above):
# NGZhalf charge density reduced in Z direction
# wNGZhalf gamma point only reduced in Z direction
# scaLAPACK use scaLAPACK (usually slower on 100 Mbit Net)
#-----------------------------------------------------------------------

CPP = $(CPP_) -DMPI -DHOST=\"LinuxIFC\" -DIFC \
-Dkind8 -DNGZhalf -DCACHE_SIZE=16000 -DPGF90 -Davoidalloc \
-DMPI_BLOCK=500 \
-DRPROMU_DGEMV -DRACCMU_DGEMV

#-----------------------------------------------------------------------
# location of SCALAPACK
# if you do not use SCALAPACK simply uncomment the line SCA
#-----------------------------------------------------------------------

BLACS=$(HOME)/archives/SCALAPACK/BLACS/
SCA_=$(HOME)/archives/SCALAPACK/SCALAPACK

SCA= $(SCA_)/libscalapack.a \
$(BLACS)/LIB/blacsF77init_MPI-LINUX-0.a $(BLACS)/LIB/blacs_MPI-LINUX-0.a $(BLACS)/LIB/blacsF77init_MPI-LINUX-0.a

SCA=

#-----------------------------------------------------------------------
# libraries for mpi
#-----------------------------------------------------------------------

LIB = -L../vasp.4.lib/libdmy.a\
../vasp.4.lib/linpack_double.o $(LAPACK) \
$(SCA)$(BLAS)

# FFT: fftmpi.o with fft3dlib of Juergen Furthmueller
#FFT3D = fftmpi.o fftmpi_map.o fft3dlib.o

# fftw.3.0.1 is slightly faster and should be used if available
FFT3D = fftmpiw.o fftmpi_map.o fft3dlib.o /usr/local/lib/libfftw3.a

#-----------------------------------------------------------------------
# general rules and compile lines
#-----------------------------------------------------------------------
BASIC= symmetry.o symlib.o lattlib.o random.o

SOURCE= base.o mpi.o smart_allocate.o xml.o \
constant.o jacobi.o main_mpi.o scala.o \
asa.o lattice.o poscar.o ini.o setex.o radial.o \
pseudo.o mgrid.o mkpoints.o wave.o wave_mpi.o $(BASIC) \
nonl.o nonlr.o dfast.o choleski2.o \
mix.o charge.o xcgrad.o xcspin.o potex1.o potex2.o \
metagga.o constrmag.o pot.o cl_shift.o force.o dos.o elf.o \
tet.o hamil.o steep.o \
chain.o dyna.o relativistic.o LDApU.o sphpro.o paw.o us.o \
ebs.o wavpre.o wavpre_noio.o broyden.o \
dynbr.o rmm-diis.o reader.o writer.o tutor.o xml_writer.o \
brent.o stufak.o fileio.o opergrid.o stepver.o \
dipol.o xclib.o chgloc.o subrot.o optreal.o davidson.o \
edtest.o electron.o shm.o pardens.o paircorrection.o \
optics.o constr_cell_relax.o stm.o finite_diff.o \
elpol.o setlocalpp.o aedens.o

INC=

vasp: $(SOURCE) $(FFT3D) $(INC) main.o
	rm -f vasp
	$(FCL) -o vasp $(LINK) main.o $(SOURCE) $(FFT3D) $(LIB)
makeparam: $(SOURCE) $(FFT3D) makeparam.o main.F $(INC)
	$(FCL) -o makeparam $(LINK) makeparam.o $(SOURCE) $(FFT3D) $(LIB)
zgemmtest: zgemmtest.o base.o random.o $(INC)
	$(FCL) -o zgemmtest $(LINK) zgemmtest.o random.o base.o $(LIB)
dgemmtest: dgemmtest.o base.o random.o $(INC)
	$(FCL) -o dgemmtest $(LINK) dgemmtest.o random.o base.o $(LIB)
ffttest: base.o smart_allocate.o mpi.o mgrid.o random.o ffttest.o $(FFT3D) $(INC)
	$(FCL) -o ffttest $(LINK) ffttest.o mpi.o mgrid.o random.o smart_allocate.o base.o $(FFT3D) $(LIB)
kpoints: $(SOURCE) $(FFT3D) makekpoints.o main.F $(INC)
	$(FCL) -o kpoints $(LINK) makekpoints.o $(SOURCE) $(FFT3D) $(LIB)

clean:
	-rm -f *.g *.f *.o *.L *.mod ; touch *.F

main.o: main$(SUFFIX)
	$(FC) $(FFLAGS)$(DEBUG) $(INCS) -c main$(SUFFIX)
xcgrad.o: xcgrad$(SUFFIX)
	$(FC) $(FFLAGS) $(INLINE) $(INCS) -c xcgrad$(SUFFIX)
xcspin.o: xcspin$(SUFFIX)
	$(FC) $(FFLAGS) $(INLINE) $(INCS) -c xcspin$(SUFFIX)

makeparam.o: makeparam$(SUFFIX)
	$(FC) $(FFLAGS)$(DEBUG) $(INCS) -c makeparam$(SUFFIX)

makeparam$(SUFFIX): makeparam.F main.F
#
# MIND: I do not have a full dependency list for the include
# files and MODULES: here are only the minimal basic dependencies.
# If one structure is changed then touch_dep must be called
# with the corresponding name of the structure
#
base.o: base.inc base.F
mgrid.o: mgrid.inc mgrid.F
constant.o: constant.inc constant.F
lattice.o: lattice.inc lattice.F
setex.o: setexm.inc setex.F
pseudo.o: pseudo.inc pseudo.F
poscar.o: poscar.inc poscar.F
mkpoints.o: mkpoints.inc mkpoints.F
wave.o: wave.inc wave.F
nonl.o: nonl.inc nonl.F
nonlr.o: nonlr.inc nonlr.F

$(OBJ_HIGH):
	$(CPP)
	$(FC) $(FFLAGS) $(OFLAG_HIGH) $(INCS) -c $*$(SUFFIX)
$(OBJ_NOOPT):
	$(CPP)
	$(FC) $(FFLAGS) $(INCS) -c $*$(SUFFIX)

fft3dlib_f77.o: fft3dlib_f77.F
	$(CPP)
	$(F77) $(FFLAGS_F77) -c $*$(SUFFIX)

.F.o:
	$(CPP)
	$(FC) $(FFLAGS) $(OFLAG) $(INCS) -c $*$(SUFFIX)
.F$(SUFFIX):
	$(CPP)
$(SUFFIX).o:
	$(FC) $(FFLAGS) $(OFLAG) $(INCS) -c $*$(SUFFIX)

# special rules
#-----------------------------------------------------------------------
# these special rules are cumulative (that is, once a file failed
# with one compiler version, it stays in the list forever)
# -tpp5|6|7 P, PII-PIII, PIV
# -xW use SIMD (does not pay off on PII, since fft3d uses double prec)
# all other options do not affect the code performance since -O1 is used
#-----------------------------------------------------------------------

fft3dlib.o : fft3dlib.F
	$(CPP)
	$(FC) -FR -lowercase -O1 -tpp7 -xW -unroll0 -w95 -vec_report3 -c $*$(SUFFIX)
fft3dfurth.o : fft3dfurth.F
	$(CPP)
	$(FC) -FR -lowercase -O1 -c $*$(SUFFIX)

radial.o : radial.F
	$(CPP)
	$(FC) -FR -lowercase -O1 -c $*$(SUFFIX)

symlib.o : symlib.F
	$(CPP)
	$(FC) -FR -lowercase -O1 -c $*$(SUFFIX)

symmetry.o : symmetry.F
	$(CPP)
	$(FC) -FR -lowercase -O1 -c $*$(SUFFIX)

dynbr.o : dynbr.F
	$(CPP)
	$(FC) -FR -lowercase -O1 -c $*$(SUFFIX)

broyden.o : broyden.F
	$(CPP)
	$(FC) -FR -lowercase -O2 -c $*$(SUFFIX)

us.o : us.F
	$(CPP)
	$(FC) -FR -lowercase -O1 -c $*$(SUFFIX)

wave.o : wave.F
	$(CPP)
	$(FC) -FR -lowercase -O0 -c $*$(SUFFIX)

LDApU.o : LDApU.F
	$(CPP)
	$(FC) -FR -lowercase -O2 -c $*$(SUFFIX)
#mpi.o : mpi.F
#	$(CPP)
#	$(FC) -FR -lowercase -O0 -c $*$(SUFFIX)


********************************************************************************************
Share with us if you have any suggestions,
Thanks a lot.
Last edited by maryam on Wed Dec 02, 2009 12:07 pm, edited 1 time in total.

pafell
Newbie
Posts: 24
Joined: Wed Feb 18, 2009 11:40 pm
License Nr.: 196
Location: Poznań, Poland

parallel installation (continued)

#2 Post by pafell » Thu Dec 03, 2009 9:01 am

You're linking against Intel's svml library:
BLAS=/home/maryam/GotoBLAS/libgoto_core2-r1.26.a -lsvml
To do so, you have to either source Intel's ifort environment script (source /path/to/ifort/ifortvars.sh) or set LD_LIBRARY_PATH to include the path to the svml (Intel compiler libraries) directory.
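For example, something along these lines (just a sketch: I am reusing the /opt/intel/Compiler/11.1/056 path from the commented lines of your Makefile, so adjust it to wherever ifort is actually installed on your machine):

source /opt/intel/Compiler/11.1/056/bin/ifortvars.sh intel64
# or, alternatively, point the linker/loader at the compiler libraries yourself:
export LD_LIBRARY_PATH=/opt/intel/Compiler/11.1/056/lib/intel64:$LD_LIBRARY_PATH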
Last edited by pafell on Thu Dec 03, 2009 9:01 am, edited 1 time in total.

maryam
Newbie
Posts: 19
Joined: Fri Jul 10, 2009 10:04 am

parallel installation (continued)

#3 Post by maryam » Thu Dec 03, 2009 6:56 pm

Pafell,
Thank you for your attention!
We have built the vasp executable. As you suggested, we found that the ifortvars script must be sourced (even when the -lsvml library is not linked).

Now, there is a question!
As explained in the previous thread, we used the mpiifort compiler (from Intel MPI 3.2.2). But when running vasp on 4 nodes, each of which is a quad-core Core 2, the OUTCAR file contains this message: "Running on one node". What does this message mean?

The parallelization settings in the INCAR file are as below:
LPLANE= .TRUE.
NPAR = 4
NSIM = 4

Also, the environment variable OMP_NUM_THREADS is set on the command line.

Do you have any suggestions?

Thank you for any idea.
Last edited by maryam on Thu Dec 03, 2009 6:56 pm, edited 1 time in total.

pafell
Newbie
Posts: 24
Joined: Wed Feb 18, 2009 11:40 pm
License Nr.: 196
Location: Poznań, Poland

parallel installation (continued)

#4 Post by pafell » Fri Dec 04, 2009 1:26 pm

My blind guess is that you forgot to go through MPI and are running the vasp executable directly.

I have no experience with Intel's MPI, but it should be similar to LAM or OpenMPI. You should boot the MPI environment (or it may be booted automatically, as with OpenMPI) and then run vasp through the mpirun wrapper. In the output file you should see "running on X nodes", where X stands for the number of nodes you actually try to use.
Maybe someone using Intel's MPI will help, or you could have a look at Intel MPI's documentation.
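With Intel MPI the usual mpd-based pattern is roughly the following (only a sketch, not tested here; the hostfile name mpd.hosts and the process counts are my assumptions, so check the Intel MPI reference for the exact syntax of your version):

mpdboot -n 4 -f mpd.hosts         # start the MPI daemons on the 4 hosts listed in mpd.hosts
mpirun -np 16 /home/maryam/vasp   # launch 16 MPI processes (4 nodes x 4 cores each)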
Last edited by pafell on Fri Dec 04, 2009 1:26 pm, edited 1 time in total.

maryam
Newbie
Posts: 19
Joined: Fri Jul 10, 2009 10:04 am

parallel installation (continued)

#5 Post by maryam » Fri Dec 04, 2009 4:08 pm

Pafell,
I'm grateful!

We have remote access to a server, and 4 of its nodes (nodes 0, 1, 2, and 3) are used for our runs. We log in with this command:
ssh compute-0-3

After that, as described in the Intel MPI Reference Manual, we run vasp with this command:
mpiexec /home/maryam/vasp

The OUTCAR file still contains the statement "Running on one node". Now, if I cancel the running process, this message is printed:
"mpiexec notice that process rank 0 with PID 13358 on node compute-0-3.local exited on signal 0 (unknown signal 0)"

My main question is whether nodes 0 through 3 (compute-0-3) are being counted as a single node, or whether there is another problem.

Thank you very much!
Last edited by maryam on Fri Dec 04, 2009 4:08 pm, edited 1 time in total.

pafell
Newbie
Posts: 24
Joined: Wed Feb 18, 2009 11:40 pm
License Nr.: 196
Location: Poznań, Poland

parallel installation (continued)

#6 Post by pafell » Sat Dec 05, 2009 11:05 am

Each core in use should be reported as a node (there is no distinction between core/CPU/host; each core is counted as one node).

Have you tried running mpirun -np NUMBER_OF_CORES instead of mpiexec? Also you could try running mpiexec -np NUMBER_OF_CORES. I'm not familiar with Intel's MPI, so these are just guesses after some googling around.

My last idea is to check whether MPI itself is working correctly: try compiling and running the test program from the PATH_TO_IMPI/test/ directory. Maybe mpd is not configured properly.
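Roughly like this (again only a guess on my side; the test source name test.f90 is an assumption, so use whichever source file is actually shipped in that directory):

mpiifort PATH_TO_IMPI/test/test.f90 -o testmpi
mpirun -np 4 ./testmpi            # run the test with 4 processes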
Last edited by pafell on Sat Dec 05, 2009 11:05 am, edited 1 time in total.
