VASP.5.3 compile error

Questions regarding the compilation of VASP on various platforms: hardware, compilers and libraries, etc.


Moderators: Global Moderator, Moderator

lineyarna
Newbie
Posts: 5
Joined: Tue Jul 09, 2013 8:34 pm

VASP.5.3 compile error

#1 Post by lineyarna » Tue Jun 10, 2014 11:58 pm

I'm trying to compile VASP.5.3 on Linux x86_64, and the compiler is ifort 13.0.0 (build 20120731).
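
(For reference, the toolchain can be double-checked from the command line; --showme assumes mpif90 is an OpenMPI wrapper and differs for other MPI implementations.)

ifort --version # prints the compiler version and build date, e.g. "ifort (IFORT) 13.0.0 20120731"
mpif90 --showme # shows which compiler and flags the OpenMPI wrapper actually invokes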

I compile vasp.5.lib first and get this output:
ifort -O0 -FI -FR -c preclib.f
cc -O -c timing_.c
cc -O -c derrf_.c
cc -O -c dclock_.c
gcc -E -P -C diolib.F >diolib.f
ifort -O0 -FI -FR -c diolib.f
gcc -E -P -C dlexlib.F >dlexlib.f
ifort -O0 -FI -FR -c dlexlib.f
gcc -E -P -C drdatab.F >drdatab.f
ifort -O0 -FI -FR -c drdatab.f
ifort -O0 -FI -c lapack_double.f
lapack_double.f(10179): remark #5140: Unrecognized directive
CDIR$ NEXTSCALAR
-------------------------^
lapack_double.f(10181): remark #5140: Unrecognized directive
CDIR$ NEXT SCALAR
--------------------------^
lapack_double.f(20692): remark #5140: Unrecognized directive
CDIR$ NEXTSCALAR
-------------------------^
lapack_double.f(20694): remark #5140: Unrecognized directive
CDIR$ NEXT SCALAR
--------------------------^
lapack_double.f(20706): remark #5140: Unrecognized directive
CDIR$ NEXTSCALAR
----------------------------^
lapack_double.f(20708): remark #5140: Unrecognized directive
CDIR$ NEXT SCALAR
-----------------------------^
lapack_double.f(20733): remark #5140: Unrecognized directive
CDIR$ NEXTSCALAR
-------------------------^
lapack_double.f(20735): remark #5140: Unrecognized directive
CDIR$ NEXT SCALAR
--------------------------^
ifort -O0 -FI -c linpack_double.f
ifort -O0 -FI -c lapack_atlas.f
lapack_atlas.f(12864): remark #5140: Unrecognized directive
CDIR$ NEXTSCALAR
-------------------------^
lapack_atlas.f(12866): remark #5140: Unrecognized directive
CDIR$ NEXT SCALAR
--------------------------^
lapack_atlas.f(18861): remark #5140: Unrecognized directive
CDIR$ NEXTSCALAR
-------------------------^
lapack_atlas.f(18863): remark #5140: Unrecognized directive
CDIR$ NEXT SCALAR
--------------------------^
lapack_atlas.f(18875): remark #5140: Unrecognized directive
CDIR$ NEXTSCALAR
----------------------------^
lapack_atlas.f(18877): remark #5140: Unrecognized directive
CDIR$ NEXT SCALAR
-----------------------------^
lapack_atlas.f(18902): remark #5140: Unrecognized directive
CDIR$ NEXTSCALAR
-------------------------^
lapack_atlas.f(18904): remark #5140: Unrecognized directive
CDIR$ NEXT SCALAR
--------------------------^
rm libdmy.a
rm: cannot remove `libdmy.a': No such file or directory
make: [libdmy.a] Error 1 (ignored)
ar vq libdmy.a preclib.o timing_.o derrf_.o dclock_.o diolib.o dlexlib.o drdatab.o
ar: creating libdmy.a
a - preclib.o
a - timing_.o
a - derrf_.o
a - dclock_.o
a - diolib.o
a - dlexlib.o
a - drdatab.o


The heart of my makefile for VASP.5.3 looks like this:

SUFFIX=.f90

#-----------------------------------------------------------------------
# fortran compiler and linker
#-----------------------------------------------------------------------
#FC=ifort
FC=mpif90
# fortran linker
FCL=$(FC)


#-----------------------------------------------------------------------
# whereis CPP ?? (I need CPP, can't use gcc with proper options)
# that's the location of gcc for SUSE 5.3
#
# CPP_ = /usr/lib/gcc-lib/i486-linux/2.7.2/cpp -P -C
#
# that's probably the right line for some Red Hat distribution:
#
# CPP_ = /usr/lib/gcc-lib/i386-redhat-linux/2.7.2.3/cpp -P -C
#
# SUSE X.X, maybe some Red Hat distributions:

CPP_ = ./preprocess <$*.F | /usr/bin/cpp -P -C -traditional >$*$(SUFFIX)

# this release should be fpp clean
# we now recommend fpp as preprocessor
# if this fails go back to cpp
CPP_=fpp -f_com=no -free -w0 $*.F $*$(SUFFIX)
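# (hedged note) fpp ships with the Intel Fortran compiler; "which fpp" on the command
# line should point into the compiler installation once the compiler environment script
# (e.g. compilervars.sh intel64) has been sourced; otherwise fall back to the cpp line above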

#-----------------------------------------------------------------------
# possible options for CPP:
# NGXhalf charge density reduced in X direction
# wNGXhalf gamma point only reduced in X direction
# avoidalloc avoid ALLOCATE if possible
# PGF90 work around some PGF90 / IFC bugs
# CACHE_SIZE 1000 for PII,PIII, 5000 for Athlon, 8000-12000 P4, PD
# RPROMU_DGEMV use DGEMV instead of DGEMM in RPRO (depends on used BLAS)
# RACCMU_DGEMV use DGEMV instead of DGEMM in RACC (depends on used BLAS)
# tbdyn MD package of Tomas Bucko
#-----------------------------------------------------------------------

CPP = $(CPP_) -DHOST=\"LinuxIFC\" \
-DCACHE_SIZE=12000 -DPGF90 -Davoidalloc -DNGXhalf \
# -DRPROMU_DGEMV -DRACCMU_DGEMV

#-----------------------------------------------------------------------
# general fortran flags (there must be a trailing blank on this line)
# byterecl is strictly required for ifc, since otherwise
# the WAVECAR file becomes huge
#-----------------------------------------------------------------------

FFLAGS = -FR -names lowercase -assume byterecl

#-----------------------------------------------------------------------
# optimization
# we have tested whether higher optimisation improves performance
# -axK SSE1 optimization, but also generate code executable on all mach.
# xK improves performance somewhat on XP, and a is required in order
# to run the code on older Athlons as well
# -xW SSE2 optimization
# -axW SSE2 optimization, but also generate code executable on all mach.
# -tpp6 P3 optimization
# -tpp7 P4 optimization
#-----------------------------------------------------------------------

# ifc.9.1, ifc.10.1 recommended
OFLAG=-O2 -ip

OFLAG_HIGH = $(OFLAG)
OBJ_HIGH =
OBJ_NOOPT =
DEBUG = -FR -O0
INLINE = $(OFLAG)

#-----------------------------------------------------------------------
# the following lines specify the position of BLAS and LAPACK
# we recommend using mkl, which is simple and most likely
# the fastest on Intel based machines
#-----------------------------------------------------------------------

# mkl path for ifc 11 compiler
#MKL_PATH=$(MKLROOT)/lib/em64t

# mkl path for ifc 12 compiler
MKL_PATH=$(MKLROOT)/lib/intel64

MKL_FFTW_PATH=$(MKLROOT)/interfaces/fftw3xf/
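# (hedged note) the paths above only resolve if MKLROOT is set in the environment,
# usually by sourcing the mkl environment script, roughly
#   source <mkl install dir>/bin/mklvars.sh intel64
# a quick check is "echo $MKLROOT" on the command line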

# BLAS
# setting -DRPROMU_DGEMV -DRACCMU_DGEMV in the CPP lines usually speeds up program execution
# BLAS= -Wl,--start-group $(MKL_PATH)/libmkl_intel_lp64.a $(MKL_PATH)/libmkl_intel_thread.a $(MKL_PATH)/libmkl_core.a -Wl,--end-group -lguide
# faster linking and available from at least version 11
BLAS= -lguide -mkl
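# (hedged note) libguide is the OpenMP runtime of older Intel releases; newer releases
# such as Composer XE 2013 ship libiomp5 instead, so -lguide typically fails there with
# "ld: cannot find -lguide". With a recent compiler/mkl, commonly used alternatives are
#BLAS = -mkl
#BLAS = -mkl=sequential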

# LAPACK, use vasp.5.lib/lapack_double

#LAPACK= ../vasp.5.lib/lapack_double.o

# LAPACK from mkl, usually faster and contains scaLAPACK as well

LAPACK= $(MKL_PATH)/libmkl_intel_lp64.a
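# (hedged note) when -mkl is already used in BLAS above, lapack routines are pulled in
# from mkl anyway, so this explicit archive is most likely redundant; it is kept here to
# stay close to the stock makefile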

# here a tricky version, link in libgoto and use mkl as a backup
# also needs a special line for LAPACK
# this is the best thing you can do on AMD based systems !!!!!!

#BLAS = -Wl,--start-group /opt/libs/libgoto/libgoto.so $(MKL_PATH)/libmkl_intel_thread.a $(MKL_PATH)/libmkl_core.a -Wl,--end-group -liomp5
#LAPACK= /opt/libs/libgoto/libgoto.so $(MKL_PATH)/libmkl_intel_lp64.a

#-----------------------------------------------------------------------

LIB = -L../vasp.5.lib -ldmy \
../vasp.5.lib/linpack_double.o $(LAPACK) \
$(BLAS)

# options for linking, nothing is required (usually)
#Link added by Liney
LINK =

#-----------------------------------------------------------------------
# fft libraries:
# VASP.5.2 can use fftw.3.1.X (http://www.fftw.org)
# since this version is faster on P4 machines, we recommend using it
#-----------------------------------------------------------------------

FFT3D = fft3dfurth.o fft3dlib.o

# alternatively: fftw.3.1.X is slightly faster and should be used if available
#FFT3D = fftw3d.o fft3dlib.o /opt/libs/fftw-3.1.2/lib/libfftw3.a

# you may also try to use the fftw wrapper to mkl (but the path might vary a lot)
# it seems this is best for AMD based systems
#FFT3D = fftw3d.o fft3dlib.o $(MKL_FFTW_PATH)/libfftw3xf_intel.a
#INCS = -I$(MKLROOT)/include/fftw

#=======================================================================
# MPI section, uncomment the following lines until
# general rules and compile lines
# presently we recommend OPENMPI, since it seems to offer better
# performance than lam or mpich
#
# !!! Please do not send me any queries on how to install MPI, I will
# certainly not answer them !!!!
#=======================================================================
#-----------------------------------------------------------------------
# fortran linker for mpi
#-----------------------------------------------------------------------

FC=mpif90
FCL=$(FC)

#-----------------------------------------------------------------------
# additional options for CPP in parallel version (see also above):
# NGZhalf charge density reduced in Z direction
# wNGZhalf gamma point only reduced in Z direction
# scaLAPACK use scaLAPACK (recommended if mkl is available)
# avoidalloc avoid ALLOCATE if possible
# PGF90 work around some PGF90 / IFC bugs
# CACHE_SIZE 1000 for PII,PIII, 5000 for Athlon, 8000-12000 P4, PD
# RPROMU_DGEMV use DGEMV instead of DGEMM in RPRO (depends on used BLAS)
# RACCMU_DGEMV use DGEMV instead of DGEMM in RACC (depends on used BLAS)
# tbdyn MD package of Tomas Bucko
#-----------------------------------------------------------------------

#-----------------------------------------------------------------------

CPP = $(CPP_) -DMPI -DHOST=\"LinuxIFC\" -DIFC \
-DCACHE_SIZE=16000 -DPGF90 -Davoidalloc -DNGZhalf \
-DMPI_BLOCK=8000 -Duse_collective
#-DCACHE_SIZE=4000 -DPGF90 -Davoidalloc -DNGZhalf
# -DscaLAPACK
## -DRPROMU_DGEMV -DRACCMU_DGEMV

#-----------------------------------------------------------------------
# location of SCALAPACK
# if you do not use SCALAPACK simply leave this section commented out
#-----------------------------------------------------------------------

# usually simplest link in mkl scaLAPACK
#BLACS= -lmkl_blacs_openmpi_lp64
#SCA= $(MKL_PATH)/libmkl_scalapack_lp64.a $(BLACS)
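# (hedged note) -DscaLAPACK in the CPP line of this MPI section and the two lines above
# have to be enabled together, so that $(SCA) is non-empty in LIB below; the
# mkl_blacs_openmpi_lp64 library assumes an OpenMPI build, other MPI flavours need the
# matching mkl BLACS library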

#-----------------------------------------------------------------------
# libraries
#-----------------------------------------------------------------------

LIB = -L../vasp.5.lib -ldmy \
../vasp.5.lib/linpack_double.o \
$(SCA) $(LAPACK) $(BLAS)

#-----------------------------------------------------------------------
# parallel FFT
#-----------------------------------------------------------------------

# FFT: fftmpi.o with fft3dlib of Juergen Furthmueller
FFT3D = fftmpi.o fftmpi_map.o fft3dfurth.o fft3dlib.o

# alternatively: fftw.3.1.X is slightly faster and should be used if available
#FFT3D = fftmpiw.o fftmpi_map.o fftw3d.o fft3dlib.o /opt/libs/fftw-3.1.2/lib/libfftw3.a

# you may also try to use the fftw wrapper to mkl (but the path might vary a lot)
# it seems this is best for AMD based systems
#FFT3D = fftmpiw.o fftmpi_map.o fftw3d.o fft3dlib.o $(MKL_FFTW_PATH)/libfftw3xf_intel.a
#INCS = -I$(MKLROOT)/include/fftw
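# (hedged note) the mkl fftw3 wrapper library referenced above is not shipped prebuilt;
# it normally has to be compiled once inside the interfaces directory, roughly
#   cd $(MKLROOT)/interfaces/fftw3xf && make libintel64 compiler=intel
# the exact make target can differ between mkl releases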

__________________________________________________

This is the error I get:

.o lcao_bare.o wnpr.o dmft.o rmm-diis_mlr.o linear_response_NMR.o wannier_interpol.o linear_response.o auger.o getshmem.o dmatrix.o fftmpi.o fftmpi_map.o fft3dfurth.o fft3dlib.o -L../vasp.5.lib -ldmy ../vasp.5.lib/linpack_double.o -L/opt/intel/composer_xe_2013.1.117/mkl/lib/intel64 -lmkl_lapack95_lp64 -lguide -mkl
ld: cannot find -lguide
make: *** [vasp] Error 1
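
(A quick way to see which OpenMP runtime this Composer XE installation actually ships, using the directory from the link line above:)

find /opt/intel/composer_xe_2013.1.117 -name 'libguide*' # old OpenMP runtime, probably absent here
find /opt/intel/composer_xe_2013.1.117 -name 'libiomp5*' # its replacement in newer releases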


I saw that someone had a similar problem, which was related to the libdmy.a library not being created correctly. I have tried recompiling it, but that didn't help.

I appreciate your help.
Liney
Last edited by lineyarna on Tue Jun 10, 2014 11:58 pm, edited 1 time in total.

support_vasp
Global Moderator
Posts: 1817
Joined: Mon Nov 18, 2019 11:00 am

Re: VASP.5.3 compile error

#2 Post by support_vasp » Wed Sep 04, 2024 12:47 pm

Hi,

We're sorry that we didn’t answer your question. This does not live up to the quality of support that we aim to provide. The team has since expanded. If we can still help with your problem, please ask again in a new post, linking to this one, and we will answer as quickly as possible.

Best wishes,

VASP


Locked