INSTALL the different projects of the cm3 lab
G BECKER 26/07/2012

This file describes how to install the cm3 projects on your computer. On the cm3 cluster machines (e.g. cm3011, cm3012) all packages are already installed and the installation procedure is described later (see B).

There is now a special cm3apps project that rules them all, which is described herein, but you can still install a single project as before, directly from the project folder.

A) The packages needed to install gmsh (gmsh has to be installed to use cm3apps).

    ° For Ubuntu, install the following packages with sudo apt-get install (a one-line command is given after the list):
      - libblas-dev
      - libopenmpi-dev
      - openmpi-bin
      - liblapack-dev
      - g++
      - gfortran
      - swig
      - libfltk1.3-dev
      - libpng-dev
      - libjpeg-dev
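
      As a convenience, the whole list can be installed in one command (package names as listed above; they may differ on other Ubuntu releases):
          sudo apt-get install libblas-dev liblapack-dev libopenmpi-dev openmpi-bin g++ gfortran swig libfltk1.3-dev libpng-dev libjpeg-dev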

   ° For openSUSE the following packages are available via yast2 (a zypper command covering the list is given after it):

        - make, cmake, gcc, gcc-c++, gcc-fortran, python (+ python-devel) and svn (for svn, search for subversion in the package manager under openSUSE)

        - fltk 

        - GLU (for openSUSE the package name is freeglut; install freeglut-devel too)

        - blas (advanced: you can install GotoBLAS instead, see the instructions later; it can be installed after PETSc)
          for openSUSE 12.2 or higher the package is blas-devel (and not blas)

        - lapack; for openSUSE 12.2 or higher the package is lapack-devel (and not lapack)

        - openmpi (don't forget to select openmpi-devel)

        - libpng and libjpeg (with devel and libpng-compat-devel)

        - PCRE (devel and tools), needed by swig (it can also be installed directly by swig, see later)
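
        As an alternative to yast2, the same packages can be installed from a terminal with zypper. A possible command is sketched below; the exact package names are an assumption and may vary between openSUSE versions (check them in yast2 if zypper does not find one), and the libpng/libjpeg devel packages have to be added with the names of your release:
            sudo zypper install make cmake gcc gcc-c++ gcc-fortran python python-devel subversion fltk-devel freeglut-devel blas-devel lapack-devel openmpi openmpi-devel pcre-devel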

    ° CodeBlocks (optional: it is just to edit the sources in a friendly cross-platform IDE)
         1) On openSUSE go to yast2
         2) select software repositories
         3) select add
         4) choose http and next
         5) give a name to the repository, e.g. codeblocks, and
            give the following url:  http://download.opensuse.org/repositories/devel%3a/tools%3a/ide/openSUSE_11.3 
            be aware that if you use openSUSE 11.4 instead of 11.3, you have to change the 3 into a 4 at the end of the url
         6) Install CodeBlocks via yast2. The package CodeBlocks Contrib has to be removed (otherwise it will use 100% of your CPU all the time, even after you close the window!)
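         As an alternative to points 1) to 5), the repository can also be added from a terminal with zypper (the alias "codeblocks" is just an example; adapt the openSUSE version in the url as explained above):
             sudo zypper ar http://download.opensuse.org/repositories/devel%3a/tools%3a/ide/openSUSE_11.3 codeblocks
             sudo zypper install codeblocks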
    ° SWIG > 2.0 (can be installed from the package repository for openSUSE >= 12.1)

        1) Download the sources from http://www.swig.org/download.html. Be aware that 2.0.4 does not seem to work on Lemaitre2; use the 2.0.2 version instead.

        2) untar the archive (by default swig is installed in /usr/lib, so perform the installation as superuser; the place of extraction is not important). If you cannot log in as root (on clusters for example), add --prefix=$HOME/swig to the ./configure command; swig will then be installed locally in the swig folder of your home directory.

        3) go in the extracted folder

        4) Type:
               ./configure (or ./configure --prefix=$HOME/swig)
           if configure reports a problem with pcre, follow the instructions it provides (download pcre from http://www.pcre.org and place the archive in swig's configuration folder, then type Tools/pcre-build.sh (for more help type Tools/pcre-build.sh --help), then redo ./configure in swig)
               make
               make install
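
           If you installed swig locally with --prefix=$HOME/swig, add its bin folder to your PATH (in your .bashrc) so that the build system finds it; a minimal sketch, assuming the prefix used above:
               export PATH=$HOME/swig/bin:$PATH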

    ° PETSc (has to be installed manually)

        1) Download the sources: http://www.mcs.anl.gov/petsc/petsc-as/documentation/installation.html
                                 or use the tar archive from the distpetsc folder

        2) untar the archive where you want to install it

        3) edit your .bashrc file (located in your home folder) and add the lines:
                  export PETSC_DIR=<the absolute path where you installed petsc>
                  export PETSC_ARCH=linux-gnu-c-opt (if petsc is configured with --with-debugging=0; otherwise it is linux-gnu-c-debug)
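           As an illustration, for petsc extracted in a petsc-3.1-p7 folder of your home directory and configured without debugging, the two lines could look like this (the path is just an example; adapt it to your installation):
                  export PETSC_DIR=$HOME/petsc-3.1-p7
                  export PETSC_ARCH=linux-gnu-c-opt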

        4) close your terminal and open it again (to reload your .bashrc)

        5) Go to your petsc installation folder and type the following command (for the hmem server, see the specific configuration below):
               ./config/configure.py --with-debugging=0 --with-cc=gcc --with-fc=gfortran --with-cxx=g++ --download-f-blas-lapack=1 --download-openmpi=1  --CFLAGS=-fPIC --CXXFLAGS=-fPIC --FFLAGS=-fPIC --with-shared=1 --with-clanguage=cxx
             note: if you have an openmpi problem when you launch a computation: "error: /usr/lib/openmpi/lib/openmpi/mca_paffinity_linux.so: undefined symbol: mca_base_param_reg_int"
             add this extra line to your .bashrc (for the hmem server you have to add this line):
                  export LD_PRELOAD=$LD_PRELOAD:<the absolute path where you install petsc>/$PETSC_ARCH/lib/libmpi.so
                  configuration for hmem:
                      ./config/configure.py --with-debugging=0 --download-f-blas-lapack=1 --download-openmpi=1  --with-pic --with-shared=1 --with-clanguage=cxx --with-batch --known-mpi-shared=0
                  configuration for nic3:
                      module purge
                      module add shared sge ict/compiler/64/11.1/038 openmpi/intel/64/1.4.2
                      ./config/configure.py --with-debugging=0 --download-f-blas-lapack=1 --with-mpi-dir=/cvos/shared/apps/openmpi/gcc/64/1.3.0/  --with-pic --with-shared=1 --with-clanguage=cxx --with-batch --known-mpi-shared=0
                      and add this extra line to your .bashrc:
                          export LD_PRELOAD=$LD_PRELOAD:/cvos/shared/apps/openmpi/gcc/64/1.3.0/lib64/libmpi.so



           Alternative configuration using mpich (can be useful if you have mpi problems):
               ./config/configure.py --with-debugging=0 --with-cc=gcc --with-fc=gfortran --with-cxx=g++ --download-f-blas-lapack=1 --download-mpich=1  --with-pic --with-shared=1 --with-clanguage=cxx
           Second alternative using openmpi (maybe the best but the most complicated one):
               ./config/configure.py --with-debugging=0 --download-f-blas-lapack=1 --with-mpi-dir=/usr/lib64  --with-pic --with-shared=1 --with-clanguage=cxx
           where /usr/lib64 is the path where openmpi is installed by default under openSuSE (11.4). To use it you have to install openmpi (with openmpi-devel) from your package manager (yast2 under openSuSE) and add the following lines to your .bashrc:
           export PATH=$PATH:/usr/lib64/mpi/gcc/openmpi/bin
           export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib64/mpi/gcc/openmpi/lib64

        for the 3.2 version (with blas and lapack taken from your system libraries):
             ./config/configure.py --with-debugging=0 --with-blas-lapack-dir=/usr/lib64 --with-mpi-dir=/usr/lib64 --with-pic --with-shared-libraries=1 --with-clanguage=cxx

        for the 3.2 version on lemaitre2 (you have to submit a job, see 5b below):
            ./config/configure.py --with-debugging=0 --with-blas-lib=/usr/local/blas/3.2.1/gcc/lib64/libblas.so --with-lapack-lib=/usr/local/lapack/3.2.1/gcc/lib64/liblapack.so --with-mpi-dir=/usr/local/openmpi/1.4.5/gcc/ --with-pic --with-shared-libraries=1 --with-clanguage=cxx --with-batch --known-mpi-shared-libraries=0
  
        for the 3.2 version on hmem (lapack is not installed --> installed through petsc; you have to submit a job, see 5b below):
            ./config/configure.py --with-debugging=0 --download-f-blas-lapack=1 --with-mpi-dir=/usr/local/openmpi/1.5.3-gcc-4.4.4/ --with-pic --with-shared-libraries=1 --with-clanguage=cxx --with-batch --known-mpi-shared-libraries
        for the 3.3 version on cm3011:
            ./config/configure.py --with-debugging=0 --with-blas-lapack-dir=/usr/lib64 --with-mpi-dir=/usr/lib64/ --with-pic --with-shared-libraries=1 --download-mumps=yes --download-scalapack=yes --download-blacs=yes --download-parmetis=yes --download-metis=yes --download-superlu_dist=yes

           If you have the error "Cannot run executables created with FC", add (openSuSE 12.2, petsc 3.3-p3):
             --with-fc=0

        My last configuration (with superlu and mumps):
            ./config/configure.py --with-debugging=0 --with-blas-lapack-dir=/usr/lib64 --with-mpi-dir=/usr/lib64/ --with-pic --with-shared-libraries=1 --download-mumps=yes --download-scalapack=yes --download-blacs=yes --with-batch --known-mpi-shared-libraries=0 --download-parmetis=yes --download-metis=yes --download-superlu_dist=yes

           put --with-debugging=1 for a debug build
           for more petsc configure options type: ./config/configure.py -help
           On clusters (6/12 or 4/8 cpu) add --with-batch --known-mpi-shared=0

         5b) On the CECI clusters (hmem and lemaitre2) the configure step creates a file that you have to submit to the job manager. To this end type:
          cp gmsh/projects/NonLinearSolver/clusterScript/ceci_petsc.sh $PETSC_DIR
          sbatch ceci_petsc.sh
          Then wait until the job ends (the last line of the created file "petsc-install.out" has to be "Finish"). The file ./reconfigure-linux-gnu-c-opt.py is then created; launch it by typing:
           ./reconfigure-linux-gnu-c-opt.py
          
        6) If the configuration failed, ask or search... Otherwise type the following command (if it is suggested by configure, copy and paste it):
            make PETSC_DIR=<installation folder> PETSC_ARCH=linux-gnu-c-opt all

        7) Perform the tests

        8) Go to your home folder. Create a file ".petscrc", which allows you to pass options to petsc, and add the two lines:
                -ksp_type preonly
                -pc_type lu
           If these options are set before the tests, one of them (MPI on 2 cpus) can fail, but everything is OK (it seems OK with petsc 3.3)

         8b) To perform parallel computations with a direct solver, add the following line to your .petscrc:
                -pc_factor_mat_solver_package mumps
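
            Putting 8) and 8b) together, a typical .petscrc could thus contain the following lines (the mumps line only makes sense if petsc was configured with --download-mumps=yes):
                -ksp_type preonly
                -pc_type lu
                -pc_factor_mat_solver_package mumps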

        9) Unfortunately (with version 3.1-p7 and openSUSE 11.3) there is a problem with the library libmpi_f77.so.0. To fix it, copy the library from the openmpi installation into $PETSC_DIR/$PETSC_ARCH/lib. Assuming an openSUSE installation, type the command:
           sudo cp /usr/lib64/mpi/gcc/openmpi/lib64/libmpi_f77.so.0 <absolute path to petsc installation folder>/linux-gnu-c-opt/lib

           If you use the alternative configuration (with mpich) this problem appears to be solved. If petsc is installed with mpich, uninstall the other mpich and openmpi versions of your computer via your software management program (yast on openSUSE) and edit your .bashrc file:
              cd
              vi .bashrc
              export PATH=$PATH:<petsc installation path>/<petsc arch>/bin
              export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:<petsc installation path>/<petsc arch>/lib
              :wq (to quit vi)
              bash (or exit, to reload .bashrc in your shell)
           By proceeding like this, the mpi version used is the one installed by petsc, which ensures that gmsh uses the same version of mpi.
  

          
    ° GotoBLAS (Advanced)
       
       1) Download the source: http://www.tacc.utexas.edu/tacc-projects/gotoblas2/downloads/

       2) untar it in an installation folder

       3) Edit Makefile.rule to choose the compilation options
             be aware that the TARGET option has to be NEHALEM
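             For example, the corresponding line of Makefile.rule would be uncommented and set as follows (assuming a Nehalem machine, as stated above):
                 TARGET = NEHALEM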

       4) Type the make command

       5) Unfortunately there is a problem when gmsh or dgshell links the libraries. To fix it, the following 2 libraries have to be copied from petsc:
              libfblas.a
              libflapack.a


B) Now you can install your project(s) (dgshell and/or dG3D and/or msch), NonLinearSolver and gmsh at the same time. gmsh is built in the NonLinearSolver folder, because NonLinearSolver is in fact also a gmsh project that has to be included in your project(s). (To be proper, NonLinearSolver should be outside projects, in the gmsh folder, but this is apparently not possible.) (These instructions have to be followed on the clusters too.)

       0) On a cluster, log in with ssh username@cluster, edit your .bashrc file and add (on your own PC you already did this for the petsc installation):
              export PETSC_DIR=/petsc/petsc-3.1-p7 (/petsc/petsc-3.2-p6 on cm3015)
              export PETSC_ARCH=linux-gnu-c-opt
              export SLEPC_DIR=/slepc/slepc-3.2-p4 (on cm3015 only, to use slepc)

          exit the terminal and log in again (to reload your .bashrc file)
          On cm3012 you must add these two lines to use mpi (if you don't want mpi, set ENABLE_MPI=OFF at point 4):
              export PATH=$PATH:/petsc/petsc-3.1-p7/linux-gnu-c-opt/bin
              export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/petsc/petsc-3.1-p7/linux-gnu-c-opt/lib
      
           On the next lines, APPS = dgshell, dG3D or msch (or all 3 if you want to compile all 3)
       0b) On hmem, edit projects/APPS/CMakeLists.txt and replace PYTHON_INCLUDE_PATH by PYTHON_INCLUDE_DIR

       1) Retrieve the sources via svn: svn co https://geuz.org/svn/gmsh/trunk gmsh  then type your username and password (ask for them)

       2) Go to the folder gmsh/projects/APPS, or gmsh/projects/cm3apps to compile different projects at the same time

       3) create a build folder and go into it:
              mkdir release (or debug; on a cluster: mkdir debug release)
              cd release

       4) Type ccmake .. and set the options:
             CMAKE_BUILD_TYPE = release (or debug)
             ENABLE_MPI = ON (for the parallel implementation)
             ENABLE_TAUCS = OFF (or install it, but it's hard)
             ENABLE_SLEPC = OFF
             ENABLE_WRAP_PYTHON = ON
             if you use cm3apps you have to set to ON the projects you want to install (no project is installed by default except the solver):
             ENABLE_DGSHELL = ON/OFF
             ENABLE_DG3D = ON/OFF
             ENABLE_MSCH = ON/OFF

             On hmem it fails to find the mpi version used for petsc, so you have to edit the configuration yourself
             (press t to go into advanced mode; you have to press c once after setting ENABLE_MPI = ON):
             MPIEXEC = /usr/local/openmpi/1.5.3-gcc-4.4.4/bin/mpiexec
             MPIEXEC_MAX_NUMPROCS = 2
             MPIEXEC_NUMPROC_FLAG = -np
             MPI_COMPILER = /usr/local/openmpi/1.5.3-gcc-4.4.4/bin/mpic++
             MPI_EXTRA_LIBRARY = /usr/local/openmpi/1.5.3-gcc-4.4.4/lib/libmpi.so;/usr/local/openmpi/1.5.3-gcc-4.4.4/lib/libopen-rte.so;/usr/local/openmpi/1.5.3-gcc-4.4.4/lib/libopen-pal.so
             MPI_INCLUDE_PATH = /usr/local/openmpi/1.5.3-gcc-4.4.4/include
             MPI_LIBRARY = /usr/local/openmpi/1.5.3-gcc-4.4.4/lib/libmpi.so
             MPI_LINK_FLAGS = -Wl,--export-dynamic


           (Advanced: to use GotoBLAS, press t for advanced mode and replace BLAS_blas_LIBRARY by <path of GotoBLAS installation>/libgoto2_nehalemp-1.13.so;
            the name of the library can vary with your GotoBLAS version)
            On the clusters, press t for advanced mode and replace BLAS_blas_LIBRARY by /gotoblas/GotoBLAS2/libgoto2_nehalemp-1.13.so
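
           Instead of setting these options interactively in ccmake, they can also be passed directly on the command line; a sketch using the option names listed above (adapt the ON/OFF values to the projects you want; in that case the interactive steps 5) to 7) below are done automatically by cmake):
               cmake .. -DCMAKE_BUILD_TYPE=release -DENABLE_MPI=ON -DENABLE_TAUCS=OFF -DENABLE_SLEPC=OFF -DENABLE_WRAP_PYTHON=ON -DENABLE_DGSHELL=OFF -DENABLE_DG3D=ON -DENABLE_MSCH=OFF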
       5) press c to configure

       6) press e

       7) press c. If the report shows some errors, I made a mistake in the installation description; please ask or install the missing packages. If no error appears, press e then g to generate

       8) If you want a CodeBlocks project file, type
               cmake .. -G "CodeBlocks - Unix Makefiles"

       9) Type
               make -j<number of cpus>  to generate gmsh and dgshell, or  make dgshell -j<number of cpus>  to compile just dgshell

 
         If you have an error during the compilation about mpi.h not being found for petsc (can occur with petsc 3.2), you must define in your .bashrc the variable NLSMPIINC, which has to give the path of your mpi include directory (e.g. /usr/lib64/mpi/gcc/openmpi/include), as shown below. Then type bash in your compilation shell and compile again.
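          For example (the path is the openSUSE one given above; adapt it to your mpi installation):
              export NLSMPIINC=/usr/lib64/mpi/gcc/openmpi/include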

      10) If errors occur, ask or fix them
          On lemaitre2 there is a problem with the compilation of the wrapped cxx files of gmsh (not the ones of dgshell)
          To compile them you have to open the following files (they are created during the compilation):
          <folder where you compile>/NonLinearSolver/gmsh/wrappers/gmshpy/gmshCommonPYTHON_wrap.cxx
          <folder where you compile>/NonLinearSolver/gmsh/wrappers/gmshpy/gmshGeoPYTHON_wrap.cxx
          <folder where you compile>/NonLinearSolver/gmsh/wrappers/gmshpy/gmshMeshPYTHON_wrap.cxx
          <folder where you compile>/NonLinearSolver/gmsh/wrappers/gmshpy/gmshNumericPYTHON_wrap.cxx
          <folder where you compile>/NonLinearSolver/gmsh/wrappers/gmshpy/gmshPostPYTHON_wrap.cxx
          <folder where you compile>/NonLinearSolver/gmsh/wrappers/gmshpy/gmshSolverPYTHON_wrap.cxx
          and add the following line at the beginning of each file (a shell loop doing this is given below):
          #include <stddef.h>
          
It seems that this is a problem with the version of gcc (found on the internet).
In <folder where you compile>/src/dgshell_wrap.cxx you may have to replace #include<Python.h> by #include</usr/include/python2.6/Python.h>
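
          Instead of editing the six wrapper files by hand, the include can be prepended with a small shell loop; a sketch, to be run from the folder where you compile (it only automates the manual fix described above):
              for f in NonLinearSolver/gmsh/wrappers/gmshpy/gmsh*PYTHON_wrap.cxx; do
                  sed -i '1i #include <stddef.h>' $f
              done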

      11) Edit your .bashrc file to add the folders which contain the gmsh executable and library and the APPS library
          here APPS is the name of the project(s) you compiled (dgshell, dG3D or msch); add the path of each project:
              export PYTHONPATH=$PYTHONPATH:$PATH:<path of the APPS library (APPS.py, APPS.so)>
              export PYTHONPATH=$PYTHONPATH:$PATH:<path of the cm3apps library (gmshpy.py, normally in the NonLinearSolver/gmsh folder of your compilation folder)>
              export PATH=$PATH:<path of the gmsh executable>
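
          As an illustration, for dG3D compiled with cm3apps in a release folder, the lines could look like this (the paths are hypothetical; adapt them to your own compilation folder and project):
              export PYTHONPATH=$PYTHONPATH:$HOME/gmsh/projects/cm3apps/release/dG3D
              export PYTHONPATH=$PYTHONPATH:$HOME/gmsh/projects/cm3apps/release/NonLinearSolver/gmsh
              export PATH=$PATH:$HOME/gmsh/projects/cm3apps/release/NonLinearSolver/gmsh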

          Note that if you choose CMAKE_BUILD_TYPE = debug (or Debug), the name of the .py file is dgshellpyDebug.py; otherwise it is dgshellpy.py. This allows you to switch easily from one version
          to the other: you just have to change the import at the beginning of your example.py file

      12) Go to the gmsh/projects/dgshell/battery folder and type
              python battery.py --launch=  to perform the test

      13) After the test, look at report.csv. If large errors appear or some results are missing, please ask

      14) Enjoy it ! 

    

        
