Support #20842

Install ROOT 6.14 and gcc-8.2.0 as a UPS product

Added by Nikolai Mokhov over 2 years ago. Updated about 2 years ago.

DUNE, LBNE, Mu2e, g-2

MARS15 is supported and actively used by four MARS groups on Fermigrid: marslbne, marsmu2e, marsgm2, and marsaccel. To exploit the powerful features of the current ROOT and several other general-purpose modules, the new release of the MARS15 code requires a release supported by the latest gcc 8.2 compiler and the latest stable ROOT version, 6.14/00. I would greatly appreciate it if you could install ROOT 6.14/00 and gcc-8.2.0 on Fermigrid as UPS products.

It is important that these be accessible by all four MARS groups, either by installing them four times in


or by creating four links in the above areas to a single installation, to save disk space and installation effort (just my idea).
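A rough sketch of the single-installation-plus-links idea (all paths below are hypothetical placeholders, since the actual group areas are not listed above):

```shell
# Sketch only: one real installation, symlinked into each group's area.
# BASE and every path below are hypothetical; substitute the real group areas.
BASE=$(mktemp -d)                              # stand-in for the real filesystem
mkdir -p "$BASE/shared/root-6.14.00"           # the single real installation
for grp in marslbne marsmu2e marsgm2 marsaccel; do
    mkdir -p "$BASE/$grp"
    ln -sfn "$BASE/shared/root-6.14.00" "$BASE/$grp/root-6.14.00"
done
```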

Thank you.



#1 Updated by Nikolai Mokhov over 2 years ago

Sorry, I forgot to mention that when we do the ROOT installation ourselves - preceding the MARS15 code installation - we do it in the following sequence:
1. Set up the compiler (gcc-8.2.0 in this case) to be used for the rest
2. Install GSL (version 2.5 in this case) as a separate package, not as a built-in ROOT library
3. Install ROOT (version 6.14/00 in this case) with at least the following two components (libraries) enabled: gdml (requires python version > 2.7) and fortran (minicernlib).
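A sketch of that sequence as shell commands, assuming a from-source build (the `gdml`, `fortran`, and `builtin_gsl` CMake options exist in ROOT 6.14; the install prefixes and source locations below are placeholders, not the real ones):

```shell
# 1. Set up the compiler first so both GSL and ROOT are built with gcc 8.2.0.
setup gcc v8_2_0                        # UPS product; the -z database path is site-specific

# 2. Build GSL 2.5 as a standalone package, not ROOT's built-in copy.
tar xf gsl-2.5.tar.gz && cd gsl-2.5
./configure --prefix=/opt/gsl-2.5       # hypothetical prefix
make && make install
cd ..

# 3. Configure ROOT 6.14/00 with gdml and fortran enabled, using the external GSL.
#    gdml needs a sufficiently new Python to be visible to CMake.
mkdir root-build && cd root-build
cmake ../root-6.14.00 \
    -Dgdml=ON \
    -Dfortran=ON \
    -Dbuiltin_gsl=OFF \
    -DGSL_ROOT_DIR=/opt/gsl-2.5 \
    -DCMAKE_INSTALL_PREFIX=/opt/root-6.14.00
cmake --build . -- -j8
cmake --build . --target install
```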


#2 Updated by Kyle Knoepfel over 2 years ago

  • Status changed from New to Feedback

We'd like to discuss exactly what you need; we will contact you about this.

#3 Updated by Nikolai Mokhov over 2 years ago

This is from Igor Tropin for the MARS group:

It is important to stress that we use ROOT as a set of libraries, not as an application.

ROOT v6.14 is requested for installation on the grid because it is the current production version of the package, needed by the current MARS15. Three minor releases have been issued so far: v6.14.00, v6.14.02, and v6.14.04. I have tested the LBNF MARS and other applications with all of them on my local Linux machine with GCC v8.1. The tests showed the following: when the LBNF MARS-based application is linked with the ROOT libraries of v6.14.02 or v6.14.04, memory corruption is detected in a function belonging to the TGeo package.
Because of this, we are asking you to install v6.14.00, which did not show any issues, rather than the latest v6.14.04.

gcc 8.X is requested for installation because C++17 features are used in the implementation of the shared-memory pool and shared-memory allocator, newly developed code aimed at increasing MARS I/O efficiency on HPC clusters. C++17 traits are also used in ROOT, so it would be good to be able to compile modern C++ code on the grid. There were also problems compiling the current MARS code with gcc 4.8, the version currently in use on Fermigrid: a C++ module for the beam-line builder that used lambda expressions and "auto" declarations compiled under GCC 4.8 without any warnings, but produced incorrect results. The same MARS15 code compiled with gcc 8.1 works fine. Therefore, we are requesting installation, as a UPS product, of a release supported by the gcc 8.2 compiler.

A working prototype of the requested installations - GSL-2.5, CERNLIB-2006, and ROOT 6.14.00 - has been created for the marslbne group; the installation directory is /dune/marslbne/Tools.
GCC 8.2, used for the compilation, was installed from the local UPS with the following setup command:
. /cvmfs/

setup gcc v8_2_0 -z /dune/app/marslbne/ups

Compilation of CERNLIB requires the imake command, which is not available on the dunegpvm05 node. That is why I compiled CERNLIB on my local Linux machine (with gcc v8.1) rather than on dunegpvm05. The CERNLIB installation is relocatable and can safely be used with other libraries compiled with gcc 8.2.

The entire set of environment variables allowing the use of the installed tools for compilation of MARS itself and MARS-based applications is given below:

  1. marslbne
    . /cvmfs/
  2. GCC
    setup gcc v8_2_0 -z /dune/app/marslbne/ups
    export FC=/dune/app/marslbne/ups/gcc/v8_2_0/Linux64bit+2.6-2.12/bin/gfortran
    export CC=/dune/app/marslbne/ups/gcc/v8_2_0/Linux64bit+2.6-2.12/bin/gcc
    export CXX=/dune/app/marslbne/ups/gcc/v8_2_0/Linux64bit+2.6-2.12/bin/c++
    export TOOLS=/dune/app/marslbne/Tools
  3. ROOT
    . $TOOLS/bin/
    export CERN=$TOOLS/cernlib
    export CERN_LEVEL=2006
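Collected into a single sourceable script, the steps above would look roughly like this (a sketch using only the complete paths given; the truncated ". /cvmfs/" and ". $TOOLS/bin/" source lines are noted but left out, since their full paths were not given):

```shell
# setup-mars-tools.sh -- source this file; sketch based on the steps above.
# NOTE: the truncated ". /cvmfs/..." and ". $TOOLS/bin/..." source lines from
# the original list must still be added with their full, site-specific paths.
setup gcc v8_2_0 -z /dune/app/marslbne/ups

GCC_BIN=/dune/app/marslbne/ups/gcc/v8_2_0/Linux64bit+2.6-2.12/bin
export FC=$GCC_BIN/gfortran
export CC=$GCC_BIN/gcc
export CXX=$GCC_BIN/c++

export TOOLS=/dune/app/marslbne/Tools
export CERN=$TOOLS/cernlib
export CERN_LEVEL=2006
```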


#4 Updated by Kyle Knoepfel over 2 years ago

The SciSoft team is not responsible for installing any products anywhere. That is the responsibility of whoever manages the installations for the machines you are using. GCC 8.2 has been provided for both SLF6 and SL7 machines. See here for the tarballs:

As for which version of ROOT to use: have you filed a bug report with the ROOT team regarding the memory corruption you detected in TGeo?

The only version of ROOT that we have packaged for use is 6.14/04, which is known to have at least one problem that affects art-using experiments. Our strong preference is to wait until we have the fixes required, and then we can package a new version of ROOT.

Is it possible for you to use an older version of ROOT while you are waiting for the 6.14 bugs to be fixed?

#5 Updated by Rob Kutschke over 2 years ago

Mu2e can look after the deployment to Mu2e cvmfs.

We request a manifest for ROOT so that we can use the pullProducts system to pull ROOT and all of the products it depends on, including the compiler. Failing that, we would appreciate a clear statement of which products are needed so that we know which tar.bz2 files to curl and unpack.
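For reference, a typical pullProducts invocation has roughly this shape (the bundle name, OS tag, and qualifiers below are assumptions for illustration only; the actual manifest name would come from SciSoft):

```shell
# Hypothetical sketch; adjust the OS tag, bundle name, version, and qualifiers
# to whatever manifest SciSoft actually publishes.
wget https://scisoft.fnal.gov/scisoft/bundles/tools/pullProducts
chmod +x pullProducts
./pullProducts /path/to/products slf7 root-v6_14_00 e19 prof
```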

We are agnostic about root versions - you can sort that out with Nikolai.

#6 Updated by Igor Tropin over 2 years ago

If gcc v8_2_0 were accessible via the UPS setup command for all users in all MARS groups (mu2e, marsaccel, etc.), that would be a first step toward the goal. Installing the other components could be the next step.

Question: is it possible to run Docker or Singularity containers on FermiGrid? I remember seeing presentations about half a year ago where that capability was introduced.

#7 Updated by Kyle Knoepfel over 2 years ago

Please contact your system administrator for installing GCC 8.2 on the machines you need.

#8 Updated by Kyle Knoepfel over 2 years ago

  • Status changed from Feedback to Resolved

Since GCC 8.2.0 has been installed, we would like to close this issue.

#9 Updated by Kyle Knoepfel about 2 years ago

  • % Done changed from 0 to 100
  • Status changed from Resolved to Closed

We are closing this issue due to no additional feedback.
