The MicroBooNE Guide to Using LArSoft

The following is a quick guide to the things you, a member of MicroBooNE, will need to do in order to run and develop applications in LArSoft. It assumes:

  • You have all your computing privileges and accounts, including an account on the uboone gpvm nodes. (If you don’t, see the new-user documentation, in particular the software section.)
  • You are working on one of the uboone gpvm nodes. Most of these instructions should work on other machines/sites as well, if they are configured in the standard way, but there are no guarantees. Talk to your system/local administrator if you’re having trouble, to make sure there are no differences in the basic setup.
  • You have a working knowledge of UNIX.
  • You are using the bash shell (the csh equivalents should be clear).
  • You know something about the general structure of LArSoft. If you don’t know what a fcl file is, and/or don’t know how to make/modify one, then you will need a more basic tutorial than what’s here.

Run from a tagged LArSoft release

First, do

source /grid/fermiapp/products/uboone/

Then, to list the available versions of the MicroBooNE software, do
ups list -aK+ uboonecode

You should see a list (possibly a long one…) of versions of the code to set up. Each line should look something like this:
"uboonecode" "v04_35_00" "Linux64bit+2.6-2.5" "e9:prof" ""

In the above example output, "v04_35_00" is the version, "Linux64bit+2.6-2.5" is the operating system, and "e9:prof" is a "qualifier" used to select variants, such as whether the code was compiled with optimization, profiling, or debugging symbols.
Pick the version you want to set up (probably the latest, and probably the profiled version indicated by the "prof" qualifier), and do something like this (with the version and qualifiers you want):
setup uboonecode v04_35_00 -q e9:prof

This will automatically set up larsoft as well, so don’t worry about anything else. You can think of “uboonecode” as MicroBooNE’s own larsoft. You should then be able to run your usual larsoft job:
lar -c my_fcl_file.fcl ...

Run the latest/greatest/possibly-not-100%-stable version of LArSoft

(i.e. the ‘develop’ branch, or HEAD of the repository)

Like above, the first thing to do is

source /grid/fermiapp/products/uboone/

Then, you should set up the “nightly” version of the uboonecode (with the qualifiers you would like):
setup uboonecode nightly -q e4:prof

And now you should be able to run larsoft:
lar -c my_fcl_file.fcl ...

Develop and build in LArSoft, based on the latest development branch

Initial setup

Like above, the first thing to do is

source /grid/fermiapp/products/uboone/

But now, things get different! First, setup the nightly build version of LArSoft
setup larsoft nightly -q e4:prof

Alternatively, some experts prefer

setup uboonecode {latest version} -q {qualifiers}
where {latest version} is the tag for the version you want (usually the latest, for development) and {qualifiers} is the qualifier string you want. You will still be pulling down the "develop" branch in the mrb g step later.

Now, you should make a new directory where you intend to put your code (not home, as the libraries you build can get large). Let’s say that new directory is going to be called larsoft_mydev. Then do

mkdir larsoft_mydev
cd larsoft_mydev

Then, inside the new directory, do
mrb newDev #  -v v04_35_01 -q debug:e5 -T debug.slf6.v04_35_01 -f 
source localProducts_larsoft_nightly_e4_prof/setup

where the options after the comment in the mrb newDev line are an example for the case where it's time to upgrade to a new release (from, say, v04_35_01) while keeping your srcs code!

Alternatively, some experts prefer to use

mrb newDev -p
to upgrade to a new release while keeping your srcs code. This makes a new localProducts_larsoft_{version}_{qualifiers} directory, where {version} and {qualifiers} will be whatever you used in your setup uboonecode {version} -q {qualifiers} command earlier. This may not work with "nightly", but works great with the setup uboonecode {latest version} approach.

Now, pull down uboonecode, and any other packages you think you will need (like, for example, larreco)

cd srcs
mrb g uboonecode
mrb g larreco

Make a new branch

Now, if you want to just build, see below. However, I bet you want to develop some stuff. Before touching the code, do this in the srcs/<package> directory (e.g. srcs/larreco/)

git flow feature start $USER_testFeature

where $USER is your username, and testFeature is up to you. You should now see that you are on this new branch (feature/$USER_testFeature) if you issue git branch -a. Make all of your changes, and commit as you will. You will need to create a new branch like this for every repository/package in which you are changing code! This is all done manually now, but expect nice scripts that do this for you in the future.
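If git flow is not available, the plain-git equivalent is a branch created off develop. Here is a sketch in a throwaway sandbox repository (the username alice and the branch name are hypothetical; in real work you would be inside srcs/<package>):

```shell
# Scratch repository for illustration only
cd "$(mktemp -d)"
git init -q demo
cd demo
git config user.email you@example.com
git config user.name "You"
git commit --allow-empty -q -m "initial commit"
git checkout -q -b develop                     # git flow features branch off develop

# Plain-git equivalent of: git flow feature start alice_testFeature
git checkout -q -b feature/alice_testFeature develop

git rev-parse --abbrev-ref HEAD                # confirm which branch you are on
```

Running `git branch -a` at this point lists both develop and feature/alice_testFeature, with the feature branch checked out.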

You can share this feature branch with the remote repository by typing:

git push origin feature/$USER_testFeature

This is optional and not necessary for you to develop code, but useful if you want to share your feature branch with other developers.

Building your code

Go to the build directory in your development area,
cd larsoft_mydev/build

and then...
  • if this is your first time building with these local packages (i.e. if you add or remove a local package, you should do this again), do
    mrb i -j4
    (this command is short for mrb install -j4). The -j4 flag tells it to build on 4 cores, if available. Check for build errors! It may be easier to break that step into two parts---mrb build -j4; mrb install---in order to more easily check for compilation issues.
  • if you've already done the steps above, and no new local packages are there, you can do
    make install -j4
    which should run faster.

Set up your code to run

After everything is built and installed, go to the base development directory:

cd larsoft_mydev

and do

mrbslp

to set up your newly built code. (‘slp’ stands for setup local products.) Now you can run your lar job:
lar -c my_fcl_file.fcl ...

You should need to do this mrbslp command only once during a development session, unless you add/remove products from your development area.

While you are developing…

In git, you have your own “repository” in your development area. You can and should make frequent commits to it. Note that this, unlike svn, won’t change the global repository that everyone uses, for two reasons:
  1. because you are good at following best practices, you have made your own branch in which all of your changes are going into; and,
  2. you are only making commits/changes to your own local repository.

So, suppose you have changed (or created!) a file in some srcs/<package> area. To commit this change, do

git add <changed_file>
git commit -m 'message about the commit here; if you do not use -m, it will open a text editor and allow you to make a very long commit message'

You can add multiple files at once (git add my_file.h) and use just one commit message for them. Or git add my_dir, and it adds recursively everything in directory my_dir.

Also incredibly helpful is the git status command, which will tell you all of the files that you have changed, new files that you have created, and any files you have staged---i.e., done git add on but not yet committed.
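The add / status / commit cycle above can be sketched end to end in a throwaway repository (file and directory names here are just examples):

```shell
# Scratch repository to illustrate staging and committing
cd "$(mktemp -d)"
git init -q .
git config user.email you@example.com
git config user.name "You"

echo "int x;" > my_file.h
mkdir my_dir
echo "notes" > my_dir/notes.txt

git add my_file.h              # stage a single file
git add my_dir                 # stage everything under my_dir, recursively
git status --short             # 'A ' marks files staged but not yet committed
git commit -q -m 'add my_file.h and my_dir'
git status --short             # now prints nothing: the working tree is clean
```

Note that none of this touches any remote repository; the commit lives only in your local clone until you push.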

Update your feature branch with updates to the repository

It’s likely that people will make updates to the repository (develop branch) while you are working on your feature. It’s a good idea to incorporate those regularly, and it’s a requirement that you do that before pushing in your own commits. To do that, go to the package you want to update and rebase

cd larsoft_mydev/srcs/<package>
git fetch origin
git rebase origin/develop

You may then need to address rebase conflicts (and once you do, commit those changes).
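The effect of the rebase can be seen in a self-contained sandbox, where a local develop branch stands in for origin/develop (branch and file names are hypothetical):

```shell
# Sandbox: a local 'develop' stands in for origin/develop
cd "$(mktemp -d)"
git init -q .
git config user.email you@example.com
git config user.name "You"
git commit --allow-empty -q -m "base"
git checkout -q -b develop

git checkout -q -b feature/alice_fix develop   # start a feature branch
echo "fix" > fix.txt
git add fix.txt
git commit -q -m "my fix"

git checkout -q develop                        # meanwhile, develop moves on
echo "other" > other.txt
git add other.txt
git commit -q -m "someone else's commit"

git checkout -q feature/alice_fix
git rebase -q develop                          # replay "my fix" on top of the new develop
git log --oneline                              # both commits, with "my fix" on top
```

After the rebase, the feature branch contains everyone else's work plus your commit replayed on top, which is exactly the state you want before merging back.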

Merging your changes back into the repository

Once you have tested your feature, and are ready for the world to use it, do the following:

cd larsoft_mydev/srcs/<package>
git fetch origin
git rebase origin/develop
git flow feature finish

That will take your feature branch and merge it back into your develop branch (remember, you have your own repository!). Now you need to push your changes in develop to the main (origin) repository:
git push origin develop

and congratulations, you've unleashed your terror on the LArSoft world!

Hot fix

We will not pretend that there is no shortcut around all this manufacture of extra branches and git flow usage. Bare git commands can be used to make quick changes. Namely, if you know you are going to fix one piece of code in, e.g., larcore/SimpleTypesAndConstants/PhysicalConstants.h, maybe there's no reason to make the extra branch. You will fix develop, and that's it. Here's how to do that. Remember the first rule: never push to the master branch.

cd larsoft_mydev/srcs
mrb g larcore
# make your edits, build, test the code...
cd larcore
git pull origin develop  # get any git pushes that have happened since you last pulled
git branch -a
git checkout develop     # if needed, to get onto the develop branch (first rule: never push to master)
git commit -a -m "Make hbar = h/2pi"  # commit your changes, ready to push
git push origin develop  # push your changes back to the origin

Removing a package from your local area

Finally, after you’re done working on it, you’ll probably want to remove a package from your area. To do that for <package>

cd larsoft_mydev/srcs
rm -rf <package>
mrb uc

The last line removes references to that package from the build script. Remember, before trying to build again, you should do mrb z; mrbsetenv in the build directory (just to be safe!).

Next Steps

The above should be enough to do what you are typically used to doing on a day-to-day basis, and it’s a good place to start. Once you think you have those down, there are some further things you may want to be able to do, outlined below.

Collaborating with others on a feature branch: advanced gitology

One of the advantages of git is how its decentralization can make collaborating with others an easier, more sustainable task.

First, decide which user’s repository will be used as the base for this new feature. Collaborators will need write permission to this user’s development area (group write permissions are on by default). Suppose that user is bluebeard, and they have development code in /uboone/app/users/bluebeard/larsoft_dev. Bluebeard should make a new branch in the package(s) of interest, following the instructions above, except using "shared" in the branch name instead of bluebeard. Let’s say that branch is called feature/shared_newFeature, and the package is larreco.

Other collaborators should set up Bluebeard’s repository as a new remote:

cd larsoft_mydev/srcs
mrb g larreco
cd larreco
git remote add blue /uboone/app/users/bluebeard/larsoft_dev/srcs/larreco/

Now you have a remote repository, blue, from which you can fetch code and to which you can push it. In fact, git treats this remote repository blue the same way it treats the main repository, nicknamed origin. However, the commands for interacting with it may look a little different in spots, because getting and using branches from a non-origin remote is not done using git flow. So, be a little careful.
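The multiple-remote idea can be tried out entirely in a sandbox, with a bare repository standing in for the collaborator's area (all names below are hypothetical):

```shell
# Two sandbox repositories: 'blue.git' plays the collaborator's repository
cd "$(mktemp -d)"
top=$PWD
git init -q --bare blue.git

git init -q mywork
cd mywork
git config user.email you@example.com
git config user.name "You"
git commit --allow-empty -q -m "initial"
git push -q "$top/blue.git" HEAD:refs/heads/feature/shared_newFeature

git remote add blue "$top/blue.git"            # register the extra remote
git fetch -q blue
git branch -r                                  # shows blue/feature/shared_newFeature
```

As with origin, `git fetch blue` creates remote-tracking branches (blue/...) that you can branch from or rebase onto.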

Back to instructions: first, you need to make a new branch based on Bluebeard's shared branch

git fetch blue
git checkout -b feature/$USER_newFeature blue/feature/shared_newFeature

With that done, you can now make changes and commit to your own branch, like you do. You should occasionally, for good measure, update your branch to reflect other people's pushes into blue:
git fetch blue
git rebase blue/feature/shared_newFeature

Then, when you're ready to push a change for everyone else to use, after committing all your changes and doing the above, do
git push blue feature/$USER_newFeature:feature/shared_newFeature

This pushes your branch (named feature/$USER_newFeature) into the branch named feature/shared_newFeature on remote blue. A little cumbersome, I know, but doable (and we'll try to automate this with scripts in the future).
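Here is a self-contained sketch of that local:remote refspec push, using a bare sandbox repository as the stand-in for Bluebeard's (the user alice and the branch names are hypothetical):

```shell
# Sandbox: push a local branch into a differently named branch on remote 'blue'
cd "$(mktemp -d)"
top=$PWD
git init -q --bare blue.git                    # stand-in for Bluebeard's repository

git init -q work
cd work
git config user.email you@example.com
git config user.name "You"
git commit --allow-empty -q -m "shared base"
git push -q "$top/blue.git" HEAD:refs/heads/feature/shared_newFeature
git remote add blue "$top/blue.git"
git fetch -q blue

# Branch off the shared branch, commit, then push under the shared name
git checkout -q -b feature/alice_newFeature blue/feature/shared_newFeature
git commit --allow-empty -q -m "alice's change"
git push -q blue feature/alice_newFeature:feature/shared_newFeature
```

The `local:remote` refspec is what lets everyone keep a personally named branch while feeding one shared branch on the remote.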

That's what everyone but Bluebeard should do. Bluebeard should do something very similar in his own repository, except without dealing with the remote:

git checkout -b feature/bluebeard_newFeature feature/shared_newFeature
git rebase feature/shared_newFeature
git push . feature/bluebeard_newFeature:feature/shared_newFeature

Finally, when everything is done, Bluebeard will need to move into and close out the feature branch, like normal:

git checkout feature/shared_newFeature
git fetch origin; git rebase origin/develop
git flow feature finish
git push origin develop

And, everybody can then delete their local branches

git branch -d feature/$USER_newFeature

Creating a new module (or any class) in an existing repository

Remember our jargon: modules are the files that live in a package (top level directory) inside a repository. Classes are the .cxx/.h pairs that one uses generally to make modules or Services.

Imagine you're working in your directory in your git clone'd, pull'd version of larsim/LArG4. LArG4 is a package -- it's a top-level directory underneath a project, which makes it a package in our parlance. Say you would like to create histograms or a TTree that characterizes some truth-level information. (You're somehow unhappy with the module that already does this and want to start fresh.) You thus want an EDAnalyzer module.

To accomplish this you would proceed like any right-thinking physicist: cp another module's file from somewhere else and pull out all the un-needed bits, leaving the key structure of the module -- the methods you must override to make this thing work. You would then declare, initialize, and Fill your histograms/TTrees in the right places. (You don't need to Write() them, because the TFileService in ART takes care of that, as usual.)

Now just go back to your build directory and mrb z; mrbsetenv; mrb i -j4. You hopefully don't need to touch any of the files that do the building, because the CMakeLists.txt machinery automatically drills down and finds your new .cc file. However, if you added code that needs access to header files not already included in larsim/LArG4/CMakeLists.txt, you will need to edit that file to add them. The same goes if you call new functions/methods from libraries not already listed in that CMakeLists.txt.

Creating a new package in an existing repository

Okay, instead, let's add a new directory under the existing repository, itself containing one or more modules and perhaps other .cpp and .h files. First, to get it to compile, your Makefile needs to know about it -- and as yet, it does not. Remember, the cmake step that happens in the first layer of an mrb b or mrb i takes your CMakeLists.txt files and creates the Makefiles. That is what happened above, and merely adding a file in an existing directory, as we did there, did not complicate matters too much. Here, we have at least one other step: we'll need to inform the top-level CMakeLists.txt that you have a new directory into which to drill down. You'll then need a CMakeLists.txt in that directory too.

Let's do this in uboonecode. Add a new add_subdirectory(MyNewDirectory) line in uboonecode/uboone/CMakeLists.txt. That is, you are directing it to drill down into uboonecode/uboone/MyNewDirectory to look for the next CMakeLists.txt file, which generates the Makefile to build all the fabulous code you've put there. cp a CMakeLists.txt file that is nearby, and use the cetbuildtools macros already in it (and in other CMakeLists.txt files) to specify any new include directories to compile against and libraries to link against.
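The file manipulation involved can be sketched like this. All directory and file names here are hypothetical, the parent CMakeLists.txt is faked with an empty file, and the one-line CMakeLists.txt body is only a placeholder -- in real use you would copy and adapt a neighboring package's file with its full set of cetbuildtools macros:

```shell
# Sandbox standing in for the srcs/ area
cd "$(mktemp -d)"
mkdir -p uboonecode/uboone
touch uboonecode/uboone/CMakeLists.txt          # stand-in for the existing file

mkdir uboonecode/uboone/MyNewDirectory
# Direct the build to drill down into the new directory
echo 'add_subdirectory(MyNewDirectory)' >> uboonecode/uboone/CMakeLists.txt

# Placeholder CMakeLists.txt for the new package (copy a real one nearby instead)
cat > uboonecode/uboone/MyNewDirectory/CMakeLists.txt <<'EOF'
# Adapt include directories and link libraries from a neighboring package
art_make( MODULE_LIBRARIES larcorealg_Geometry )
EOF
```

The key point is simply the pairing: one add_subdirectory() line in the parent, one CMakeLists.txt in the child.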

Creating a new product in a new repository

A new product means a new ups product, and remember that ups products are commonly 1:1 with repositories.

This may be for slightly more advanced users and may not represent too common a need for the lay-collaborator. Imagine coming along with a whole new package of code (which itself might live in nusoft, say) for which you've written specific algorithms, e.g. Pandora. That use case is one for code that might merit a whole new repository. But any new wire simulation, optical scintillation package, or calorimetry reco module can go into an existing repository: uboonecode, larsim, or larreco, correspondingly. Below we also present a case for analysis modules that you may wish to have live outside existing repositories.

If you do need to do this, you also need all the "hair" that comes with the package to make it build-able and ups-able by the cetbuildtools system. (Details beyond the example below are documented elsewhere.)

Here's a potentially real case where you may want to do this. Suppose you're working on a MicroBooNE analysis. Where should you put your code?
  • In one of the LArSoft packages?
    No, not a good idea. Your analysis code might/should be experiment specific. Your analysis code is not part of the functionality of LArSoft as a whole. And, that's your analysis code! It's not for everyone using larsoft (which could literally be anyone) to see!
  • In uboonecode?
    No, again. uboonecode is reserved for code that standard MicroBooNE processes need for running. Specific analysis code doesn't fit that definition. Besides, you wouldn't want to risk breaking MicroBooNE's very own larsoft because your plotting macros had an error in them.

You could always just keep your code somewhere and in some way that's entirely outside the mrb structure. But, it may be helpful to have it in a repository (so it's version controlled), and in an environment that's the same/similar to running LArSoft.

OK, so, first step, you need to make a repository for yourself. You can create either an svn repository, or a git one! Suppose you want to create a repository in /uboone/app/users/$USER/Repositories, then

  • For git, do
    cd /uboone/app/users/$USER/Repositories
    mkdir mygitrepo.git
    cd mygitrepo.git
    git init --bare
  • For svn, do
    cd /uboone/app/users/$USER/Repositories
    svnadmin create mysvnrepo
Now, go back to your working area and into the srcs directory (cd $MRB_SOURCE will get you there if you've already set everything up). Then...
  • For git, do
    mrb newProduct mygitrepo
  • For svn, do
    mrb newProduct -c mysvnrepo

    Note the -c option for the svn case. Also, for the svn case, the new product name must be the same as your repository name---for git users, it doesn't matter.

You'll notice there is a CMakeLists.txt in this new project directory, and a ups directory with a bunch of things in it for handling product dependencies and the like. You should look at examples of other packages to know how to handle these. The mrb newProduct command also nicely tells you that you will need to start a directory for source code. Again, look at a few other packages to see how that's done.

But, even before we do that, we should set this area up to be part of your repository, and put all these files in your repository.

  • For git, do
    cd mygitrepo
    git remote add origin /uboone/app/users/$USER/Repositories/mygitrepo.git
    git add *
    git commit -m 'initial commit'
    git push origin develop
  • For svn, do
    svn checkout file:///uboone/app/users/$USER/Repositories/mysvnrepo
    cd mysvnrepo
    svn add *
    svn commit -m 'initial commit'

And your repository will now have all the right files in it!
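The git branch of those instructions can be run end to end in a sandbox. Here a placeholder file stands in for the CMakeLists.txt and ups directory that mrb newProduct would create, and the repository path is a temporary directory rather than /uboone/app:

```shell
# Sandbox run of the git instructions above
cd "$(mktemp -d)"
top=$PWD
mkdir mygitrepo.git
( cd mygitrepo.git && git init -q --bare )      # the bare "central" repository

mkdir mygitrepo                                 # stands in for the mrb newProduct area
cd mygitrepo
git init -q .
git config user.email you@example.com
git config user.name "You"
git remote add origin "$top/mygitrepo.git"
echo "placeholder" > CMakeLists.txt             # mrb newProduct creates the real files
git checkout -q -b develop                      # first rule: never push to master
git add -A
git commit -q -m 'initial commit'
git push -q origin develop
```

After the push, the bare repository holds a develop branch with your initial commit, ready for mrb gitCheckout from anywhere that can see the path.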

Grabbing a product from an existing repository that is not larxyz or uboonecode

Here, someone has already gone through the trouble of making an mrb-ready repository, and your life is simple.
  • For git, do
    cd $MRB_SOURCE
    mrb gitCheckout <full_repo_path>
    if it's local, or
     mrb gitCheckout ssh://<full_repo_path>
    if it's on another machine.

You're done. That's it. It's like pulling any other larxyz or uboonecode repository. All the cetbuildtools hair comes with it, as desired. The new package is downloaded into your srcs directory, ready to be built, per the usual.

Adding code from other repositories

Perhaps you've identified an existing package in which you want this new external code to live. Say someone has just made a repository, or there's a rogue or legacy repository from which you're pulling.

  • For svn, do
    cd $MRB_SOURCE
    mrb svnCheckout file://<full_repo_path>
    if it's local, or
    mrb svnCheckout svn+ssh://<full_repo_path>
    if it's on another machine.

Also, one can set this up with existing repositories in redmine, like those in ubooneoffline. For instance, doing the following:

mrb svnCheckout svn+ssh://
will give you the ubfcl part of the main ubooneoffline repository.

Now see above bit on making a new package (Creating a new package in an existing repository) for how to attach the cetbuildtools hair. Then, you're ready to build this code as part of the full package.

Unit tests and integration tests

Refer to this page.

Same srcs directory, different flavor/architecture/qualifier builds

Over the life of your development, the various products' version numbers will get bumped. Or you may wish to build for slf5 or slf6, or you might like to have a debug and a prof build -- all from the same source code. You can imagine the complications: you must have newly built code and new install areas, despite using the same srcs area. The next three methods show how one might do this. I (Eric) favor the 2nd one: Multiple Build and Install Areas. It keeps the needed directories unambiguously labeled with intuitive tree structure.

Updating your development area to a new version

First, when do you need to do this?
  • You made and are working out of a development area based on a static version (e.g. v04_35_02, not nightly); and,
  • you want to update your setup to automatically pull from a new/different version of larsoft products (e.g. v04_35_04); and,
  • you want to keep your current srcs/ directory, with all of its branches and your own code.
So, when do you not need to do this?
  • You are working in a development area that looks to the larsoft nightly tag, so you are always on the bleeding edge of development; or,
  • you are confident your local code and updates to your local code will work just fine with the previous larsoft version, and so don't want to bother with using a new larsoft version; or,
  • you want to use a new larsoft version, but just want to set up a new development area.
    Personal side note, I think the first reason is a benefit of setting up against nightly. I think the latter two points are bound to cause pain at some point, so beware.
OK, so, assuming you fit the criteria, do one of the following:
  • Start from a fresh shell (login again...this is recommended); or,
  • do unset PRODUCTS.

Then, as is now typical, do

source /grid/fermiapp/products/uboone/
setup uboonecode vX_YY_ZZ -q e4:prof

where vX_YY_ZZ is the new version you want to setup against (and you can of course change the qualifiers).

Now, cd into your development area (top level), and do

mrb newDev -p

The -p option is necessary so that your current srcs/ directory is left alone.

You should now see a new localProducts* directory with the right version number; source the setup script in that directory. Note: you can leave the old localProducts directory there if you like (if you think you'll use it), but you are also welcome to remove it. You must now clean your build area using mrb zap, then do mrbsetenv and mrb i -j4, to force a full rebuild using the new larsoft version. Then, as ever, mrbslp to run.

Building multiple flavor binaries from a single source area.

There are two general strategies for building binaries with different flavors, which are a) different install areas (localProductsXXXX), and b) different build areas associated with a single install area. Both strategies are described below.

Multiple build and install areas.

Use multiple install areas when you want to build flavors using different ups qualifiers (-q setup option). This method would typically be used when you want to have separate build and install areas for debug (-q debug:e9) and profiled/optimized (-q e9:prof) code. The qualifiers commute! Each install area should be initialized using the command mrb newDev.
mrb newDev -v v04_35_01 -q debug:e9 -T debug.v04_35_01 -f
mrb newDev -v v04_35_01 -q e9:prof -T prof.v04_35_01 -f

The -T option instructs mrb to put the build and install areas into the specified subdirectory. The -f option tells mrb that it is OK to use an existing source area. Here are a couple of caveats about this method.
  • You can omit the -v and -q options if you have larsoft setup (mrb will inherit the version and qualifiers from your setup version of larsoft).
  • Your specified qualifiers are hardwired into the generated localProductsXXXX/setup initialization script. This initialization script is not reentrant. That is, you can not switch flavors simply by sourcing a different localProductsXXXX/setup script. You should always source localProductsXXXX/setup in a fresh shell.

Multiple build areas, single install area.

Mrb also allows you to have multiple build areas feeding into a single install area. This is useful if you want to create an install area that is compatible with multiple OS or architecture flavors (e.g. slf5 and slf6). When mrb generates a new build area, it automatically detects the OS and architecture, and encodes these elements into a unique build directory name (e.g. build_slf6.x86_64).

On a machine running the first OS/architecture, use the command "mrb newDev" to initialize your build and install areas like this.

mrb newDev -v v04_35_01 -q debug:e5 -T debug.v04_35_01 [-f]

Use option -f if you want to reuse an existing srcs area. Then, on a machine running the other OS/architecture, run "mrb newDev" again with the option "-b":
mrb newDev -v v04_35_01 -q debug:e5 -T debug.v04_35_01 -b

Option "-b" tells mrb just to make a new build area (the existing source and install areas from the previous "mrb newDev" will be used).

To build your binaries for either OS/architecture, log in to that machine, initialize mrb as usual, and do the following.

mrb i -j4

After building on all OS/architectures, your install area will hold multiple sets of binaries. When you initialize your mrb runtime environment using mrbslp (short for setup local products), you will automatically get the correct flavor for the machine you are running on.

By default, mrb always puts binaries with different ups qualifiers (e.g. debug and optimized) in different install areas. If you want to put debug and optimized binaries together in a single install area, you can do this by overriding the default install area using the -I option of mrb install.

mrb i -j4 -I <common install area>