Lookup Reference


Redmine project management provides various facilities (even a forum module), but we typically use only the wiki and repository-hosting modules. The Redmine installation hosted by Fermilab contains projects (or subprojects) for most major experiments at the lab.

Code Repository


After obtaining a Kerberos ticket you can clone the repository locally with a sufficiently-recent version of git:

$ kinit jstaplet@FNAL.GOV
Password for jstaplet@FNAL.GOV: 

$ git clone ssh://
Cloning into 'gm2ilratio'...
remote: Counting objects: 86, done.
remote: Compressing objects: 100% (73/73), done.
remote: Total 86 (delta 37), reused 0 (delta 0)
Receiving objects: 100% (86/86), 39.59 KiB | 0 bytes/s, done.
Resolving deltas: 100% (37/37), done.
Checking connectivity... done.

$ cd gm2ilratio && git checkout develop
Branch develop set up to track remote branch develop from origin.
Switched to a new branch 'develop'

Note that you must have a valid Kerberos ticket to push committed changes back to the Redmine repository.

You can also clone the repository without a Kerberos ticket, but then you will not be able to push changes back to the repository.

Building Blinders Library

DO THIS FIRST if you are running on the g-2 VMs:

source /cvmfs/
setup gcc v6_4_0

This is required because the g-2 VMs run an old SLF6 installation whose default compiler doesn't support the C++11 standard, so you must use the compiler (and other, more up-to-date software) from CVMFS.

We are using Lawrence Gibbons' Blinders library, with a wrapper based on Mark Lancaster's work. Both are documented here. This must be built in-place in your gm2ilratio/ directory:

cd /path/to/gm2ilratio/pymodules/Blinders/

Setting up the Environment

There are two (unavoidable) environment setup steps to get the code running:

export OMEGA_BLINDERS_PHRASE="your blinders phrase" 
source gm2ilratio/py.env

The first is necessary because the Blinders library is enabled by default, so a blinding phrase must be available. The second checks some environment variables and then adds gm2ilratio/pymodules to PYTHONPATH so that Python can find our modules.
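The effect of these two steps can be sketched in Python. Everything below is illustrative only — the repository path is a placeholder and the real py.env may perform different checks:

```python
import os
import sys

# Illustrative repository location; py.env derives the real one from its own path.
repo_root = "/path/to/gm2ilratio"
pymodules = os.path.join(repo_root, "pymodules")

def check_environment(environ):
    """Return a list of problems, mimicking py.env's environment checks (sketch)."""
    problems = []
    if not environ.get("OMEGA_BLINDERS_PHRASE"):
        problems.append("OMEGA_BLINDERS_PHRASE is not set")
    return problems

# py.env exports PYTHONPATH; the equivalent inside a Python process is:
if pymodules not in sys.path:
    sys.path.append(pymodules)
```

With the modules directory on the path, Python can locate the modules under gm2ilratio/pymodules.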

If you are on the g-2 VMs, you should consider doing this:

source /cvmfs/
setup git                 # Scientific Linux 6 installation has an old git
setup gm2pip v1_01_00     # Scientific Linux 6 VMs don't have numpy or matplotlib (which our Python scripts use)

pip install --user

As of the end of 2019, these instructions set up almost everything needed to run the Python fitting code. Unfortunately gm2pip hasn't been updated since version v1_01_00 and does not contain all of the required Python libraries. You can use the Offline-integrated pip on the virtual machines to install any other required Python modules into your own user directory.

  • You must setup gm2pip v1_01_00 first. You can check that the proper Python/pip install is set up by executing which python and making sure that the path starts with /cvmfs/ (not SLF6's /usr/bin/python).
  • You can ignore the warnings about upgrading pip.
  • For now (Jan 2020) you must install lmfit version 0.9.14 because the latest lmfit (0.9.15) is incompatible with the older version of scipy (1.1.0) installed in gm2pip v1_01_00.
  • You should NOT have to add anything to the PYTHONPATH environment variable, because Python automatically checks a default per-user location under your home directory. If for some reason this doesn't work, add the path ~/.local/lib/python2.7/site-packages to PYTHONPATH.
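The per-user install location mentioned in the last bullet can be checked from Python itself; this snippet only inspects paths and makes no assumptions about the VM setup:

```python
import site
import sys

# Where 'pip install --user' puts modules (e.g. ~/.local/lib/pythonX.Y/site-packages)
user_site = site.getusersitepackages()
print("user site-packages:", user_site)

# Python normally adds this directory to sys.path automatically; only if it
# is missing would you need to add it to PYTHONPATH by hand.
print("on sys.path:", user_site in sys.path)
```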

This is an example for lmfit, the module required by the new (fall 2019) LMFit-based fitting routines in the util module:

# check that we're using the right python/pip
jstaplet@gm2gpvm03:~ $ which python

# install a specific version of lmfit
jstaplet@gm2gpvm03:~ $ pip install --user lmfit==0.9.14
Collecting lmfit==0.9.14
  Downloading (250kB)
    100% |████████████████████████████████| 256kB 3.0MB/s 
Collecting asteval>=0.9.12 (from lmfit==0.9.14)
  Downloading (53kB)
    100% |████████████████████████████████| 61kB 3.8MB/s 
Requirement already satisfied: numpy>=1.10 in /cvmfs/ (from lmfit==0.9.14)
Requirement already satisfied: scipy>=0.19 in /cvmfs/ (from lmfit==0.9.14)
Requirement already satisfied: six>1.10 in /cvmfs/ (from lmfit==0.9.14)
Collecting uncertainties>=3.0 (from lmfit==0.9.14)
  Downloading (232kB)
    100% |████████████████████████████████| 235kB 3.5MB/s 
Building wheels for collected packages: lmfit, asteval, uncertainties
  Running bdist_wheel for lmfit ... done
  Stored in directory: /nashome/j/jstaplet/.cache/pip/wheels/a2/ad/74/1efa3d8294126064517e79592df763b8f30e010c18fadf2d6a
  Running bdist_wheel for asteval ... done
  Stored in directory: /nashome/j/jstaplet/.cache/pip/wheels/49/74/b2/5b7bcf77e0eed2c654451c84b52d5c7d2fbd95c0e3b36efe0e
  Running bdist_wheel for uncertainties ... done
  Stored in directory: /nashome/j/jstaplet/.cache/pip/wheels/d9/d3/0e/5b0b743a8abd50373705427438456da5dc2621891138d7a618
Successfully built lmfit asteval uncertainties
Installing collected packages: asteval, uncertainties, lmfit
Successfully installed asteval-0.9.17 lmfit-0.9.14 uncertainties-3.1.2
You are using pip version 9.0.1, however version 20.0.2 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.

# verify that the lmfit module installed properly (no errors/exceptions/etc)
jstaplet@gm2gpvm03:~ $ python
Python 2.7.14 (default, Jan 10 2018, 09:46:06) 
[GCC 4.4.7 20120313 (Red Hat 4.4.7-18)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import lmfit
>>> lmfit.__version__

Using the Blinders Library

For Python applications, the util module automatically creates a Blinders object using the phrase in the environment variable $OMEGA_BLINDERS_PHRASE. It is embedded in the model function code, and is used any time you call the model functions like fiveparam_model or sine_lsq_model. These functions are written to call a function blindcos() which does essentially the following:

# sketch of util's internals: my_blinder is the module-level Blinders object,
# and np is numpy
def blindcos(r, t, phi):
  freq = my_blinder.param_to_freq_radGHz(r)  # blinded frequency in rad/GHz
  return np.cos(freq*t + phi)

If you implement a model function with trigonometric computation of the cosine or sine of the spin precession, remember to use blindcos(r,t,phi) or blindsin(r,t,phi) instead of the trigonometric functions in the math or numpy modules. There is also a convenience function blindomega(r,t,phi) which directly outputs the properly blinded omega_a given the ppm shift r.
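For illustration, here is what a blinding-safe model function looks like. The stand-in blinder below is hypothetical (a no-op with the same interface used by blindcos() above, and an illustrative reference value); the real one comes from util and hides an offset in the frequency:

```python
import numpy as np

# Hypothetical stand-in for util.my_blinder; the reference value is
# illustrative only, NOT the experiment's actual frequency.
OMEGA_REF_RADGHZ = 1.439e-3

def param_to_freq_radGHz(r):
    # map the ppm shift r to a frequency; the real blinder adds a hidden offset
    return OMEGA_REF_RADGHZ * (1.0 + 1e-6 * r)

def blindcos(r, t, phi):
    return np.cos(param_to_freq_radGHz(r) * t + phi)

def wiggle_model(t, N0, tau, A, r, phi):
    # five-parameter-style wiggle: the precession frequency only ever enters
    # through blindcos(), never through np.cos() directly
    return N0 * np.exp(-t / tau) * (1.0 + A * blindcos(r, t, phi))
```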

To disable blinding (e.g. if working on Toy Monte Carlo or on unblinded data), import the util module like this at the top of your Python script:

import util

# or this if you use a cluttered namespace:
from util import *
from util import my_blinder  # this is REQUIRED or the old blinded blinder hangs around

These steps replace the default blinders object (at util.my_blinder) with a new blinder object which applies no blinding.

Code Repository with art

Building and using the in-framework code is different from the instructions above in a few ways:

  1. this can only be done in an SLF6 environment with the full Muon g-2 Offline Software setup.
  2. you will be building the code with mrb
  3. the Blinders module will not be built, and you do not have to set the OMEGA_BLINDERS_PHRASE environment variable
  4. the shell environment setup is somewhat different

Notes about current software versions (Dec 2019):

  • we still have to source the script gm2ilratio/py.env while the repository is still being configured & debugged for Offline use
  • we will use gm2 v9_33_00 for the same reason

The steps roughly follow this outline:

  • log in to a Muon g-2 virtual machine (kinit & ssh)
  • source Offline Software setup script
  • set up a development area with mrb (usually in its own directory)
  • download and build the repository code
  • source the py.env setup script so tools can be found

After the last two steps you can run art modules, call the scripts in gm2ilratio/bin via your PATH environment variable, and use the Python bindings to FHiCL (required for the FHiCL parameter scan script).

Crafting a FHiCL Parameter Scan

We need a FHiCL file and a 'scan configuration' file, and we need an empty directory to work in:

jstaplet@gm2gpvm03:/gm2/app/users/jstaplet/irmathing $ mkdir scandemo
jstaplet@gm2gpvm03:/gm2/app/users/jstaplet/irmathing $ cd scandemo

jstaplet@gm2gpvm03:/gm2/app/users/jstaplet/irmathing/scandemo $ cat >> scans.cfg    # type the lines below, then press Ctrl-D
  services.GasGunBdyn.twidth          =   115,120,125
  producers.SomeProducer.SomeSetting  =   -1,0,1
  producers.SomeOtherProducer.q       =   20

jstaplet@gm2gpvm03:/gm2/app/users/jstaplet/irmathing/scandemo $ cp ${MRB_SOURCE}/gm2ilratio/fcl/ratio_grid.fcl ./

Each line in scans.cfg names a FHiCL parameter and the values it should take in the scan; the script generates all permutations of these values. The text specifying the values must evaluate to valid Python, and may use Python and NumPy functions such as range(10) and np.linspace(20,30,10).
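The permutation logic can be sketched with itertools; the dictionary below simply mirrors the example scans.cfg, and the real script's internals may differ:

```python
import itertools

# values taken from the example scans.cfg above
scan = {
    "services.GasGunBdyn.twidth": (115, 120, 125),
    "producers.SomeProducer.SomeSetting": (-1, 0, 1),
    "producers.SomeOtherProducer.q": (20,),
}

keys = sorted(scan)
permutations = [dict(zip(keys, values))
                for values in itertools.product(*(scan[k] for k in keys))]

# 3 x 3 x 1 = 9 parameter sets -> nine ratio_grid-NNN.fcl output files
print(len(permutations))
```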

Generate the FHiCL file permutations:

jstaplet@gm2gpvm03:/gm2/app/users/jstaplet/irmathing/scandemo $ ratio_grid.fcl scans.cfg 
Reading scan config from "scans.cfg"...
Loaded scan configuration:
  producers.SomeOtherProducer.q range = 20
  producers.SomeProducer.SomeSetting range = (-1, 0, 1)
  services.GasGunBdyn.twidth range = (115, 120, 125)

Found file ratio_grid.fcl in directory /gm2/app/users/jstaplet/irmathing/scandemo
Prepending path 
to FHICL_FILE_PATH (but you should DOUBLE-CHECK that fhiclcpp is
grabbing the file at the path that you expect!)

Reading parameters from ratio_grid.fcl...

Exporting filename/configuration list to "files_config.dat"...
...closing "files_config.dat".

Writing FHiCL config files with parameter permutations...
WARNING: config item not found!
Creating new item producers, but you might want to check your spelling...
WARNING: config item not found!
Creating new item producers.SomeOtherProducer, but you might want to check your spelling...
WARNING: config item not found!
Creating new item producers.SomeOtherProducer.q, but you might want to check your spelling...
WARNING: config item not found!
Creating new item producers.SomeProducer, but you might want to check your spelling...
WARNING: config item not found!
Creating new item producers.SomeProducer.SomeSetting, but you might want to check your spelling...
WARNING: config item not found!
Creating new item services.GasGunBdyn, but you might want to check your spelling...
WARNING: config item not found!
Creating new item services.GasGunBdyn.twidth, but you might want to check your spelling...

### files ratio_grid-NNN.fcl and files_config.dat are created in the current directory ###
jstaplet@gm2gpvm03:/gm2/app/users/jstaplet/irmathing/scandemo $ ls
files_config.dat    ratio_grid-002.fcl  ratio_grid-004.fcl  ratio_grid-006.fcl  ratio_grid-008.fcl  ratio_grid.fcl
ratio_grid-001.fcl  ratio_grid-003.fcl  ratio_grid-005.fcl  ratio_grid-007.fcl  ratio_grid-009.fcl  scans.cfg

Note the warnings about the FHiCL parameters producers.SomeOtherProducer.q and producers.SomeProducer.SomeSetting: these parameters do not exist in ratio_grid.fcl (the input file used as a template). The script uses them anyway, but you should check the spelling in your scan config file (scans.cfg). (Misspelling the name of an existing FHiCL parameter will therefore not be caught, and the generated scan files will contain invalid parameters.)
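The warnings come from the script creating any missing levels of the parameter tree on the fly. A minimal sketch of that behaviour (the function name and exact logic here are illustrative, not the script's actual code):

```python
def set_dotted(config, dotted_key, value):
    """Set a nested dict entry from a dotted path, creating missing levels."""
    keys = dotted_key.split(".")
    node = config
    for depth, name in enumerate(keys):
        last = (depth == len(keys) - 1)
        if name not in node:
            # an unknown name at any level triggers a warning, then gets created
            print("WARNING: config item not found!")
            print("Creating new item {}, but you might want to check "
                  "your spelling...".format(".".join(keys[:depth + 1])))
            node[name] = value if last else {}
        elif last:
            node[name] = value
        if not last:
            node = node[name]

cfg = {"services": {"GasGunBdyn": {"twidth": 100}}}
set_dotted(cfg, "services.GasGunBdyn.twidth", 120)        # exists: silent
set_dotted(cfg, "producers.SomeProducer.SomeSetting", 0)  # warns, creates levels
```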

Running on the Grid

First clone gm2analyses to get the 'production' grid submission script:

jstaplet@gm2gpvm03:/gm2/app/users/jstaplet/irmathing $ mrb g gm2analyses
git clone: clone gm2analyses at /gm2/app/users/jstaplet/irmathing/srcs
NOTICE: Running git clone ssh:// 
Cloning into 'gm2analyses'...
remote: Counting objects: 17151, done.
remote: Compressing objects: 100% (15682/15682), done.
Receiving objects: 100% (17151/17151), 241.10 MiB | 10.70 MiB/s, done.
remote: Total 17151 (delta 11958), reused 2026 (delta 1364)
Resolving deltas: 100% (11958/11958), done.
ready to run git flow init for gm2analyses
Already on 'master'
Your branch is up to date with 'origin/master'.
Using default branch names.
Already on 'develop'
Your branch is up to date with 'origin/develop'.
Branch 'develop' set up to track remote branch 'develop' from 'origin'.
Already up to date.
NOTICE: Adding gm2analyses to CMakeLists.txt file
NOTICE: You can now 'cd gm2analyses'

You are now on the develop branch (check with 'git branch')
To make a new feature, do 'git flow feature start <featureName>'

We don't actually want to build gm2analyses; we just need the grid submission script. Prevent the build by commenting out the two gm2analyses lines near the bottom of ${MRB_SOURCE}/CMakeLists.txt, like this:


# gm2ilratio package block
set(gm2ilratio_not_in_ups true)
include_directories ( ${CMAKE_CURRENT_SOURCE_DIR}/gm2ilratio )
## gm2analyses package block
#set(gm2analyses_not_in_ups true)
#include_directories ( ${CMAKE_CURRENT_SOURCE_DIR}/gm2analyses )

Be sure to NOT comment out the entries for gm2ilratio.

TODO: finish explanation & check for mistakes in paths
  • use the grid submission script wrapper with the same options as the true submission script
  • wrapper script is at gm2ilratio/bin/ (must be in '$PATH')
  • the real ("production") grid submission script is at gm2analyses/ProductionScripts/produce/
  • the real script is not in $PATH so you have to cd ${MRB_SOURCE}/gm2analyses/ProductionScripts/produce/ first
  • the wrapper script also requires the FHiCL scan config file as its first argument (just the filename with no --option syntax)

Using git

We generally need git version 2 or later. Changing code follows the typical git usage patterns, and there are many good introductions available online.

Basic Examples

$ git clone ssh://                # remember to cd into the directory after cloning
$ cd gm2ilratio

$ git status                                                                         # check status COMPULSIVELY
On branch master
Your branch is up-to-date with 'origin/master'.
nothing to commit, working directory clean

$ git checkout develop                                                               # checkout develop branch (we don't use master by convention)
Branch develop set up to track remote branch develop from origin.
Switched to a new branch 'develop'

$ echo "# an example change that breaks nothing" >> CMakeLists.txt                   # modify a file (appends comment to end)

$ git status                                                                         # status shows the modification, but does not automatically
On branch develop                                                                    # 'stage the file for commit' (so changes will be ignored)
Your branch is up-to-date with 'origin/develop'.
Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git checkout -- <file>..." to discard changes in working directory)

    modified:   CMakeLists.txt

no changes added to commit (use "git add" and/or "git commit -a")

$ git add CMakeLists.txt                                                             # explicitly tell git to track previous changes to this file

$ git status                                                                         # file is listed under 'Changes to be committed'
On branch develop
Your branch is up-to-date with 'origin/develop'.
Changes to be committed:
  (use "git reset HEAD <file>..." to unstage)

    modified:   CMakeLists.txt

$ git commit -m "add example comment"                                                # record the staged changes as a commit
[develop 0637ef5] add example comment
 1 file changed, 1 insertion(+)

$ git push                                                                           # actually move the changes to the remote repository
Counting objects: 3, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 335 bytes | 0 bytes/s, done.
Total 3 (delta 2), reused 0 (delta 0)
To ssh://
   c935cb7..0637ef5  develop -> develop

$ git status                                                                         # changes were pushed successfully
On branch develop
Your branch is up-to-date with 'origin/develop'.
nothing to commit, working directory clean

Remember these things in particular:

  • the git interface is unfortunately more like a series of shell scripts than a single interface for all operations...
  • git COMMAND --help (or git help COMMAND) pulls up the lengthy manual page for that command
  • check git status often
  • git pull will fetch changes from the repository and merge them into the current branch (which may ask for a commit message using vim, emacs, or another terminal-based editor like nano); see git pull --help, git fetch --help, and/or git merge --help
  • git push pushes changes from your current branch to the repository (the option --all pushes changes from all local branches)
  • when your Kerberos ticket expires (~24 hours?) you will need to obtain a new one with kinit
  • the entire commit history is a tree structure of changesets (identified by commit ID or 'hash') related to each other by one or more 'parent-child' relationships
  • a tag is a type of reference or label which points to a single commit (see git tag --help)
  • a 'branch' is like a dynamic tag, advancing with every new child commit to create an independent commit history which diverges from other branches (see git branch --help)
  • 'merging in' a branch creates a new child commit with divergent histories as parents (see git merge --help)
  • a 'conflict' occurs when merged changesets interfere with each other (e.g. modifying the same line of code in different ways)
  • in the event of a conflict, git inserts both changes into the file (in blocks delimited by '<<<<<<<', '=======', and '>>>>>>>') so a human can fix the conflict and commit the result
  • HEAD is basically a symlink to the commit/branch/tag currently checked out
  • git reflog shows the history of the HEAD reference
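For example, after a conflicting merge a file contains both versions between markers (the parameter and branch name here are purely illustrative):

```
<<<<<<< HEAD
twidth = 120
=======
twidth = 125
>>>>>>> origin/develop
```

Edit the block down to the intended content, delete the marker lines, then git add and git commit as usual.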

Inspecting History

You can check the commit history with git log. It has many formatting options and can even show the entire history's tree structure in a meaningful way in the terminal. My favorite options are git log --all --decorate --oneline --graph (this follows the mnemonic "git log a dog").

* 9a4760d (HEAD -> modelgen, newfitters) some bugfixes
* 5ea3200 rename init_params() and params_initd to init() and inited
* 1614cfb (origin/newfitters, origin/modelgen) LMFit constructor gets arguments: params_vary/min/max/expr/brute_step (but they MUST be tuples of length N_pars, and don't yet convert 'None' to the correct default values).  also bugfix for 'method' and 'minimizer_kwargs' arguments.
*   98e72d2 Merge branch 'develop' into newfitters
| * bbe30e4 (origin/develop, develop) added util.pearson_corr_matrix
* | 0f5c6ec add example of re-running an LMFit fit to
* | fba98e4 required changes for to use the new LMFit class
* | 1d3c18c Add facility for LMFit to set data members 'X' and 'X_err' for every parameter named 'X' (after a successful fit).
* | df47417 fix bugs in LMFit.__str__(), and add to LMFit.__doc__
* | b1371db update LMFit comments (including TODO list), shorten names of a couple of members
* | cf563e9 finished making LMFit class work: FitBase backward-compatible with leastsq_args option and use of 'result' data member, FitBase does not track parameter bounds, 'vary' option, or anything else besides param names and initial values, bugfix for FitBase.__getstate__(), add some useful comments, fix up how LMFit initializes lmfit.Parameters (might need more work though), LMFit._unpack_raw_fit_result knows how to interpret lmfit.minimizer.MinimizerResult, clear LMFit data members when fit re-initializes lmfit.Parameters(), implement LMFit.__str__() with some tweaks for parameter bounds and fixed parameters
* | dae0a5d util.Pmers: a class derived from lmfit.Parameters that can be fed to any of our residuals functions that expect a TUPLE of parameters.
* | 7eafe88 working LMFit with (most) drop-in interface things done
* | 8f7bac5 MWE using util.LMFit (and util.FitBase)
* | 7d393a5 rename original leastsq-based FitResult to LeastSqFitResult
* 8c3c77c deprecate util.do_residuals_fft in favor of dfft_ns2radGHz (already worked for any regularly-spaced time series)
*   5a156b0 Merge branch 'develop' of ssh:// into develop
| * 117ae10 now plots without cutting on the requirement that all fits succeed in a trial, ie the different methods can have different number of successfull results. But only the trials that are successfull for all fits are used in the 2d correlation plots. The first part of this is maybe not optimal and could be modified as a switch.
| * c7347a8 adding a new version of scan_trials, which doesnt throw out trials where not all of T,R,exp fits succeeded. I needed this to look at the trials where the T method fails. However the comparison plots between T and R results are no longer valid. So I am just adding this in a separate file for now. In teh future should merge the two cases into a single script.
| * e24c755 modification to, adding plot on the chi2 distribution of T and R methods
| * 43f772a Adding linear fit between T and R method results for r. Importing statsmodels.api for this. You will need to install the module with something like pip install --user statsmodels
| * a3daf7e Finishing up with reimplementing pileup and threshold. Pushing all changes to Seems to work very good now but some tests should be run.
| * eda519e Fixed a lot of issues with the toyMC. The main issue was assuming uniform distribution of counts within each bin, then sub-dividing each bin in a way that attempts to extract  infromation faster than the bin width. This was pointed out to me in the hack-athon (by James M), which was super helpful! I reverted back to an old method by James S to get the expected counts in each sub-bin, ie faster than 149.2ns. Then these are properly sub-divided to the sub-histos. The phenomenological randomization of the separation to the sub-histos is also fixed now. All fits now get P=1 when poisson randomization per-bin is turned off, and reduced_chi2->1 with randomization on. The old issue with the residual spike on the exp fit is also fixed!    I will park all this in for now, as I still have to add pileup, threshold, small things like that.
* | fba0c3c util.read_TObjects and .enforce_identical_binning (for loading ROOT THists/TObjects in Python
* a205d1b Moving all progress into the, and removing the . All progress on radnomization is good additions without any indication of error or modification to the results.
*   7228427 Merge branch 'develop' of ssh:// into develop
| * 3dcbcf9 fixing up pulls, residuals, and residuals FFT plots
| * b710d81 fix the pyfitter args preventing plots
* | 2438be0 Adding a separate development file for run_trials, . This is an attempt at properly randomizing counts in each trial, and also in the separation of counts to the 4 sub-histos, to properly account for correlations.