Reproducing the 2019 joint analysis


The 2019 analysis is a “top-up” analysis: all elements from the 2018 analysis (cuts, analysis techniques, CV tunes, systematics, etc.) are identical. Only FD RHC data were added (epoch7d and epoch8b).

FHC: period1 period2 period3 period5  
RHC: period4 period6 epoch7a epoch7b epoch7c epoch7d epoch8b

The event containing the executive summary and tech notes is

More about periods and epochs
FD NuMI datasets


Set up the release and check out CAFAna

setup_nova -b maxopt -r R19-04-17-2019ana.a
newrel -t  R19-04-17-2019ana.a rel_2019ana
cd rel_2019ana/
srt_setup -a
addpkg_svn -b CAFAna R19-04-17-2019ana-br

More about releases and release branches.

Concat definitions

New data only





Full datasets, including new data

There are no new FHC data, so these FHC concats should be identical to the 2018 FHC concats.





Full RHC datasets from 2018

These concats do not include the new data from epochs 7d and 8b; they should be identical to the 2018 RHC concats. There are no blinded definitions.



Making new predictions

The framework for generating analysis predictions with the systematic shifts is undergoing a bit of an overhaul, but it is already usable in many cases. The current workflow is based on the code committed in CAFAna/shared/syst_framework.
The main interface is a Python wrapper script that parses various input options, such as the analysis, beam configuration, and list of systematics. For more info on the options, run with -h:
usage: [-h] -a ANALYSIS -b BEAM [-c] [-nt] [-fk]
                      [-sg SYST_GROUP [SYST_GROUP ...]] [-all]
                      [-xp EXTRAPS [EXTRAPS ...]] [-all_xp] [-g [GRID]]
                      [-i [INTERACTIVE]]

Submit jobs for generating analysis predictions

optional arguments:
  -h, --help            show this help message and exit
  -a ANALYSIS, --analysis ANALYSIS
                        Specify analysis, numu2018/nue2018/nus2018
  -b BEAM, --beam BEAM  Specify beam mode, FHC/RHC
  -c, --concat          Specify whether to use concats
  -nt, --notau          Do not use tau swap files
  -fk, --fake_data      Use fake data at ND
  -sg SYST_GROUP [SYST_GROUP ...], --syst_group SYST_GROUP [SYST_GROUP ...]
                        Specify syst groups to generate
  -all, --all_systs     Generate predictions for all syst groups
  -xp EXTRAPS [EXTRAPS ...], --extraps EXTRAPS [EXTRAPS ...]
                        Specify extrapolations to carry out
  -all_xp, --all_extraps
                        Generate predictions for all extrapolations
  -g [GRID], --grid [GRID]
                        Run this on the grid with options
  -i [INTERACTIVE], --interactive [INTERACTIVE]
                        Run this interactively with cafe options
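
To see how the options in the help text above fit together, here is a minimal, hypothetical argparse sketch mirroring that usage block. This is an illustration only, not the actual wrapper source; the real option handling lives in the syst_framework code.

```python
import argparse

def build_parser():
    """Hypothetical parser mirroring the wrapper's help text above."""
    p = argparse.ArgumentParser(
        description="Submit jobs for generating analysis predictions")
    p.add_argument("-a", "--analysis", required=True,
                   help="Specify analysis, numu2018/nue2018/nus2018")
    p.add_argument("-b", "--beam", required=True,
                   help="Specify beam mode, FHC/RHC")
    p.add_argument("-c", "--concat", action="store_true",
                   help="Specify whether to use concats")
    p.add_argument("-nt", "--notau", action="store_true",
                   help="Do not use tau swap files")
    p.add_argument("-fk", "--fake_data", action="store_true",
                   help="Use fake data at ND")
    p.add_argument("-sg", "--syst_group", nargs="+",
                   help="Specify syst groups to generate")
    p.add_argument("-all", "--all_systs", action="store_true",
                   help="Generate predictions for all syst groups")
    p.add_argument("-xp", "--extraps", nargs="+",
                   help="Specify extrapolations to carry out")
    p.add_argument("-all_xp", "--all_extraps", action="store_true",
                   help="Generate predictions for all extrapolations")
    p.add_argument("-g", "--grid", nargs="?", const="",
                   help="Run this on the grid with options")
    p.add_argument("-i", "--interactive", nargs="?", const="",
                   help="Run this interactively with cafe options")
    return p

# Parse the example command line used later in this section.
args = build_parser().parse_args(
    ["-a", "nue2018", "-b", "FHC", "-c",
     "-sg", "LightUp", "-all_xp", "-i=-bq -l1 -ss"])
```

Note how the cafe options are passed as a single quoted string to -i, so they reach cafe unsplit rather than being parsed as options of the wrapper itself.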

For example, one can generate predictions interactively over a single concat file, for the LightLevelUp systematic in FHC mode for the nue2018 analysis, by running: -a "nue2018" -b "FHC" -c -sg 'LightUp' -all_xp -i='-bq -l1 -ss'

The Python wrapper picks up the relevant analysis components from syst_header.h in the same folder and writes a CAFAna macro, make_predictions_systs.C, on the fly based on a template macro. This is similar in behaviour to make_sim_fcl.
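
The template-filling step can be pictured with a small sketch like the following. The template fragment and placeholder names here are hypothetical stand-ins; the real template is a full CAFAna macro shipped with syst_framework.

```python
import os
import tempfile
from string import Template

# Hypothetical template fragment; the real one is a complete CAFAna macro
# with $-placeholders for the analysis components.
MACRO_TEMPLATE = Template(
    "// auto-generated by the wrapper; do not edit by hand\n"
    "void make_predictions_systs()\n"
    "{\n"
    "  // analysis: $analysis, beam: $beam, syst group: $syst_group\n"
    "}\n")

def write_macro(analysis, beam, syst_group, path):
    """Fill the template with the parsed options and write the macro to disk."""
    text = MACRO_TEMPLATE.substitute(
        analysis=analysis, beam=beam, syst_group=syst_group)
    with open(path, "w") as f:
        f.write(text)
    return text

# Write the generated macro somewhere writable for this illustration.
out = os.path.join(tempfile.gettempdir(), "make_predictions_systs.C")
macro = write_macro("nue2018", "FHC", "LightUp", out)
```

The generated .C file is then what cafe actually runs, so the wrapper only has to translate command-line options into template substitutions.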
Right now, it only supports interactive running but the grid option will be implemented soon.

If you want to generate systematically shifted predictions on the grid right now, you can use the old framework in CAFAna/shared/Ana2019, which contains a static copy of make_predictions_systs.C whose behaviour you control by passing text string options to the macro. For example, to generate the same predictions as above, you would run:

cafe -bq -l 1 -ss make_predictions_systs.C FHCLightLevelUp_nueconcat_realNDData allxp_nue

To run this on the grid you would then just pass it to the grid submission script. You can find a more detailed example of how to run this in the accompanying PDF for the tutorial (docdb-35464).

If you are generating systematically shifted predictions on the grid, you should end up with a number of files per systematic group for which you generate predictions: the number of jobs submitted multiplied by the number of extrapolations you ran (since each decomp/extrap option generates a separate prediction ROOT file). We have a useful build script to ensure you combine the job-output ROOT files, and the ROOT files for the different decomp/extrap options, in the correct order. The eventual goal is for the wrapper to autogenerate an accompanying build script to merge all the prediction files it outputs. Until that mechanism works correctly, you can use the build script in CAFAna/shared/syst_framework, and just edit the directory string at the top of the script. Running the script is then as simple as:


All merged files are output to a merged_predictions directory. To clean or rebuild the files in merged_predictions, run the script with the clean or rebuild argument. Again, you can find a more detailed explanation of this tool in docdb-35464.
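
The ordering requirement can be sketched as follows. The file-name pattern and extrapolation list below are hypothetical stand-ins for whatever your job outputs actually look like; the real build script would then merge each ordered group with ROOT's hadd.

```python
from collections import defaultdict

# Hypothetical decomp/extrap options, in the order the merge must follow.
EXTRAP_ORDER = ["prop", "combo", "nxp"]

def group_and_order(filenames):
    """Group job-output ROOT files by syst group, ordered by extrapolation.

    Assumes a hypothetical name pattern: pred_<systgroup>_<extrap>_<job>.root
    """
    groups = defaultdict(list)
    for name in filenames:
        stem = name[:-len(".root")]
        _, syst_group, extrap, job = stem.split("_")
        groups[syst_group].append((EXTRAP_ORDER.index(extrap), int(job), name))
    # Sort each group by extrapolation order first, then job number, so every
    # merged file contains its predictions in a well-defined sequence.
    return {sg: [n for _, _, n in sorted(files)]
            for sg, files in groups.items()}

ordered = group_and_order([
    "pred_LightUp_combo_1.root",
    "pred_LightUp_prop_0.root",
    "pred_Calib_prop_0.root",
    "pred_LightUp_prop_1.root",
])
# Each group could then be merged with, e.g.:
#   hadd merged_LightUp.root <the ordered LightUp files>
```

The point is simply that the merge order is deterministic: if job outputs were hadd-ed in whatever order the filesystem returns them, the decomp/extrap contents of the merged file would not line up across syst groups.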

Cosmic predictions

For the top-up, only the NuMI RHC cosmics were updated, as described below.

  • Numu: reused the 2018 FHC and RHC cosmic trigger histograms, and the 2018 FHC NuMI timing sideband histogram.
    A new RHC NuMI timing sideband spectrum was generated, thus updating the normalization but not the shape of the expected background.
    Two files in: /nova/ana/nu_mu_ana/Ana2019/Cosmics/
  • Nue: reused the 2018 FHC NuMI timing sideband spectrum.
A new RHC NuMI timing sideband spectrum was generated, thus updating both the normalization and the shape of the expected background.
    One file in /nova/ana/nu_e_ana/Ana2019/Cosmics/

Code in CAFAna/shared/Ana2019/{get,plot}_cosmics_nue_numu_2019.C

Performing the joint fit

Making supporting plots

Making the official release files (UNDER CONSTRUCTION)

The same code that makes slice and contour plots generates the official release files. This is the safer choice! Just set

 bool makedatarelease = true;

The output directory is currently set to /. in the macro.
Edit joint_fit_2019_datarelease_tools.h if you need to change file names or other conventions.
Run the contours:
cafe -bq joint_fit_2019_contours.C false true false joint_realData_both false false true
cafe -bq joint_fit_2019_contours.C false true false joint_realData_both true false true

The outputs are NOvA_2019_official_contour_deltassth23.root and NOvA_2019_official_contour_ssth23dmsq32.root

To check the file contents: open the ROOT file in a TBrowser, draw the axes, set the draw option to "same", draw the graphs, and right-click and choose DrawClone for the best-fit markers.

Run the slices:

cafe -bq joint_fit_2019_slices.C false true false "joint_realData_both" false true false false true 
cafe -bq joint_fit_2019_slices.C false true false "joint_realData_both" false true true false true
cafe -bq joint_fit_2019_slices.C false true false "joint_realData_both" true false false false true 

The outputs are NOvA_2019_official_slice_delta.root, NOvA_2019_official_slice_dmsq32.root, NOvA_2019_official_slice_ssth23.root

To check the file contents: open the ROOT file, draw the axes, set the draw option to "same", and draw the graphs.

The ROOT files have been copied to /nova/ana/nu_e_ana/Ana2019/Results/Release/, and the tarball is published in docdb.

Make the tarball

cd /nova/ana/nu_e_ana/Ana2019/Results/Release/
tar -czvf NOvA_2019_official_data_release.tar.gz *

Additional material/notes

Editing systematics code

Rudimentary examples with more explanation can be found in the tutorial macro source:/trunk/CAFAna/tute/demoSysts.C

Videos from the 2019-Analysis workshop:

  • Liudmila Kolupaeva's tutorial on how to do the joint fit: (see file zoom_0.mp4 below)
  • Andrew Sutton's tutorial on how to do the joint fit: (see file zoom_1.mp4 below)