Reproducing the Nus 2017 Analysis

Work in progress as of February 2nd, 2018

First of all, familiarise yourself with the Executive Summary and associated technical notes (linked within EC doc-db) for the Nus17 analysis.

Set up an appropriate release

Set up the summer 2017 analysis tag (when it is available). All appropriate macros live in CAFAna/nus/Nus17/

setup_nova -r R17-08-22-prod3nus17.b -b maxopt

Producing ND spectra

The basic data/MC comparisons just take a Spectrum object for the data and a CheatDecomp for the MC.
The scripts DataMCLoad_nus17.C and DataMCAna_nus17.C make all of these comparisons, for each level of the analysis cut flow.
As a result, they require the full CAFs and take many hours to complete.
DataMCLoad_nus17.C should be run first, via grid submission.

Example submission:

submit_cafana.py -n 200 -r R17-08-22-prod3nus17.a -o <somewhere_on_pnfs> -t <path_to_my_test_release> DataMCNDLoad_nus17.C <output_filename>.root # note this will use your test release

This makes and saves the CAFAna objects to the <output_filename>.root that you specified, and takes nearly all of the running time.
DataMCAna_nus17.C is run next; it creates and saves the styled plots to your specified directory (the input argument to the macro is the location of the <output_filename>.root).
We suggest you first hadd_cafana the output of the Load macro into your own /nova/ana area!
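The heavy Load / light Ana split above is a common pattern: do the expensive event loop once (on the grid), save the raw spectra to a file, then iterate on the styling cheaply. A schematic stand-in in plain Python (hypothetical file and histogram names, not the actual CAFAna API):

```python
import json, os, tempfile

# --- "Load" stage: the expensive event loop, run once (e.g. on the grid) ---
def load_stage(events, outpath):
    """Fill a crude energy histogram per cut level and save it to disk."""
    hists = {"presel": [0] * 5, "fullsel": [0] * 5}
    for energy, passes_full in events:
        b = min(int(energy), 4)        # 1-GeV bins, overflow in the last bin
        hists["presel"][b] += 1
        if passes_full:
            hists["fullsel"][b] += 1
    with open(outpath, "w") as f:
        json.dump(hists, f)

# --- "Ana" stage: cheap, re-runnable plotting/styling from the saved file ---
def ana_stage(inpath):
    with open(inpath) as f:
        hists = json.load(f)
    # In the real workflow this is where the styled plots would be produced;
    # here we just summarise the per-cut-level totals.
    return {name: sum(bins) for name, bins in hists.items()}

events = [(0.5, True), (1.2, False), (3.7, True)]   # toy (energy, passes) pairs
path = os.path.join(tempfile.gettempdir(), "demo_spectra.json")
load_stage(events, path)
totals = ana_stage(path)
```

The point of the split is that only `ana_stage` needs re-running when you change plot styles.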

Sijith will add instructions for running Systematics and making the systematic error bands on plots in due course.

Producing FD prediction

The prediction object is created by running:

cafe -bq CAFAna/nus/Nus17/MakeNus17PredictionSysts.C <output_filename>.root # this variant adds the templates for systematic shifts

Note: only run small-scale tests with this command interactively. Again, one should run grid jobs to create the full prediction, as it is quite a computationally intensive process.

It works by loading the ND data/MC and FD MC decaf files (trimmed CAFs: see the decaf section).
It then creates a sterile prediction object, SterileGenerator.
Under the hood, this creates ProportionalDecomp objects for the numu-selected background
and the NC-selected background separately; essentially, each selection is split into components in the proportions the MC dictates.
With these decomposition objects it performs the NC disappearance extrapolation via the ModularExtrap::NCDisappearance object.
We suggest reading doc-db 15343 and the documents referenced therein for more details on this process.
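The proportional split can be illustrated schematically: each selected data bin is divided among the MC components according to the fractions the MC predicts in that bin. A plain-Python sketch (not the CAFAna ProportionalDecomp API; component names are illustrative):

```python
def proportional_decomp(data, mc_components):
    """Split each data bin among components in the proportions the MC dictates.

    data: list of per-bin data counts.
    mc_components: dict of component name -> list of per-bin MC counts.
    Returns a dict of component name -> per-bin decomposed data counts.
    """
    nbins = len(data)
    totals = [sum(mc[b] for mc in mc_components.values()) for b in range(nbins)]
    return {
        name: [data[b] * mc[b] / totals[b] if totals[b] > 0 else 0.0
               for b in range(nbins)]
        for name, mc in mc_components.items()
    }

# Toy example: an NC-selected sample split into NC signal and numu CC background.
data = [100.0, 50.0]
mc = {"nc": [60.0, 30.0], "numucc": [20.0, 10.0]}
parts = proportional_decomp(data, mc)
```

By construction, the decomposed components sum back to the data in every bin; only their relative sizes come from the MC.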

Ultimately, you are left with an extrapolated NC disappearance prediction. This is then passed, along with the systematics, to create a PredictionInterp object, which implements systematic errors by interpolating between shifted templates to produce nominal and sigma-shifted predictions.
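Schematically, a PredictionInterp-style object stores templates generated at shifted systematic values and builds the prediction at an arbitrary shift by per-bin interpolation. The real interpolation is more sophisticated; below is a piecewise-linear stand-in in plain Python (names and templates are made up):

```python
def interpolated_prediction(minus1, nominal, plus1, sigma):
    """Per-bin piecewise-linear interpolation between -1, 0, +1 sigma templates."""
    out = []
    for lo, nom, hi in zip(minus1, nominal, plus1):
        if sigma >= 0:
            out.append(nom + sigma * (hi - nom))   # interpolate toward +1 sigma
        else:
            out.append(nom - sigma * (lo - nom))   # -sigma is positive here
    return out

# Toy two-bin templates for one systematic.
nominal = [100.0, 50.0]
plus1   = [110.0, 55.0]
minus1  = [ 95.0, 45.0]
```

Evaluating at sigma = 0, +1, or -1 reproduces the stored templates exactly; intermediate values are interpolated bin by bin.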

Producing FD cosmic background spectra

For this we calculate the cosmic background from the cosmic data, by period, and compare it to the out-of-time NuMI cosmic background.

cafe -bq CAFAna/nus/Nus17/ComputeNus17CosBkgd.C

which creates individual spectra, by period. This output is then fed to:

cafe -bq CAFAna/nus/Nus17/PrintNus17CosBkgd.C

which creates the resulting final cosmic background "prediction" from data and spits out the number of events.
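The underlying idea is livetime scaling: count cosmic-like events in a sideband (cosmic-trigger data, or the out-of-time part of the NuMI window) and scale by the ratio of in-spill to sideband exposure. A toy illustration in plain Python (the numbers and names are illustrative, not taken from the macros):

```python
def scaled_cosmic_background(sideband_counts, sideband_livetime_s, spill_livetime_s):
    """Scale a cosmic sideband count to the beam-spill livetime."""
    rate = sideband_counts / sideband_livetime_s   # cosmic rate in events per second
    return rate * spill_livetime_s                 # expected in-time cosmic events

# Toy numbers: 4000 sideband events in 2000 s, with 5 s of total spill livetime.
expected = scaled_cosmic_background(4000, 2000.0, 5.0)   # 2 Hz * 5 s = 10 events
```

Doing this per period, as the macros do, accounts for the livetime and detector conditions changing between periods.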

Producing FD data spectra

cafe -bq CAFAna/nus/Nus17/nus17_box_opening.C

This creates energy and time spectra for the selected events when run on the unblinded FD dataset.
It also spits out text files listing the information needed to create event displays, or for interrogating the individual selected events for more detail on variable values.

Producing 1D and 2D contours

With the output from the MC prediction, the data spectrum, and the cosmic background data spectrum, you are now ready to fit and create contours.
To do that, we run with the PredictionInterp (systematics included) object via:

cafe -bq CAFAna/nus/Nus17/PlotNus17PredSystsData.C # input files are hardcoded at present for Nus17 analysis

This loads the above-required spectra/prediction, creates 3-flavour and 4-flavour calculator instances, and builds a prediction spectrum under 3-flavour oscillation assumptions.
We create a GaussianConstraint for sin^2(theta_23) and delta m^2_23 using NOvA's best fit under the lower-octant (conservative) assumption.
We then create a SingleSampleExperiment to allow comparison of the data spectrum to the MC+cosmic expectation (this dictates a shape-fit analysis, as opposed to a CountingExperiment for a rate-only fit).
Finally, we create a MultiExperiment to pass in the constraints.
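The fit these pieces feed can be sketched as a chi-square built from a binned Poisson likelihood term (the shape fit) plus Gaussian penalty terms for the externally constrained 3-flavour parameters, summed MultiExperiment-style. A minimal stand-alone sketch in plain Python, using the standard Baker-Cousins form (not the CAFAna classes; the toy numbers are made up):

```python
import math

def poisson_chi2(obs, exp):
    """Binned Poisson likelihood-ratio chi2 (Baker-Cousins form)."""
    chi2 = 0.0
    for o, e in zip(obs, exp):
        chi2 += 2.0 * (e - o)
        if o > 0:
            chi2 += 2.0 * o * math.log(o / e)
    return chi2

def gaussian_penalty(value, mean, sigma):
    """Penalty term for an externally constrained parameter."""
    return ((value - mean) / sigma) ** 2

# "MultiExperiment"-style total: shape-fit term plus constraint terms.
obs = [12.0, 7.0, 3.0]
exp = [12.0, 7.0, 3.0]                       # perfect agreement -> chi2 = 0
total = poisson_chi2(obs, exp) \
      + gaussian_penalty(0.55, 0.55, 0.03)   # toy sin^2(theta_23) constraint
```

A CountingExperiment-style rate-only fit would instead compare single totals; the per-bin sum above is what makes this a shape fit.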

It is then able to run fits for 1D and 2D slices/contours in the theta_24 and theta_34 parameter space.
The current output is the 2D contour for theta_24 vs. theta_34, but with a bit of work this can all be reconfigured.
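Conceptually, the slices/contours come from a grid scan: at each point in (theta_24, theta_34), evaluate (in the full fit, minimise over the other parameters) the chi2 and record the delta-chi2 relative to the global minimum; the contour is a delta-chi2 threshold (e.g. 2.30 for a 2D 68% Gaussian interval). A toy scan in plain Python (the chi2 surface is made up):

```python
def grid_scan(chi2_func, xs, ys):
    """Evaluate chi2 on a grid and return delta-chi2 relative to the minimum."""
    surface = {(x, y): chi2_func(x, y) for x in xs for y in ys}
    best = min(surface.values())
    return {pt: c - best for pt, c in surface.items()}

# Toy parabolic chi2 surface with its minimum at (0.2, 0.3).
toy_chi2 = lambda x, y: 100 * (x - 0.2) ** 2 + 100 * (y - 0.3) ** 2
delta = grid_scan(toy_chi2, [0.0, 0.1, 0.2, 0.3], [0.1, 0.2, 0.3, 0.4])
allowed_68 = [pt for pt, d in delta.items() if d < 2.30]   # inside the contour
```

A 1D slice is the same scan with one parameter fixed (and a 1D threshold, e.g. delta-chi2 = 1.0 for 68%).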

Comparison with Ana01

For this you can simply run with the resulting 2D contour output from the above, together with the Ana01 contour, via:

cafe -bq CAFAna/nus/Nus17/Nus17vsAna01_NuFACT__2D.C # so named as this is the iteration shown at NuFACT '17

Producing Feldman-Cousins corrected contours

Dung Phan will add this section