Reproducing the nue 2017 Analysis results

TODO everywhere: what you need, how long it takes to run
TODO everywhere: better links to repo

Set up the appropriate release

The 2017 analysis branch is R17-11-22-2017ana-br. The most recent tagged release is R17-11-22-2017ana.a.

Set up the release and check out CAFAna:

setup_nova -b maxopt -r R17-11-22-2017ana.a    # set up the tagged release
newrel -t R17-11-22-2017ana.a rel_2017ana      # create a test release based on it
cd rel_2017ana/
srt_setup -a                                   # set up the test release environment
addpkg_svn -b R17-11-22-2017ana-br CAFAna      # check out CAFAna from the analysis branch

Learn more about releases and release branches.


Datasets

The details of all datasets are always available on the official datasets page, but below are the key concat datasets you're most likely to start with.
(NB: the decaf cuts live in CAFAna/Cuts/NueCuts2017.h for both ND and FD; see also Creating concats below.)

The Prod3Loaders (doxygen) class is designed to retrieve the appropriate nominal and systematic datasets for the caf, decaf, and sumdecaf tiers.
For the nue analysis, note that the nue_or_numu_or_nus datasets have preselection cuts that are incompatible with the analysis cuts. Always double-check that the definitions printed on-screen are what you expect.
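
Prod3Loaders fills the dataset paths in for you; as a minimal sketch of the underlying Loaders interface it builds on (the dataset definition name below is a made-up placeholder; find the real ones on the official datasets page):

// demo_loaders.C -- minimal sketch; run with: cafe demo_loaders.C
#include "CAFAna/Core/Loaders.h"

using namespace ana;

void demo_loaders()
{
  Loaders loaders;
  // The definition name here is a hypothetical placeholder
  loaders.SetLoaderPath("prod_sumdecaf_fd_genie_nonswap_fhc_nue2017", // hypothetical
                        caf::kFARDET, Loaders::kMC, ana::kBeam, Loaders::kNonSwap);
}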


Central weights

All plots should use these weights, which adjust the flux and cross sections to our central tunes:

kPPFXFluxCVWgt * kXSecCVWgt2017
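
When filling a Spectrum by hand, the combined weight goes in as the last constructor argument, after the systematic shift. A minimal sketch (the dataset definition and the choice of calE as the variable are placeholders; the weights lived in the Vars headers PPFXWeights.h and GenieWeights.h at the time of writing, and the Var uses the 2017-era branch-list form):

// demo_weights.C -- minimal sketch of applying the central weights
#include "CAFAna/Core/Spectrum.h"
#include "CAFAna/Core/SpectrumLoader.h"
#include "CAFAna/Cuts/NueCuts2017.h"
#include "CAFAna/Vars/PPFXWeights.h"
#include "CAFAna/Vars/GenieWeights.h"
#include "StandardRecord/StandardRecord.h"

using namespace ana;

void demo_weights()
{
  SpectrumLoader loader("defname"); // hypothetical dataset definition

  // Calorimetric slice energy, just as an example variable
  const Var kCalE({"slc.calE"},
                  [](const caf::StandardRecord* sr){return sr->slc.calE;});

  const Binning bins = Binning::Simple(40, 0, 10);
  const Spectrum s("Calorimetric energy (GeV)", bins, loader, kCalE,
                   kNue2017FD, kNoShift, kPPFXFluxCVWgt * kXSecCVWgt2017);

  loader.Go();
}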

Selections

As selection cuts are finalized, they are added to NueCuts2017.h.
To select the "core" sample use: kNue2017FD.
To select the "peripheral" sample use: kNue2017FDPeripheral.

For other cuts, the best default for now is probably whatever was used in the second analysis.
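
Cuts compose with the usual boolean operators, so selecting both samples at once is just (the combined name here is our own, not something defined in the header):

#include "CAFAna/Cuts/NueCuts2017.h"

using namespace ana;

// Illustrative only: the OR of the core and peripheral FD samples
const Cut kNue2017FDCoreOrPeripheral = kNue2017FD || kNue2017FDPeripheral;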

Sideband cuts

TODO

Exposures

Earlier in the analysis, before the final processing of the data files and all the accounting and cross-checks were done, we used round-number estimates of the beam exposure: 9e20 POT, and a livetime within the beam spill (for scaling cosmic backgrounds) of 440 s.

The final POT and livetime numbers for FD data now live in CAFAna/Analysis/Exposures.h.
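
These are plain constants, so using them in a macro is just an include away. A sketch, with the caveat that the constant names below are guesses for illustration (check Exposures.h for the ones actually defined):

#include "CAFAna/Analysis/Exposures.h"

using namespace ana;

void demo_exposures()
{
  // Hypothetical constant names -- check Exposures.h for the real ones
  const double pot      = kAna2017POT;
  const double livetime = kAna2017Livetime;
  // e.g. spectrum.ToTH1(pot) for beam; cosmic backgrounds scale by livetime
}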


Creating predictions

Ana2017 prediction framework

With the 2017 analysis, we encountered two complications in creating our far detector predictions. In short, we need a way to extrapolate background events in the peripheral sample without a good ND handle on them, and a way to add the FD rock estimate to the fiducial prediction. For every prediction we make, we must first prepare three constituent pieces:

  • a PredictionNoExtrap that gives the simulation's fiducial prediction at the far detector for both the core and peripheral samples,
  • a PredictionExtrap that sets up the ND decompositions and an extrapolation just for the FD core fiducial sample, and
  • a PredictionNoExtrap that gives the simulation's rock estimate.

Assuming you have pointers to all three, you need to do:

PredictionExtendToPeripheral* predFid = new PredictionExtendToPeripheral(predCore, predNoExtrap);
PredictionAddRock* pred = new PredictionAddRock(predFid, predRock, 1./10.87, 1./13.13);

Here, predCore gives your extrapolated prediction for the core sample, predNoExtrap gives the MC prediction of all samples, and predRock gives the rock prediction. The 10.87 and 13.13 are rock spill duty factors; see DocDB 23108 if you want to learn more. The final pred is the prediction we actually want to use in the analysis.
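
Once assembled, pred behaves like any other IPrediction. As a quick usage sketch (the oscillation parameter choice and round-number POT are just for illustration):

#include "CAFAna/Analysis/Calcs.h"
#include "CAFAna/Core/Spectrum.h"
#include "CAFAna/Prediction/IPrediction.h"
#include "TH1.h"
#include "TMath.h"

using namespace ana;

void demo_predict(IPrediction* pred)
{
  // Start from the default oscillation parameters and pick a dCP value
  osc::IOscCalculatorAdjustable* calc = DefaultOscCalc();
  calc->SetdCP(3*TMath::Pi()/2);

  const double pot = 9e20; // round-number 2017 exposure (see Exposures above)
  TH1* h = pred->Predict(calc).ToTH1(pot);
  h->Draw("hist");
}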

Creating systematically shifted predictions

TODO short description and link to shared nue numu wiki. Remove macros from repository that were not used

This section was written by a convener who hasn't actually tried running any of the jobs. To be fleshed out.

$ for k in `seq 0 4`; do cafe -bq nue/Ana2017/make_nue_filesyst_pred.C $k; done
$ cafe -bq nue/Ana2017/make_nue_xsec_pred.C
$ hadd_cafana hadded.root fout_make_nue_*_pred.root pred_xsec_fhc_*.root

These files are read in by PredictionSystNue2017.

Systematic predictions for the nue and numu 2017 analyses

TODO extrapolation systematics might require a special place
TODO link to PPFX principal components wiki if it exists

Creating cosmic background predictions

TODO add note about sideband weight. What's in the file

For the cosmic background prediction, run CAFAna/nue/Ana2017/get_cosmic_spectra.C. It will produce a file with the spectra; to read them back from the *.root file, follow the short instructions at the end of get_cosmic_spectra.C.
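
Those instructions boil down to CAFAna's standard Spectrum::LoadFrom pattern; a sketch with hypothetical file and directory names (use the ones actually written by the macro):

#include "CAFAna/Core/Spectrum.h"
#include "TFile.h"

#include <memory>

using namespace ana;

void demo_load_cosmics()
{
  // File and directory names here are hypothetical placeholders
  TFile f("cosmic_spectra.root");
  std::unique_ptr<Spectrum> cosmics = Spectrum::LoadFrom(f.GetDirectory("cosmic_pred"));
}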

TODO rock prediction

TODO data spectrum

TODO add note on binning


Creating concats

The reduce script used for making the datasets listed above from the parent CAFs is CAFAna/nue/reduce_bendecomp.C. It can be used to make both the ND and FD concats. The following nue selection cuts are used:

  • kNueFD2017DecafCut (= kNue2017BasicPart && kCVNe > 0.5) for FD
  • kNueND2017DecafCut (= kNueDQ2017CVN && kNue2017NDFiducial && kNue2017NDContain && kNue2017NDFrontPlanes) for ND

Both these cuts live in CAFAna/Cuts/NueCuts2017.h

In addition, to constrain the beam nue backgrounds using BEN, we look at high-statistics samples of contained and uncontained numuCC events.
Additional cuts (in CAFAna/Cuts/BeamNueCuts.h) are therefore applied to fit them into the ND concats. They are:

  • kNumuContainNDDecafCut (= kNumuBasicQuality && (kNumuContainND2017 || kNumuContainND) && (kNumuPID2017 || kNumuNCRej))
  • kNumuUncontainNDDecafCut (= kNumuBasicQuality && kBENKaNumuFiducial && (!kNumuContainND2017 || !kNumuContainND) && (kNumuPID2017 || kNumuNCRej))

To prevent the concat size from blowing up by adding in so many numuCC events, a special reduction method called ReduceForBENDecaf (in CAFAna/Decomp/BENDecomp.cxx) is used in the macro. This essentially keeps only the branches required by BEN for the numuCC sample; the nue sample is otherwise untouched. This keeps the concats at a manageable size and allows us to calculate the BEN scale factors on-the-fly for both nominal and systematically shifted datasets (in contrast to the Second Analysis).
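
Schematically, the macro follows CAFAna's FileReducer pattern. The sketch below is a rough outline only, and assumes the cut and reduction-step hooks are named as shown; see reduce_bendecomp.C itself for the real sequence:

#include "CAFAna/Core/FileReducer.h"
#include "CAFAna/Cuts/NueCuts2017.h"
#include "CAFAna/Cuts/BeamNueCuts.h"
#include "CAFAna/Decomp/BENDecomp.h"

#include <string>

using namespace ana;

void demo_reduce(const std::string& inDef, const std::string& outName)
{
  FileReducer reducer(inDef, outName);

  // Keep nue-selected slices plus the contained/uncontained numuCC samples
  reducer.AddSliceCut(kNueND2017DecafCut ||
                      kNumuContainNDDecafCut || kNumuUncontainNDDecafCut);

  // Prune the branches BEN doesn't need from the numuCC events
  reducer.AddReductionStep(ReduceForBENDecaf);

  reducer.Go();
}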

The interface to the reduction script is basically through submit_concat_project.sh, which lives in the NovaGridUtils package. A few extra concat scripts have been committed to NovaGridUtils to make the task simpler. The procedure is:

  • Create a comma-separated txt file with the different CAF definitions you want to concat and the number of output concat files you want to create for each of them. An example is given in NovaGridUtils/bin/extra_concat_scripts/datasets.txt (see the sketch after this list)
  • Run NovaGridUtils/bin/extra_concat_scripts/submit_multiple_concats.sh to submit a bunch of concat projects at once. The "nue2017" parameter runs reduce_bendecomp.C on the grid
    submit_multiple_concats.sh <output_dir for concat jobs> <release> "nue2017" <comma-separated txt file>
    
  • Run NovaGridUtils/bin/extra_concat_scripts/get_metadata.sh to check for metadata differences between output concats and parent CAFs. Check with #production if there's anything weird in them. (IMPORTANT!!)
    get_metadata.sh <output dir for metadata json files> <output_dir for concat jobs from previous step> "nue2017" <comma-separated txt file>
    
  • If everything is okay, then use NovaGridUtils/bin/extra_concat_scripts/cp_dropbox.sh to copy over your concat files to the FTS dropbox. You'll need novapro permissions for this step. Either request on #production or get someone who has them to copy them over for you.
    cp_dropbox.sh <output_dir for concat jobs from previous step> "nue2017" <comma-separated txt file>
    
  • Finally, once they are copied over and declared (this might take a while), run NovaGridUtils/bin/extra_concat_scripts/make_definitions.sh to, well, make the definitions (like the ones given above)
    make_definitions.sh "nue2017" <comma-separated txt file>
    

make_definitions.sh checks the comma-separated txt file for the expected number of concat files and aborts if it doesn't find that number. That could mean either that the files from the dropbox step haven't been declared yet, or that more SAM lookup parameters need to be added to nail down the concat files.
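
For reference, the comma-separated txt file is just one <definition>,<number of concats> pair per line. A hypothetical example (these definition names are placeholders, not real datasets):

prod_caf_R17-11-22-2017ana.a_fd_genie_nonswap_fhc_nova_v08_full_v1,20
prod_caf_R17-11-22-2017ana.a_nd_genie_nonswap_fhc_nova_v08_full_v1,50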


Reproducing ND Data/MC comparisons


Reproducing FD Data/MC comparisons

Sidebands


Sensitivities and fit results


Feldman-Cousins corrections


Other blessed plots