Muon Monitors g4numi

This is a living document originally intended for personal use, so there may be some rough edges and room for improvement, but hopefully this will help anyone interested in this work skip past my original obstacles. -- Tyler Rehak, February 2019

Following the steps below should lead to plots of data from a g4numi simulation of the muon monitors.

Get the current MINERvA branch of g4numi. Instructions here:
I have my working directory on the gpvms at /minerva/app/users/trehak/G4NuMI-minerva

Run some grid jobs (see the g4numi instructions above) with interesting beam parameters to generate files in the format g4numiv6_*.root. Note that all of the procedures here support multiple jobs per run.
I store these ROOT files in /pnfs/minerva/persistent/users/trehak/flux/test (the default g4numi grid output location). My standard is 20 jobs/run.

Now we need to convert the neutrino .root files back to their muon parent particles. I use a script, nu2mubatch.cxx, to do this; you can find it at /minerva/app/users/trehak/Nu2Mu/nu2mubatch.cxx
You'll need to set the input/output directories on lines 3 and 4 of nu2mubatch.cxx, possibly adjust the jobNum limits on line 5, and edit the file names on lines 6 and 9 as needed.
Run the script in ROOT using

.x nu2mubatch.cxx("#")

where # is the run number. The output of nu2mubatch.cxx will be muon_*.root files.
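If you are processing several runs, the ROOT invocation can be scripted from the shell. Here's a minimal sketch that builds the batch-mode commands, assuming nu2mubatch.cxx sits in the current directory and using ROOT's standard -b (batch) and -q (quit when done) flags; the run numbers are illustrative.

```python
# Build batch-mode ROOT commands that run nu2mubatch.cxx once per run.
# Run numbers here are placeholders; substitute your own.
runs = ["0012", "0013"]

def nu2mu_command(run):
    # root -b -q executes the macro in batch mode and exits afterwards;
    # the macro takes the run number as a string argument.
    return "root -b -q 'nu2mubatch.cxx(\"%s\")'" % run

for run in runs:
    print(nu2mu_command(run))
```

You could pass each command to subprocess.run(..., shell=True) instead of printing, but printing lets you inspect the commands first.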

To simplify upcoming grid submissions, I merge these muon files into roughly 50MB files; there is a pesky file size limit for grid submissions if you merge too many files together. From my standard 20 jobs/run, I create 4 merged files per run using hadd. I do this within /pnfs/minerva/persistent/users/trehak/flux/test/nu2mu using, for example with run 0012, the following snippet.

hadd muon_me000z200i_merged_0_0012.root  muon_me000z200i_[0-4]_0012.root;
hadd muon_me000z200i_merged_1_0012.root  muon_me000z200i_[5-9]_0012.root;
hadd muon_me000z200i_merged_2_0012.root  muon_me000z200i_1[0-4]_0012.root;
hadd muon_me000z200i_merged_3_0012.root  muon_me000z200i_1[5-9]_0012.root;
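If you have many runs, you can generate these hadd commands instead of typing them. A sketch, assuming the 20-jobs-per-run standard merged in groups of 5 and the file naming convention shown above:

```python
# Generate the hadd commands that merge 20 per-job muon files into
# 4 merged files (5 inputs each), following the
# muon_me000z200i_<job>_<run>.root naming convention.
run = "0012"
jobs_per_run = 20
files_per_merge = 5

commands = []
for chunk, start in enumerate(range(0, jobs_per_run, files_per_merge)):
    inputs = ["muon_me000z200i_%d_%s.root" % (j, run)
              for j in range(start, start + files_per_merge)]
    out = "muon_me000z200i_merged_%d_%s.root" % (chunk, run)
    commands.append("hadd %s %s" % (out, " ".join(inputs)))

for cmd in commands:
    print(cmd)
```

The explicit file lists here expand to the same inputs as the shell globs ([0-4], [5-9], etc.) in the snippet above.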

Now we need to feed these muon_*.root files into g4numi so we can simulate the particles at the muon monitors. I originally created a separate build of g4numi for this purpose since I wasn't sure whether any changes would be needed. One possibly significant change was commenting out the analysis->FillMeta(); call, which was causing issues with the file output. My build directory is located at /minerva/app/users/trehak/g4numinu2mu/g4numi

This step will input the muon_*.root files from before into g4numi and output hadmmNtuple_*.root files with data from the muon monitors.
Just as before, we will use grid jobs to run g4numi, but now we need to pass in the muon_*.root files. I use a modified submission script. Here's an example, again for run 0012, using 4 grid jobs, one for each of the merged muon_*.root files.

python --infile /pnfs/minerva/persistent/users/trehak/flux/test/nu2mu/muon_me000z200i_merged_0_0012.root 
                             --infilename muon_me000z200i_merged_0_0012.root --run_number 120;
python --infile /pnfs/minerva/persistent/users/trehak/flux/test/nu2mu/muon_me000z200i_merged_1_0012.root 
                             --infilename muon_me000z200i_merged_1_0012.root --run_number 121;
python --infile /pnfs/minerva/persistent/users/trehak/flux/test/nu2mu/muon_me000z200i_merged_2_0012.root 
                             --infilename muon_me000z200i_merged_2_0012.root --run_number 122;
python --infile /pnfs/minerva/persistent/users/trehak/flux/test/nu2mu/muon_me000z200i_merged_3_0012.root 
                             --infilename muon_me000z200i_merged_3_0012.root --run_number 123;
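The four submissions follow a simple pattern, so they can also be generated. A sketch, with two caveats: the submission script name is omitted after "python" just as in the example above, and the run_number convention (run number × 10 + chunk index, so run 0012 gives 120-123) is an inference from that example.

```python
# Generate the grid submission command lines for one run's merged files.
# NOTE: the submission script name after "python" is elided, matching the
# example in the text; the run_number = run*10 + chunk rule is inferred.
base = "/pnfs/minerva/persistent/users/trehak/flux/test/nu2mu"
run = "0012"
n_chunks = 4

commands = []
for chunk in range(n_chunks):
    name = "muon_me000z200i_merged_%d_%s.root" % (chunk, run)
    run_number = int(run) * 10 + chunk
    commands.append(
        "python --infile %s/%s --infilename %s --run_number %d"
        % (base, name, name, run_number))

for cmd in commands:
    print(cmd)
```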

The output is hadmmNtuple_*.root files in /pnfs/minerva/persistent/users/trehak/flux/test (the default g4numi grid output location).

The final step is merging the hadmmNtuples with hadd as before and plotting the contents in ROOT. I store my final merged files in /minerva/data/users/trehak/hadmmNtuple. Note that when plotting, you'll want to apply limits to ignore the default sentinel values (usually -99999) that fill most of the data.
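In ROOT you would typically fold this cut into the TTree::Draw() selection string (e.g. a selection like "x > -99999"). The idea behind the cut, sketched in plain Python with made-up values:

```python
# Drop the -99999 sentinel entries before histogramming.
# The values below are invented purely for illustration.
SENTINEL = -99999

values = [1.2, -99999, 3.4, -99999, 0.7]
good = [v for v in values if v > SENTINEL]
print(good)
```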