PDS Simulation and Reconstruction Working Group » History » Version 121

Diego Gamez, 02/20/2020 03:01 PM

PDS Simulation and Reconstruction Working Group

Welcome to the Redmine page for the SBN Light Detection Systems Simulation/Reconstruction and Calibration Working Group!

SBND & ICARUS Optical libraries

With the aim of converging as much as possible on the way SBND and ICARUS perform the optical simulation, people from both collaborations have worked together to identify the differences and fix them whenever possible.
Regarding the geometry, SBND has updated the definition of the PMT volumes from disks to semi-spherical windows, a la ICARUS. SBND has also updated the number of PMTs per APA window (from 6 to 5).

Regarding the voxelization scheme for the generation of the library, we have decided to sample the whole argon volume (cryostat) in voxels of 5x5x5 cm3:
X/cm = [-260.1, 260.1] in 104 voxels
Y/cm = [-271.15, 271.15] in 109 voxels
Z/cm = [-143.1, 559.6] in 141 voxels
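As an illustration of this scheme, a position inside the cryostat can be mapped to its voxel index with a small helper like the one below. This is only a sketch: the actual lookup is handled internally by LArSoft, and the function name is illustrative.

```cpp
#include <cmath>

// Voxel grid bounds (cm) and counts from the scheme above.
const double xMin = -260.1,  xMax = 260.1;  const int nx = 104;
const double yMin = -271.15, yMax = 271.15; const int ny = 109;
const double zMin = -143.1,  zMax = 559.6;  const int nz = 141;

// Map a point (cm) to a flat voxel index; returns -1 outside the cryostat.
int VoxelIndex(double x, double y, double z) {
  if (x < xMin || x >= xMax || y < yMin || y >= yMax || z < zMin || z >= zMax)
    return -1;
  int ix = static_cast<int>((x - xMin) / (xMax - xMin) * nx);
  int iy = static_cast<int>((y - yMin) / (yMax - yMin) * ny);
  int iz = static_cast<int>((z - zMin) / (zMax - zMin) * nz);
  return ix + nx * (iy + ny * iz);  // x fastest, z slowest
}
```

With these bounds the grid contains 104 x 109 x 141 = 1,598,376 voxels, consistent with a library of a few GB.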

A new library has been generated (2.2 GB) and will be included in the new sbndcode release (the version will be specified once it is included).

Propagation time

A correction to the pure double-exponential decay time of the argon scintillation emission (fast component 6 ns, slow component 1590 ns) has been developed. It accounts for propagation effects: transport and Rayleigh scattering.
To build the parametrization, 100M optical photons were generated per scintillation point (dEdx ~ 4.2 GeV) to capture the shape of the signals at very large distances. We have modeled the arrival time distribution with a Landau + Exponential function (5 parameters) at distances shorter than 3 meters and a single Landau (3 parameters) at larger distances (see Figure below).
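To illustrate the shape of this two-regime model, here is a C++ sketch. Since the exact Landau density lives in ROOT (TMath::Landau), we use the Moyal approximation as a stand-in, and all parameter values are placeholders, not the fitted values from the parametrization:

```cpp
#include <cmath>

const double kPi = 3.14159265358979323846;

// Moyal approximation to the Landau shape (a common stand-in for
// ROOT's TMath::Landau; the real parametrization uses the exact Landau).
double MoyalLandau(double t, double mpv, double width) {
  double lam = (t - mpv) / width;
  return std::exp(-0.5 * (lam + std::exp(-lam))) / std::sqrt(2.0 * kPi);
}

// Illustrative arrival-time model: Landau + exponential tail below 3 m,
// single Landau above. Parameter values are placeholders only.
double ArrivalTimePDF(double t, double distance_cm,
                      double norm, double mpv, double width,
                      double tailNorm, double tailSlope) {
  if (distance_cm < 300.0)  // Landau + Exponential (5 parameters)
    return norm * MoyalLandau(t, mpv, width) + tailNorm * std::exp(-t / tailSlope);
  return norm * MoyalLandau(t, mpv, width);  // single Landau (3 parameters)
}
```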

An example of the correlation between two of the parameters of the models (landau normalization and MPV) as a function of the distance is shown in the next Figure.

The points in black (gray) represent the Landau + Exponential (Landau) region. We have updated and extended the parametrization region by a factor of 2, from 5 meters to 10 meters, making this approach applicable also to larger detectors like DUNE (SP and DP). In the MPV plot, we can see a small deviation from the linear trend at distances larger than about 8.5 meters. This results from the decrease in statistics in the distributions used for the parametrization in the farthest regions (it is very hard to get well-populated signals at those distances). The solution will be to make a linear extrapolation for distances larger than 8.5 meters. We are currently working on including all of this in LArSoft.

Tutorial 1: Generate optical output objects (larg4 level) in SBND. This tutorial is based on the one found Here

The following directory contains the tutorial files:


Explanation of the configuration

Take a look at the file g4_fastopticalsim.fcl. It is an example extending the file standard_g4_sbnd.fcl to include the output products from the larg4 optical simulation for further study. These output objects can be either SimPhotons or SimPhotonsLite, whose structures are shown below:

  // This structure contains all the information per photon
  // which entered the sensitive OpDet volume.
  class OnePhoton {
  public:
    bool     SetInSD;
    TVector3 InitialPosition;
    TVector3 FinalLocalPosition; // in cm
    float    Time;
    float    Energy;
    int      MotherTrackID;
  };

  class SimPhotonsLite {
  public:
    SimPhotonsLite(int chan)
      : OpChannel(chan) {}

    int OpChannel;
    std::map<int, int> DetectedPhotons; // photon count per time slot

    SimPhotonsLite& operator+=(const SimPhotonsLite& rhs);
    const SimPhotonsLite operator+(const SimPhotonsLite& rhs) const;

    bool operator==(const SimPhotonsLite& other) const;
  };

  // Define an OpDet Hit as a list of OpDet photons which were
  // recorded in the OpDet volume.
  class SimPhotons : public std::vector<OnePhoton> { /* ... */ };

As can be seen, SimPhotons objects save detailed information about each detected photon, while SimPhotonsLite objects reduce memory and file size at the price of keeping only the number of photons per time slot. The kind of object you want to save in your simulation is specified in the configuration file by the line:

services.LArG4Parameters.UseLitePhotons: true  # false to save SimPhotons
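To make the time-slot bookkeeping concrete, here is a minimal stand-in for the SimPhotonsLite merging behaviour. This is only a sketch; the real class lives in lardataobj:

```cpp
#include <map>

// Minimal stand-in for sim::SimPhotonsLite to illustrate the time-slot
// bookkeeping: DetectedPhotons maps a time tick to a photon count.
struct LitePhotons {
  int OpChannel;
  std::map<int, int> DetectedPhotons;

  // Merging adds the counts tick by tick, as operator+= does in lardataobj.
  LitePhotons& operator+=(const LitePhotons& rhs) {
    for (const auto& [tick, n] : rhs.DetectedPhotons)
      DetectedPhotons[tick] += n;
    return *this;
  }
};
```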

There are several other key pieces required to make the photon simulation work. The first is to include the optical detector fhicl files:

#include "photpropservices_sbnd.fcl" 
#include "opticaldetectormodules_sbnd.fcl" 

Then we choose to use the v01_03 geometry

# SBND geometry with PMTs + Bars + ARAPUCAS + Foils
services.Geometry.GDML: "sbnd_v01_03.gdml" 
services.AuxDetGeometry.GDML: "sbnd_v01_03.gdml" 

and set up the corresponding "PhotonVisibilityService" which handles either the photon library mode

services.PhotonVisibilityService: @local::sbnd_library_vuv_vis_prop_timing_photonvisibilityservice

or the semi-analytic mode

services.PhotonVisibilityService: @local::sbnd_Nhits_vuv_vis_prop_timing_photonvisibilityservice

The optical physics processes need to be turned on in Geant4:

# Custom Physics List
services.LArG4Parameters.UseCustomPhysics: true
services.LArG4Parameters.EnabledPhysics: [ "Em", "FastOptical", "SynchrotronAndGN", "Ion", "Hadron", "Decay", "HadronElastic", "Stopping", "NeutronTrackingCut" ]

For detectors that need to load the TPB spectra information, you need the line

services.LArPropertiesService.LoadExtraMatProperties: true

And finally the user can set the model for the scintillation yield as a function of energy deposited by particle type: ScintillationByParticleType (defined in larproperties.fcl) or G4EmSaturation.

services.LArPropertiesService.ScintByParticleType: false

All these pieces are set up ahead of time in the sim fhicl file.

Running the tutorial

To begin, just try simulating an event. As input to the Geant4 simulation you can use any of the particle-gun examples provided in sbndcode; in this example we will use prodsingle_mu_3GeV_uniformupstream_fixangleforwardgoing.fcl

lar -c prodsingle_mu_3GeV_uniformupstream_fixangleforwardgoing.fcl -n 1 -o prodsingle_mu_3GeV.root
lar -c g4_fastopticalsim.fcl -n 1 -s prodsingle_mu_3GeV.root -o prodsingle_mu_3GeV_G4.root

The file prodsingle_mu_3GeV_G4.root contains the ART event records, including the optical ones (SimPhotonsLite in this example). Another hists*.root file has been generated containing some information related to the Geant4 simulation.

Tutorial 2: How to generate digitized photon detector signals

In this tutorial, we will take simulated photons, like those produced in the first tutorial, and pass them through a simulation of the SiPM/PMT and front-end electronics which "digitize" the output to produce signals which look more like the output of a real detector. The fhicl file which digitizes the photon signals is called run_digitizer_sbnd.fcl and can be found in the tutorial directory /sbnd/app/users/gamez/SBND/Tutorial.

The digitization module lives in sbndcode/OpDetSim. It is called by this line in the producers section of the fhicl file:

opdaq:     @local::sbnd_opdetdigitizer

When we run the digitisation, we need to indicate again which output products we are going to analyse:

physics.producers.opdaq.UseLitePhotons: 1 # 1: Use SimPhotonsLite; 0: Use SimPhotons

We can examine the output of the digitizer with a pre-built analyzer module, called from the analyzers section:

 wvfanalyzer:     @local::wvf_ana

Now, run the fhicl file and let's look at the output. This fhicl file uses the output of the first tutorial as input, which you specify with the -s <name.root> option.

lar -c run_digitizer_sbnd.fcl -s prodsingle_mu_3GeV_G4.root -o prodsingle_mu_3GeV_G4_OpDet.root 

As usual, together with the ART event it generates the output of the Analysis module digitizer_sbnd_OpDet-*_hists.root. Open up this hist file in root, and let's look at the contents of the wvfanalyzer directory:

root [0] 
Attaching file digitizer_sbnd_OpDet-20190912T084226_hists.root as _file0...
(TFile *) 0x7fbf5e6803f0
root [1] .ls
TFile**        digitizer_sbnd_OpDet-20190912T084226_hists.root    
 TFile*        digitizer_sbnd_OpDet-20190912T084226_hists.root    
  KEY: TDirectoryFile    wvfanalyzer;1    wvfanalyzer (wvfAna) folder
root [2] wvfanalyzer->cd()
(bool) true
root [3] .ls
TDirectoryFile*        wvfanalyzer    wvfanalyzer (wvfAna) folder
 KEY: TH1D    event_1_opchannel_6;1    
 KEY: TH1D    event_1_opchannel_7;1    
 KEY: TH1D    event_1_opchannel_8;1    
 KEY: TH1D    event_1_opchannel_9;1    
 KEY: TH1D    event_1_opchannel_10;1    
 KEY: TH1D    event_1_opchannel_11;1    
 KEY: TH1D    event_1_opchannel_12;1    
 KEY: TH1D    event_1_opchannel_13;1    
 KEY: TH1D    event_1_opchannel_14;1    
 KEY: TH1D    event_1_opchannel_148;1    
 KEY: TH1D    event_1_opchannel_149;1    
 KEY: TH1D    event_1_opchannel_178;1    
 KEY: TH1D    event_1_opchannel_179;1    
root [4] 

Each histogram corresponds to one digitized waveform. The naming convention is:

  • Event: The simulated event number (in this case, there is only 1)
  • OpChannel: The channel the waveform was read out on. In SBND we will have different channel types (PMTs and ARAPUCAS), and they can be sensitive to different light components (i.e. coated or uncoated PMTs). The digitisation module knows all this information and works accordingly. The mapping between channels and devices in the v01_03 geometry can be found in the file /sbndcode/OpDetSim/sbndPDMapAlg.cxx.
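If you need to process many of these histograms programmatically, the naming convention can be parsed with a small helper like this (illustrative, not part of the module):

```cpp
#include <cstdio>

// Parse the "event_<N>_opchannel_<M>" histogram naming convention.
// Returns true on success, filling event and channel.
bool ParseWvfName(const char* name, int& event, int& channel) {
  return std::sscanf(name, "event_%d_opchannel_%d", &event, &channel) == 2;
}
```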

Let's take a look at two of the channels.

root [4] event_1_opchannel_9->Draw("hist")
root [4] event_1_opchannel_148->Draw("hist")

You should see a charge vs. time distribution looking something like this for the PMTs (Left) and for the ARAPUCAS (Right).

The PMT features included are electron transit time, transit time spread, saturation, baseline, dark noise, baseline noise, and pre-trigger. The ARAPUCA features included are cross talk, detection time (optical window + collection time), baseline, baseline noise, and dark noise rate. Details of the digitisation parameters can be seen in digi_pmt_sbnd.fcl and digi_arapuca_sbnd.fcl. For more information about the digitisation module you can look here and here.
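As a toy illustration of two of these ingredients (baseline noise and transit-time spread), the waveform construction could be sketched as below. The function and its parameter values are illustrative only; the actual implementation is the sbnd_opdetdigitizer module, with parameters from digi_pmt_sbnd.fcl and digi_arapuca_sbnd.fcl.

```cpp
#include <vector>
#include <random>

// Toy waveform builder: constant baseline with Gaussian baseline noise,
// and a Gaussian transit-time spread applied to each photon arrival.
// Each photon subtracts one count (negative-going PMT pulse, toy model).
std::vector<double> ToyDigitize(const std::vector<double>& photonTimes_ns,
                                double baseline, double noiseSigma,
                                double ttsSigma_ns, double tick_ns,
                                int nTicks, unsigned seed = 42) {
  std::mt19937 gen(seed);
  std::normal_distribution<double> noise(0.0, noiseSigma > 0 ? noiseSigma : 1.0);
  std::normal_distribution<double> tts(0.0, ttsSigma_ns > 0 ? ttsSigma_ns : 1.0);

  std::vector<double> wvf(nTicks, baseline);
  if (noiseSigma > 0)
    for (auto& s : wvf) s += noise(gen);      // baseline noise
  for (double t : photonTimes_ns) {
    double smeared = t + (ttsSigma_ns > 0 ? tts(gen) : 0.0);  // transit-time spread
    int tick = static_cast<int>(smeared / tick_ns);
    if (tick >= 0 && tick < nTicks) wvf[tick] -= 1.0;
  }
  return wvf;
}
```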

The existing analysis module is currently very simple: it only saves all the digitised signals as TH1D histograms in a root file. But it is an ideal starting point for building a more complex one, depending on your analysis needs.

Note1: A preliminary version of the trigger has been implemented recently (see). It will be covered in a future tutorial.
Note2: For the SBN workshop on 16-20 September '19, in addition to the specific productions, there are two extra light-simulation-oriented samples, with 10k events each, of i) down-going muons at an almost fixed x-coordinate and ii) stopping muons. The simulations are at the Geant4 level, which is the CPU- and memory-taxing phase, and the output products generated are SimPhotons. You can find the files in /pnfs/sbnd/persistent/users/gamez/Downgoing and /pnfs/sbnd/persistent/users/gamez/Stopping respectively. Enjoy!

Tutorial 3: Semi-Analytic mode: How to generate the correction curves

In this tutorial we are going to explain step by step how to build the Gaisser-Hillas correction curves for our VUV scintillation light simulation. Remember that they depend on (i) the Rayleigh scattering length (λ_RS), (ii) the detector size and (iii) the optical properties of the detector materials (in our case, mainly of the detector walls and the field cage). The correction curves are simply the ratio between the real (i.e. full Geant4) signals and the geometric acceptance (projected solid angle Ω/cosθ) between the scintillation point and the optical detector sensitive surface. We have identified two main dependencies for these curves: the distance d and the offset angle θ between the scintillation point and the center of the sensitive optical detector. For an accurate light simulation, when generating our correction curves we need to cover as much of the phase space (d, θ) as we can.
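For intuition on the geometric-acceptance denominator, the simplest case (a disk detector viewed on-axis, i.e. θ = 0, where Ω/cosθ = Ω) can be sketched as follows; the functions are illustrative helpers, not the macro's code:

```cpp
#include <cmath>

const double kPi = 3.14159265358979323846;

// On-axis solid angle of a disk of radius r (cm) seen from distance d (cm).
double DiskSolidAngleOnAxis(double d, double r) {
  return 2.0 * kPi * (1.0 - d / std::sqrt(d * d + r * r));
}

// Expected geometric hits out of nPhotons generated isotropically:
// the fraction of the full sphere (4*pi) subtended by the detector.
double GeometricHits(double nPhotons, double d, double r) {
  return nPhotons * DiskSolidAngleOnAxis(d, r) / (4.0 * kPi);
}
```

The correction at a given (d, θ) is then the full-Geant4 photon count divided by this geometric expectation.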

Note: All the files mentioned in this tutorial can be found in

Step 1: Prepare

Ideally we would run the full Geant4 simulation for scintillation points covering the whole active volume of our detector and save the number of photons arriving at each of our optical sensitive devices. But of course this is not efficient from the simulation point of view. Fortunately, what we really need is to simulate a sample of points covering the full (or as much as we can of the) phase space (d, θ). To do this, we recommend selecting the optical detector position (x, y, z) closest to the centre of the photocathode plane as a reference point. From this position, we will move in steps of Δx (drift), Δy (height) and Δz (beam direction) to define our subsample of scintillation points. To cover the full range of drift distances we can move from our reference point in steps of Δx = 10-20 cm. To cover the region of small θ values, we can move from our reference point in 3-5 steps of Δy or Δz = 5-10 cm. To include border effects we will also need to generate scintillation points at different distances from the detector edges. Again, to cover the full θ range we recommend moving in the y-z directions from the centres of the different optical detectors and repeating the procedure described before (see Figure below). The number of points and the step size between them depend on how accurate the user wants to be.

In the Figure above, you can see an example for the LDS in SBND: the circles (rectangles) represent the PMTs (ARAPUCAS) and the yellow stars are an example of the sample of scintillation points (projected on the y-z plane) selected for the generation of the correction curves. In this example, if we choose Δx = 20 cm, in SBND we will need to generate 10 (drift steps) x 12 (reference channels) x 3 (points around each reference) = 360 scintillation points. Due to the large attenuation of the light signals with distance, we will need to generate a large number of photons at each point; we have been using typically 10-100M photons. Note that we will be using the signal in all the optical detectors, not only the ones we have used as references to choose our sample of scintillation points.

Step 2: Grid Jobs

In our jobs we will use the Lightsource Event Generator in its File-Mode version (i.e. SourceMode=0), where the light source is placed event by event in locations specified by an input text file. The filename must be specified via a parameter in the module's parameter set or FHiCL file (light_generator_sbnd.fcl in our example):

physics.producers.generator.SourceMode: 0
physics.producers.generator.SteeringFile:  "./myLightSourceSteering.txt" 

The script generates these input files (here named myLightSourceSteering_XX.txt) with the scintillation-point information we need for our grid jobs. The user only needs to define the (x, y, z) positions to be simulated as arrays of coordinates (in shell). Following the example above we have:

# Arrays with the (x, y, z) for the scintillation points 
x_=( 10. 30. 50. 70. 90. 110. 130. 150. 170. 190. )
y_=( 40. 50. 60. 90. 100. 110. 170. 180. 190. )
z_=( 285. 345. 405. 465. )

In the script there is another user variable to define the number of photons to be generated in each job:

Note that, as mentioned before, we will need to simulate a large number of photons to have high statistics in our signals, especially at large distances and offset angles. If the number of photons per job is very large (> 1M) we might have memory problems in our jobs. For that reason, an easy solution is to run our jobs with multiple events (specified in our light_generator_sbnd.fcl file):
  maxEvents:   100     # Number of events to create

With the previous settings we will be generating 100x100000 = 1.e7 photons per scintillation point.
As an example, the output file myLightSourceSteering_0.txt from the script with the above settings will look like:
 x        y          z         t        dx      dy      dz      dt        p         dp             n

 10.     40.        285.      0.0       0.0     0.0     0.0     0.0     9.69       0.25          100000

This is the input file format for the SteeringFile in the Lightsource Event Generator.
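A steering file for a grid of points could be produced with a small helper like the one below. This is only a sketch: in practice the script splits the points across per-job steering files rather than writing the whole grid into one, and the photon energy p = 9.69 eV (dp = 0.25 eV) is taken from the example row above.

```cpp
#include <cstdio>

// Write steering lines in the format shown above
// (x y z t dx dy dz dt p dp n), one line per grid point.
// Returns the number of point lines written, or -1 on failure.
int WriteSteering(const char* fname, const double* xs, int nx,
                  const double* ys, int ny, const double* zs, int nz,
                  int photonsPerEvent) {
  std::FILE* f = std::fopen(fname, "w");
  if (!f) return -1;
  std::fprintf(f, " x    y    z    t   dx  dy  dz  dt   p     dp    n\n");
  int lines = 0;
  for (int i = 0; i < nx; ++i)
    for (int j = 0; j < ny; ++j)
      for (int k = 0; k < nz; ++k, ++lines)
        std::fprintf(f, " %.1f %.1f %.1f 0.0 0.0 0.0 0.0 0.0 9.69 0.25 %d\n",
                     xs[i], ys[j], zs[k], photonsPerEvent);
  std::fclose(f);
  return lines;
}
```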
We will also use the SimPhotonCounter module to generate our output files with the photon information as TTree objects. In particular, we will be interested in saving:
physics.producers.generator.FillTree: true
physics.analyzers.pmtresponse.MakeAllPhotonsTree: true

Also note that when we calculate the correction curves no efficiencies must be applied at this level (we want all photons arriving at our detector windows), so we need to be sure that:
services.LArG4Parameters.OpticalParamModels: ["TransparentPlaneAction"]
services.LArPropertiesService.ScintPreScale: 1

If we use for our job submission, we will need to include the script in our scint_points_sim.xml configuration file with the information about the grid job.

<stage name="gen">

Each user needs to modify this file to set its variables properly (i.e. larsoft version, number of jobs, input & output file paths, ...). For example, the number of jobs/events in our example case would be:
<numevents>36000</numevents> <!-- Project size = events per job x number of jobs-->

Once we have modified properly all the needed files, we can submit our grid jobs: --xml scint_points_sim.xml --stage gen --submit

As mentioned before, we have saved the PhotonsGenerated and AllPhotons trees. Note that these trees save one entry per scintillation photon. In our case, for each run (job) all the photons are generated at a single point, so the information saved in the PhotonsGenerated tree is redundant, making our files unnecessarily large. We are only interested in the (x, y, z) point coordinates (a single entry of the X, Y and Z branches) and the number of entries in this tree (we need to know the number of generated photons at each position), and this is what we will copy/save, together with the AllPhotons tree, in our new output files, optimising our disk use and file management. An illustration of these two output formats is shown in the next figures:

This "reformatting" is made using the file new_format.C. The input to this macro is a text file with the list of PATH/FileNames of the output files from the grid.

Step 3: Finalize

Once we have our output files, we will analyse them with the macro Semi_Mode_Gen.C. This macro reads two text files, one containing the LDS information (id x y z), and another with the list of the output file names.

  // path to the directory where the Output files are 
  string path_files = "/PATH_TO_INPUT_FILES/";
  // path + file containing the LDS id and positions (x, y, z)
  string positions = path_files + "opch_info.txt";
  // path + file containing the list of Output file names
  string lista_files = path_files + "list_file_names.txt";

Other important information you need to provide is the size of your optical detector sensitive windows, the (y, z) coordinates of the center of your detector active volume, and the LAr absorption length used in the Geant4 simulation. For the case of SBND these are:

  // width and height in cm of arapuca active window    
  double arapuca_w = 9.3;
  double arapuca_h = 46.8;
  // 8" PMT radius
  double b = 8*2.54/2.;
  // Y-Z coordinates of the active volume center
  const double centerYZ[2] = {0., 250.};
  // LAr absorption length in cm
  // This needs to match with the value used in the full-geant4 simulation
  const double L_abs = 2000.; // in cm

The macro is able to calculate the correction curves for the semi-analytic fast simulation mode for optical channels with disk/dome and rectangular shapes. The user needs to specify the shape of the optical detectors via the boolean variable IsRectangular (i.e. true for ARAPUCAS and false for PMTs). In "hybrid" detectors like SBND you will need to add to the macro a way to select the devices you want to use. Our corrections are built by fitting, with GH functions, N angular-binned profiles of True_Signal/Rec_Signal vs. distance. To do that we will need to define some related parameters. In our example:

  bool IsRectangular = false;

  // offset-angle theta binning
  const int N = 9;
  double delta_angulo = 90./N;

  // distance range and step for the profiles to fit with GH
  double d_min  = 0.;
  double d_max  = 500.;
  double step_d = 25.;

To account for the border effect we separate the corrections into "crowns" in the Y-Z plane at different distances to the center. The user must also define how many crowns there will be, and how big, for the estimation of the different corrections. For example:

//Center distance bins
  const int M = 8;
  double range_d = 320;
  double delta_d = range_d/M;

Note that the user defines the number of crowns, but the number of points in them will depend on our simulated scintillation points. This means that some crowns may end up being empty or sparsely populated, so take care with how you define them.
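The crown assignment itself amounts to binning the Y-Z distance to the centre; a minimal sketch consistent with the parameters above (the function name is illustrative):

```cpp
#include <cmath>

// Assign a scintillation point to a "crown" by its distance in the Y-Z
// plane to the active-volume centre. M crowns of width range_d/M;
// points beyond the last crown are clamped into it.
int CrownIndex(double y, double z, double centerY, double centerZ,
               double range_d, int M) {
  double d_yz = std::hypot(y - centerY, z - centerZ);
  int idx = static_cast<int>(d_yz / (range_d / M));
  return (idx < M) ? idx : M - 1;
}
```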

Finally, the user might need to fine-tune the fit parameters (i.e. initial values, parameter limits, fitting options) to optimise the performance of the fit for each particular case. Note that we have fixed the value of "parameter 4" (representing the starting point on the x-axis) of the GH function. The reason is that this parameter is strongly correlated with "parameter 3" (representing the "width" of the function), and this option simplifies convergence with minimal impact on the fit accuracy. We have also noticed that typically "parameter 3" varies very little with the angle, so in our approach, for simplicity, we have fixed this parameter as well.

At this point we are ready to run the macro and get our results.

root Semi_Mode_Gen.C

As output we will obtain the plots and parameters we need to define our correction curves:

In the figures above you can see the different profiles with their fits to a GH function. Different colours represent different angular bins, and different canvases correspond to different "crown" distances to the center (or border). This is all we need to model our VUV light component. From the fits we study the dependency of the GH "parameter 1" (N_max) and "parameter 2" (d_max) on the distance to the Y-Z plane center (d_yz):

The different colours represent the different angular bins, as in the previous figure. We can see that the dependency of N_max and d_max on d_yz is quite linear. If we plot the slopes of the linear fits at each angle we obtain the following figure:

We find that the slopes are compatible within uncertainties, and for the sake of simplicity we will make that assumption and use the same slope for all angles.
At this point we have all we need for our LArSoft simulation using the semi-analytic mode: the Gaisser-Hillas parameters at the (y, z) central value and the two slopes for the N_max and d_max corrections with d_yz.

Corrections to plug into LArSoft PhotonVisibilityServices

 GH p1:  1.29463, 1.29108, 1.3003, 1.26757, 1.2553, 1.16721, 0.950044, 0.79044, 0.5831
 GH p2:  91.5746, 92.4256, 91.7511, 94.523, 94.8573, 109.03, 141.138, 142, 124.494
 GH p3:  50, 50, 50, 50, 50, 50, 50, 50, 50
 GH p4:  -250, -250, -250, -250, -250, -250, -250, -250, -250

 slope1: -3.08584e-05
 slope2: -0.0242638
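Putting it all together, these numbers define a Gaisser-Hillas correction as a function of distance, with p1 and p2 corrected linearly in d_yz. The sketch below is one plausible reading of the scheme described above (p1 = N_max, p2 = d_max, p3 = width, p4 = starting point), not the PhotonVisibilityService code itself:

```cpp
#include <cmath>

// Gaisser-Hillas function with the parameter meanings listed above:
// p1 = N_max, p2 = d_max, p3 = width (fixed at 50), p4 = start (fixed at -250).
double GaisserHillas(double x, double p1, double p2, double p3, double p4) {
  return p1 * std::pow((x - p4) / (p2 - p4), (p2 - p4) / p3)
            * std::exp((p2 - x) / p3);
}

// Correction at distance d (cm) for a given angular bin, with the linear
// d_yz dependence applied to p1 and p2 via the common slopes.
double Correction(double d, double d_yz, int thetaBin) {
  static const double p1[9] = {1.29463, 1.29108, 1.3003, 1.26757, 1.2553,
                               1.16721, 0.950044, 0.79044, 0.5831};
  static const double p2[9] = {91.5746, 92.4256, 91.7511, 94.523, 94.8573,
                               109.03, 141.138, 142.0, 124.494};
  const double p3 = 50.0, p4 = -250.0;
  const double slope1 = -3.08584e-05, slope2 = -0.0242638;
  return GaisserHillas(d, p1[thetaBin] + slope1 * d_yz,
                          p2[thetaBin] + slope2 * d_yz, p3, p4);
}
```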