Dk2nu GENIE FluxDriver


This class presents itself as a concrete instance of the genie::flux::GFluxI interface. The purpose of this class is to read and interpret flux (really, hadron/muon decay) files generated by the various beam simulations. These simulations include gnumi (the initial GEANT3-based simulation), g4numi (a Geant4-based simulation), flugg (a combination of a modern FLUKA and Geant4), g4lbne, and a reinterpretation of the FNAL Booster beam simulation.

The most comprehensive document on the basics of these beam simulation ntuples is MINOS-DocDB-6316: The NuMI Beam Simulation with Flugg. It is important to note that while entries in the ntuple represent decays of hadrons or muons, they do not reflect every decay: generally there is an importance weight attached to each entry that must be accounted for. Also, while each ntuple entry has information about the neutrino from a randomly oriented decay, and possibly decays in which the neutrino ray passes through two fixed points in space, these are not particularly useful for any individual experiment. The random decays generally miss any detector, and the two fixed points (one "near", one "far") fail to illuminate the entirety of the detector (if they are even relevant for that detector, e.g. NOvA vs. MINOS FarDet).

The GDk2NuFlux code uses information found in the ntuple to generate neutrino rays over a flux window, evaluating new weights and energies for an ntuple entry based on the position of a random point on the flux window in the beam coordinate system. This approach was first described in MINOS-DocDB-109. The flux window should normally be chosen to shadow the entirety of the detector, accounting for the orientation of the beam with respect to the detector and for any divergence in the beam.

Using the code

The basic use of the code is to create and configure an object:

  genie::flux::GDk2NuFlux* gdk2nu = new genie::flux::GDk2NuFlux();
  GFluxI* fdriver = dynamic_cast<GFluxI*>(gdk2nu);

and then make repeated calls to generate neutrino rays:
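The remaining steps might look like the following sketch (the file pattern and configuration name here are placeholders; the accessor methods are those of the generic GFluxI interface):

```cpp
  gdk2nu->LoadBeamSimData("flux_files/g4numi*.dk2nu.root", "MINOS-NearDet");
  while ( fdriver->GenerateNext() ) {
    int    pdg = fdriver->PdgCode();                 // neutrino flavor
    double wgt = fdriver->Weight();                  // ~1 after de-weighting
    const TLorentzVector& p4 = fdriver->Momentum();  // ray 4-momentum
    const TLorentzVector& x4 = fdriver->Position();  // ray start position
    // ... hand the ray off to the event generation machinery ...
  }
```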

The two strings passed to LoadBeamSimData() are a pattern specifying the flux files to use and the name of a parameter set in an XML file.

specifying flux file names

The input files are expected to be ROOT ntuple files of the dk2nu format. Wildcards (normal shell globs, not regexp style) can be used.

XML parameter sets

Experiment-specific configurations are encapsulated in <param_set> sections of an XML file; multiple <param_set name="myname"> instances can exist in a single file as long as each has a unique name. By default GDk2NuFlux will look for the file GDk2NuFlux.xml, but this can be overridden via the environment variable GDK2NUFLUXXML. The file must be found in a location defined by the environment variable GXMLPATH; include "." in that path if you want the current directory searched before the default locations.

Configuring an XML param_set

An example parameter set might look like:

  <param_set name="MINOS-FarDet">
    <units> m </units>
    <beamdir type="series">
      <rotation axis="x" units="rad"> +0.057184957 </rotation>
    </beamdir>
    <beampos> ( 0, 0, 0 ) = ( 0, 0, 735340.0 ) </beampos>
    <window>
      <point coord="det"> -7.000, -9.474, -13.4796 </point>
      <point coord="det">  7.000, -9.474, -13.4796 </point>
      <point coord="det"> -7.000,  7.453, -14.4487 </point>
    </window>
    <enumax> 120. 1.05 1.05 2500000 </enumax>
    <reuse> 10 </reuse>
    <upstreamz> -400.0 </upstreamz>
  </param_set>

The string "MINOS-FarDet" identifies the configuration; this is the string that would get passed as the second argument to LoadBeamSimData().

The top level entities in a <param_set> are described below. If an entity is specified more than once in a parameter set, the last version trumps any prior settings for that parameter set. One will find that in the default file the <param_set name="MINOS-NearDet"> specifies the <beamdir> and <beampos> tags multiple times: this is for pedagogical purposes. All instances give (to within precision) the same result but are there to demonstrate the various alternatives. Other parameter sets do not need to duplicate this; pick one specification and stick with it.

The <beamdir> and <beampos> tags are used together to define the coordinate transformation between the user (i.e. detector) system and the beam system. The <beamdir> tag must come before the <beampos> tag (n.b. to self: code should rigorously enforce this). These two tags and the <window> are the most critical for an individual detector setup.


This controls how noisy GDk2NuFlux is when interpreting the <param_set> and is mostly for debugging purposes.


The <units> tag identifies the user coordinate system length unit (e.g. "cm", "m" or "meter"). The default is meter (the GENIE standard unit); the beam flux files are assumed to be in cm. This should match the units used by the detector geometry. This feature has not been extensively tested and it is probably best to stick with meter if at all possible.


The <beamdir> defines the rotation matrix between the beam and user coordinates. There are three ways of specifying this; use the one that is most convenient for the particular setup in question. If in doubt about the result use the PrintConfig() method to output the final result and verify that it is as you expect.

The first approach is to specify the transformation as a series of rotations around various axes. Getting this right is easiest if the transformation is a single rotation about one of the primary axes; going beyond this often leads to confusion about order and the correct sign. An example might look like:

    <beamdir type="series">
       <rotation axis="x" units="rad"> -0.0582977560 </rotation>
       <!-- additional rotations might follow -->
    </beamdir>

Rotation units for this tag can be either deg or rad, and each <rotation> specifies an axis about which to rotate. A note about the rotation sign: the MINOS NearDet sees a downward-going beam (mixing y and z), which is specified by a negative value as shown here to give the beam a negative y component in the user frame.

The second approach is to specify the beam coordinate axes in the user system as a set of (θ,φ) pairs, à la GEANT3:

    <beamdir type="thetaphi3" units="deg">
      ( 90,        0 )  ( 86.65978462612, 90 ) ( 3.34021537388, 270 )
    </beamdir>

Punctuation (i.e. parentheses and commas) is not significant and is given only for clarity; tags of this type need six values.

The third and most unambiguous specification is to give the complete 3x3 matrix that would transform a direction vector in beam coordinates into one in user coordinates, à la Geant4:

    <beamdir type="newxyz">
     [ 1  0               0              ]
     [ 0  0.998301167046 -0.058264739543 ]
     [ 0  0.058264739543  0.998301167046 ]
    </beamdir>

A column vector of directions in beam coordinates, when multiplied from the left by this rotation matrix, gives the direction in the user system, i.e. dir_user = Rot3x3 * dir_beam. Again, punctuation and layout are irrelevant and shown only for clarity; tags of this type need nine values.


The <beampos> specification is the second piece needed to complete the coordinate transformations. There are two variants. If specified with three values, these are the position of the beam origin in user coordinates:

    <!-- position of beam origin in user coords -->
    <beampos> 1.4828, 60.629225, -1034.72755 </beampos>

Positions are transformed between coordinate systems by xyz_user = scale * ( Rot3x3 * xyz_beam ) + xyz_beampos, where scale is the ratio of user units (e.g. m) to beam units (cm).

Alternatively it is sometimes easier to know a single point in space in both systems (6 values):

    <!-- position given as ( user coords ) = ( beam coords )                   -->
    <!-- both should be in user coord units, though                            -->
    <!-- convenient for MINOS-NearDet because it is how the survey is reported -->
    <!-- XML config must have already set the rotation matrix                  -->
    <beampos> ( 1.4828, 0.2385, 0.0 ) = ( 0, 0, 1036.48837 ) </beampos>

As with <beamdir> punctuation is irrelevant but useful for clarity.


The flux window on which neutrino rays will originate is given by specifying three points in user coordinate space.

      <point coord="det"> -4.358, -2.903, -0.1835 </point>
      <point coord="det">  4.796, -2.903, -0.1835 </point>
      <point coord="det"> -4.358, 10.377,  0.5923 </point>

The order in which they are specified is important: the 2nd and 3rd points are used to define vectors relative to the first, so give the common corner first (unless you actually intend to sweep out a different parallelogram).

In principle it should not matter whether the window is square to the beam direction or to the detector, as long as it is large enough that rays passing through it completely shadow the volume of interest (including any beam divergence). (n.b. to self: still need to empirically verify that window orientation doesn't matter)


The rejection method used by the de-weighting (importance and x-y over the window) needs an upper estimate of the neutrino energy and an estimate of the maximum weight that might be generated. By default it will attempt to guess a conservative value by generating (but not reporting) rays and monitoring the maximum values it sees. CPU cycles can be saved by helping this process along; this can also help regulate file-to-file stability. This line has the parameters:

  elowlimit efudgefactor wgtfudgefactor nentriestouse

The elowlimit sets a floor on the maximum Enu that will be reported for the resulting file.

The efudgefactor sets a multiplier for the maximum energy found in the scan.

The resulting reported maximum energy will thus be max(elowlimit, efudgefactor*enumaxscan).

The scan will also generate an estimate of the maximum weight. The wgtfudgefactor is a multiplier for that estimate.

The nentriestouse controls how many entries from the dk2nu file are used to make these estimates (optional; the default is 2500000).


Most entries are rejected (due to importance and x-y weights). The evaluation process can be sped up if each ntuple entry is reused a number of times, choosing a new position on the flux window each time. Reuse increases the (very low) probability that the same decay entry is accepted twice in a row; because a new position on the flux window is chosen each time, the two rays have different directions and energies, but the repetition is still undesirable. The value can be tuned based on statistics reported after a run by the PrintConfig() method. Typically a value of 10 is a quite safe choice.


By default neutrino rays originate on the specified flux window, and GENIE, in generating events, only considers material in the geometry downstream of this starting position. Sometimes one would like to leave the flux window in an understood position and orientation and simply start neutrino rays elsewhere. If abs(upstreamz) < 10^30 then each ray will be adjusted along its trajectory to start at that z position.


A convenient way to build off other configurations is to use the <using_param_set> element:

  <param_set name="MINOS-FarRock">
    <using_param_set> MINOS-FarDet </using_param_set>
    <window>
      <point coord="det"> -15.0000, -17.4612, -13.0224 </point>
      <point coord="det">  15.0000, -17.4612, -13.0224 </point>
      <point coord="det"> -15.0000,  15.4402, -14.9059 </point>
    </window>
  </param_set>

This set takes as its base the "MINOS-FarDet" configuration and modifies the window size.


Sometimes you just want to put a "floor" on the estimated max weight (perhaps the distribution has a long tail that the estimator doesn't "find").

    <!-- force maxwgt to have an initial miminum of -->
    <!-- <minmaxwgt> 4.2e-07 </minmaxwgt> -->
    <minmaxwgt> 2.5e-07 </minmaxwgt>


What to do if the estimated max weight is exceeded

    <!-- what to do if exceeded [ "bump" ] "frozen" "abort" -->
    <maxwgtfail> bump </maxwgtfail>

Generating single events after a "bump" is probably fine. But if generating "pileup" (multiple events in a "spill", e.g. using the art nutools GENIEHelper), it is probably incorrect to allow a "bump". If that occurs regularly, it is better to set a reasonable minimum (see <minmaxwgt> above) and use "frozen".

General Discussion

A few points to remember if using this to generate gsimple flux files (see: )
  • for grid jobs, you can only reference files and paths in PNFS (i.e. no /dune/app or /dune/data)
  • generating with gsimple fluxes screws up the POT accounting if input gsimple flux files have different POTs
    • why? I don't know. it's a long running mystery that I still don't understand after looking at the code multiple times
  • one probably will want to tweak (or set non-default values) for: <reuse>, <minmaxwgt> and <maxwgtfail> as described above

The <reuse> value is useful to cut down on the dk2nu ROOT TTree entry I/O, which can contribute a lot of overhead. What it does is re-decay each entry some number of times (the de-weighting from the importance weight and the position weight will reject most of those without making a gsimple entry) before moving on to the next entry. The NOvA-ND entry (see here: ) uses a value of 400, though being off-axis their rejection rate is probably higher than DUNE's, so perhaps dial this down a bit.

Too high a value ups the chance that the same dk2nu entry generates adjacent gsimple entries. And if those gsimple entries are on the high-energy tail, that increases the chance that two adjacent interactions (i.e. in the same spill) could happen during event generation. They aren't the same ray, because they have different points on the flux window (and thus different directions through the geometry and slightly different energies), so in the end it's not a huge concern. Just some pedantry.

During the initialization of the dk2nu flux driver it has to estimate the "maximum weight" it's going to find during the "deweighting" process (i.e. accounting for the importance weight stored in the entry and the calculated weight from choosing a position on the flux window). It does this by throwing a bunch of entries (without recording the results) and adding a fudge factor to the maximum value it saw (related to the additional parameters on the <enumax> entry in the XML file). The <minmaxwgt> sets a floor for that (i.e. it uses that value if it happens to be above what was found by that procedure). For any given run you can find what was calculated by looking for the line similar to:

GDk2NuFlux.cxx::ScanForMaxWeight (814)> : Maximum flux weight for spin = 0.777138, energy = 68.956 (2500000)

in the <filepattern>.log file. If you do some test runs and then set <minmaxwgt> to the maximum you find, that will smooth out some variations.

The <maxwgtfail> setting has to do with the strategy for when that "maximum weight" is exceeded. By default it takes the "bump" approach: when an entry exceeds the estimated "maximum weight", it keeps that entry but increases the estimate to account for it, making the appropriate changes to keep the POT accounting consistent. Again, this is pedantically correct, but it has the drawback that very rare cases can keep bumping up the estimate, which, when using the Monte Carlo "rejection method", means the mean time for generating accepted gsimple entries increases and things keep slowing down. This was problematic enough for NOvA that I implemented an alternative approach, "frozen": when an entry's weight exceeds the maximum, the entry is accepted, but the estimate is not changed. This has the cumulative effect of slightly under-generating such rare cases (wgt > estimate says the case happened with probability > 1, so it should be more present in the output) but keeps the time more under control.

Cases where the estimate is exceeded show up in the <filepattern>.log as a line containing something along the lines of:

** Fractional weight = 1.34241 > 1 !!

with the rest of the line telling you which approach it took. I think for DUNE you can (with reasonable values for the max weight estimate, either from the exploration at the start or from <minmaxwgt>) also use "frozen". You should, though, check the log files to see that the number of cases isn't "excessive" (whatever that means).

I admit that a lot of this seems very "handwavy" and empirical in its approach. That just seems to be fundamental to the nature of how these calculations are done; there is no easy or simple way of making analytical calculations for things like the "maximum weight", as annoying as that is.