FLUKA interface with LArSoft
LArSoft team members Erica Snider, Gianluca Petrillo, and Ruth Pordes
met with Stefania Bordoni on May 25, 2016.
Stefania is a CERN fellow under the supervision of Marzio Nessi. She
started working with Paola Sala in March 2016.
The purpose of her work is to have some level of interface and
communication between Fluka and LArSoft.
FLUKA is simulation software able to cover the full chain from a proton
beam to the LAr TPC.
It includes beam simulation (protons on target to produce neutrinos),
event simulation (equivalent to GENIE and CORSIKA), propagation through
matter (the equivalent of Geant4), and readout simulation (which LArSoft
performs with custom, experiment-specific code).
FLUKA has been used in ICARUS, and it is of wide interest for ProtoDUNE.
Stefania's goal is some level of integration between the two packages.
A use case of interest in using FLUKA is the evaluation of systematic
uncertainties.
This can be achieved by replacing part of a "standard" LArSoft component
with the corresponding one in FLUKA.
We can expect the ability to compare FLUKA and LArSoft to be of interest
to the ICARUS collaboration too, which could compare against its
previous results.
The design of LArSoft should make it possible to implement, directly and
with moderate effort, different models for readout and charge
transportation, different systematic uncertainties, and different
theoretical models.
Where the FLUKA contribution is most valuable and unique is in
propagation: LArSoft is tightly bound to Geant4, and a native
implementation of a completely different propagation model is not
advisable.
Stefania has outlined a plan of interchange from FLUKA to LArSoft at
several levels: after event simulation, after propagation through
matter, after transportation, and after detector readout simulation. No
interchange from LArSoft to FLUKA was proposed or discussed.
She plans to undertake the last step as the first deliverable: to
exchange fully simulated "raw data" so LArSoft takes over at the very
beginning of the reconstruction and no LArSoft simulation is invoked.
This is a complex task on her side, since its complete achievement
implies the translation of all the intermediate data products, in
particular the "truth" information critical in the estimation of
algorithm physics performance.
This also appears to be a very light onus on LArSoft, since a plug-in
point (that is, a border between simulation and reconstruction) already
exists.
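To make the "raw data" handoff concrete, here is a minimal sketch of the kind of converter that step implies. The text-dump format (one `channel tick adc` record per line) and the function name are invented for illustration; the actual FLUKA output format and LArSoft's raw-digit data product I/O would have to be negotiated as part of the interchange definition.

```python
# Hypothetical sketch: group "channel tick adc" records from a
# FLUKA-side dump into per-channel waveforms, mimicking the
# raw-digit handoff where LArSoft takes over at the start of
# reconstruction. The input format here is an assumption, not
# FLUKA's real output.

def parse_fluka_dump(lines):
    """Return {channel: [adc, adc, ...]} with samples ordered by tick."""
    digits = {}
    for line in lines:
        ch, tick, adc = (int(x) for x in line.split())
        digits.setdefault(ch, []).append((tick, adc))
    # sort each waveform by tick, then keep only the ADC values
    return {ch: [a for _, a in sorted(w)] for ch, w in digits.items()}

dump = ["7 1 410", "7 0 402", "8 0 399"]
print(parse_fluka_dump(dump))  # {7: [402, 410], 8: [399]}
```

A real implementation would also carry the "truth" information noted above (true particles, true energy deposits) alongside the waveforms, which is the genuinely hard part of this step.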
A point of concern is the translation of the geometry.
The statement from Paola Sala was that FLUKA has a geometry
representation inherently different from the GDML one that LArSoft
relies on.
This requires in the best scenario a translation from the GDML geometry
to the Fluka representation.
This translation might need to be detector-specific; in that case, the
exploration of FLUKA by other experiments would be significantly harder.
In the worst scenario, two separate representations must exist and be
maintained in parallel.
Stefania agreed that this last scenario is highly undesirable.
LArSoft strongly recommends that ProtoDUNE consider a solution with a
single geometry source.
We don't have sufficient information about the FLUKA geometry
representation to guess whether the optimal scenario of a single GDML
source is achievable.
The geometry issue does not necessarily block Stefania's progress on
ProtoDUNE at this time.
A solution can be pursued independently.
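To illustrate what a "translation from the GDML geometry" could look like in the simplest case, here is a sketch that turns a GDML `<box>` solid into a FLUKA combinatorial-geometry RPP (rectangular parallelepiped) body card. The GDML snippet is a toy example, and a real translator would have to handle the full set of solids, materials, placements, and rotations; this only shows the box-centred-at-origin case.

```python
# Sketch only: translate GDML <box> solids into FLUKA RPP body cards.
# GDML boxes are given as full side lengths centred on the origin;
# an RPP card lists xmin xmax ymin ymax zmin zmax.
import xml.etree.ElementTree as ET

GDML = """
<gdml>
  <solids>
    <box name="CryostatBox" lunit="cm" x="400" y="400" z="900"/>
  </solids>
</gdml>
"""

def gdml_box_to_rpp(gdml_text):
    """Return one FLUKA-style RPP card per GDML <box> solid."""
    root = ET.fromstring(gdml_text)
    cards = []
    for box in root.iter("box"):
        hx = float(box.get("x")) / 2.0   # half-lengths
        hy = float(box.get("y")) / 2.0
        hz = float(box.get("z")) / 2.0
        name = box.get("name")[:8]       # FLUKA body names are short
        cards.append(
            f"RPP {name} {-hx:g} {hx:g} {-hy:g} {hy:g} {-hz:g} {hz:g}"
        )
    return cards

print(gdml_box_to_rpp(GDML))
```

Even this trivial case shows why a detector-specific, hand-maintained second geometry is undesirable: any such translation has to be regenerated whenever the GDML source changes.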
#3 Updated by Thomas Junk almost 4 years ago
A question from Adam Lyon via Katherine Lato: Why is FLUKA needed, and why is Geant4 insufficient? Any thoughts?
It's not that GEANT is insufficient -- it's just fine, but the inclusion of a FLUKA model provides an
alternative simulation that can be used to estimate systematic uncertainty in measurements from
the physics and detector simulation model assumptions. GEANT has adjustable parameters that can
be used to evaluate systematics, as does FLUKA, so we are always asking whether the
space of possible systematic uncertainty is fully explored with just one model, or even two.
In a world in which GEANT and FLUKA make firm predictions without adjustable parameters (not our world),
we can use the data to prefer one model over the other one. More likely, the models will merely adapt to
predict the data better.
But we have more pressing issues. A student working with Robert Sulej on improving the modeling of the
LArIAT data (the point I brought up yesterday, but it's kind of premature for a presentation) has shown that the noise
and field responses are more problematic and cause mismatches between data and MC. Those aren't
modeled by GEANT or FLUKA, but rather in LArSoft code. And they would obscure any comparison of the
data with either of those models. So first things first.
Eventually we do want the ability to run either GEANT or FLUKA, and it takes a lot of upfront investment to
get FLUKA to work. As Gianluca says in the ticket, the geometry has to match in both simulations, otherwise
we'd not be able to disentangle differences that are due to GEANT vs. FLUKA's models, or whether they are
just using a different geometry. A big job!
#4 Updated by Katherine Lato almost 4 years ago
A summary of an email exchange about this issue:
We have been talking at a low level with various parties (ICARUS and DUNE) and Paola Sala about “integrating” FLUKA with LArSoft for about two years now. There was a person with the FLUKA team who met with us last year, and who drafted an initial strategy and conceptual plan based on those discussions.
The strategy we adopted in drafting the plan was to agree on the definition of various steps in the workflow, and on the data to be exchanged between each. Nothing in the plan specifies whether FLUKA is run in the same or a separate job from the LArSoft-based components. In either case, LArSoft will need to provide information to FLUKA about data formats and expectations on the data. To date, we’ve not been asked for these details. This integration strategy also avoids the need to have access to the FLUKA code, provided that we can really agree on the workflow step and data format definitions. This is exactly the strategy we have employed with the successful integrations of the Pandora and WireCell reconstruction packages. Consequently, we may avoid the problems reported by CMS due to the rather bizarre code access restrictions imposed by FLUKA.
An important point of the current plan is that, in the short term, no code in LArSoft needs to be modified in order to support the first steps to use FLUKA as proposed by DUNE.
The question of why FLUKA is needed is a matter for the experiments. Tom Junk provided information on where DUNE stands on their requirements with respect to FLUKA, which is that they eventually want the ability to run either Geant4 or FLUKA. The primary motivation seems to be to enable the study of systematic errors due to modeling the detector or ray tracing and particle interactions in LAr. It will take a lot of up-front investment to get FLUKA to work, however, so they are focusing at present on things they consider to be more problematic in getting data and MC to agree, such as noise and field response modeling. Those elements are part of LArSoft and experiment-specific code, and are not modeled at all in either Geant4 or FLUKA. DUNE believes that the effects from mis-modeling either of these would likely obscure any differences between the FLUKA and Geant4 simulations themselves. So this is where the plan of work stands at the moment.
There is one other on-going project in LArSoft that will have an impact on FLUKA integration. Based on our earlier discussions, we understood that there was not complete alignment between the internally defined simulation workflow steps in LArSoft and FLUKA. The original plan called for changes in FLUKA to adapt to the steps defined in LArSoft. We (LArSoft) are currently engaged in a project (initiated for unrelated reasons) to re-architect the LArG4 module, which serves as the interface between LArSoft and Geant4. An important outcome of this work will be a re-factoring of the LArSoft workflow in a way that will expose an interface immediately after the energy deposition phase of particle ray tracing. FLUKA defines a workflow step boundary at the same point. The LArG4 work therefore promises the possibility of greatly simplifying the process of integrating FLUKA with LArSoft. We won’t know for sure until work on integration resumes.
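The interface "immediately after the energy deposition phase" amounts to agreeing on a per-step energy-deposit record that either engine could fill. The sketch below shows the general shape of such a record; the field names are illustrative and not the actual LArSoft data product (in LArSoft this role is played by `sim::SimEnergyDeposit`).

```python
# Illustrative record for the post-energy-deposition interface:
# either Geant4 or FLUKA would emit a list of these, and everything
# downstream (charge transport, readout) would be engine-agnostic.
# Field names are assumptions for this sketch.
from dataclasses import dataclass

@dataclass
class EnergyDeposit:
    track_id: int      # ID of the depositing particle
    pdg_code: int      # particle type (PDG convention)
    energy_mev: float  # energy deposited in this step
    x_cm: float        # step position
    y_cm: float
    z_cm: float
    t_ns: float        # deposition time

def total_energy(deps):
    """Total deposited energy, e.g. to cross-check a FLUKA-filled
    event against a Geant4-filled one."""
    return sum(d.energy_mev for d in deps)

deps = [EnergyDeposit(1, 13, 2.0, 0.0, 0.0, 10.0, 0.5),
        EnergyDeposit(1, 13, 1.5, 0.0, 0.0, 11.0, 0.6)]
print(total_energy(deps))  # 3.5
```

If both engines fill the same record at this boundary, the comparison reduces to comparing two lists of deposits, which is exactly why this workflow step is attractive as the integration point.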
Finally, there is an outstanding problem of how to deal with the geometry in FLUKA. At the time we first discussed how this is handled, it appeared that the geometry was hard-coded. We talked about having FLUKA learn how to read GDML in order to initialize that geometry, but early work in this direction stalled, and now it’s at the end of their plan rather than at the front where we proposed it to be. Tackling this is possibly quite difficult, and we’ve no idea whether the FLUKA team intends to abstract their geometry model (or perhaps already has), or to just code up DUNE detectors inside FLUKA.