Paul Lebrun, 12/21/2015 11:22 AM


Status of Commisioning

.. With only one "s" in "commissioning", because, if this typo is the only one in this project, we are in good shape... Trust me! Following the e-mail from Steve Brice, verbatim:
" I think the right benchmark once you have everything working is to be able to reproduce in detail the flux figures from the MiniBooNE flux PRD (http://arxiv.org/abs/0806.1449). In particular figures 27, 28, 29, 30, 31, and 32. The numbers in these plots can be found on the data release web page - http://www-boone.fnal.gov/for_physicists/data_release/flux/"

As of March 24, 2015, here is where I think we are:

With the .../source/jobOptions.in input file, the code now runs without crashing over 5 events, using the GDML file provided by Zarko P. two weeks ago. Thus, in principle, we could start running longer jobs and start using the grid. Again, this is using the rather lengthy set of parameters found in the option file.

On the geometry: the G3-to-G4 interface has been entirely removed. New materials can be defined. The list of 4 "old" materials (probably dating back to the Geant3 era) has been kept in the global structs in BooNEPartMatData.hh. A new private function retrieves the G4Material pointers from either the new name for Beryllium or the old name BERILLIUM (without the space characters used for Fortran-style padding).
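A minimal sketch of that material lookup, assuming standard Geant4 calls (the helper name and the new/old name pair are purely illustrative, not the actual BooNE code):

  #include "G4Material.hh"
  #include "G4NistManager.hh"

  // Hypothetical helper: try the new (NIST) material name first, then fall
  // back to the legacy Geant3-era name kept in BooNEPartMatData.hh.
  G4Material* FindBooNEMaterial(const G4String& newName,   // e.g. "G4_Be"
                                const G4String& oldName)   // e.g. "BERILLIUM"
  {
    if (G4Material* mat = G4NistManager::Instance()->FindOrBuildMaterial(newName))
      return mat;
    // Material already defined elsewhere (e.g. in the GDML file).
    return G4Material::GetMaterial(oldName, /*warn=*/false);
  }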

First-order checks: the code has gone successfully through "valgrind": there is no use of uninitialized variables, and the memory leaks are small, at least over 5 events; longer runs are needed. Quite a few initializations in the BooNEHadronPhysics class had to be done in the constructor of this class. In particular, the set of boolean flags that control the energy range for a given hadronic model had to be initialized. However, an exhaustive check of what happens when the old G4 models are turned on has not been done. The model for secondary interactions (i.e., those not induced by the primary protons) is always "DEFAULT".

The BooNEHadronPhysics messenger is partial and, worse, for many parameters, ineffective: the constructor of BooNEHadronPhysics and the initialization of the corresponding physics are now issued in the same phase of the BooNERunManager, with no "UIApplyCommand" in between. This means that a G4UI directive may very well arrive when Geant4 is in the wrong state. We need to identify which G4UI directives are needed for further investigation. Our first priority is to build a working executable where the above benchmark can be conducted.

Contrary to the BooNEHadronPhysics class, the BooNEpBeInteraction class has "UIApplyCommand" calls that de facto upload parameter values. These seem to be complete, in the sense that the code runs without uninitialized values. However, the default values for the Sanford-Wang parameters c1-c8 differ from those found in the jobOptions.in file. The latter do agree with the ones found in the PRD paper referred to above. Thus, Elena and I agreed that the correct thing to do is to upload the published values as defaults in the messenger class.

More later on..

Entries for March 25 2015,

Studied the BooNEpBeInteraction class a bit. Yesterday, we checked that the code runs and produces particles. We now initiate the validation. Our near-immediate goal is to benchmark the code against Table X of the PRD 79, 072002 paper. To do this we first turn off the annoying "had012" G4Exception, prompted by energy non-conservation between the incoming proton of 8 GeV kinetic energy and the outgoing event. In our case, because the BooNEpBeInteraction is a weighted Monte Carlo, energy and momentum conservation holds only in an average sense, so doing this check event by event makes little sense. The virtual method "GetFatalEnergyCheckLevels" of G4HadronicInteraction has been implemented with very conservative (i.e., loose) cuts, thereby avoiding bogus messages.
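For the record, the override is conceptually just this (a sketch only; the actual thresholds used in BooNEpBeInteraction may differ):

  #include "G4SystemOfUnits.hh"

  // Relax the (relative, absolute) energy non-conservation thresholds used by
  // G4HadronicProcess, since a weighted Monte-Carlo only conserves energy and
  // momentum on average. The numbers here are illustrative.
  const std::pair<G4double, G4double>
  BooNEpBeInteraction::GetFatalEnergyCheckLevels() const
  {
    return std::pair<G4double, G4double>(100.*perCent, 100.*GeV);
  }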

Verified that the weights do indeed depart from unity. This brings up a question regarding the existing BooNEOutput::Ntuple102, the so-called Production Ntuple: the weight, available as a data member of the SecondaryTrack, is simply ignored. Does this mean that it can be ignored in the analysis because the sum of the weights is always properly set to one, so we do not need them? This needs to be resolved.

Meanwhile, I created an ASCII Ntuple (a plain ASCII file written via std::ofstream) to study Table X. I plan to write a very simple set of R scripts for computing weighted averages of production angles, momenta, and so forth.
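The bookkeeping those scripts need is trivial; a sketch of the weighted averages in C++ (the ASCII column layout below is assumed, not the actual one):

  #include <fstream>
  #include <sstream>
  #include <string>
  #include <iostream>

  // Assume one line per secondary: pdg code, weight, momentum (GeV/c), angle (rad).
  int main() {
    std::ifstream in("pBeProduction.txt");               // hypothetical file name
    double sumW = 0., sumWP = 0., sumWTheta = 0.;
    std::string line;
    while (std::getline(in, line)) {
      std::istringstream is(line);
      int pdg; double w, p, theta;
      if (!(is >> pdg >> w >> p >> theta)) continue;
      if (pdg != 211) continue;                          // e.g. select pi+ only
      sumW += w;  sumWP += w*p;  sumWTheta += w*theta;
    }
    std::cout << "<p> = " << sumWP/sumW << " GeV/c, <theta> = "
              << sumWTheta/sumW << " rad" << std::endl;
    return 0;
  }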

To speed up this validation, I installed a command to "stopAndKill", in the Stepping Action method, all particles downstream of a fixed Z. Depending on the calculation we do, there is no point generating neutrinos if we are not going to use them. I plan to use this to speed up the simulation of a detector in the decay pipe.
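The mechanism is nothing more than this (a sketch; the real class and the /boone/stepping/ command that drives it are not shown):

  #include "G4UserSteppingAction.hh"
  #include "G4Step.hh"
  #include "G4Track.hh"
  #include "G4SystemOfUnits.hh"

  class KillDownstreamSteppingAction : public G4UserSteppingAction {
   public:
    explicit KillDownstreamSteppingAction(G4double zMax) : fZMax(zMax) {}
    void UserSteppingAction(const G4Step* step) override {
      // Kill anything whose post-step point is beyond the requested Z.
      if (step->GetPostStepPoint()->GetPosition().z() > fZMax)
        step->GetTrack()->SetTrackStatus(fStopAndKill);
    }
   private:
    G4double fZMax;   // e.g. 10.5*m while commissioning the p-Be interaction
  };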

Next question: does the original work involve using alternate models for simulating the interactions of secondary pions (let us assume the primary beam contains no pions, for the sake of simplifying the discussion), or, conversely, does the BooNEpBeInteraction class only support the simulation of the first (primary) interaction of an 8 GeV proton on Beryllium? Zarko pointed out that the standard command file (jobOptions.in) invokes the command

# pion plus cross sections
/boone/crosssections/pionplus/totPipBeXsecPar    0.6  0.8138 3.418 ....
These parameters are indeed uploaded into the BooNEHadronCrossSection class. H. Tanaka has indeed carefully parametrized the elastic, quasi-elastic and inelastic cross-sections for protons, neutrons and pions on Beryllium and Aluminium. However, the class accessors are not invoked anywhere in the code. It looks like a valiant effort to start a more complete hadronic simulation package, but, for now, it seems this is only a stub, not used in the neutrino flux calculation.

Thus, we can reasonably assume that the BooNEpBeInteraction class only simulates the primary interaction in the Beryllium target. All other hadronic interactions are handled by native Geant4 processes, which have changed quite a bit since 2007. Thus, the accuracy of the benchmark needs to be discussed: should we expect perfect, or near-perfect (~ % level), agreement with the flux tables from the PRD paper?

This brings another suggestion: if so, we could get rid entirely of the class BooNEHadronPhysics and invoke the BooNEpBeInteraction from the Primary Generator. This would make the code a lot easier to manage. We could easily distribute the z position of the primary vertex in the target based on the total cross-section (see the sketch below), and correct the incident proton energy for energy loss (Landau fluctuations of the energy loss in the target could be ignored). "ApplyYourself" only needs a track, which we can easily create in the Primary Generator, or we could rewrite the top-level interface of the BooNEpBeInteraction class.
To be discussed...
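For what it is worth, distributing the primary vertex z from the total cross-section is a one-liner; a sketch, using standard exponential attenuation (the example numbers are only indicative):

  #include <cmath>
  #include "Randomize.hh"

  // Depth of the p-Be interaction in a target of length L (cm), given an
  // interaction length lambda (cm), conditional on the proton interacting.
  double SamplePrimaryVertexZ(double lambdaCm, double lengthCm)
  {
    const double pInt = 1.0 - std::exp(-lengthCm/lambdaCm);
    const double u = G4UniformRand();
    return -lambdaCm * std::log(1.0 - u*pInt);   // z in [0, L]
  }
  // e.g. lambda ~ 45.6 cm for Be at these energies, L = 71.1 cm.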

Entries for March 26/27 2015,

Focusing on the Production Ntuple, based on 100,000 PoT. Based on the ASCII version, streaming through the event numbers, we get 70,918 particle-producing reactions. See the R code in Validation/PLRSCripts/TableXCheck_1.r. The cross-section used was the one computed by Geant4's low-energy hadronic model (the flag crossSectionsDEC04 is false in the BooNEpBeInteraction class). At P = 8.9 GeV/c, a value of 177.7 mb was used. The PDG lists a nuclear interaction length of 77.8 g/cm^2 for Be, which corresponds to a cross-section of 192 mb. Trusting the Geant4 value, one gets a nuclear interaction length of 84.2 g/cm^2. The target length is 71.1 cm and the density is 1.848 g/cm^3, so we have 1.56 interaction lengths, and 21% of the protons survive. This is an 8% difference with the G4 result, probably due to protons scattering elastically or quasi-elastically off the target, or missing the target entirely. Credible.
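For the record, that arithmetic, spelled out (nothing BooNE-specific, just the standard definitions):

  #include <cmath>
  #include <cstdio>

  int main() {
    const double NA     = 6.022e23;   // Avogadro's number [1/mol]
    const double A_Be   = 9.012;      // Be molar mass [g/mol]
    const double sigma  = 177.7e-27;  // Geant4 p-Be cross-section at 8.9 GeV/c [cm^2]
    const double rho    = 1.848;      // Be density [g/cm^3]
    const double length = 71.1;       // target length [cm]

    const double lambda   = A_Be / (NA*sigma);     // ~84.2 g/cm^2
    const double nLambda  = length*rho / lambda;   // ~1.56 interaction lengths
    const double fSurvive = std::exp(-nLambda);    // ~0.21

    std::printf("lambda = %.1f g/cm^2, n = %.2f, surviving fraction = %.2f\n",
                lambda, nLambda, fSurvive);
    return 0;
  }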

We now focus on the content of the particle production: the probabilities for producing hadrons. Using a format similar to that of Table X of PRD 79, 072002 (2009), one gets:

 
 Particle    Multiplicity  <p> [GeV/c]  <theta> [rad]
 Proton      1.563016      2.630562     0.441931
 Neutron     1.347218      1.582488     0.586892
 PiMinus     0.801532      0.848943     0.518582
 PiPlus      0.888225      1.106754     0.413510
 KPlus       0.068926      1.689811     0.329261
 KZero       0.024165      1.339010     0.411405
 KMinus      0.002876      1.244265     0.279272

There is overall good agreement, except for the pi-minus: the multiplicity differs by 0.1 (absolute), and the average angle is also a bit off (7.2%, relative). We have not yet investigated the difference.

Since the beam loss due to scattering in the target would not be modeled correctly (or at least not easily) if we invoked BooNEpBeInteraction::ApplyYourself directly from the Primary Generator, we keep the current organisation of the code. We will verify that hadrons are absorbed or re-interact in the target and in the conductors of the horns. This will be done in the Stacking Action, again with a simple ASCII Ntuple.

Entries for March 30 2015,

Ran the same code with the following c1-c8 Sanford-Wang parameters for pi-minus, based on Zarko's recommendation:

237.2 0.8986 4.521 1.154 1.105 4.224 0.06613 9.961

One now gets:

....
  PiMinus      0.910298      0.817483     0.559891 
  PiPlus       0.888442      1.109623     0.414517 
...

The pi-minus multiplicity is now in agreement with the PRD paper (0.9004 vs 0.910), a difference of 0.01 instead of 0.1. Average momentum and angle are now also in agreement. We will upgrade the default values for c1-c8 in the code, based on these values.
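For context, the Sanford-Wang parametrization these c1-c8 feed is, as I read it in the PRD (treat this as a sketch to be checked against BooNEpBeInteraction itself; c9 is fixed to 1 for pions):

  #include <cmath>

  // d2sigma/(dp dOmega) in mb/(GeV/c sr); p and pBeam in GeV/c, theta in rad.
  // c[1]..c[8] are the Sanford-Wang parameters quoted above (c[0] unused).
  double SanfordWang(double p, double theta, double pBeam, const double c[9])
  {
    const double c9 = 1.0;
    if (p >= pBeam - c9) return 0.0;
    return c[1] * std::pow(p, c[2]) * (1.0 - p/(pBeam - c9))
           * std::exp(- c[3]*std::pow(p, c[4])/std::pow(pBeam, c[5])
                      - c[6]*theta*(p - c[7]*pBeam*std::pow(std::cos(theta), c[8])));
  }
  // e.g. double c[9] = {0., 237.2, 0.8986, 4.521, 1.154, 1.105, 4.224, 0.06613, 9.961};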

Starting to insert the Dk2Nu option. Requested that Lynn Garren et al. install the Dk2nu products/repository under larsoft, to be available on the lbne machines and the grid.

Entries for April 2 2015,

Installed Dk2nu. The relevant setup file for those of us working on the LBNEgpvm0x machines is now .../booster-neutrino-beamline/BooNEG4Fnalsetup.sh. That is, we are now using the most recent version of Geant4 9.6 (so-called patch 04a) and Dk2nu v1_01_03, all compiled with gcc v4_9_2. The setup file .../booster-neutrino-beamline/BooNEG4setup.sh is perhaps obsolete, depending on your install. While I don't think there are gcc-version-specific code fragments in this project, compilation, linking and running on small statistics have only been checked for these versions. If you plan to compile on another system, where the setups specified in .../booster-neutrino-beamline/BooNEG4Fnalsetup.sh might not work, you might be on your own.

It is my understanding that this version will work on the grid. We can now generate Dk2nu ROOT Ntuples without intermediate software. The old Ntuples have been left in place, though. I had to fix a bug in the memory management of the UserTrackInformation class. Ran successful Valgrind checks. Minor memory leaks in the hadron physics code might cause problems for long runs; to be checked. Meanwhile, got a Dk2nu Ntuple for 20 kPoT and got a meaningful neutrino flux out of it. Also checked the weights in the ancestry. Note that this work has been done using the following G4UI input file:

.../booster-neutrino-beamline/sources/macros/testBeam3.in

First check, with shamefully little statistics:

Filled squares and circles are this work; lines are the PRD paper. The scale is arbitrary: I have not yet found the (presumably disarmingly simple) way to get an absolute flux from Dk2nu. The numu spectra, this work vs the PRD paper, are normalized at 1 GeV. There is good semi-qualitative agreement for numubar. But we already have a discrepancy at 0.5 GeV for numu. An anomaly. Not too surprising..

So we can resume benchmark activities. However, first we need to check the magnetic field in the horn with muon geantinos. Then, benchmark on large statistics..

Entries for April 3 2015,

Mostly started to investigate the strong distortion in the normalized difference between the PRD numu spectrum and the one observed running this code. Established that this is not a statistical fluke. Zarko suggested looking into the effect of smearing across detector positions, as Dk2nu assumes we are interested in the flux for a virtual detector of small size located at a fixed position. Unfortunately, this does not explain the difference... More work is needed. Wrote some analysis code in .../booster-neutrino-beamline/Validation/Dk2nyAnalysis. The weighted Z distribution of the starting positions of the pions that gave us a numu looks OK.

Entries for April 6 2015,

Installing more debugging code, for instance, dumping the path length in the inner conductor. Also, implementing tentative code (I hope I won't have to use it!) to restore the parent track after a hadronic interaction other than the primary proton's. Not commissioned yet.

In testing all this, I just realized that I left the infamous "/boone/stepping/KillAllAtFixedZ 10.5 m" in the production job! That is, no track could propagate past 10.5 m from the target. We did this to check the p-Be interaction class. Evidently, the BNB decay pipe is not ~(10 - 6) m long, but 50 m long. Not too surprising that we have a relative excess at low energy and a deficit at high energy. Fixed this in the input macros; running again..

Meanwhile, commissioning the check on path length in the inner conductor.

Entries for April 10 -14 2015,

Crude checks of the geometry and path length in the horn successful.

In the midst of the difficult part: while running with the complete set of data cards and a partially consistent initialization done prior to reading these data cards, and after fixing the above bug in the input macros, we manage to get a numu flux that starts to make some sense. Note the intentional vagueness in stating this.. Anyway, shown here is the difference between the PRD result and this result, obtained on April 11, 17:09 (FirstNuFluxCompareRatioNuMuProd3h_ProdBeam4b_4.pdf).

The blue circles, "no detector smearing", assume a point-like detector; these values are taken straight from the dk2nu ROOT output. The black squares are obtained by re-computing the weights/energies, ray tracing over the volume of the MiniBooNE detector (~6 m radius). Note that this is now an absolute comparison. On average, we are 9% off.

Quite unfortunate, but I am not sure I can reproduce this result! Indeed, while re-organizing the initialization phase of the BooNEHadronPhysics and BooNEpBeInteraction classes, things got messed up. But we will!... Meanwhile, various "dormant bugs" in the associated messengers were fixed, allowing for sequential re-initialization. That is, if the complete set of G4UI commands is executed, and executed only once, these classes are initialized correctly. However, we want the option to run with a native Geant4 physics list, as used by other experiments, for the sake of comparison. To that effect, the physics list of choice (QGSP_BERT vs BooNE) must be selected at the pre-init G4 state. But this means that the BooNE physics must also be initialized at this state. Since it assumes that the Primary Generator is already there, we get into a serious chicken-and-egg problem.

Not solved yet, but the executable will run now, assuming the complete set of physics commands that are in "jobOptions.in" is included in the input macro file. This will allow for other tests.
Note: the application-wide random seed is no longer set from the BooNEpBeInteraction class (or its messenger), but via the

  /boone/rndm/setRndmSeed 123456

command card, because, depending on the physics list, the BooNEpBeInteraction class might not be instantiated.

Evidently, more work needs to be done on the pre-init/run-initialization stage. A new class will have to be coded up; let us call it BooNEPreInitPhysics. This class will own the messengers of the BooNEHadronPhysics, BooNEpBeInteraction and BooNEHadronCrossSections classes. These messengers will modify private data members of BooNEPreInitPhysics, not the three BooNE physics classes per se. This way, the data cards can be entered and be effective prior to the run-initialization command. The constructors, initializer blocks and process constructors will have to fetch the data from the BooNEPreInitPhysics class. This means that a thorough study of the initialization sequences of these three BooNE hadron physics classes will have to be done, and the side effects of updates coming from data cards will have to be understood. Quite a bit of work, but I don't see any other way to implement the capability of keeping the BooNE model while also allowing the use of the G4 models per se.
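A rough skeleton of what I have in mind (names and members are only indicative; the real class would also own the three messengers themselves):

  #include "globals.hh"
  #include <vector>

  // Holds all pre-init data-card values. The messengers write into this
  // object at the pre-init G4 state; BooNEHadronPhysics, BooNEpBeInteraction
  // and the cross-section class read from it at run initialization.
  class BooNEPreInitPhysics {
   public:
    static BooNEPreInitPhysics* Instance() {
      static BooNEPreInitPhysics theInstance;
      return &theInstance;
    }
    // Setters invoked by the messenger call-backs.
    void SetPiPlusSWParams(const std::vector<G4double>& c) { fPiPlusSW = c; }
    void SetPhysicsListName(const G4String& n)             { fPhysicsList = n; }
    // Getters used at run initialization.
    const std::vector<G4double>& GetPiPlusSWParams() const { return fPiPlusSW; }
    const G4String& GetPhysicsListName() const             { return fPhysicsList; }
   private:
    BooNEPreInitPhysics() : fPhysicsList("BooNE") {}
    std::vector<G4double> fPiPlusSW;   // c1-c8, and similarly for other species
    G4String fPhysicsList;             // "BooNE" or e.g. "QGSP_BERT"
  };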

We clearly need to test at the various stages of the calculation. In particular, we need to use the existing "tracking screens", for instance right after or before the horn, to look at track multiplicities. The old Ntuples can/should be checked again, or, better, more generic SteppingAction methods should dump this data to specific data files.

The other alternative is to keep the design we have, with no BooNEPreInitPhysics class, but make the constructors of the 3 physics classes not attempt to access the G4ParticleTable, as this table is incomplete at the pre-init stage. We can in fact limit the application range to the Booster beam, i.e., an 8 GeV incident proton beam on the target. And, if the only thing needed is the mass of the proton, we can skip the need to interrogate the database...

I need to play more with these two options..

Entries for April 17 -22 2015,

Kept working on the initialization. The relevant physics classes now have a "prepare" stage, invoked from the run initialization. Status of the last commit, April 21: using the full set of option cards, because some (but not all!) of the cross sections and energy ranges are recomputed, one ends up with a weighted Monte Carlo. These weights, set by the ratio of requested cross sections in BooNEpBeInteraction::SetxxxxxPhysicsModel (where xxxxx is PiPlus, PiMinus, etc.), are computed based on the ratio of the double-differential cross section to the total cross section. If the initialization sequence is such that the total cross section and these differential cross sections are properly scaled, for the default case at least, then these weights will be set to one. This will not be the case if the set of data cards obtained from the original "jobOptions.in" is used, as the "setter" call-backs from the G4UImessenger do not consistently execute all the needed re-initialization. Again, for the default case, after a few trials and errors, I managed to have a consistent program flow such that the XSecRatios are all set to one. However, in the limit of large statistics, it should not make any difference. Using the complete set of G4UI cards, for NuMus, one has:

Using NO Hadron physics data cards at all:

Note that the scatter above ~2 GeV is larger, especially for the un-smeared data. Again, contrary to the above plot, all weights were found to be equal to 1, easing the determination of statistical uncertainties.

The difference is at a minimum, about 7%, at an energy of ~0.875 GeV; on average between 0.5 and 2.0 GeV, it is 21.7%. Using the complete set of data cards (weighted Monte Carlo in BooNEpBeInteraction), one gets a difference of 9% at 0.875 GeV, and on average between 0.5 and 2.0 GeV one gets 20.4%. So, we still have work to do..

Next steps: (a) Test the G4 QGSP_BERT physics list. (b) Look at the Ntuples just after the horn and the collimator. Run the old code and compare. Also look at the neutrino spectrum from pions/kaons that come directly from the target, versus those from the 2nd generation... We need the old code for this as well..

Final note for today: the "default" configuration, with no weights, runs approximately 2.6 times faster.

Entries for July 12-13 2015,

Going back to this log, as it might be easier to document the lack of "retro-progress".. In mid-June, the status of commissioning discussed in the note dated June 1 was presented. Upon reading this note, Steve B. suggested a "more systematic" approach, which is not always easy, as one is always concerned about breaking the old code should a modification be made to systematically install debugging tools in it. We'll come back to this later on. Shortly after the BooNE beam line simulation meeting of June 17, one action item was discussed and adopted: since we suspect a significant change in the rate at which low-energy pions are absorbed in the material (Beryllium, Aluminum, etc.), we ought to run with less material in the way and check the neutrino flux. Two new, distinct geometry files were created by Zarko on June 22, which differ from the nominal one:

  1. The Aluminum in the horn is converted to air, as are the fins of the Beryllium target and the collimator. Let us call this configuration "BeTarget".
  2. As above, and, in addition, only the first Beryllium slug (length 101.602 mm) is installed, at z = 85.72 mm ("BeSlug").

In both cases, one expects differences between the old and the new MC, as we have relatively less absorption.

Zarko and I ran the MC, old and new. Zarko also generated and concatenated the histogram Ntuples. Three small histogram ROOT files were prepared by Zarko; they contain many 1D and 2D histograms that document the neutrino flux for these 3 cases (Nominal, BeTarget, BeSlug). One of these histograms is labeled h503 and contains, with a bin width of 50 MeV, the neutrino yield per PoT at the MiniBooNE detector. Up to a normalization factor that corrects for the effective surface of the MiniBooNE detector (6.1 m radius), this is the neutrino flux. h503 contains only the numu flux, from all provenances (pions, kaons, muons). For the nominal case, P.L. verified that this numu flux matches, to within 0.2%, the flux published in the PRD paper.

Regarding the new MC, these two new configurations were run on Fermigrid, and the Dk2nu ntuples were processed the same way as was done for the nominal case, producing new neutrino fluxes. They are then directly compared to the neutrino fluxes from the old MC. One gets:

This is based on 39.75 (40.5) million PoT for the BeTarget (BeSlug) configuration of the NEW Monte Carlo, respectively. These results are clearly unexpected. Recall that the difference at 0.9 GeV for the nominal case was +9%, for exactly the same quantity: same analysis, simply different geometry GDML files. One expected a reduction of the relative difference, New vs Old, as the amount of scattering is less in air than in Aluminum or Iron. We observe a bigger difference in both cases, but with different signs.

Thus, besides the difference in pion-nucleon scattering cross-sections between the two versions, there is very likely another crucial difference. Suggestion: we ought to go back to the geometry issue. One has to re-analyze the so-called "screen Ntuples", located just after the horn and/or after the collimator. They detect all charged particles going through the beam line. We also have an ntuple that contains the 6D phase space for every hadron that Geant4 will track. One can then make semi-qualitative checks on the rate at which pions coming from a p-Be interaction (as the number of ancestors is also recorded) are lost, either due to re-interaction or decay. This has been done for the new MC. The soft pion decay rate and absorption rate in the full geometry were found comparable, and consistent with rough estimates based on PDG values for the pion lifetime and absorption length in Al and Be.

We need to go further. A quick check of the pion trajectories in the horn's magnetic field, for the pions of interest, i.e., those coming from a proton-Be interaction in the slug, can be made. Here, we select the pions that are intercepted by the screen located behind the horn, at a Z-coordinate of 2 m. We pick up the 6D phase-space coordinates of the pions just before G4 tracking starts for these pions. One can then compute the integrated momentum kick due to multiple scattering in Be (and air) and to the magnetic field in the horn, and check these momentum kicks. This has been done for the BeSlug case, where we have a relatively short pion emission zone. For instance, for pions of a momentum of 1.0 (0.5) GeV/c, one has:

At very small angles (a few degrees), there is on average no transverse momentum kick, as the pions are going straight into the inner part of the inner conductor of the horn. As the emission angle increases, one has more deflection, as the pions cross the inner conductor inside the field region. At ~8 to 10 degrees (for one-GeV/c pions) comes the confusing region, presumably because the pions are getting over-focused and cross the inner conductor more than once. At large angles, the pions enter the horn quickly and probe on average a smaller field because their trajectories are at a wider angle. This pattern repeats itself at lower momentum, with larger deflection angles, as it should.
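For orientation, a back-of-the-envelope estimate of the horn kick, assuming an ideal 1/r toroidal field between the conductors (the current, radius and path length below are placeholders, not the actual BNB horn parameters):

  #include <cmath>
  #include <cstdio>

  // dpT [GeV/c] ~ 0.3 * B[T] * L[m] for a unit-charge particle; B = mu0*I/(2*pi*r).
  double HornKick(double currentA, double radiusM, double pathM)
  {
    const double mu0 = 4.0e-7 * M_PI;                         // T m / A
    const double B   = mu0 * currentA / (2.0 * M_PI * radiusM);
    return 0.3 * B * pathM;
  }

  int main() {
    // Placeholder numbers: 170 kA, r = 5 cm, 0.5 m of path in the field region.
    std::printf("dpT ~ %.3f GeV/c\n", HornKick(170.0e3, 0.05, 0.5));
    return 0;
  }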

There is yet another, simpler test that ought to be done again: simply throw muon-geantinos and check the field map as seen during tracking. For instance, the muon trajectories starting from (X=0., Y=0., Z=45 mm) can be recorded, as well as the value of the field. For the new code, one has:

One ought to be able to reproduce these plots for the old code, on the MiniBooNE machine, with perhaps minor code modifications.

Entries for November, December 2015

Two significant changes were made:

Accept all neutrino candidates in the Dk2Nu output files.

Previously, candidates were accepted only above some very small momentum; but 50 MeV is not that small a cut, and it did not allow a fair sampling of the final neutrino spectrum. We removed this cut, but require the parent particle not to be at rest.

Calculation of the effective boosted solid angle.

In the Dk2nu post-Geant4 analysis (uploaded in Validation/Dk2NuAnalysis), we implement the T2K re-sampling method, where each pion is allowed to decay 1,000 (or 10,000) times, thereby sampling the effective boosted solid angle over the finite surface of MiniBooNE. In principle, for a detector of 1 sq. meter, the Dk2nu CalcWeight function does this correctly; however, it might not be quite correct for a 50 m decay pipe and a detector of 6.1 m radius located at 541 m, over a wide range of pion lab angles with respect to the beam, at a boost of ~9.5. The Dk2Nu approximation seems to be valid for pion lab angles of less than 10 mrad, and if the MiniBooNE detector were located at 5.4 km instead of 540 m.
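A minimal sketch of the re-sampling itself, for pi+ -> mu+ nu_mu (CLHEP kinematics; this is not the Validation/Dk2NuAnalysis code, and the detector is idealized as a sphere):

  #include <cmath>
  #include <random>
  #include "CLHEP/Vector/ThreeVector.h"
  #include "CLHEP/Vector/LorentzVector.h"

  // Decay the pion nTrials times isotropically in its rest frame, boost each
  // neutrino to the lab, and count the fraction of rays from the decay vertex
  // that hit a spherical detector of radius detR centred at detCenter.
  double BoostedAcceptance(const CLHEP::Hep3Vector& decayVtx,   // m
                           const CLHEP::Hep3Vector& piMom,      // GeV/c
                           const CLHEP::Hep3Vector& detCenter,  // m
                           double detR, int nTrials)
  {
    const double mPi = 0.13957, mMu = 0.105658;                 // GeV
    const double eStar = (mPi*mPi - mMu*mMu) / (2.0*mPi);       // ~29.8 MeV
    const double ePi   = std::sqrt(piMom.mag2() + mPi*mPi);
    const CLHEP::Hep3Vector beta = piMom * (1.0/ePi);

    std::mt19937 rng(12345);
    std::uniform_real_distribution<double> flat(0.0, 1.0);
    int nHits = 0;
    for (int i = 0; i < nTrials; ++i) {
      const double cosT = 2.0*flat(rng) - 1.0;
      const double sinT = std::sqrt(1.0 - cosT*cosT);
      const double phi  = 2.0*M_PI*flat(rng);
      CLHEP::HepLorentzVector nu(eStar*sinT*std::cos(phi),
                                 eStar*sinT*std::sin(phi), eStar*cosT, eStar);
      nu.boost(beta);                                           // to the lab frame
      const CLHEP::Hep3Vector dir = nu.vect().unit();
      const CLHEP::Hep3Vector d   = detCenter - decayVtx;
      const double along = d.dot(dir);
      if (along > 0.0 && (d - along*dir).mag() < detR) ++nHits;
    }
    return double(nHits) / double(nTrials);  // effective boosted solid-angle fraction
  }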

We also re-ran the one-slug, no-material-downstream configuration, and re-ran the full target with full material. Using the T2K method to re-sample many times the Lorentz-boosted effective solid angle, we find:

The plot on the left is for the single target slug, the other for the full geometry. This is for numus from pion decay only (both versions). The effect of the effective pion absorption in the complete system is clearly seen, and is now consistent with the comparison of the pion spectra after the horn. This remains a qualitative statement: the above plots are the result of a complex Monte-Carlo-based integration.