Support #22583

Help resolve argoneut simulation issue

Added by Tingjun Yang 4 months ago. Updated 4 months ago.

Status: Closed
Priority: Normal
Assignee:
Category: -
Target version: -
Start date: 05/13/2019
Due date:
% Done: 100%
Estimated time:
Spent time:
Experiment: ArgoNeut
Co-Assignees:
Duration:

Description

Dear art experts,

We are getting the following error when simulating events in ArgoNeuT:

%MSG-s ArtException:  PostEndJob 13-May-2019 12:13:21 CDT ModuleEndJob
cet::exception caught in art
---- OtherArt BEGIN
  ---- EventProcessorFailure BEGIN
    EventProcessor: an exception occurred during current event processing
    ---- ScheduleExecutionFailure BEGIN
      Path: ProcessingStopped.
      ---- FatalRootError BEGIN
        Fatal Root Error: @SUB=TTreeCache::FillBuffer
        Inconsistency: fCurrentClusterStart=39903 fEntryCurrent=119700 fNextClusterStart=119709 but fEntryCurrent should not be in between the two
        cet::exception going through module 
      ---- FatalRootError END
      Exception going through path simulate
    ---- ScheduleExecutionFailure END
  ---- EventProcessorFailure END
---- OtherArt END
%MSG
Art has completed and will exit with status 1.

This is seen in larsoft/argoneutcode v08_19_01. I have attached a fcl file; the error can be reproduced by running

lar -c standard_sim_mono_t962.fcl -n 100

The module is:
https://cdcvs.fnal.gov/redmine/projects/argoneutcode/repository/revisions/develop/entry/TGMuon/TGMuon_module.cc

Thanks,
Tingjun

standard_sim_mono_t962.fcl (2.32 KB), Tingjun Yang, 05/13/2019 12:26 PM
full.fcl (2.04 KB), Kyle Knoepfel, 05/22/2019 01:41 PM

Associated revisions

Revision 2181e178 (diff)
Added by Tingjun Yang 4 months ago

Fix issue #22583 following Philippe Canal's suggestion.

History

#1 Updated by Kyle Knoepfel 4 months ago

  • Assignee set to Kyle Knoepfel
  • Status changed from New to Assigned

Tingjun, in order to run the job, I need access to the ArgoNeuT GPVMs (the job requires access to /pnfs/argoneut/persistent/minos/rhc). I have requested an account and will let you know once I learn more.

#2 Updated by Kyle Knoepfel 4 months ago

I have been able to reproduce the error...but not for every execution of the job. I suspect a memory problem. Stay tuned.

#3 Updated by Kyle Knoepfel 4 months ago

See report in issue #22589.

#4 Updated by Kyle Knoepfel 4 months ago

A similar error was encountered by ATLAS:

https://root-forum.cern.ch/t/ttreecache-fillbuffer-error-with-root-6-14-04/30914

We have asked Philippe Canal for guidance on how to proceed.

#5 Updated by Kyle Knoepfel 4 months ago

Tingjun, did you have any trouble running this in an older version of LArSoft?

#6 Updated by Tingjun Yang 4 months ago

This problem seems to happen during transition from v08_15_01 to v08_16_00.

#7 Updated by Kyle Knoepfel 4 months ago

I confirm the difference in behavior between LArSoft v08_15_01 and v08_16_00, which corresponds to the upgrade to art 3.02 and from ROOT 6.12.06 to 6.16.00. Although there are differences in how art sets up its custom error handler between art 3.01 and 3.02, I do not see anything fundamental in the art code changes that would result in the behavior you're observing.

At this point, I'm stumped. I ask that you consult Philippe Canal (who has been added as a watcher) about this. Note that the observed behavior is not consistent from one job execution to the next. See below for what my latest job execution looked like (using the attached FHiCL file).

-bash-4.1$ art -c full.fcl -n 100 --no-output 
%MSG-i MF_INIT_OK:  Early 22-May-2019 12:54:26 CDT JobSetup
Messagelogger initialization complete.
%MSG
Info in <TGeoManager::Import>: Reading geometry from file: /argoneut/app/users/knoepfel/argoneut-devel/build_slf6.x86_64/argoneutcode/gdml/argoneut.gdml
Info in <TGeoManager::TGeoManager>: Geometry GDMLImport, Geometry imported from GDML created
Info in <TGeoManager::SetTopVolume>: Top volume is volWorld. Master volume is volWorld
Info in <TGeoNavigator::BuildCache>: --- Maximum geometry depth set to 100
Info in <TGeoManager::CheckGeometry>: Fixing runtime shapes...
Info in <TGeoManager::CheckGeometry>: ...Nothing to fix
Info in <TGeoManager::CloseGeometry>: Counting nodes...
Info in <TGeoManager::Voxelize>: Voxelizing...
Info in <TGeoManager::CloseGeometry>: Building cache...
Info in <TGeoManager::CountLevels>: max level = 5, max placements = 244
Info in <TGeoManager::CloseGeometry>: 754 nodes/ 131 volume UID's in Geometry imported from GDML
Info in <TGeoManager::CloseGeometry>: ----------------modeler ready----------------
%MSG-i GeometryCore:  Early 22-May-2019 12:54:30 CDT JobSetup
New detector geometry loaded from 
    /argoneut/app/users/knoepfel/argoneut-devel/build_slf6.x86_64/argoneutcode/gdml/argoneut.gdml
    /argoneut/app/users/knoepfel/argoneut-devel/build_slf6.x86_64/argoneutcode/gdml/argoneut.gdml

%MSG
%MSG-i StandardGeometryHelper:  Early 22-May-2019 12:54:30 CDT JobSetup
Loading channel mapping: ChannelMapStandardAlg
%MSG
%MSG-i GeometryCore:  Early 22-May-2019 12:54:30 CDT JobSetup
Sorting volumes...
%MSG
%MSG-i ChannelMapStandardAlg:  Early 22-May-2019 12:54:30 CDT JobSetup
Initializing Standard ChannelMap...
%MSG
%MSG-i RANDOM:  TGMuon:tgmugenerator@Construction 22-May-2019 12:54:30 CDT  ModuleConstruction
Instantiated HepJamesRandom engine "tgmugenerator:0:" with seed 0.
%MSG
%MSG-i NuRandomService:  TGMuon:tgmugenerator@Construction  22-May-2019 12:54:30 CDT ModuleConstruction
Seeding default-type engine "tgmugenerator:" with seed 861276387.
%MSG
search paths: /pnfs/argoneut/persistent/minos/rhc

pattern [  0] N*root
list of 216 will be randomized and pared down to 500 MB
  0] => [ 67] keep    152 MB /pnfs/argoneut/persistent/minos/rhc/N00017178_0000_1258043035_1258123082.root
  1] => [ 49] keep    307 MB /pnfs/argoneut/persistent/minos/rhc/N00017136_0000_1257272982_1257359379.root
  2] => [ 34] keep    437 MB /pnfs/argoneut/persistent/minos/rhc/N00017104_0000_1256583539_1256654910.root
  3] => [128] SKIP    550 MB /pnfs/argoneut/persistent/minos/rhc/N00017394_0000_1261612801_1261670554.root
final list of files:
    [  0]    /var/tmp/ifdh_45271_4020/N00017178_0000_1258043035_1258123082.root
    [  1]    /var/tmp/ifdh_45271_4020/N00017136_0000_1257272982_1257359379.root
    [  2]    /var/tmp/ifdh_45271_4020/N00017104_0000_1256583539_1256654910.root
Number of entries in the tchain = 611435
%MSG-w Geometry:  BeginRun 22-May-2019 12:54:33 CDT run: 1
cannot find sumdata::RunData object to grab detector name
this is expected if generating MC files
using default geometry from configuration file

%MSG
Begin processing the 1st record. run: 1 subRun: 0 event: 1 at 22-May-2019 12:54:33 CDT
Begin processing the 2nd record. run: 1 subRun: 0 event: 2 at 22-May-2019 12:54:34 CDT
Begin processing the 3rd record. run: 1 subRun: 0 event: 3 at 22-May-2019 12:54:34 CDT
Begin processing the 4th record. run: 1 subRun: 0 event: 4 at 22-May-2019 12:54:34 CDT
Begin processing the 5th record. run: 1 subRun: 0 event: 5 at 22-May-2019 12:54:34 CDT
Begin processing the 6th record. run: 1 subRun: 0 event: 6 at 22-May-2019 12:54:35 CDT
Begin processing the 7th record. run: 1 subRun: 0 event: 7 at 22-May-2019 12:54:35 CDT
Begin processing the 8th record. run: 1 subRun: 0 event: 8 at 22-May-2019 12:54:35 CDT
Begin processing the 9th record. run: 1 subRun: 0 event: 9 at 22-May-2019 12:54:36 CDT
Begin processing the 10th record. run: 1 subRun: 0 event: 10 at 22-May-2019 12:54:36 CDT
Begin processing the 11th record. run: 1 subRun: 0 event: 11 at 22-May-2019 12:54:36 CDT
Begin processing the 12th record. run: 1 subRun: 0 event: 12 at 22-May-2019 12:54:36 CDT
Begin processing the 13th record. run: 1 subRun: 0 event: 13 at 22-May-2019 12:54:36 CDT
Begin processing the 14th record. run: 1 subRun: 0 event: 14 at 22-May-2019 12:54:36 CDT
Begin processing the 15th record. run: 1 subRun: 0 event: 15 at 22-May-2019 12:54:37 CDT
Begin processing the 16th record. run: 1 subRun: 0 event: 16 at 22-May-2019 12:54:37 CDT
Begin processing the 17th record. run: 1 subRun: 0 event: 17 at 22-May-2019 12:54:37 CDT
Begin processing the 18th record. run: 1 subRun: 0 event: 18 at 22-May-2019 12:54:37 CDT
Begin processing the 19th record. run: 1 subRun: 0 event: 19 at 22-May-2019 12:54:37 CDT
Begin processing the 20th record. run: 1 subRun: 0 event: 20 at 22-May-2019 12:54:37 CDT
Begin processing the 21st record. run: 1 subRun: 0 event: 21 at 22-May-2019 12:54:38 CDT
Begin processing the 22nd record. run: 1 subRun: 0 event: 22 at 22-May-2019 12:54:38 CDT
Begin processing the 23rd record. run: 1 subRun: 0 event: 23 at 22-May-2019 12:54:38 CDT
Begin processing the 24th record. run: 1 subRun: 0 event: 24 at 22-May-2019 12:54:38 CDT
Begin processing the 25th record. run: 1 subRun: 0 event: 25 at 22-May-2019 12:54:38 CDT
Begin processing the 26th record. run: 1 subRun: 0 event: 26 at 22-May-2019 12:54:38 CDT
Begin processing the 27th record. run: 1 subRun: 0 event: 27 at 22-May-2019 12:54:38 CDT
Begin processing the 28th record. run: 1 subRun: 0 event: 28 at 22-May-2019 12:54:39 CDT
Begin processing the 29th record. run: 1 subRun: 0 event: 29 at 22-May-2019 12:54:39 CDT
Error in <TTreeCache::FillBuffer>: Inconsistency: fCurrentClusterStart=0 fEntryCurrent=79800 fNextClusterStart=79806 but fEntryCurrent should not be in between the two
Error in <TTreeCache::FillBuffer>: Inconsistency: fCurrentClusterStart=39903 fEntryCurrent=119709 fNextClusterStart=119919 but fEntryCurrent should not be in between the two
Begin processing the 30th record. run: 1 subRun: 0 event: 30 at 22-May-2019 12:54:39 CDT
Begin processing the 31st record. run: 1 subRun: 0 event: 31 at 22-May-2019 12:54:39 CDT
Error in <TTreeCache::FillBuffer>: Inconsistency: fCurrentClusterStart=79946 fEntryCurrent=119700 fNextClusterStart=119919 but fEntryCurrent should not be in between the two
Begin processing the 32nd record. run: 1 subRun: 0 event: 32 at 22-May-2019 12:54:39 CDT
Begin processing the 33rd record. run: 1 subRun: 0 event: 33 at 22-May-2019 12:54:40 CDT
Begin processing the 34th record. run: 1 subRun: 0 event: 34 at 22-May-2019 12:54:40 CDT
Begin processing the 35th record. run: 1 subRun: 0 event: 35 at 22-May-2019 12:54:40 CDT
Begin processing the 36th record. run: 1 subRun: 0 event: 36 at 22-May-2019 12:54:40 CDT
Begin processing the 37th record. run: 1 subRun: 0 event: 37 at 22-May-2019 12:54:40 CDT
Error in <TTreeCache::FillBuffer>: Inconsistency: fCurrentClusterStart=79806 fEntryCurrent=119700 fNextClusterStart=119709 but fEntryCurrent should not be in between the two
Begin processing the 38th record. run: 1 subRun: 0 event: 38 at 22-May-2019 12:54:41 CDT
Begin processing the 39th record. run: 1 subRun: 0 event: 39 at 22-May-2019 12:54:41 CDT
Begin processing the 40th record. run: 1 subRun: 0 event: 40 at 22-May-2019 12:54:41 CDT
Begin processing the 41st record. run: 1 subRun: 0 event: 41 at 22-May-2019 12:54:41 CDT
Begin processing the 42nd record. run: 1 subRun: 0 event: 42 at 22-May-2019 12:54:41 CDT
Begin processing the 43rd record. run: 1 subRun: 0 event: 43 at 22-May-2019 12:54:41 CDT
Begin processing the 44th record. run: 1 subRun: 0 event: 44 at 22-May-2019 12:54:41 CDT
Begin processing the 45th record. run: 1 subRun: 0 event: 45 at 22-May-2019 12:54:42 CDT
Begin processing the 46th record. run: 1 subRun: 0 event: 46 at 22-May-2019 12:54:42 CDT
Begin processing the 47th record. run: 1 subRun: 0 event: 47 at 22-May-2019 12:54:42 CDT
Begin processing the 48th record. run: 1 subRun: 0 event: 48 at 22-May-2019 12:54:43 CDT
Begin processing the 49th record. run: 1 subRun: 0 event: 49 at 22-May-2019 12:54:43 CDT
Error in <TTreeCache::FillBuffer>: Inconsistency: fCurrentClusterStart=119919 fEntryCurrent=159612 fNextClusterStart=159892 but fEntryCurrent should not be in between the two
Begin processing the 50th record. run: 1 subRun: 0 event: 50 at 22-May-2019 12:54:43 CDT
Begin processing the 51st record. run: 1 subRun: 0 event: 51 at 22-May-2019 12:54:43 CDT
Begin processing the 52nd record. run: 1 subRun: 0 event: 52 at 22-May-2019 12:54:43 CDT
Begin processing the 53rd record. run: 1 subRun: 0 event: 53 at 22-May-2019 12:54:43 CDT
Begin processing the 54th record. run: 1 subRun: 0 event: 54 at 22-May-2019 12:54:43 CDT
Begin processing the 55th record. run: 1 subRun: 0 event: 55 at 22-May-2019 12:54:43 CDT
Begin processing the 56th record. run: 1 subRun: 0 event: 56 at 22-May-2019 12:54:44 CDT
Begin processing the 57th record. run: 1 subRun: 0 event: 57 at 22-May-2019 12:54:44 CDT
Begin processing the 58th record. run: 1 subRun: 0 event: 58 at 22-May-2019 12:54:44 CDT
Begin processing the 59th record. run: 1 subRun: 0 event: 59 at 22-May-2019 12:54:44 CDT
Begin processing the 60th record. run: 1 subRun: 0 event: 60 at 22-May-2019 12:54:44 CDT
Begin processing the 61st record. run: 1 subRun: 0 event: 61 at 22-May-2019 12:54:45 CDT
Begin processing the 62nd record. run: 1 subRun: 0 event: 62 at 22-May-2019 12:54:45 CDT
Begin processing the 63rd record. run: 1 subRun: 0 event: 63 at 22-May-2019 12:54:45 CDT
Begin processing the 64th record. run: 1 subRun: 0 event: 64 at 22-May-2019 12:54:45 CDT
Begin processing the 65th record. run: 1 subRun: 0 event: 65 at 22-May-2019 12:54:45 CDT
Begin processing the 66th record. run: 1 subRun: 0 event: 66 at 22-May-2019 12:54:45 CDT
Begin processing the 67th record. run: 1 subRun: 0 event: 67 at 22-May-2019 12:54:45 CDT
Begin processing the 68th record. run: 1 subRun: 0 event: 68 at 22-May-2019 12:54:46 CDT
Begin processing the 69th record. run: 1 subRun: 0 event: 69 at 22-May-2019 12:54:46 CDT
Begin processing the 70th record. run: 1 subRun: 0 event: 70 at 22-May-2019 12:54:46 CDT
Begin processing the 71st record. run: 1 subRun: 0 event: 71 at 22-May-2019 12:54:46 CDT
Begin processing the 72nd record. run: 1 subRun: 0 event: 72 at 22-May-2019 12:54:46 CDT
Begin processing the 73rd record. run: 1 subRun: 0 event: 73 at 22-May-2019 12:54:46 CDT
Begin processing the 74th record. run: 1 subRun: 0 event: 74 at 22-May-2019 12:54:46 CDT
Begin processing the 75th record. run: 1 subRun: 0 event: 75 at 22-May-2019 12:54:47 CDT
Begin processing the 76th record. run: 1 subRun: 0 event: 76 at 22-May-2019 12:54:47 CDT
Begin processing the 77th record. run: 1 subRun: 0 event: 77 at 22-May-2019 12:54:47 CDT
Begin processing the 78th record. run: 1 subRun: 0 event: 78 at 22-May-2019 12:54:47 CDT
Error in <TTreeCache::FillBuffer>: Inconsistency: fCurrentClusterStart=119919 fEntryCurrent=159612 fNextClusterStart=182070 but fEntryCurrent should not be in between the two
Begin processing the 79th record. run: 1 subRun: 0 event: 79 at 22-May-2019 12:54:48 CDT
Begin processing the 80th record. run: 1 subRun: 0 event: 80 at 22-May-2019 12:54:48 CDT
Begin processing the 81st record. run: 1 subRun: 0 event: 81 at 22-May-2019 12:54:48 CDT
Error in <TTreeCache::FillBuffer>: Inconsistency: fCurrentClusterStart=0 fEntryCurrent=79800 fNextClusterStart=79806 but fEntryCurrent should not be in between the two
Begin processing the 82nd record. run: 1 subRun: 0 event: 82 at 22-May-2019 12:54:48 CDT
Begin processing the 83rd record. run: 1 subRun: 0 event: 83 at 22-May-2019 12:54:48 CDT
Begin processing the 84th record. run: 1 subRun: 0 event: 84 at 22-May-2019 12:54:48 CDT
Begin processing the 85th record. run: 1 subRun: 0 event: 85 at 22-May-2019 12:54:48 CDT
Begin processing the 86th record. run: 1 subRun: 0 event: 86 at 22-May-2019 12:54:48 CDT
Begin processing the 87th record. run: 1 subRun: 0 event: 87 at 22-May-2019 12:54:49 CDT
Begin processing the 88th record. run: 1 subRun: 0 event: 88 at 22-May-2019 12:54:49 CDT
Begin processing the 89th record. run: 1 subRun: 0 event: 89 at 22-May-2019 12:54:49 CDT
Begin processing the 90th record. run: 1 subRun: 0 event: 90 at 22-May-2019 12:54:49 CDT
Begin processing the 91st record. run: 1 subRun: 0 event: 91 at 22-May-2019 12:54:49 CDT
Begin processing the 92nd record. run: 1 subRun: 0 event: 92 at 22-May-2019 12:54:50 CDT
Begin processing the 93rd record. run: 1 subRun: 0 event: 93 at 22-May-2019 12:54:50 CDT
Error in <TTreeCache::FillBuffer>: Inconsistency: fCurrentClusterStart=79806 fEntryCurrent=119700 fNextClusterStart=119709 but fEntryCurrent should not be in between the two
Begin processing the 94th record. run: 1 subRun: 0 event: 94 at 22-May-2019 12:54:50 CDT
Begin processing the 95th record. run: 1 subRun: 0 event: 95 at 22-May-2019 12:54:50 CDT
Error in <TTreeCache::FillBuffer>: Inconsistency: fCurrentClusterStart=159612 fEntryCurrent=199500 fNextClusterStart=199515 but fEntryCurrent should not be in between the two
Begin processing the 96th record. run: 1 subRun: 0 event: 96 at 22-May-2019 12:54:50 CDT
Begin processing the 97th record. run: 1 subRun: 0 event: 97 at 22-May-2019 12:54:50 CDT
Begin processing the 98th record. run: 1 subRun: 0 event: 98 at 22-May-2019 12:54:50 CDT
Begin processing the 99th record. run: 1 subRun: 0 event: 99 at 22-May-2019 12:54:50 CDT
Begin processing the 100th record. run: 1 subRun: 0 event: 100 at 22-May-2019 12:54:51 CDT
%MSG-i NuRandomService:  TriggerResultInserter:TriggerResults@EndJob  22-May-2019 12:54:51 CDT ModuleEndJob

Summary of seeds computed by the NuRandomService
Random policy: 'random'
  master seed: 727642913
  seed within: [ 1 ; 900000000 ]
   Configured value          Last value   ModuleLabel.InstanceName
          861276387              (same)   tgmugenerator

%MSG

TrigReport ---------- Event  Summary ------------
TrigReport Events total = 100 passed = 100 failed = 0

TimeReport ---------- Time  Summary ---[sec]----
TimeReport CPU = 17.697310 Real = 17.821098

MemReport  ---------- Memory  Summary ---[base-10 MB]----
MemReport  VmPeak = 789.922 VmHWM = 274.256

Art has completed and will exit with status 0.

#8 Updated by Kyle Knoepfel 4 months ago

A little more background: the job uses the --no-output option to disable art's RootOutput module, thus decoupling art's ROOT usage from the TGMuon module's ROOT usage. The RootOutput module sets an error handler that converts most ROOT errors to fatal errors. Since the output module is disabled in the above job, the error message is reported but the job continues until all 100 events have been processed.
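The handler-swapping mechanism described above can be sketched in plain C++. This is an illustrative model only; the names below are invented for the sketch, while ROOT's real mechanism is `SetErrorHandler()` declared in `TError.h`. The key idea: a process-wide handler decides whether a reported error is tolerated or promoted to an exception, so which module last installed a handler changes whether the job dies or keeps going.

```cpp
#include <stdexcept>
#include <string>

// Illustrative sketch only: these names are invented for this example.
// ROOT's actual mechanism is SetErrorHandler() from TError.h.
enum Severity { kInfo, kWarning, kError, kFatal };

using ErrorHandler = void (*)(Severity, const std::string&);

// Lenient handler: only a Fatal report becomes an exception.
void lenientHandler(Severity s, const std::string& msg) {
  if (s >= kFatal) throw std::runtime_error(msg);
}

// Strict handler, analogous in spirit to what an output module might
// install: ordinary errors are promoted to fatal exceptions as well.
void strictHandler(Severity s, const std::string& msg) {
  if (s >= kError) throw std::runtime_error(msg);
}

ErrorHandler gHandler = lenientHandler;

// Swap the active handler, returning the previous one.
ErrorHandler setHandler(ErrorHandler h) {
  ErrorHandler old = gHandler;
  gHandler = h;
  return old;
}

// Called at each error site; behavior depends on the installed handler.
void report(Severity s, const std::string& msg) { gHandler(s, msg); }
```

With the lenient handler installed (the --no-output situation in this sketch), an Error-level report is survivable; once the strict handler is installed, the same report terminates event processing.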

Tingjun, I suggest you get in touch with Philippe directly and see what guidance he has. You may be encouraged to open a JIRA ticket; I can help you do that if Philippe thinks that's best.

#9 Updated by Philippe Canal 4 months ago

Can you give me access to the first file to see if we can reproduce the inconsistency error in bare root?

#10 Updated by Kyle Knoepfel 4 months ago

I've moved the files to cluck.fnal.gov. In the order listed in the job log above:

[0] /home/knoepfel/scratch/for-philippe/N00017178_0000_1258043035_1258123082.root
[1] /home/knoepfel/scratch/for-philippe/N00017136_0000_1257272982_1257359379.root
[2] /home/knoepfel/scratch/for-philippe/N00017104_0000_1256583539_1256654910.root

Please let me know if you have trouble accessing them.

#11 Updated by Philippe Canal 4 months ago

The problem does not seem to reproduce with bare ROOT (at least with the version I have). To double-check, run:

auto file = TFile::Open("/home/knoepfel/scratch/for-philippe/N00017178_0000_1258043035_1258123082.root");
TTree *minitree;
file->GetObject("minitree", minitree);
for (long i = 0; i < minitree->GetEntries(); ++i) minitree->GetEntry(i);

If there is no error message there, then we will debug a version of LArSoft v08_16_00 with a debug version of ROOT to see why this is happening.

Cheers,
Philippe.

#12 Updated by Kyle Knoepfel 4 months ago

Confirmed: no error message with bare ROOT (art-distributed 6.16/00). I will work on getting a debug version of LArSoft v08_16_00 available; we already have one for ROOT 6.16/00.

#13 Updated by Lynn Garren 4 months ago

I've just installed larsoft v08_16_00 on cluck. Both debug and prof are available.

source /products/setup
setup -B larsoft v08_16_00 -q e17:debug 

#14 Updated by Kyle Knoepfel 4 months ago

I am able to reproduce the bug using bare ROOT. The crucial aspect is that the entries are not retrieved in sequential order but via a randomized index:

#include "TFile.h" 
#include "TChain.h" 

#include <iostream>
#include <random>

int main() {

  auto chain = new TChain("minitree");
  chain->Add("/home/knoepfel/scratch/for-philippe/N00017178_0000_1258043035_1258123082.root");
  chain->Add("/home/knoepfel/scratch/for-philippe/N00017136_0000_1257272982_1257359379.root");
  chain->Add("/home/knoepfel/scratch/for-philippe/N00017104_0000_1256583539_1256654910.root");

  // Retrieve entries in a random order, mimicking what the TGMuon module does.
  std::default_random_engine engine;
  std::uniform_int_distribution<long long int> dist{0, chain->GetEntries()-1};
  for(long i = 0; i < 10000; ++i) {
    auto entry = dist(engine);
    chain->GetEntry(entry);
  }
}

I get the following error after a few seconds of processing:

Error in <TTreeCache::FillBuffer>: Inconsistency: fCurrentClusterStart=0 fEntryCurrent=119700 fNextClusterStart=119709 but fEntryCurrent should not be in between the two

In the TGMuon module, the randomization is done using the CLHEP random number utilities.

#15 Updated by Philippe Canal 4 months ago

In the full case, are the entries also read out of order?

#17 Updated by Philippe Canal 4 months ago

The TTreeCache is designed for uni-directional reading. In a random walk it is actually likely a pessimisation (re-loading the same bytes from disk multiple times), so the solution might simply be to disable it:

chain->SetCacheSize(0);

Cheers,
Philippe.
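Philippe's pessimisation point can be illustrated with a toy model. This is invented for illustration and is not ROOT's actual TTreeCache logic: assume reading any entry prefetches its whole cluster and the cache holds exactly one cluster at a time. A sequential scan then loads each cluster once, while a random walk reloads clusters over and over.

```cpp
#include <cstdlib>
#include <vector>

// Toy model, invented for illustration (not ROOT's TTreeCache):
// reading an entry prefetches its whole cluster, and the cache holds
// exactly one cluster at a time.
struct PrefetchCache {
  long clusterSize;
  long cachedCluster;  // cluster currently in memory (-1 = none)
  long clusterLoads;   // trips back to "disk"

  explicit PrefetchCache(long cs)
    : clusterSize(cs), cachedCluster(-1), clusterLoads(0) {}

  void read(long entry) {
    long cluster = entry / clusterSize;
    if (cluster != cachedCluster) {  // cache miss: fetch the cluster
      cachedCluster = cluster;
      ++clusterLoads;
    }
  }
};

// Count how many cluster loads a given access pattern costs.
long loadsFor(const std::vector<long>& entries, long clusterSize) {
  PrefetchCache cache(clusterSize);
  for (long e : entries) cache.read(e);
  return cache.clusterLoads;
}
```

With 10,000 entries and clusters of 1,000, a sequential scan costs 10 loads, while a uniformly random walk costs close to one load per read; in that regime the prefetching buys nothing, which is why `chain->SetCacheSize(0)` is a reasonable fix.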

#18 Updated by Kyle Knoepfel 4 months ago

  • % Done changed from 0 to 100
  • Status changed from Assigned to Resolved
  • Tracker changed from Bug to Support

Tingjun, based on Philippe's guidance, the solution is to add the following line in TGMuon_module.cc:

  chain = new TChain("minitree");
+ chain->SetCacheSize(0);

Please let us know if you still have problems after making the change.

#19 Updated by Tingjun Yang 4 months ago

Indeed, adding that line fixed the problem. Thank you so much, Kyle and Philippe.

#20 Updated by Kyle Knoepfel 4 months ago

  • Status changed from Resolved to Closed

