Support for reading secondary input files in a special scenario
Mu2e has the following use case:
1) Run the simulation chain to the point that it has produced simulated raw data.
2) The output files from 1) contain the full MC truth chain plus the simulated raw data.
3) Extract the simulated raw data into a format that can be injected into the DAQ/trigger development system.
4) Run the DAQ/trigger development system, reading the input from 3).
5) Inside 4), run the trigger code and write output files that contain the simulated raw data plus the data products produced by the trigger code.
We would like to read the output of 5) and access the MC truth available in the files from 2).
I believe that we can do this with event mixing, but it would be cleaner if we could do it with secondary input files. What would it take to support reading via a secondary input file?
We have a lot of freedom to modify our workflows to support this.
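The association requested above, between the trigger output of step 5) and the MC truth of step 2), can be pictured as a join on event ID across two file sets. A minimal sketch of that idea, outside of art, with hypothetical dictionaries standing in for the two file sets (none of the names below are art's):

```python
# Hypothetical stand-ins for the two file sets: keys are (run, subrun, event)
# IDs, values are the products stored for that event. This only illustrates
# the event-ID join that a secondary input file (or event mixing) would
# perform; it uses none of art's actual machinery.

trigger_output = {            # files from step 5): raw data + trigger products
    (1, 0, 1): {"raw": "raw-1", "trig": "trk-candidates-1"},
    (1, 0, 2): {"raw": "raw-2", "trig": "trk-candidates-2"},
}

mc_truth = {                  # files from step 2): full MC truth chain
    (1, 0, 1): {"truth": "gen+sim-1"},
    (1, 0, 2): {"truth": "gen+sim-2"},
}

def with_truth(trigger_events, truth_events):
    """Yield each trigger event merged with the MC truth for the same event ID."""
    for event_id, products in trigger_events.items():
        truth = truth_events.get(event_id)
        if truth is None:
            raise KeyError(f"no MC truth for event {event_id}")
        yield event_id, {**products, **truth}

merged = dict(with_truth(trigger_output, mc_truth))
# merged[(1, 0, 1)] == {"raw": "raw-1", "trig": "trk-candidates-1", "truth": "gen+sim-1"}
```

The join is purely by event ID; it carries no provenance, which is the distinction the discussion below turns on.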
#1 Updated by Kyle Knoepfel over 3 years ago
- Status changed from New to Feedback
With your described workflow, we interpret there to be an intentional disconnect between steps 2) and 3). In other words, the intention of step 3) is to create a raw-data file that acts as an input source to the trigger system. If that is the case, then the secondary-input-file facility will not be able to support your use case, due to mismatches in provenance. Event mixing, however, as you point out, will work.
If the reason for wanting to use secondary input files is to continue provenance, then we can discuss what would be involved and whether a feature request is sensible.
#2 Updated by Rob Kutschke over 3 years ago
I am aware that we will likely lose provenance, and we can live with that.
Could we bias the job counter that goes into product IDs, so that data products produced by the trigger start at some number bigger than 1? Then product IDs would be unique across the full set of jobs.
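The biasing idea can be illustrated with a toy ID allocator. Nothing here uses art's actual product-ID machinery or names; it only shows why starting the trigger jobs' counter above the simulation jobs' range keeps the combined ID set collision-free:

```python
# Toy illustration of the proposed biasing scheme. The class and the offset
# value are hypothetical; in art the counter lives inside the provenance
# machinery and is not user-visible like this.

class ProductIdAllocator:
    """Hands out sequential product IDs starting from a configurable offset."""
    def __init__(self, first_id=1):
        self._next = first_id

    def allocate(self):
        pid = self._next
        self._next += 1
        return pid

# Upstream simulation jobs start at 1, as today ...
sim_ids = ProductIdAllocator(first_id=1)
sim_products = [sim_ids.allocate() for _ in range(3)]          # [1, 2, 3]

# ... while trigger jobs start above any ID the simulation could have used,
# so IDs stay unique across the full set of jobs.
trigger_ids = ProductIdAllocator(first_id=1000)
trigger_products = [trigger_ids.allocate() for _ in range(2)]  # [1000, 1001]

assert not set(sim_products) & set(trigger_products)
```

The scheme only works if the chosen offset is guaranteed to exceed every ID any upstream job could allocate, which is the kind of cross-job coordination the framework would have to support.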
#3 Updated by Marc Paterno over 3 years ago
- Tracker changed from Feature to Idea
- Status changed from Feedback to Accepted
After discussion with Rob, this involves more work than is reasonable right now. We will keep this need in mind as we look at metadata reorganization, as part of the multithreading tasks.