Meeting 5102010

Marc's notes

Met with Brian Rebel and Rob Kutschke to discuss requirements: specifically, what the sensible unit of work is for the framework to operate on.

In previous meetings we had discussed some enhancements of the "event loop"; in particular, adding some complexity to the middle level of aggregation (what is currently called the LuminosityBlock). This extra complexity would have allowed the lowest-level loop (the event processing itself) to be used for the lowest level of data processing: the processing of individual interactions.

It turns out that the amount of processing done on these individual interactions (not including the effort necessary to identify them) is small. Thus it does not seem worthwhile to complicate the framework with another level of "event processing loop" in order to make the framework handle this level of work.

Most of the processing effort goes into the identification of interactions. Thus the Spill seems to be the natural unit of work for both Mu2e and the LAr experiments.

Our conclusion is to have the Spill be the unit of work (which is potentially parallelizable within the framework). Essentially, we propose renaming the class Event to Spill.

We also decided that LuminosityBlock could be renamed Subrun.
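
A rough sketch of what the renamed hierarchy might look like from a module author's point of view. Everything here (the type names, the hook names, the driver) is illustrative only, not actual framework code:

    // Illustrative only: the proposed Run/Subrun/Spill naming, with the
    // Spill as the per-unit-of-work argument a module's hook receives.
    #include <iostream>

    struct Run    {};  // top-level aggregation, unchanged
    struct Subrun {};  // formerly LuminosityBlock
    struct Spill  {};  // formerly Event: the proposed unit of work

    struct MyModule {
      void beginSubrun(Subrun const&) { std::cout << "begin subrun\n"; }
      void process(Spill const&)      { std::cout << "process one spill\n"; }
      void endSubrun(Subrun const&)   { std::cout << "end subrun\n"; }
    };

    int main() {
      // Stand-in for the framework's loop: one subrun containing one spill.
      MyModule m;
      Subrun sr;
      m.beginSubrun(sr);
      m.process(Spill{});
      m.endSubrun(sr);
    }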

Rob's notes

Rob's notes from the meeting on May 10, 2010 to discuss what changes are needed to the framework to accommodate the idea of "spills" for both Mu2e and the neutrino experiments.

For the nu experiments, and probably for Mu2e, the fundamental unit of DAQ and of offline processing is the spill. There is a small chance that Mu2e will decide to go another way; more on this later.

For the nu experiments, the processing of data from one spill will produce multiple reconstructed interactions (I am using "reconstructed interactions" instead of "events" to avoid overloading "event"). The last step of offline processing needs to hold in memory, at one time, all of the reconstructed interactions for one spill. This step will sort through all of the reconstructed interactions and decide which ones are too close together to properly distinguish. Mu2e has a similar requirement, except that the reconstructed objects are reconstructed tracks, not reconstructed interactions.
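
A minimal sketch of that last step, assuming a hypothetical RecoInteraction type with a reconstructed time and an invented minimum-separation cut; the real criterion for "too close together" is experiment-specific:

    // Hypothetical sketch: with all of a spill's reconstructed interactions
    // in memory at once, flag pairs too close in time to distinguish.
    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct RecoInteraction {
      double t;                  // reconstructed time (ns); invented field
      bool   ambiguous = false;  // too close to a neighbor to distinguish
    };

    void flagOverlaps(std::vector<RecoInteraction>& interactions,
                      double minSeparationNs) {
      // A simple O(n^2) pairwise pass is affordable here: this step is
      // cheap compared with finding the interactions in the first place.
      for (std::size_t i = 0; i < interactions.size(); ++i)
        for (std::size_t j = i + 1; j < interactions.size(); ++j)
          if (std::abs(interactions[i].t - interactions[j].t)
              < minSeparationNs) {
            interactions[i].ambiguous = true;
            interactions[j].ambiguous = true;
          }
    }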

The requirement that the reconstructed objects be able to communicate with each other kills the model discussed last week (in which a reconstructed interaction maps to an edm::Event object, there is a new SpillSource input module, and there is a new type of module that can create edm::Event objects from edm::Spill objects).

The model that does work is: Run/SubRun/Spill. In this model the collection of reconstructed interactions is held as a data product within a Spill.
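
For concreteness, here is one way "held as a data product within a Spill" could look. The Spill class below and its put/get interface are assumptions for illustration, not the real EDM:

    // Illustrative Spill holding labeled data products; a downstream module
    // retrieves the whole collection of reconstructed interactions at once,
    // so the reconstructed objects can be compared with each other.
    #include <any>
    #include <map>
    #include <stdexcept>
    #include <string>
    #include <vector>

    struct RecoInteraction { double t; };  // invented stand-in type

    class Spill {
      std::map<std::string, std::any> products_;  // label -> data product
    public:
      template <typename T>
      void put(std::string const& label, T product) {
        products_[label] = std::move(product);
      }
      template <typename T>
      T const& get(std::string const& label) const {
        auto it = products_.find(label);
        if (it == products_.end())
          throw std::runtime_error("no product: " + label);
        return std::any_cast<T const&>(it->second);
      }
    };

    int main() {
      Spill spill;
      spill.put("reco", std::vector<RecoInteraction>{{10.0}, {11.5}});
      // The "last step" sees every interaction in the spill together.
      auto const& all = spill.get<std::vector<RecoInteraction>>("reco");
      (void)all;  // ... pairwise proximity logic as sketched above ...
    }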

In this picture the pair (Run, SubRun) is just a bookkeeping device used to define intervals of validity for conditions data, to help with file-level bookkeeping, to define intervals of bad data, and so on. It is highly likely that a two-level ID is good enough for all experiments.
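
Such a two-level ID could be as small as the following sketch (a hypothetical type; a real one would also carry equality, I/O, and so on):

    // Illustrative two-level ID; ordering lets it key interval-of-validity
    // lookups for conditions data and bad-data ranges.
    #include <cstdint>
    #include <tuple>

    struct SubRunID {
      std::uint32_t run;
      std::uint32_t subRun;
      friend bool operator<(SubRunID a, SubRunID b) {
        return std::tie(a.run, a.subRun) < std::tie(b.run, b.subRun);
      }
    };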

There is one potential problem with this model: does it work if Mu2e decides on a hardware trigger, which makes the fundamental unit of DAQ an event rather than a spill? If a trigger causes an entire spill to be read out, the above model works fine. If there can only ever be zero or one trigger per spill, the above model also works fine. The problem arises if:
  1. the DAQ allows more than one triggered event within one spill, and
  2. we want to be able to hold both events in memory at once in order to look for correlations among them.

The Mu2e proposal says to read out something like a 200 ns window for one trigger, but it does not address multiple triggers within one spill.

I have started a discussion within Mu2e to learn what assumptions we can make about this.

Additional notes:

  1. Marc wanted to know what processing step dominates the CPU time, since he wanted to make sure the framework could address any sub-spill parallelism that might be useful down the road. The trick is to design the framework so that the work required to implement parallelization can be done by the framework, not by user code inside a module (a sketch of this division of labor follows the list). However, the CPU-intensive work is finding the reconstructed objects within a spill (true in both experiments); the step of looking at the reconstructed objects afterwards is not time consuming and is not a natural candidate for parallelization.
  2. By focusing on offline, are we closing off opportunities for parallelism that may be appropriate in a software trigger?
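
A sketch of the division of labor from note 1: the framework partitions a spill and schedules the CPU-heavy "find the reconstructed objects" step across threads, while user code stays a pure per-chunk function with no threading knowledge. Chunk, findObjects, and the partitioning itself are all hypothetical:

    // Hypothetical sub-spill parallelism living entirely in the framework.
    #include <future>
    #include <vector>

    struct Chunk {};       // a framework-chosen slice of one spill's raw data
    struct RecoObject {};  // invented stand-in for a reconstructed object

    // User code: per-chunk reconstruction, knows nothing about threads.
    std::vector<RecoObject> findObjects(Chunk const&) { return {}; }

    // Framework code: fan the chunks out, then merge the results for the
    // cheap, sequential "look at reconstructed objects" step.
    std::vector<RecoObject> reconstructSpill(std::vector<Chunk> const& chunks) {
      std::vector<std::future<std::vector<RecoObject>>> tasks;
      for (auto const& c : chunks)
        tasks.push_back(std::async(std::launch::async,
                                   [&c] { return findObjects(c); }));
      std::vector<RecoObject> all;
      for (auto& t : tasks) {
        auto part = t.get();
        all.insert(all.end(), part.begin(), part.end());
      }
      return all;
    }

    int main() {
      std::vector<Chunk> chunks(4);  // pretend the framework split the spill
      auto objects = reconstructSpill(chunks);
      (void)objects;
    }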