Proposal for replacement for
Motivation.¶There are several different contexts in which a module may execute:
- as part of a path in a (possibly parallel) schedule;
- as a (possibly parallel) part of the end path;
- triggered on-demand by a module in either of the first two contexts.
These contexts apply not only to the event loop, but also to other barriers such as end of run or end of job.
In order to manage access to resources such as services or products properly, a centrally-available facility must be able to provide, at all times, a representation of the current state of the art program from the point of view of the entity requesting that state. This must be achieved in the face of TBB's task-based operation, which may involve preemption, i.e. suspension of a running task to allow the execution of another task on that thread.
Discussion.¶As of 2013/09/25 or so, services may be PER_SCHEDULE. Signals may be global or local. Modules have no classification on this axis, but it is envisaged that in the future they may have properties dictating whether they may be multiply instantiated, or whether they access a resource whose use must be serialized (such as ROOT histogramming). In this environment, then:
- Algorithm code must be able to access services safely, whether it is executing in the multi-schedule or end-path sections of the event loop, or in some other serialized or naturally-serial stage such as the input module, or the run or subrun context.
- Infrastructure code must be able to identify the currently-executing module and the context in which it is operating, regardless of the parallelism model in operation at the time, or the particular task or thread involved.
Given the difficulties caused by Intel TBB's task::self() function not necessarily returning a usable pointer to a task, it is necessary to keep track of when it is permissible to call this function, or to avoid calling it at all. In addition, we must bear in mind that TBB may execute a task which preempts the running task. If that is a user task or a high-level TBB construct like parallel_for(), then we have no way of updating the information. We should therefore make the rule that users should not make art infrastructure or service calls from within their own parallel constructs.
ExecutionInfo objects.¶These objects provide the following information:
- Access to the ModuleDescription of the currently-running module, if available.
- The ScheduleID (will be 0 if not in a schedule context).
- The "Stage" -- e.g. event loop, end path, end of run, end of job.
Everything that pushes something onto the stack should, however, pop it off again at the end of the operation, using RAII where appropriate. A class may be provided for this purpose.
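The RAII class mentioned above could look roughly like this. The thread-local stack of strings and the ContextGuard name are assumptions for illustration; the point is only that the destructor guarantees the matching pop, even on early return or exception.

```cpp
#include <stack>
#include <string>
#include <utility>

// Illustrative thread-local context stack (stand-in for the real info stack).
thread_local std::stack<std::string> contextStack;

// RAII guard: pushes a context entry on construction, pops it on destruction,
// so every push is matched by a pop no matter how scope is exited.
class ContextGuard {
public:
  explicit ContextGuard(std::string ctx) { contextStack.push(std::move(ctx)); }
  ~ContextGuard() { contextStack.pop(); }
  ContextGuard(ContextGuard const&) = delete;
  ContextGuard& operator=(ContextGuard const&) = delete;
};
```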
There will be a mechanism to push a new info object containing only a change of module description or stage with respect to the previous one. This is likely to be most useful in the
Each art infrastructure task implementation should push its context onto the stack at the start of its execute() function, and pop it off again at the end.
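The push/pop bracketing of execute() can be sketched as follows. The task type, the vector-based stack, and the string contexts are all hypothetical stand-ins, not art's or TBB's actual classes.

```cpp
#include <string>
#include <vector>

// Illustrative global context stack (stand-in for the real facility).
std::vector<std::string> contextStack;

// Mimics an art infrastructure task: execute() pushes its context on entry
// and pops it on exit, so code run in between sees the correct context.
struct InfraTask {
  std::string context;

  std::string execute() {
    contextStack.push_back(context);            // push at start of execute()
    std::string observed = contextStack.back(); // task body could query this
    contextStack.pop_back();                    // pop at the end
    return observed;
  }
};
```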
This will be a global entity, but the question arises: should it be a global service, or a singleton? The former has a better-defined scope of validity; the latter removes the overhead of the ServiceHandle. My initial proposal is to go the singleton route unless we discover limitations. There is no obvious need discernible at this time for a ParameterSet or any other level of configuration.
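The singleton route could be as simple as a Meyers singleton, which needs no ParameterSet and whose construction is thread-safe in C++11. The class and member names here are hypothetical, chosen only to illustrate the access pattern.

```cpp
#include <string>
#include <utility>

// Hypothetical singleton giving global access to the current execution
// context, avoiding ServiceHandle overhead.
class ExecutionContext {
public:
  static ExecutionContext& instance() {
    static ExecutionContext ctx;  // constructed once; thread-safe since C++11
    return ctx;
  }

  std::string const& currentStage() const { return stage_; }
  void setStage(std::string s) { stage_ = std::move(s); }

private:
  ExecutionContext() = default;  // no configuration needed
  std::string stage_{"none"};
};
```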