Improvements to monitoring

Features discussed at the May-14-2016 meeting

  • A flexible number of consumers (as opposed to simply the online monitoring aggregator), capable of appearing / disappearing during datataking without causing disruptions, which, when they appear, announce their presence to the DAQ and tell it what types of events they want to receive.
    - Not an absolute requirement that consumers receive all requested events (though obviously we would like them to)
    - A consumer is actually an art process which uses an art input source capable of receiving events originating in the DAQ
  • A flexible number of dispatchers (as opposed to simply the diskwriting aggregator), capable of checking for consumers, and sending events to them
    - Unlike consumers, it is not acceptable for dispatchers to miss any events (100% reliability is required)
    - Perhaps I (JCF) missed this: does the term "dispatcher" simply refer to an artdaq process (BoardReaderMain, etc.) which sends events to consumers, or is it a new, special type of process separate from BoardReaderMain, EventBuilderMain and AggregatorMain?
    - The actual act of sending the events could be done in an art output module (for those processes which contain art threads, e.g., EventBuilderMains and AggregatorMains) or elsewhere (BoardReaderMain, which doesn't have an art thread).
  • For a given communication line, the ability to select one or more different transport implementations: shared memory, multicast, RTI-DDS, etc. (see the configuration sketch after this list)
  • Ultimately, these different transport implementations could be used not just for sending data to consumers, but for sending data throughout the DAQ - i.e., serving as a replacement for MPI
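
As a very rough illustration of the last two bullets, selecting a transport implementation for a given communication line could eventually be expressed in FHiCL along the lines below. The block and parameter names here are hypothetical, chosen only to show the shape of such a configuration; they are not taken from the artdaq code:

    # Hypothetical FHiCL sketch -- block and parameter names are illustrative only
    transfer_to_consumers: {
      transferPluginType: RTIDDS          # could instead be shmem or, eventually, multicast
      unique_label: monitoring_broadcast  # identifies this particular communication line
      max_fragment_size_words: 2097152    # sizing hint for the underlying transport
    }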

Update, Jun-12-2016

With artdaq-demo's monitoring branch, whose head at this point is commit ee1645430159b14c27a8430aea3a66d204a29b60, along with artdaq's monitoring branch, whose head is commit 14fde3f6b9e72b41b46a73476ef27360ed4a2e8a, the available functionality is summarized in the following table (explanation to follow):

             standard   art, no monitoring aggregator   art and monitoring aggregator   start in middle of run
shmem        Yes        No                              No                              No
RTIDDS       Yes        Yes                             Yes                             No
multicast    No         No                              No                              No

Here, each row describes a different implementation of the new transfer plugin. Note that shmem is the traditional method of broadcasting data from the diskwriting aggregator, but that the shared memory details are now implemented separately from the aggregator code, in the shmemTransfer class. The columns describe different types of monitoring configurations:

  • "standard" refers to the traditional configuration, in which a second aggregator receives data from the first.
  • "art, no monitoring aggregator" means that there is no second aggregator; instead, all monitoring is performed by running art with the new TransferInput source.
  • "art and monitoring aggregator" means that there is both a second aggregator and a separate instance of art, potentially running a different set of art modules than the second aggregator.
  • "start in middle of run" means that it is possible to launch an art process using the TransferInput source in the middle of datataking.

While no work has yet been done on multicast, a row has been included for it since it was suggested as a method for broadcasting data at our May-14-2016 meeting.
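
To make the "art, no monitoring aggregator" configuration concrete, a standalone monitoring job driven by the TransferInput source could be structured roughly as follows. This is only a sketch, not the contents of TransferInput.fcl: the process name and the WFViewer analyzer are placeholders standing in for whatever modules a given experiment wants to run.

    # Rough sketch of a standalone art monitoring job (placeholder names, not TransferInput.fcl verbatim)

    process_name: TransferMonitor

    source: {
      module_type: TransferInput      # the new input source which receives events sent from the DAQ
    }

    physics: {
      analyzers: {
        monitor: {
          module_type: WFViewer       # placeholder: any art analyzer module could run here
        }
      }

      a1: [ monitor ]
      end_paths: [ a1 ]
    }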

A quick aside: the artdaq-demo/tools/fcl/TransferInput.fcl file, which was added to artdaq-demo so that a separate art process can monitor via the TransferInput source, had to be edited to switch between shmem and RTIDDS, as did generateAggregator.rb. With the commit described above (ee1645430159b14c27a8430aea3a66d204a29b60), I needed to add the transferImplementationType value to the TransferInput source so it would know which implementation to use (RTIDDS vs. shmem).
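
Concretely, that edit amounts to something like the following in the source block (again a sketch, not the file's exact contents):

    source: {
      module_type: TransferInput
      transferImplementationType: RTIDDS   # switch this to shmem to use the shared-memory transfer instead
    }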