Midas Calo-Tracker merge

At the 16th Jan 2015 DAQ meeting we discussed the notes that were previously on this page and decided on the following tasks. They are ordered in terms of when they are "due".

Note that the numbering does not relate to the old numbering (there is no one-to-one mapping).

Tasks for now

1) Tim to update gm2daq (makefiles, etc) to have more generic paths based on environment variables etc to avoid hardcoding, and remove references to $MIDASSYS/linux64 (use $MIDASSYS/linux).

2) Tom to send the modified version of the event builder from the tracker repository to Wes; Wes to then check whether it meets his requirements and, if so, we can discuss merging it into gm2daq.

3) Although we are keeping separate master frontends for now during development, Tom to check that the two systems will merge easily in the future by ensuring the tracker master trigger uses the same RPC system as the calorimeters, etc.
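
For reference, a heavily simplified sketch of the MIDAS client-to-client RPC pattern is shown below; the RPC id, names and argument list are purely illustrative and not the actual gm2daq trigger protocol, so this should be checked against the real code in gm2daq/frontends/common.

    #include "midas.h"

    /* Illustrative RPC id and argument list (NOT the real gm2daq trigger RPC) */
    #define RPC_EXAMPLE_TRIGGER 2101

    static RPC_LIST example_rpc_list[] = {
       {RPC_EXAMPLE_TRIGGER, "rpc_example_trigger",
        {{TID_DWORD, RPC_IN},    /* trigger number */
         {0}}},
       {0}
    };

    /* Slave side: handler called when the master issues the RPC */
    INT example_trigger_handler(INT index, void **params)
    {
       DWORD trigger_number = *((DWORD *) params[0]);
       /* ... arm/read out for this trigger ... */
       (void) trigger_number;
       return RPC_SUCCESS;
    }

    /* Slave side, e.g. in frontend_init():
     *    rpc_register_functions(example_rpc_list, NULL);
     *    cm_register_function(RPC_EXAMPLE_TRIGGER, example_trigger_handler);
     *
     * Master side, once per trigger:
     *    HNDLE hconn;
     *    cm_connect_client("slave_frontend_name", &hconn);
     *    rpc_client_call(hconn, RPC_EXAMPLE_TRIGGER, trigger_number);
     */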

4) Wes and Tom to look at how master frontend slaves are configured, and to propose a new common system that removes hardcoding and provides a more user-friendly way of adding and removing slaves from the system. Maybe this will involve using an ODB array variable for the master frontend where the slave frontend indices or names can be dynamically specified. This also needs to work for systems where the same equipment is instantiated multiple times as a number of frontends.
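
As a concrete illustration of the ODB array idea, a minimal sketch (the ODB path and key name are hypothetical):

    #include <stdio.h>
    #include "midas.h"

    /* Read the list of slave frontend indices from an ODB array instead of
     * hardcoding them in the master trigger code. */
    void read_slave_list(void)
    {
       HNDLE hDB;
       INT   slave_index[16] = {0};         /* up to 16 slaves */
       INT   size = sizeof(slave_index);

       cm_get_experiment_database(&hDB, NULL);

       /* TRUE -> create the key with defaults if it does not exist yet */
       db_get_value(hDB, 0, "/Equipment/MasterTrigger/Settings/Slave indices",
                    slave_index, &size, TID_INT, TRUE);

       for (unsigned i = 0; i < size / sizeof(INT); i++)
          printf("Slave frontend index: %d\n", slave_index[i]);
    }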

5) A ROME example is needed soon so that everyone can start to learn to use it. It would be good if possible to implement a simple version of ROME DQM for the tracker test beam in June 2015. Dubna to provide.

6) Decide how to perform logging from frontends. We probably want a common method for this so that the whole DAQ can write out in the same format to the same stream. This should have toggles for where to write to (e.g. screen, file, etc). This should be done sooner rather than later, as otherwise code we are writing now will need to be changed later. We need to investigate the MIDAS logger to see if it provides a good framework for this. Tom and Wes to investigate.
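
As a starting point for discussion, a rough sketch of what such a common logging call with simple toggles might look like (the function name and toggle handling are hypothetical, not an existing gm2daq interface):

    #include <stdio.h>
    #include <stdarg.h>
    #include "midas.h"

    /* Hypothetical shared logging helper: writes to screen and/or file depending
     * on simple toggles, and forwards only operator-critical messages to cm_msg(). */
    static BOOL  log_to_screen = TRUE;
    static BOOL  log_to_file   = TRUE;
    static FILE *log_file      = NULL;   /* opened e.g. in frontend_init() */

    void daq_log(int critical, const char *routine, const char *fmt, ...)
    {
       char msg[512];
       va_list args;

       va_start(args, fmt);
       vsnprintf(msg, sizeof(msg), fmt, args);
       va_end(args);

       if (log_to_screen)
          printf("[%s] %s\n", routine, msg);
       if (log_to_file && log_file) {
          fprintf(log_file, "[%s] %s\n", routine, msg);
          fflush(log_file);
       }
       /* Only things the operator must see go to the main MIDAS message system */
       if (critical)
          cm_msg(MERROR, routine, "%s", msg);
    }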

Tasks for the short term

Here the short term means the next few months, in particular by the June 2015 tracker test beam.

7) Add any new code that is useful to multiple frontends to the common frontend tools in gm2daq/frontends/common. This is anticipated to include things like RPC tools (already in the common directory, although the tracker frontend needs to start using these versions), ODB tools, IPBus/AMC13 tools, etc.
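
As an example of the kind of small helper that could live in gm2daq/frontends/common, a sketch of a shared ODB getter/setter (the function names are hypothetical):

    #include "midas.h"

    /* Hypothetical shared ODB helpers, so individual frontends don't keep
     * repeating the same db_get_value/db_set_value boilerplate. */

    INT common_odb_get_int(HNDLE hDB, const char *path, INT default_value)
    {
       INT value = default_value;
       INT size  = sizeof(value);
       /* TRUE -> create the key with the default if it does not exist yet */
       db_get_value(hDB, 0, path, &value, &size, TID_INT, TRUE);
       return value;
    }

    INT common_odb_set_int(HNDLE hDB, const char *path, INT value)
    {
       return db_set_value(hDB, 0, path, &value, sizeof(value), 1, TID_INT);
    }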

8) Investigate where the AMC13 tools and the tracker IPBus tools (IPBusDeviceManager, etc) overlap and see where duplication can be avoided.

9) Work towards having a running AMC13 frontend at UCL, investigating the possibility of having a shared frontend with the calorimeter, or with any other AMC13 systems in the experiment.

10) Need discussions on what happens to the data after it is written to a MIDAS bank. This includes how the DQM will run, and how the data will be converted to the offline format. This conversation will also need to consider how DQM and slow control information (presumably stored in databases) will be linked to the "per run" DAQ data, in particular so that an offline user is able to check if a run is any good before using it.

11) Need to decide how slow control frontends will run. One option proposed would be for them to run within the same MIDAS experiment (e.g. same web interface) as the DAQ frontends, but running in a polling mode that is independent of the runs. This is possible in MIDAS, and the data could be written periodically to a database independently of the run .mid file writing, and would continue even when the DAQ is not running. Using the same MIDAS experiment has the advantage that it makes it easy for DAQ and slow control frontends to interact (e.g. so that slow control can stop/inhibit runs, and so that tracker configuration data can be passed down the slow control link before a run starts, etc), otherwise something new might need to be developed. It should be noted that the actual slow control frontends don't need to run on the same machine to be part of the same MIDAS experiment, which avoids performance and configuration issues. These kinds of things need discussing with Mike Eads and the various slow control users, however, as there are many requirements. It was proposed that a slow control workshop on the workshop day before the next collaboration meeting might be useful, maybe combined with the clock/trigger workshop. Becky will start developing a slow control frontend for the tracker at UCL.
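
For reference, this "polling mode independent of the runs" behaviour maps onto a periodic equipment in a standard MIDAS (mfe.c) frontend; a sketch of the relevant EQUIPMENT entry is below (the names, event ID and period are illustrative):

    #include "midas.h"

    /* Readout routine for the periodic slow control equipment (hypothetical) */
    INT read_slow_control_event(char *pevent, INT off);

    /* EQ_PERIODIC + RO_ALWAYS: the readout runs on a timer and keeps running
     * even when no run is in progress, which is the behaviour described above. */
    EQUIPMENT equipment[] = {

       {"TrackerSlowControl",       /* equipment name (illustrative) */
        {20, 0,                     /* event ID, trigger mask */
         "SYSTEM",                  /* event buffer */
         EQ_PERIODIC,               /* periodic readout, not trigger driven */
         0,                         /* interrupt source (unused) */
         "MIDAS",                   /* data format */
         TRUE,                      /* enabled */
         RO_ALWAYS | RO_ODB,        /* read even outside runs, copy to ODB */
         10000,                     /* readout period, ms */
         0,                         /* event limit */
         0,                         /* number of sub-events */
         1,                         /* log history */
         "", "", ""},
        read_slow_control_event,    /* readout routine */
       },

       {""}
    };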

12) Need to consider in more detail what steps the online data will need to go through to end up in an offline analysis format that is common with simulation. This needs to be done as part of the tracker test beam simulation and analysis development.

13) Begin discussions with the auxiliary detectors once they begin developing frontends etc, so that we keep as much in common as possible.

Tasks for the final system

Final system here really means the first time that the calorimeters and trackers will be run together under a single MIDAS installation in preparation for the full experiment.

14) Merge the makefiles in the various directories in gm2daq and gm2-tracker-readout-daq into as few files as possible to avoid duplication and make things as simple as possible for users.

15) Merge various master frontends across gm2daq and gm2-tracker-readout-daq etc to form one final DAQ master trigger.

16) Harmonise scripts, e.g. those for starting/stopping MIDAS, environment scripts, etc.

17) Harmonise MIDAS environment variable usage (e.g. do we specify the experiment name with the "-e <name>" argument to frontends, or with the MIDAS_EXP_NAME environment variable, etc)
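
For reference, MIDAS provides cm_get_environment(), which reads MIDAS_SERVER_HOST and MIDAS_EXP_NAME; a small sketch of checking what a client would fall back to when no "-h"/"-e" arguments are given (purely illustrative):

    #include <stdio.h>
    #include "midas.h"

    /* Print the host/experiment picked up from MIDAS_SERVER_HOST and
     * MIDAS_EXP_NAME, i.e. what a client falls back to without -h/-e. */
    int main(void)
    {
       char host_name[HOST_NAME_LENGTH];
       char exp_name[NAME_LENGTH];

       cm_get_environment(host_name, sizeof(host_name), exp_name, sizeof(exp_name));

       printf("Host: '%s'  Experiment: '%s'\n", host_name, exp_name);
       return 0;
    }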

18) Harmonise how frontends are launched, e.g. with screen or not, etc.

19) Develop an event display system, possibly using Paraview. This is currently under investigation by Adam Lyon.

Other notes

  • Have decided that there is no need to rename the various event builder names, as the inconsistency isn't causing problems and much of it is hardcoded in MIDAS.
  • We all agree that messages sent to the main MIDAS message page (the ones that pop up in the web interface) should only be for things that the user needs to be aware of immediately. Other logging should be done via different channels.

Old notes

These are the old notes from 07/01/15, i.e. from before the DAQ meeting where they were discussed. The notes above replace these, but they are kept here for a short time for safety.

Below is a list of tasks and notes relating to the merging of the tracker and calorimeter MIDAS systems.

1) gm2daq has many distinct makefiles with lots of duplication between them. Might be better to move to a single top-level makefile for the repository with separate targets for the different frontends, event builder, etc

[Wes] Personally, at this point, I think it makes sense to have a Makefile for each frontend. When debugging new code, I think it is easier to just deal with a specific Makefile than to look at a global one that applies to many different codes. I think your suggestion would be a good thing to implement eventually, but I would not do it yet.

2) Would be useful to set up common tools in a separate directory and files in gm2daq. These would allow common functions such as RPC handling, ODB variable R/W etc to be shared more easily between frontends

[Wes] Some of these already exist (see frontends/common). It is a good idea to make as much of the code as possible fit into this scheme.

[Tim] we might think about the organization of the IPBus tools and AMC13 tools and how to arrange this around the different uTCA readout modules - TRMs versus WFDs. I presume the TRM and WFD config, etc, will amount to TRM/WFD-specific functions that call the same low-level IPBus reads/writes.
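
To make the layering Tim describes concrete, a rough sketch (all function and register names are hypothetical, not the existing IPBusDeviceManager or AMC13 tool interfaces):

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical low-level layer shared by all uTCA modules: a thin wrapper
     * around the actual IPBus transaction (stubbed here for illustration). */
    static int ipbus_write(const char *device, const char *reg, uint32_t value)
    {
       printf("IPBus write: %s %s <- 0x%08x\n", device, reg, value);
       return 0;
    }

    /* Hypothetical module-specific layers: TRM and WFD configuration differ only
     * in which registers they touch, not in how they talk IPBus. */
    int trm_configure(const char *device, uint32_t threshold)
    {
       return ipbus_write(device, "TRM.THRESHOLD", threshold);
    }

    int wfd_configure(const char *device, uint32_t waveform_length)
    {
       return ipbus_write(device, "WFD.WAVEFORM_LENGTH", waveform_length);
    }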

3) There are currently multiple master trigger frontends across gm2daq and gm2-tracker-readout-daq. It would be good to merge all into a single frontend if possible, with ODB variables switching between configurations. Much of the behaviour would be common to all configs (e.g. RPC trigger send, slave FE registration etc), with the main differences being how the triggers are generated. Current cases include dummy (internally generated) triggers, parallel port inputs (data and interrupt), and a raspberry pi ethernet trigger server (connected to accelerator signals/pulse generator).

[Wes] We will converge on a single master. Right now we have masters set up for the specific logic that we needed at MTest, SLAC, etc. I did at some point modify MasterSLAC so it can be run in PP or noPP mode based on an odb variable (I think) -- I would suggest starting with this and adding your suggestions, raspberrypi mode, etc.

[Tim] I think the really messy part is the frontend_index values. I believe I'm the reason for this mess - along with the history of using the DAQ in many different configs with different hardware for various studies and test runs. It would be really good to better organize this - through the ODB or a config file?
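
As an illustration of the ODB-switched approach (and of moving the configuration out of hardcoded frontend_index logic), a minimal sketch of selecting the trigger-generation mode from a single ODB key; the path and mode names are hypothetical:

    #include <string.h>
    #include "midas.h"

    /* Hypothetical: pick the trigger-generation mode of a single merged master
     * frontend from one ODB string rather than maintaining separate masters. */
    void select_trigger_mode(void)
    {
       HNDLE hDB;
       char  mode[32] = "dummy";     /* default if the key does not exist yet */
       INT   size = sizeof(mode);

       cm_get_experiment_database(&hDB, NULL);
       db_get_value(hDB, 0, "/Equipment/MasterTrigger/Settings/Trigger mode",
                    mode, &size, TID_STRING, TRUE);

       if (strcmp(mode, "dummy") == 0) {
          /* internally generated triggers */
       } else if (strcmp(mode, "parallel_port") == 0) {
          /* parallel port data/interrupt triggers */
       } else if (strcmp(mode, "raspberrypi") == 0) {
          /* ethernet trigger server on the raspberry pi */
       }
    }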

4) Currently the master trigger frontend slaves are hardcoded in the master trigger code. Might be nicer to make this more dynamic, e.g. read from ODB variable etc.

[Wes] Definitely.

5) The event builder has been tweaked slightly in the tracker system to make it slightly more robust. Adding/removing a frontend no longer requires that event builder fragments be disabled etc; instead the event builder scans for frontends that are enabled. Discuss with Wes/Tim whether this would be a suitable system for the merged event builder.

[Wes] It is annoying sometimes, but for debugging I think it is also nice to have the ability to enable frontends but not include them in the eventbuilding.
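
For reference, a rough sketch of how such a scan over enabled frontends could be done with the standard ODB calls (whether this matches the tracker event builder's actual implementation should be checked):

    #include <stdio.h>
    #include "midas.h"

    /* Loop over /Equipment and report which equipments are currently enabled. */
    void scan_enabled_equipment(void)
    {
       HNDLE hDB, hEqRoot, hEq;
       KEY   key;
       BOOL  enabled;
       INT   size;

       cm_get_experiment_database(&hDB, NULL);
       if (db_find_key(hDB, 0, "/Equipment", &hEqRoot) != DB_SUCCESS)
          return;

       for (int i = 0; db_enum_key(hDB, hEqRoot, i, &hEq) == DB_SUCCESS; i++) {
          db_get_key(hDB, hEq, &key);
          enabled = FALSE;
          size = sizeof(enabled);
          db_get_value(hDB, hEq, "Common/Enabled", &enabled, &size, TID_BOOL, FALSE);
          printf("Equipment '%s' enabled: %d\n", key.name, enabled);
       }
    }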

6) Would be helpful for new users if the naming of the event builder was cleaned up into a single name. Currently it is variously referred to as EB / Ebuilder / eventbuilder / mevb.

[Wes] We certainly can discuss a naming convention at the meeting.

7) Scripts for control of MIDAS, frontend configuration etc should be merged between gm2daq and gm2-tracker-readout-daq. Some are directly overlapping (start, stop, restart MIDAS), plus there are additional tracker scripts including error recovery tools, ODB variable generation for new frontends, python trigger server deployment/start/stop (to the raspberry pi or locally), etc. Maybe there are other ones in gm2daq too that I don't know about.

[Wes] These scripts are inherently going to be machine specific. I see those included in the repository as examples, and expect that each user will modify them to suit their needs. Of course if we can make them as general as possible that would be great.

8) Implement a common system for how frontends are to be run as MIDAS programs. For the tracker we are currently launching them using "screen" so that their output can be checked. I think this may be the same for the calo, but I'm not sure. The screen technique isn't ideal when an FE exits though, as we lose the stdout output. Maybe we need to think about saving to file etc.

[Wes] We use screen also, but when debugging we will often just run the frontend manually and pipe the output to a log file. It is a good idea to build in the logging so we can turn it on and off in the ODB.

9) Need common system for logging output to file, and for error handling. In tracker have logger class that prints to stdout and file, but MIDAS output (cm_msg etc) doesn't enter this. Needs some thought.

[Wes] Good point.

[Tim] Some sort of short "best practices" document on where to write messages, and how to toggle between debugging and regular running either with or without these messages, would be nice. We learnt the lesson a couple of months back that excessive messaging kills the calo readout performance.
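
As one possible ingredient for such a scheme, a small sketch of an ODB-controlled debug flag using a hot link, so verbose messaging can be toggled without restarting the frontend (the key path is hypothetical):

    #include "midas.h"

    /* Debug flag that can be flipped in the ODB while the frontend is running;
     * db_open_record() keeps the local variable in sync with the ODB key. */
    static BOOL debug_enabled = FALSE;

    INT setup_debug_flag(HNDLE hDB)
    {
       HNDLE hKey;
       INT   size = sizeof(debug_enabled);

       /* Create the key with a default value if it does not exist yet */
       db_get_value(hDB, 0, "/Equipment/MyFrontend/Settings/Debug",
                    &debug_enabled, &size, TID_BOOL, TRUE);

       db_find_key(hDB, 0, "/Equipment/MyFrontend/Settings/Debug", &hKey);

       /* Hot link: MIDAS updates debug_enabled whenever the ODB value changes */
       return db_open_record(hDB, hKey, &debug_enabled, sizeof(debug_enabled),
                             MODE_READ, NULL, NULL);
    }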

10) Might be good to have a common philosophy for error messages published by MIDAS frontends to the MIDAS logger. This means we can make sure an operator sees important messages, without having the output polluted by less important stuff that could go in the other logger streams.

[Wes] Also a good point. I know at the tracker test beam we had a lot of unnecessary messages, so this would be useful to control.

11) Should have the same system for environment variables etc for both the calo and tracker systems. For example, do we pass the experiment name to each frontend with "-e <exp name>", or with the MIDAS_EXP_NAME env variable? Similar for exptab, MIDAS_SERVER_HOST, etc.

[Wes] Probably a good idea. We do use the -e <exp name> for our work.

12) Need to decide what happens to data after initial storage in MIDAS banks. Will we run an analyzer that automatically writes raw data 'as is' to an offline format (art ROOT file)? What will the procedure be for starting the chain to generate more processed data formats for users?

[Wes] We will need to discuss this and subsequent DQM suggestions with our ROME experts.

13) linux vs linux64 in gm2midas build paths. Seems to build "linux" on the UCL system despite running on a 64-bit machine; needs checking.

[Wes] I saw the same. It works okay as long as we do not intend to run anything on a 32 bit machine (which I do not expect we will do).

General DAQ chain

A few thoughts more generally on the DAQ chain resulting from the MIDAS work.

14) Need to decide the system for ROME DQM. Will it read raw data directly from MIDAS banks, or use the first-level offline format? If it uses MIDAS banks directly, need to figure out a way to avoid duplication of the raw data processing code that will be required in both the offline production of processed data formats and in ROME DQM. For example, we will want to unpack straw TDC hit times from raw data packets in both DQM (to perform checks on hit time ranges etc) and in offline processing. Also want to consider the best way to set this up such that there is maximum code sharing between tracker/calos.

15) How are we going to add together information from ODB dumps and detector data streams in the offline data processing (as an example, for the straws we want to add channel masks from ODB dumps to the straw hit data so that an end user can filter out masked-off channels)?

16) What do we want to put in ROME DQM? There will be multiple relevant data streams, including the per-run data (e.g. for hit rates etc) and the periodic slow control data. How much "physics" should go in the DQM?

17) What kind of event display do we want? I guess this will be based on the offline system, and use geometry info from databases / gm2ringsim and display track and hit events (as produced by art producers in the offline framework). If we want this to be an online thing, we need to automate some of the offline processing.

[Wes] Adam is working on a nice display for the offline using paraview, and I think it would be great to use the same for our Online display if it works.

18) Interface between slow control and run control MIDAS installations. For example, we might want the slow control side to shutdown/inhibit the run control side or vice versa. For the tracker case, we are also using the slow control link to send some config data to the tracker, which needs to be done at the start of a run, which relates to the other MIDAS installation. What features does MIDAS have for talking between separate running mservers etc? What other connections between installations do we need (e.g. inhibit run start on the DAQ MIDAS if there is a problem in the Slow Control MIDAS, etc)?

[Wes] We will need to bring Mike in on this discussion -- maybe at one of our DAQ-centric detector meetings.

19) Think about how to best integrate the auxiliary detectors into the MIDAS/DQM systems.

[Wes] And Fred. It will depend largely on which electronics we use to read out the auxiliary detectors, which I do not think has been decided.