
MicroBooNE Discussion 30Jun2011

Notes from the meeting of Ron, Gennadiy, Jim, and Kurt on 30-Jun-2011 to follow up on our code walk-through of 14-15 Jun. The general goal of the walk-through was to gain a better understanding of the code, and we all feel that this was accomplished.

Code Walk-Through

The questions that we wanted to address during the review:

What functionality is already available and what still needs to be developed?

It was difficult to determine the complete set of required functionality for the DAQ from the TDR because it described a fair number of implementation details and design decisions. For example, we have heard that an operator interface (e.g. a run control GUI) is desirable, but it was not mentioned in the TDR. A requirements document, written from the perspective of the operator, would be useful.

In addition, the plan for the use of EPICS to provide parts of the DAQ functionality was not clear at the time of the walk-through.

To answer this question, we broke the functionality into two parts: what is defined by the TDR ("core" functionality), and what we feel is important but was not specifically described in the TDR.

Functionality described in the TDR:
  • We did not find a trigger loop process (i.e., a process that runs on the SEB watching for triggered data to arrive).
  • A fake SEB sender exists.
  • The existing fake assembler receives data from multiple SEBs. It writes data to disk, but this will need to be changed to write data to the Streamer.
  • The Streamer process that writes data to disk does not yet exist.
  • A shared memory viewer exists.
  • The supernova trigger will be handled by an email process on uBDPC. For this trigger, a process on the uBDPC will pull data from the SEBs. We did not see any code to handle this.
  • The uBNPC will concatenate beam, slow controls and triggered data. We did not see code to handle this.
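As a rough illustration of the first missing item above, a minimal SEB-side trigger loop could be sketched as below. This is not existing uBooNE code; all names (`trigger_loop`, the `trigger_source`, `event_reader`, and `sender` callables) are hypothetical placeholders for whatever the real hardware interfaces turn out to be.

```python
import queue
import threading
import time

def trigger_loop(trigger_source, event_reader, sender, stop_event,
                 poll_interval=0.01):
    """Hypothetical SEB-side trigger loop: wait for a trigger to arrive,
    read the corresponding event data, and forward it downstream.
    All callables here are invented placeholders, not uBooNE code."""
    while not stop_event.is_set():
        try:
            trigger = trigger_source.get(timeout=poll_interval)
        except queue.Empty:
            continue                    # no trigger yet; keep polling
        data = event_reader(trigger)    # read triggered data from the front end
        sender(data)                    # ship it toward the assembler
```

In a real implementation the polling loop would presumably be replaced by a blocking read on the trigger hardware, but the structure (wait, read, forward) would be similar.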
Desirable functionality that was not found in the existing code base:
  • Run control operator and programmatic interface, distributed control. However, what we've observed is that there is a daemon similar to rshd to handle arbitrary commands sent over the network.
  • Distributed performance and data flow monitoring. We see no distributed system monitoring tools. The shared memory viewer only works on the central host.
  • Robust distributed message logging with throttling. However, there is a simple logging daemon.
  • Overall system state management with application-specific timeouts and error handling.
  • Process management and monitoring. We have heard about scripts to create one fake_assembler with a shared memory viewer and 10 fake SEB senders.
  • Build system, release management system, unit test framework.
  • Developer-level integration test environment.
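To make the logging item above concrete, message throttling could look roughly like the following. This is a minimal illustration, not the experiment's logging daemon; the class name and parameters are invented for the example.

```python
import time

class ThrottledLogger:
    """Suppress repeats of the same message that arrive faster than one
    per `min_interval` seconds. Purely illustrative; names are invented."""

    def __init__(self, sink, min_interval=1.0, clock=time.monotonic):
        self.sink = sink               # callable that actually emits the message
        self.min_interval = min_interval
        self.clock = clock             # injectable for testing
        self._last = {}                # message -> time of last emission

    def log(self, message):
        now = self.clock()
        last = self._last.get(message)
        if last is not None and now - last < self.min_interval:
            return False               # throttled: repeat suppressed
        self._last[message] = now
        self.sink(message)
        return True
```

A production version would also need to count and periodically report suppressed messages so that a flood of identical errors is still visible to the operator.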

At the time of the walk-through, there was no device driver code to be reviewed.

Also, at the time of the walk-through, the use of EPICS to display information from the shared memory was still being developed, and what we have seen so far did not fit with the documented use of EPICS described in the TDR.

What needs to be changed in the imported MiniBooNE code to have it work with the uBooNE DAQ?

The shared memory viewer has been included, but the other physics data processing code may not be applicable.

General note: there is some amount of imported MiniBooNE code that is "left over".

Does the existing code have clearly defined components with reasonable interfaces?

The directory structure of the code should be split up by component or function.

The identifiable library components are the shared memory interface and the logger.

The code is not decomposed into libraries that are individually testable, unlike the other large systems that we've worked on.

Can the system be configured, or are the configuration parameters hard-wired?

It seems that things are hard-wired.
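As an illustration of the alternative, parameters such as host names and port numbers could be read from a small configuration file rather than being hard-wired in the code. The file name, parameter names, and defaults below are all invented for the example.

```python
import configparser

# Hypothetical parameter names and defaults, invented for illustration.
DEFAULTS = {
    "assembler_host": "localhost",
    "assembler_port": "5000",
    "n_sebs": "10",
}

def load_config(path):
    """Read DAQ parameters from an INI-style file, falling back to the
    defaults for anything missing. Illustrative sketch only."""
    parser = configparser.ConfigParser(defaults=DEFAULTS)
    parser.read(path)                  # a missing file is silently skipped
    section = parser["DEFAULT"]
    return {
        "assembler_host": section.get("assembler_host"),
        "assembler_port": section.getint("assembler_port"),
        "n_sebs": section.getint("n_sebs"),
    }
```

Externalizing parameters this way would also make it easier to run the developer-level integration tests mentioned earlier against a scaled-down system.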

Is any code or are any libraries shared with other experiments/systems?

Not that we saw.

Conclusions

It is not clear how the work would be divided up given the structure that we see.

It is not clear how the existing code would meet the perceived requirements of the experiment. (The TDR is the basis for our knowledge of what is needed, along with the meetings that we've attended.)

Backup information