Below is a picture of how the DAQ may look. They initially want to know whether a modern multicore node can ingest 300 MB/s, perform simple compression on the data, and form an event output stream at the back. What is required is the following:
- digitizer input examples (they say a 10 Gb/s link is available)
- an event builder layer that demonstrates async fragment generation and collection
- an algorithm layer that processes many events in parallel (n parallel streams)
- an algorithm running in each stream that compresses the data, event-by-event
- an output layer that collects full event data from each parallel stream and writes events in time order
For this experiment, we will simulate the input fragment layer by splitting the event file data into file fragment channels, tagging each fragment so that the fragments can be reassembled.
The mu2e event builder evaluation framework can also be reused in its entirety for this purpose; it would need an algorithm section and an output collection layer added. This work is likely to take one week-long scrum session with a total of three people.
This project is well aligned with the architecture we want to complete for our generic DAQ toolkit in FY12, and it can serve as a kick-off for that work, introducing additional CET members to the HPC tools currently used in the evaluation framework.