DiscussionWithNIU31Aug2011

Talked with Nick K., Sergey, and George about the NIU proton tomography DAQ system. Nick thinks this is an application for our "streaming DAQ" and the study framework we want for mu2e. Nick also thinks there is a hole in the system they are constructing, namely the intermediate processing stage (on the DAQ side) that takes the raw data, extracts the tracks (events), and writes them to disk.

As follow-up, we will meet George next week when he is at FNAL.
We will meet Sergey at the end of September when he is back from vacation.
We will talk to Paul R. about the electronics aspects sometime next week.

Facts we collected:

  • event = track, which is about 4 hits
  • 1-2 billion events in a run
  • a run is one full scan and lasts about 10 minutes
  • each output event is about 52 bytes
  • an event is 4 (x,y) pairs and an energy, plus a rotational angle position (of the device); see the struct sketch after this list
  • raw size is about 700 GB
  • DAQ output size for a run is about 100 GB
  • input to the DAQ system is 10 Gbit/s
  • identification of events involves simple pattern recognition, including finding ghosts
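As a sanity check on the 52-byte figure, here is a minimal sketch of what one output event could look like. Only the 4 (x,y) pairs, the single energy, and the rotation angle come from the notes; the 32-bit float types and the event_id/timestamp/status fields are assumptions added to make the layout concrete.

    #include <cstdint>

    // Hypothetical layout of one ~52-byte DAQ output event.  The 4 (x,y)
    // hit pairs, single energy, and device rotation angle come from the
    // meeting notes; event_id, timestamp, status, and the float types are
    // assumptions chosen so the size comes out to exactly 52 bytes.
    #pragma pack(push, 1)
    struct OutputEvent {
        std::uint32_t event_id;   // assumed: sequence number within the run
        std::uint32_t timestamp;  // assumed: coarse time or readout frame
        float         x[4];       // x positions of the 4 hits
        float         y[4];       // y positions of the 4 hits
        float         energy;     // energy for the track
        float         angle;      // rotational position of the device
        std::uint32_t status;     // assumed: pattern-recognition flags
    };
    #pragma pack(pop)

    static_assert(sizeof(OutputEvent) == 52, "expect ~52 bytes per event");

Consistency check: 1-2 billion events at 52 bytes each is 52-104 GB, which matches the ~100 GB DAQ output figure above.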

Nick needs the processed output to do reconstruction at the cluster.
The DAQ workstation processing on the 700 GB file takes 4-10 hours; we don't know where the bottleneck is.
Goal is < 1 hour to do everything.
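For scale: just re-reading the 700 GB raw file within the one-hour budget requires 700 GB / 3600 s ≈ 195 MB/s sustained, before counting the reduced-file write and the transfer to NIU.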
Concentrator included in the plan: many channels in (> 100), 10 Gb/s out, Linux processor onboard.
Thinking of one workstation writing the 700 GB in 10 minutes.
Thinking of memory buffers for the data to make writing easier.
Real-time processing is not needed in this first phase.
Sounds like they can get about 300 MB/s using RAID 5 on a workstation.
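Note on the numbers: 10 Gbit/s in is ~1.25 GB/s, and 10 minutes at that rate is ~750 GB, consistent with the ~700 GB raw size. But writing 700 GB in 10 minutes means sustaining ~1.2 GB/s to disk, roughly 4x the ~300 MB/s RAID 5 figure, which is presumably where the memory buffering comes in.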

They are thinking of doing everything serially: read the 700 GB scan and write it into a file, read that file and do the reduction step into a new file, then write the reduced file out to NIU (file transfer).
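A minimal sketch of the middle (reduction) pass in that serial plan, mainly to show the staging and how one might measure throughput. The file names, chunk size, and the commented-out reduce() step are placeholders, not their design:

    #include <chrono>
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    // Sketch of the serial reduction pass: stream the raw scan file in
    // large chunks and time the pass.  The pattern-recognition step that
    // would turn raw chunks into reduced events is marked but omitted.
    int main() {
        const std::size_t kChunk = 64 * 1024 * 1024;  // 64 MB buffer (assumed)
        std::vector<char> buf(kChunk);

        std::FILE* in  = std::fopen("scan_raw.dat", "rb");      // hypothetical name
        std::FILE* out = std::fopen("scan_reduced.dat", "wb");  // hypothetical name
        if (!in || !out) return 1;

        auto t0 = std::chrono::steady_clock::now();
        std::size_t total = 0;
        while (std::size_t n = std::fread(buf.data(), 1, kChunk, in)) {
            total += n;
            // reduce(buf.data(), n, out);  // pattern recognition and ghost
            //                              // rejection would go here
        }
        double s = std::chrono::duration<double>(
                       std::chrono::steady_clock::now() - t0).count();
        std::printf("read %zu bytes at %.0f MB/s\n", total, total / 1e6 / s);

        std::fclose(in);
        std::fclose(out);
        return 0;
    }

The reduce() step is the part our study framework would let us swap in and evaluate; the surrounding I/O is what the ~300 MB/s RAID figure constrains.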

Our study framework could be used to test and evaluate the reduction processing on the DAQ computers.

700 GB / 4 hours = ~49 MB/s processing rate in the DAQ reduction step (or ~19 MB/s at the 10-hour end of the range).
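Compared against the < 1 hour goal, the reduction step alone would need roughly a 4x speedup (to ~195 MB/s or better), even before the initial capture and the transfer to NIU are counted.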