Project

General

Profile

The DAQ system (still to be reviewed)

This page aims to give you, the DAQ expert, all the basic knowledge required for your task, along with some links for extra reading in case you're interested.
Sit back, hit play.mp3 , have fun, and feel free to edit and improve this page for the common good.

DAQ Tales (introduction to MicroBooNE DAQ)

Data Acquisition is the process of sampling the signals from an experiment, converting them into digital data and saving them to disk.
In MicroBooNE, as in many other particle physics experiments, DAQ refers to the servers and software responsible for acquiring the data from the readout hardware, assembling events, applying a software trigger, writing the events to disk, and transferring them to long-term storage. In addition, DAQ is responsible for configuring and controlling the readout electronics and for monitoring the data flow and detector conditions.

All the "jargon" terms used here should be explained in detail below. If you forget one or want a quick reference, see the Glossary page on the uboonedaq wiki.

The data flow in MicroBooNE

There are two streams of data in MicroBooNE:
  • The "neutrino" / "triggered" / "NU" stream: as its name implies, data coming from the readout after passing a trigger. This stream is compressed (Huffman, lossless).
  • The "supernova" / "continuous" / "SN" stream: data coming continuously from the readout. This stream is zero-suppressed.
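The two schemes differ in what they throw away: Huffman coding is lossless, while zero suppression discards samples near the baseline. A minimal Python sketch of the zero-suppression idea (the pedestal, threshold, and output format here are illustrative, not the actual readout firmware scheme):

```python
def zero_suppress(waveform, pedestal, threshold):
    """Keep only samples deviating from the pedestal by more than
    the threshold, stored as (index, ADC value) pairs.  The real
    firmware differs in detail; this only shows the idea."""
    return [(i, adc) for i, adc in enumerate(waveform)
            if abs(adc - pedestal) > threshold]

# A toy waveform with a small pulse around samples 3-5:
wf = [400, 401, 399, 430, 455, 420, 400, 398]
hits = zero_suppress(wf, pedestal=400, threshold=10)
# Only the pulse region survives; baseline samples are dropped.
```

The NU stream, by contrast, keeps every sample and only re-encodes them losslessly, so the original waveform can be fully reconstructed.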

Signals come from the detector sub-systems (TPC and PMT) according to the following data flow:

Before we dive into the details, it is worth mentioning that the crates are the responsibility of the readout team, while the rest of the chain is the responsibility of the DAQ team.

MicroBooNE has 9 TPC readout crates and one PMT readout crate, 10 in total. Each crate is connected via optical fibers to a dedicated server (called a Sub-Event Buffer, or SEB), and specifically to a dedicated card in that server, named the PCIe card. Luckily for us, the crate numbering scheme matches the DAQ PC numbering: TPC crate 1 --> SEB01, crate 2 --> SEB02, etc. SEB10 handles the trigger/PMT crate.

A real-time application, sebApp, places these data in an internal circular buffer, collects all the segments belonging to an event, and creates a sub-event fragment.
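The circular-buffer idea can be sketched as follows (a toy Python model; the real sebApp buffer holds DMA'd readout segments in shared memory, and the class below is purely illustrative):

```python
class CircularBuffer:
    """Minimal sketch of a ring buffer: once full, new data
    overwrites the oldest entries, so the buffer always holds the
    most recent segments without ever growing."""
    def __init__(self, size):
        self.size = size
        self.slots = [None] * size
        self.head = 0                      # next write position

    def push(self, segment):
        self.slots[self.head] = segment
        self.head = (self.head + 1) % self.size

    def latest(self, n):
        """Return the n most recent segments, oldest first."""
        return [self.slots[(self.head - n + i) % self.size]
                for i in range(n)]

buf = CircularBuffer(4)
for seg in ["s1", "s2", "s3", "s4", "s5"]:
    buf.push(seg)                          # "s5" overwrites "s1"
```

The fixed size is the point: the readout writes continuously, and the buffer bounds memory use while sebApp picks out the segments belonging to each event.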

For the NU data readout stream, in which the data arrives with every Level-1 trigger, these fragments are sent to the event-building machine (EVB) over an internal network. Full events are checked for consistency, and a high-level software trigger is applied to determine whether each event should be written to local disk on the EVB or ignored. Written events are then sent offline for further processing.
For the SN stream, the data remains on the SEBs, where it is written to disk and only sent for offline analysis on explicit request, e.g. on receiving a Supernova Early Warning System (SNEWS) alert.

Note that the network bandwidth bottleneck is 10 Gb/s, but writing to either the NU or SN stream is limited by the RAID6 disk write speed, roughly 300 MB/s, which therefore sets the maximum aggregate rate at which the SEBs can ship fragments to the EVB without data loss. Here you can read more about the Writing to MicroBooNE disk rate.
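As a back-of-the-envelope check of the numbers above (the even split across SEBs at the end is purely illustrative):

```python
# Numbers from the text: a 10 Gb/s network link vs. ~300 MB/s
# RAID6 disk write speed.
network_limit = 10e9 / 8        # 10 Gb/s expressed in bytes/s
disk_limit = 300e6              # ~300 MB/s

# The disk, not the network, caps the aggregate rate at which
# fragments can flow into the EVB:
bottleneck = min(network_limit, disk_limit)

# If that budget were split evenly over the 10 SEBs, each could
# sustain at most ~30 MB/s (illustrative -- real rates per SEB vary):
per_seb = bottleneck / 10
```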

After data is written to disk, it is copied to another server on the internal DAQ network, where the raw data is further compressed, shipped, and queued to be stored on tape
and disk cache using the Fermilab central data management system known as SAM. Offline applications then begin swizzling, i.e. processing the raw data and converting the binary data format into the LArSoft ROOT-based format, which can be used as input for reconstruction algorithms.

A duplicate copy of the data is also stored offsite at Pacific Northwest National Laboratory (PNNL). This data movement, along with a database monitoring the state of the data flow, is handled by the Python/Postgres for MicroBooNE Scripting system (PUBS). PUBS can also monitor the state of the SN stream data, held locally on the SEBs. A separate offline PUBS instance controls the processing of the data, including applying newly calculated calibration constants as part of data quality management.

When we trigger, data is pushed from the readout crates to the SEBs. When enough data has been collected, the data on each SEB is processed to locate the data belonging to one event, and that data is sent over a local network to the event builder PC (the EVB, or assembler). The EVB collects these fragments, waits until it has received one from every SEB for a given event, and then writes that event to disk.
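The event-building logic just described can be sketched as follows (names and structure are illustrative, not the real EVB code):

```python
class EventBuilder:
    """Toy model of the EVB: collect one fragment per SEB for each
    event number, and declare the event complete (ready to write)
    only once every expected SEB has reported."""
    def __init__(self, expected_sebs):
        self.expected = set(expected_sebs)
        self.pending = {}           # event number -> {seb: fragment}

    def add_fragment(self, event, seb, fragment):
        self.pending.setdefault(event, {})[seb] = fragment
        if set(self.pending[event]) == self.expected:
            return self.pending.pop(event)   # complete: write to disk
        return None                          # still waiting on SEBs

evb = EventBuilder(expected_sebs=range(1, 11))   # SEB01..SEB10
```

Keeping incomplete events in `pending` is what lets fragments arrive in any order over the network while still producing whole events.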

It is customary to divide the data flow into:
  • Upstream: the readout electronics (TPC and PMT crates, PCIe cards in the servers, etc.)
  • Downstream: data management/nearline processing of data (including Online Monitor)

Bonus: The CRT standalone DAQ

The CRT subsystem, added to MicroBooNE two years after the DAQ started running, has a completely standalone DAQ system.
The CRT data (CRTRawData) is in binary form and is stored on disk as such. It is swizzled separately and only then (offline) merged with the rest of the data coming from the DAQ described above.
The entire CRT DAQ is the responsibility of the CRT experts team.

DAQ processes control (configs)

A run control application issues configuration and state-progressing commands to the SEBs and EVB. When you want to start a run you have to give a configuration file number (see below for more information about config files!) and a run time (in minutes). In normal running, the run time should be set to 420 minutes, or 7 hours. It is possible to choose a shorter run time, but not a longer one: we only have 24 bits for the frame number, which allows only slightly more than 420 minutes of unique frame numbers per run.
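The 420-minute cap can be checked with a quick calculation, assuming a nominal 1.6 ms readout frame length (an assumption -- verify against the readout documentation):

```python
# The frame number is a 24-bit counter, so it wraps after 2**24
# frames.  With an assumed 1.6 ms frame length, unique frame
# numbers last for:
frame_length_s = 1.6e-3
wrap_time_min = (2**24 * frame_length_s) / 60
# ~447 minutes, so a 420-minute run stays safely within one wrap.
```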

Configuration states are stored in a dedicated run configuration
database, which allows for the setting and preserving of configuration information for the DAQ, readout, and additional components. This database not only allows for creating the large (∼200 parameters) intricate DAQ run configuration files, but also enforces certain conditions which must hold for consistency.
Here you can read about the RunConsoleDAQ
Here is everything you wanted to know and didn't dare to ask about the config files:
  • List of config files and their purposes -- A page to keep track of when and why we switch to new config files. Please update when you create a new config.
  • Config file parameters -- What every parameter in the config file means, with example excerpts from config files.
  • Hardware Trigger Bits -- how to configure which hardware triggers we are accepting
  • RunType Convention -- the RunType convention agreed between DAQ and DM groups. You have to choose a RunType when uploading a config file, and this page will tell you which to use!
  • The Run Configuration Database Tool -- how to download and set up the run configuration database tool, and how to check, upload, and download config files. A quick reference for normal use (when you simply want to upload a new config file, and are not creating a new subconfiguration) is:
    1. Log into ws01 as uboonedaq: ssh uboonedaq@ubdaq-prod-ws01.fnal.gov
    2. cd DBTool
    3. To see existing configs: list_main_cfg [expert] ("expert" is optional -- use it to see expert configs)
    4. To get the fcl file of an existing config: print_fcl <config_name>
    5. To upload a new config from a fcl file: .sbin/upload_fcl <fcl_name>. Uploaded configs will be set to "expert" by default.
    6. To move a config to the non-expert list: .sbin/nonexpert_main_cfg <config_name or config_id>
    7. To archive an existing config (remove it from the database, if it's old or wrong and shouldn't be used): .sbin/arxive_main_cfg <config_number>
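Separately from the DBTool commands above, the hardware trigger selection (see the Hardware Trigger Bits page) is naturally expressed as a bitmask of accepted trigger sources. The bit assignments and names below are invented for illustration; the real mapping lives on that page:

```python
# Hypothetical bit assignments -- NOT the real MicroBooNE mapping.
TRIGGER_BITS = {"BNB": 0, "NuMI": 1, "EXT": 2, "Calib": 3}

def build_mask(*sources):
    """OR together the bits of the trigger sources we accept."""
    mask = 0
    for src in sources:
        mask |= 1 << TRIGGER_BITS[src]
    return mask

def accepts(mask, source):
    """Check whether a given trigger source passes the mask."""
    return bool(mask & (1 << TRIGGER_BITS[source]))

mask = build_mask("BNB", "EXT")   # accept beam and external triggers
```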

DAQ monitoring (Ganglia)

Ganglia monitors basic system states (such as CPU, memory, and network usage) and also allows the use of custom metrics to monitor the data flow and the status of the readout electronics. These metrics are sampled and collected by the Experimental Physics and Industrial Control System (EPICS) slow monitoring and control processes, which archive desired quantities and provide alarms when pre-defined thresholds are exceeded.
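The threshold-alarm idea can be sketched as follows (metric names and limits are invented for the example; the real EPICS alarm configuration differs):

```python
# Hypothetical per-metric (low, high) limits; None means unbounded.
LIMITS = {"cpu_percent": (0, 90), "disk_free_gb": (50, None)}

def check_alarms(samples):
    """Return the names of all sampled metrics outside their
    pre-defined limits -- the basic idea behind threshold alarming."""
    alarms = []
    for name, value in samples.items():
        low, high = LIMITS[name]
        if (low is not None and value < low) or \
           (high is not None and value > high):
            alarms.append(name)
    return alarms

check_alarms({"cpu_percent": 95, "disk_free_gb": 120})
```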

DAQ applications

Here are some details regarding the DAQ applications,
and here is a full list of all of the DAQ processes.

The Data Format

More about the Data format.

Online machines

List of MicroBooNE online machines (including brief description of what each machine does)
All servers are maintained by the SLAM team, led by Bonnie King.

MicroBooNE DAQ electronics

Here you can find more information about the Electronics used for the MicroBooNE DAQ.