WC MiniDAQ Data Processor

The Wire Chamber MiniDAQ Data Processor is located in /home/nfs/ftbf_user/experiments/mc7/LLcodes/miniDAQProcessor.

As of August 22, 2014, it reads input ASCII ".dat" files containing spill information collected by the four wire chambers and outputs a root TTree with this spill information. The miniDAQProcessor directory functions independently of its location in a directory tree, so a user may copy it to a new location and modify elements to suit their specific needs.

Note: Please do not modify the copy in /home/nfs/ftbf_user/experiments/mc7/LLcodes.

How To Run

Find a directory filled with ".dat" files that you want analyzed; ALL files in it will be analyzed, and their TTrees and histograms merged. Go to the miniDAQProcessor directory and pass the ".dat" file directory name to the run script. An example is shown below.

bash-3.2$ cd experiments/mc7/LLcodes/miniDAQProcessor
bash-3.2$ ./run.sh ../../dataDirectoryName

The root file with the TTree you want is in the directory you passed as an argument: "[directoryName].root"
The root file with histograms made is also in the directory you passed as an argument: "[directoryName]_hist.root"

You can pass a single file to the processor:

bash-3.2$ ./run.sh ../../fileName

The two output files will appear in the directory safe_root_files.

Organization

The main directory, miniDAQProcessor, contains a small framework for spill handling and a group of files for processor execution. The general class organization is as follows:

--> A Spill is the top-level object, and contains RawEvents. There may be multiple spills per file.
--> A RawEvent contains Hits.

Directories:
  • include/src:
    --> Time.hh/Time.cc - header/source for a class used to store and access a second, minute, hour, day, month, and year; here, it applies to the spill.
    --> Hit.hh/Hit.cc - header/source for a class used to store hit information: the hit ID relative to the event containing it, the ID of the event to which the hit belongs, the ID of the TDC on which the hit was found, the hit's wire ID on that TDC, and the time (in TDC ticks, each ~1.18 ns) of the hit relative to its event start.
    --> RawEvent.hh/RawEvent.cc - header/source for a class used to store and access event information: the event ID, the ID of the spill containing the event, the time in TDC ticks relative to the spill start, and a vector of hits within that event.
    --> Spill.hh/Spill.cc - header/source for a class used to store and access spill information: the spill ID, the spill time/date, and a vector of events within that spill.
  • safe_root_files: if the run script is called on only one file, this directory is where the output ".root" files are stored.
Files:
  • Processor.cc: this is where the bulk of the data conversion happens. The functions within are:
    --> main(): takes command line arguments, parses them, and finds the name of the input file (without extension) from the absolute path
    --> FillObjects(): makes a vector of spills, and fills it with the spills, events, and hits of the file
    --> TimeBitShift(): manages construction of proper event times from the raw words in the ASCII file.
    --> UserModTree(): this function creates and writes a filled TTree to an output ".root" file. This is also where any user-specific code can be implemented.
    --> UserModHist(): this function creates and writes filled histograms to an output "_hist.root" file. This is where any user-desired histogramming should be implemented.
    Note: For simple user-specific projects, it is best to leave main(), FillObjects(), and TimeBitShift() as they are, and only modify UserModTree() or UserModHist().
  • run.sh: This is the script used for running the processor. It takes only one argument, and behaves as follows.
    --> The argument must be either a ".dat" file or a directory name.
    --> If the argument is a ".dat" file, the processor acts on this file, produces a TTree in a ".root" file and histograms in a "_hist.root" file, and stores those root files in the directory "safe_root_files"
    --> If the argument is a directory, the processor acts on all ".dat" files in this directory and merges their individual TTrees into a single TTree, which is stored in a root file named after the basename of the directory. Histograms are merged similarly. Both root files can be found within the argument directory after processing.
    Note: if a directory name is passed to "run.sh," at least a base name must be present. In other words, one cannot pass "./" or "../" or "../../" (or similar patterns) as the directory name; the minimum required form is something like "../../directoryName".

Input files are ASCII ".dat" files in /home/nfs/ftbf_user/experiments/mc7/data, symlinked into directories named for the run conditions (see DataFiles).