All things lariat-artdaq

Installing lariat-artdaq

Instructions for installing and building lariat-artdaq used to be located here; however, as of 11/3/14, all Lariat code is built under CMake, so please follow the instructions at Building LArIATOnline with CMake instead. Please keep in mind that by default the LariatRun.sh script assumes a profile (i.e., non-debug) build; to achieve this, wherever a "-d" argument is given in Building LArIATOnline with CMake, replace it with a "-p" argument.

lariat-artdaq building blocks

lariat-artdaq began as a fork of the artdaq-demo package but, unlike the demo, contains the following experiment-specific components:

  • Lariat, a fragment generator which reads directly from Lariat hardware (CAEN V1740s, WUTs, etc.) by providing a wrapper around the LariatReadout::readSpill() function
  • SpillFileReader, a fragment generator that packages raw binary files (typically found in the /daqdata directory) and sends them through lariat-artdaq's event builders (useful for development/debugging)
  • EVB, an Art module written by Pawel which displays many different kinds of information about the various Lariat fragments within a given spill
  • WFViewer, a modified version of artdaq-demo's WFViewer Art module capable of displaying CAEN ADC fragments

Preparing for a first run with lariat-artdaq

The script controlling lariat-artdaq is located, relative to the lariat-online/daq directory, at lariat-artdaq/tools/LariatRun.sh. Running it is very simple:

source $LARIATONLINE_DIR/source/daq/lariat-artdaq/tools/LariatRun.sh <nspills>

where <nspills> is the number of spills you wish to process before the DAQ exits. Before you run it, however, you'll want to make some edits to files in the package. Please keep in mind that when you edit a file, you should edit it in the code directory (i.e., a directory in the subtree of <BASEDIR>/lariat-online, where <BASEDIR> is the directory into which you checked out the lariat-online package) and then rebuild the package (again, see Building LArIATOnline with CMake), since it's not the files in the code directory which are actually used when you run, but the files in the product directory (referred to by the $LARIATONLINE_DIR variable).
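
For example, a typical edit-and-rebuild cycle might look like the following (a sketch only; <BASEDIR> is your checkout location, and the rebuild step is covered in Building LArIATOnline with CMake):

cd <BASEDIR>/lariat-online/daq/lariat-artdaq/tools/fcl
nano Aggregator1_for_Lariat_1x1x2.fcl    # make your edits in the code directory
cd <BASEDIR>/lariat-online/daq
make lariat-artdaq                       # rebuild so the product-directory copies are refreshed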

LariatRun.sh calls various utility scripts which perform behind-the-scenes work to get the DAQ up and running. The most important of these, start1x1x2System.sh and manage1x1x2System.sh, are located in lariat-online/daq/lariat-artdaq/tools. start1x1x2System.sh will fire up four artdaq processes: a BoardReaderMain, an EventBuilderMain, and two AggregatorMains. These are covered in more depth here, but for now it suffices to know that BoardReaderMain is the process which interfaces with the front-end (e.g., the Lariat hardware); EventBuilderMain serves as a router, sending the data to both AggregatorMains; the first AggregatorMain wraps up the spill data into an artdaq::Fragment object which it saves to file; and the second processes it using Art modules capable of making plots, summarizing statistics, etc.
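
Schematically, the dataflow through these four processes looks like this (a simplified sketch of the description above):

BoardReaderMain  (reads Lariat hardware, or spill files on disk)
       |
       v
EventBuilderMain (routes the data to both Aggregators)
       |                        |
       v                        v
AggregatorMain #1          AggregatorMain #2
(writes each spill to      (runs Art modules: EVB,
 a Root file)               WFViewer, etc.)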

If you take a look at the manage1x1x2System.sh script, you'll see that it primarily serves as a wrapper around a script called "DemoControl.rb", in whose argument list you'll see the names of FHiCL documents passed via the options "--brfcl", "--ebfcl" and "--agfcl". It is these documents which are used to initialize the artdaq processes; they can be edited in the lariat-online/daq/lariat-artdaq/tools/fcl directory (please note, however, that it is the product-directory versions of the files which are actually passed to DemoControl.rb). The name of the FHiCL document used to initialize the BoardReaderMain process will be either "BoardReader_for_Lariat_nohardware_1x1x2.fcl" or "BoardReader_for_Lariat_hardware_1x1x2.fcl"; the first results in the use of the SpillFileReader fragment generator, the second, the hardware-interfacing Lariat fragment generator. For starters, it's probably safest to go with the non-hardware option. To switch between the hardware and non-hardware options, set the "hardwareMode" variable in the manage1x1x2System.sh script to 1 or 0. Whichever fragment generator you choose, you'll want to edit the FHiCL documents before you actually begin running. In particular, you'll want to perform the following edits (an illustrative check of these parameters is shown after this list):

  • Aggregator1_for_Lariat_1x1x2.fcl
    Of the two AggregatorMain processes which will run, the first simply takes the data from upstream and packages it in a Root file which can later be read by Art modules. The name of this file (including its full path) is given by the "fileName" variable; you'll want to change the path to one to which you have write access. Please pick a directory on a disk with plenty of storage space, as several hundred megabytes of data will typically be generated for every spill.
    If you wish to disable the writing of complete spills to Root files, put a hash mark (the FHiCL comment symbol) in front of the "my_output_modules: [ normalOutput ]" line and remove the hash mark from the line below it, "# my_output_modules: [ ]". Please do not disable this during real datataking -- otherwise data will not be saved!
  • Aggregator2_for_Lariat_1x1x2.fcl
    The purpose of the second AggregatorMain process is to run Art modules capable of analyzing the incoming data: creating plots, printing summaries, and so forth. As of this writing, there are two primary such modules in use on Lariat: the WFViewer module and the EVB module. You can control which of these modules are used by editing the "a1" analysis path in Aggregator2_for_Lariat_1x1x2.fcl:
     a1: [ app, evb, wf ]
    

    where it should be pointed out that, for the WFViewer module (and possibly the EVB module as well), the "app" token is required. To drop a module from the analysis path, delete either or both of "evb" and "wf" above.
    Each module has its own set of controllable parameters:
    • EVB
      To be written
    • WFViewer
      • prescale: If this quantity is set to N, with N > 1, then only every Nth CAEN ADC fragment is plotted (a count is kept across spills). Note that it generally takes several seconds to paint a canvas, so a prescale on the order of 100 would be typical; if the prescale is too low, WFViewer will be unable to paint and repaint the canvas, data will get backed up, and backpressure will result
      • use_timing: Set this to 0 or 1. If set to 1, then the time it takes the WFViewer module to process a spill will be printed; useful to determine whether "prescale" is set too low
      • live_paint: Set this to 0 or 1. If set to 1, then the plots will be painted to a live Root canvas as well as to file. Setting this to 0 can alleviate potential backpressure
      • digital_sum_only: Set this to "true" or "false". If set to "true", then only plot a cumulative distribution of the total ADC values received, otherwise also create plots of individual triggered CAEN ADC events within a spill
      • histos_filename: The name of the Root file in which to save the plots
      • graphics_dirname: The directory in which to save PDFs of the plots (basename format: "CAENV1740_R<run number>_sR<spill (subrun) number>.pdf")
      • num_samples: The number of 64-channel samples in a CAEN fragment, such that the total number of ADC counts is num_samples*64. Needed for plotting purposes.
  • BoardReader_for_Lariat_nohardware_1x1x2.fcl
    Here, the parameter you'll want to change is "input_filelist", used by the SpillFileReader fragment generator. "input_filelist" names the file which itself contains a list of binary files, each of which will constitute an artdaq fragment. Please note that, when running with the SpillFileReader fragment generator, you can't request more spills than there are files listed in the file referred to by "input_filelist"; note also that it's perfectly legal (and common during development) to list the same binary file multiple times. Please also note that the spill data should include a RunInfo Lariat fragment (not the case with some older files); otherwise an exception will be thrown if and when the WFViewer Art module is used. Remember to specify the full path of the filename in "input_filelist".
    Additionally, this file contains a parameter called "throttle_usecs". This is the number of microseconds the fragment generator should pause before sending another spill downstream; in the case of SpillFileReader, with no delay it would send a chunk of hundreds of megabytes of data downstream multiple times per second, which the Art modules (WFViewer in particular) would have difficulty handling. Note that this quantity should be both greater than and an integer multiple of the value of the parameter "throttle_usecs_check", which specifies how often during the pause the program should check whether a stop command was issued to the DAQ system.
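
As a quick illustration (a sketch only -- confirm the exact variable names and paths in your own checkout, including whether 0/1 maps to non-hardware/hardware as assumed here), you can toggle between the two modes and double-check the parameters discussed above before starting a run:

# In lariat-online/daq/lariat-artdaq/tools/manage1x1x2System.sh (assumed mapping):
#   hardwareMode=0  ->  BoardReader_for_Lariat_nohardware_1x1x2.fcl (SpillFileReader)
#   hardwareMode=1  ->  BoardReader_for_Lariat_hardware_1x1x2.fcl (Lariat hardware)

# Check what the product-directory copies of the FHiCL documents contain, since those
# are the versions actually passed to DemoControl.rb:
cd $LARIATONLINE_DIR/source/daq/lariat-artdaq/tools/fcl
grep fileName Aggregator1_for_Lariat_1x1x2.fcl
grep -E "prescale|histos_filename|graphics_dirname" Aggregator2_for_Lariat_1x1x2.fcl
grep -E "input_filelist|throttle_usecs" BoardReader_for_Lariat_nohardware_1x1x2.fcl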

Running lariat-artdaq

A first walkthrough

To get used to running with LariatRun.sh, it is recommended that you begin by running with the SpillFileReader rather than the Lariat fragment generator, i.e., by reading in raw binary files from disk rather than interfacing with actual hardware. This is for two reasons: first, if a mistake is made, you won't need to worry about affecting the hardware and thereby needing to reset boards; second, the amount of output to screen is reduced, since running in hardware mode will print everything that the LariatReadout::readSpill() function outputs, and less clutter makes it easier to spot diagnostic output related to LariatRun.sh itself. Along the same lines, it may also be a good idea to adjust Aggregator2_for_Lariat_1x1x2.fcl so that it doesn't run any Art modules (EVB, WFViewer) which would output to the screen.

With this in mind, to run on two spills, from lariat-online/daq you can execute the following:

source $LARIATONLINE_DIR/source/daq/lariat-artdaq/tools/LariatRun.sh 2

and you should see something like the following:

----------- check this block for errors -----------------------
----------------------------------------------------------------
Will use artdaq build in /home/nfs/jcfree/build_artdaq; adding to the $PATH and $LD_LIBRARY_PATH variables
Starting up the Process Management Tool, pmt.rb
Log file name: /tmp/pmt/pmt-18812.1-20141009154453.log
[2014-10-09 15:44:53] INFO  WEBrick 1.3.1
[2014-10-09 15:44:53] INFO  ruby 1.8.7 (2011-06-30) [x86_64-linux]
[2014-10-09 15:44:53] INFO  WEBrick::HTTPServer#start: pid=18812 port=5200
Running init...
Done
Checking /tmp/masterControl/dsMC-20141009154458-init.log for errors
Running start...
Done
Checking /tmp/masterControl/dsMC-20141009154459-start.log for errors
Running stop w/ an expectation of 2 spills...

as well, for each of the four artdaq processes, a diagnostic message like so:

STARTING:lariat-daq02.fnal.gov:BoardReaderMain:5205
BoardReaderMain on lariat-daq02.fnal.gov is starting.

Let's break this down in a little more detail. You'll want to make sure that there are no error messages in the body of

----------- check this block for errors -----------------------
----------------------------------------------------------------

If you do see an error in this block, it's most likely because packages on which lariat-artdaq depends have already been set up using versions which lariat-artdaq wasn't expecting; the easiest way around this is to log into a clean terminal and try again.

Next, the following output,

Running init...
Done
Checking /tmp/masterControl/dsMC-20141009154458-init.log for errors
Running start...
Done
Checking /tmp/masterControl/dsMC-20141009154459-start.log for errors
Running stop w/ an expectation of 2 spills...

simply informs you of the transitions being sent to the artdaq processes. The full output of these transitions is sent to logfiles in the /tmp/masterControl directory; in order to reduce screen clutter, LariatRun.sh will not print this output to the screen verbatim, but will only print any error messages it sees in the logfiles. If it DOES see an error message, it will trigger a shutdown of the DAQ; if the output it sends to the screen is insufficient, you can always look at the actual logfiles, whose names are printed as shown above. Note that the "stop" transition's behavior here might not be completely intuitive: the "stop" transition, if supplied with a requested number of spills (here, two), means "run the DAQ until two spills have been processed and THEN stop".
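
If you want to inspect a transition's logfile yourself, something as simple as the following will do (substitute the logfile name printed by your own run):

grep -i error /tmp/masterControl/dsMC-20141009154458-init.log
less /tmp/masterControl/dsMC-20141009154458-init.log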

Next, along with some diagnostic messages, you'll see output like the following:

Started run 99999999

Here, the run number takes this rather contrived value because we're running the SpillFileReader, and therefore this is essentially offline processing. If we were running on hardware, the true, online run number stored in /home/nfs/lariat/config/runNumber.dat would be used instead.
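
If you're curious, that run-number file can be inspected directly on the DAQ machine (assuming you have read access to it):

cat /home/nfs/lariat/config/runNumber.dat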

Also, each AggregatorMain application will print a warning if it sees that a module was declared within the FHiCL document it was initialized with but then not actually put in a path for processing. This is for informational purposes only and should not be a concern (unless, of course, the user actually did intend to use the listed module!). For example, in the first AggregatorMain, if the user edited Aggregator1_for_Lariat_1x1x2.fcl so as to remove the RootOutput module (labeled "normalOutput") from the path, you would see:

The following module label is not assigned to any path:
'normalOutput'

Once things are humming along, whenever the DAQ briefly halts in order to write out the spill to a *.root file, you will see something like the following:

%MSG-i AggregatorCore:  Aggregator-lariat-daq02-5265 MF-online 
Starting automatic pause...
%MSG
%MSG-i AggregatorCore:  Aggregator-lariat-daq02-5265 MF-online 
Pausing run 99999999, 1 events received so far.
%MSG
%MSG-i AggregatorCore:  Aggregator-lariat-daq02-5265 MF-online 
A subrun in run 99999999 has ended.  There were 1 events in this subrun, and there have been 1 events so far in this run.
%MSG
%MSG-i AggregatorCore:  Aggregator-lariat-daq02-5265 MF-online 
Run 99999999 has an overall event rate of 0.0 events/sec.
%MSG
%MSG-i AggregatorCore:  Aggregator-lariat-daq02-5265 MF-online 
Starting automatic resume...
%MSG
%MSG-i AggregatorCore:  Aggregator-lariat-daq02-5265 MF-online 
Resuming run 99999999
%MSG
%MSG-i AggregatorCore:  Aggregator-lariat-daq02-5265 MF-online 
Done with automatic resume...

What's happening above is that the program is pausing datataking so as to save the spill in a Root file, and then resuming datataking.

When the program is complete, a summary is presented:

TrigReport ---------- Event  Summary ------------
TrigReport Events total = 2 passed = 2 failed = 0
TrigReport ------ Modules in End-Path: end_path ------------
TrigReport  Trig Bit#    Visited     Passed     Failed      Error Name
TrigReport     0    0          2          2          0          0 netMonOutput
TimeReport ---------- Time  Summary ---[sec]----
TimeReport CPU = 0.619717 Real = 0.632904
TimeReport> Time report complete in 32.6815 seconds

Finally, everything shuts down, and you should see something like the following:

[2]+  Done                    manage1x1x2System.sh -n $nrootfiles stop > /dev/null
Done
EXECUTING FULL PROCESS CLEANUP
kill 4930
Signal of Class 15 received.  Exiting
Cleaning up.  Please wait for PMT to exit...
[2014-10-10 16:10:31] INFO  going to shutdown ...
[2014-10-10 16:10:31] INFO  WEBrick::HTTPServer#start done.
[1]+  Done                    pmt.rb -p $LARIATARTDAQ_PMT_PORT -d $pmtFile --logpath /tmp --display $DISPLAY | sed -r -e 's/^.*20[[:digit:]][[:digit:]]\:\s*//' 2> $pmterrlog  (wd: ~/test/lariat-online/daq/build-lariat-artdaq)
(wd now: ~/test/lariat-online/daq)
CLEAN EXIT: ALL PROCESSES NORMALLY KILLED
For more information, examine the following logfiles: 
/tmp/pmt/pmt-4930.1-20141010160948.log
/tmp/masterControl/dsMC-20141010160953-init.log
/tmp/masterControl/dsMC-20141010160954-start.log
/tmp/masterControl/dsMC-20141010160954-stop.log
/tmp/masterControl/dsMC-20141010161027-shutdown.log

The two primary processes controlling the DAQ were pmt.rb (a script in artdaq tasked with starting up the individual artdaq processes -- BoardReaderMain, etc.) and manage1x1x2System.sh (specifically, having been called with the stop transition, described above). When you see "EXECUTING FULL PROCESS CLEANUP", LariatRun.sh is making sure that both of these primary processes are killed before it exits, via a function called "full_cleanup", defined in the script. If it can kill all of them using the TERM signal (15), it will print "CLEAN EXIT: ALL PROCESSES NORMALLY KILLED"; however, if it needs to resort to the KILL signal (9) for at least one process, it will print "UNCLEAN EXIT: AT LEAST ONE PROCESS FORCIBLY KILLED".
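
A minimal sketch of the TERM-then-KILL logic that full_cleanup implements might look like the following (illustrative only, not the script's actual code; $pmt_pid and $manage_pid are hypothetical variable names standing in for the tracked process IDs):

full_cleanup_sketch() {
  for pid in $pmt_pid $manage_pid ; do
    kill -15 $pid 2>/dev/null           # polite termination request (TERM)
    sleep 2
    if kill -0 $pid 2>/dev/null ; then  # process still alive?
      kill -9 $pid                      # force-kill (KILL) -- implies an unclean exit
    fi
  done
}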

Finally, the logfiles produced since LariatRun.sh began running will be printed to the screen, to make it easy to find them if further examination of the run is desired.

Exiting out: soft ("q") and hard ("ctrl-C")

Often, you'll want to stop lariat-artdaq even if it hasn't finished processing your requested number of spills. In fact, this may be your plan: if you'd like lariat-artdaq to run indefinitely, you can simply set the requested number of spills to a very large number. There are two ways to exit: by hitting the "q" key, or by hitting "ctrl-C".

  • Hitting the "q" button
    First, please note that this keystroke only registers once the DAQ has begun running; therefore, if "q" is hit immediately after the script is begun, you'll need to wait a few seconds for the artdaq processes to be created, initialized and started before they register the keystroke. When this happens, you'll see the following printed to screen:
    "q" APPEARS TO HAVE BEEN HIT; WILL PERFORM SOFT EXIT
    

    At this point, the "stop" transition (without a specified number of spills, i.e., "stop immediately") will be issued to the artdaq processes, followed by the "shutdown" transition. Only after this point will the LariatRun.sh script call its full_cleanup function, which, as described above, will kill all the DAQ processes immediately.
  • Hitting the "ctrl-C" button
    A quicker, less-clean way of exiting out is to hit the "ctrl-C" button. If LariatRun.sh sees you've hit the "ctrl-C" button, it will immediately call the full_cleanup function and then return. Note that what this means is that artdaq processes which may be in the process of collecting data will be summarily ended, meaning, e.g., that files may be left unwritten, etc. When you hit this button, you'll see the following:
    "CTRL-C" APPEARS TO HAVE BEEN HIT; WILL PERFORM HARD EXIT
    

Examining the output

Whether the program has come to an end because it's processed the requested number of spills or because it was ended using one of the two methods described above, there are various types of output that may have been produced. They are as follows:

  • Entire Root-packaged spills
    As long as the path in Aggregator1_for_Lariat_1x1x2.fcl is set to include a RootOutput Art module (see above for more), each spill will get stored in an artdaq::Fragment object which in turn is stored in an Art-readable Root file, whose name is also specified in Aggregator1_for_Lariat_1x1x2.fcl (again, see above). This allows one to process Lariat data offline. If you wish to run Art modules on the data contained within the file, you can do the following:
    art -s <ROOT_FILE_NAME> -c RootFileReader.fcl
    

    where RootFileReader.fcl is located in the lariat-online/daq/lariat-artdaq/tools/fcl directory, meaning you should be in that directory if you wish to pass RootFileReader.fcl as above without specifying its path (see the example following this list); just as in the case of the Aggregator*.fcl files, you can add/remove Art modules in this file.
  • Plots
    If the WFViewer module is used in Aggregator2_for_Lariat_1x1x2.fcl, then plots are saved to whichever directory was specified in that module's "graphics_dirname" parameter. EVB information?
  • Logfiles
    Already discussed to some extent, these are the files which save the output of the DAQ itself. In /tmp/pmt, the logfiles correspond to the pmt.rb process and include a history of the transitions sent to the artdaq processes as well as the output from the fragment generator used; hence, output from the actual CAEN libraries, as sent through the Lariat fragment generator, appears here. In /tmp/masterControl, in-depth information about the individual transitions is presented. While an error will result in just one or two lines of real-time output explaining what went wrong, a more in-depth examination can be performed by inspecting the logfiles.
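
As a usage example for the first bullet (paths here are illustrative; substitute your own Root file and checkout location):

cd <BASEDIR>/lariat-online/daq/lariat-artdaq/tools/fcl
art -s /lariat/data/users/<your_username>/<your_spill_file>.root -c RootFileReader.fcl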

Troubleshooting and failure modes

There are different ways that the DAQ system can run into trouble. A few examples are:

  • The user writes a FHiCL document containing illegal code
  • Someone inadvertently kills off an artdaq process
  • Ports requested by the artdaq processes for their communication are already taken

LariatRun.sh will scan the logfiles for reports of serious errors such as the ones described above. In the event that it finds such an error, it will give a quick, one-line report describing what happened, and then call its "full_cleanup" function. As an example, here is what happens if illegal FHiCL code is sent to an artdaq process:

----------- check this block for errors -----------------------
----------------------------------------------------------------
Already appear to be using artdaq build in /home/nfs/jcfree/build_artdaq
Starting up the Process Management Tool, pmt.rb
Log file name: /tmp/pmt/pmt-12241.1-20141010163229.log
[2014-10-10 16:32:29] INFO  WEBrick 1.3.1
[2014-10-10 16:32:29] INFO  ruby 1.8.7 (2011-06-30) [x86_64-linux]
[2014-10-10 16:32:29] INFO  WEBrick::HTTPServer#start: pid=12241 port=5200
Running init...
Done
Checking /tmp/masterControl/dsMC-20141010163234-init.log for errors
/tmp/masterControl/dsMC-20141010163234-init.log:16:2014/10/10 16:32:34: Aggregator on lariat-daq02.fnal.gov:5266 result: Exception when trying to initialize the program: ---- Parse error BEGIN
Problem found during initialization
EXECUTING FULL PROCESS CLEANUP

The logfile where the error appears is printed, as well as the first line (here, line 16) where the error is reported, before the cleanup begins.

Notes for developers

Compilation and recompilation

The lariat-online code is effectively divided into two parts: the original Lariat code and the lariat-artdaq code. Relative to the lariat-online/daq directory, the original Lariat code can be found in the "include" and "src" directories, while the lariat-artdaq code can be found in the "lariat-artdaq/" subdirectory. The lariat-artdaq code depends on the original Lariat code, but the opposite is not true. When making changes to the original Lariat code, go to the lariat-online/daq directory and execute "make"; if you wish to do a clean rebuild from scratch (probably a good idea if you have components built using a version of gcc different from the one set up for the lariat-artdaq environment), first run "make clean" and then run "make". To compile the lariat-artdaq code, you can either run "make lariat-artdaq" from the lariat-online/daq directory or cd into lariat-artdaq/ and run "buildtool"; however, currently the only way to perform a clean rebuild of the lariat-artdaq code is to cd into the lariat-artdaq directory and run "buildtool -c".
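
To summarize the build commands just described (paths relative to your checkout):

cd <BASEDIR>/lariat-online/daq
make clean && make     # clean rebuild of the original Lariat code
make lariat-artdaq     # incremental build of the lariat-artdaq code
cd lariat-artdaq
buildtool              # incremental build of lariat-artdaq (equivalent alternative)
buildtool -c           # clean rebuild of lariat-artdaq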

Working with git repositories

If you wish to sync your area with the central repository (e.g., if someone else has committed changes to the master branch in the central repo and you wish to include those changes before any further development), run the following:

git fetch origin
git merge origin/master

Be warned, however, that there's the possibility of conflicts between your edits and someone else's (e.g., if you both modified the same file). In that case, a warning will appear during the merge, and it will be up to you to clean up the files in conflict; see the online git documentation for details. If you're worried about this happening, instead of running "git merge origin/master" in the instructions above you can run "git log origin/master", which will show you the commit history of the master branch in the central repository. If you've already merged, conflicts have arisen, and you're regretting your decision, you can learn about the "git reset" command online (or contact John for help).

n.b. If you're working with a branch other than "master", just substitute the name of the branch of interest for "master" in the instructions above.
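
For example, to sync with a hypothetical branch named "develop":

git fetch origin
git merge origin/develop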

If you've edited the local source and wish to build, you can cd to the lariat-online/daq directory and execute "make lariat-artdaq". Note, however, that this is an incremental build, meaning that source which hasn't been edited won't be recompiled. If you wish to perform a clean build, where EVERYTHING gets recompiled (this actually only takes a minute or two), you'll want to cd into lariat-artdaq and execute "buildtool -c". Note that you can also perform an incremental build by typing "buildtool". If you've added a source file, you'll need to edit the appropriate CMakeLists.txt file; please contact John if you wish to do this.

Running more than one lariat-artdaq system at once

Note that it's possible for more than one user to run lariat-artdaq at the same time; the limiting factor (beyond the obvious issues that arise when the actual Lariat hardware is read simultaneously by two people) is that the ports through which the artdaq processes communicate can't be shared. As the instructions provided will result in the default set of ports being used, it's necessary to know how to use a different set of ports in case another user is employing the defaults. Simply edit the start1x1x2System.sh and manage1x1x2System.sh scripts so that instead of the line:

source `which setupDemoEnvironment.sh`

we instead have the line:

source `which setupDemoEnvironment.sh` -p <YOUR_BASE_PORT_NUMBER>

where the base port number defaults to 5200; set <YOUR_BASE_PORT_NUMBER> to a different value (e.g., 5300). Once the edits have been made, as usual in such cases, perform a build of lariat-artdaq. If you try running and get an error message, try logging out of lariat-daq02, logging back in, setting up the environment and trying again -- this seems to work.
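
For example, after the edit the relevant line in both scripts would read (using 5300 as an illustrative base port):

source `which setupDemoEnvironment.sh` -p 5300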

Storing pre-existing binary files in Art-readable root files

As storing the pre-existing raw binary output of a Lariat spill in an artdaq::Fragment is such a common action, a script has been created to facilitate it. To use it, first open the source file "wrapper.py", located in lariat-artdaq/tools. As described in the comments at the top of the file, before employing this script you should first make sure that the "basedir" variable points to the base directory of your version of the lariat-artdaq code (e.g., "/home/nfs/jcfree/lariat-online/daq"), and that the "outputdir" variable points to where you wish to store the Root files (it's probably a good idea to put this directory in the /lariat/data/users area, for space reasons). Once you've done this, remember to perform an incremental build of the package, either via "make lariat-artdaq" or "buildtool" (see earlier on this wiki for more).

Once you've made these changes to wrapper.py, cd to the "fcl" subdirectory, i.e., lariat-artdaq/tools/fcl. There you'll see a file called "wrapper.fcl.in"; the wrapper.py script uses this file as a template for the FHiCL document which performs the actual wrapping of the Lariat binary files. From this directory, you can execute the script; for example:

wrapper.py 2997 5

What this does is, for run 2997, take the first five files in /daqdata from that run (chronologically, so almost certainly spills 1-5) and package them into Root files placed in the area pointed to by "outputdir". Each file takes 15-20 seconds to process. Note that if you leave off the second argument, ALL files in the run get processed (72 files in the case of the run in question).
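
For example, to wrap all the files from run 2997 rather than just the first five:

wrapper.py 2997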

Once you have the files, you can perform offline processing on them. For example, if you're still in the fcl/ directory, you could try running Pawel's EVB module on a file:

art -s /lariat/data/users/jcfree/lariat_r2997_sr1.root -c RootFileReader.fcl 

where you'd want to substitute the argument to "-s" with one of your own Root files.

Old instructions on how to work with the "driver" executable

cd over to lariat-online/daq/lariat-artdaq/tools/fcl after getting set up as described at the top of this document, and run the following:

driver -c SpillFileReader.fcl

What this will do is take each raw binary file listed in the datalist_combo.txt file and stash it in its entirety as the payload of an artdaq::Fragment object. The individual Lariat fragments are then picked out of the payload and processed using the WUTDump, TDCDump and V1495Dump modules (CAENADCDump is excluded since, in this file, there are 228 events of 8 CAEN V1740 fragments each, which creates a big mess on screen). It will also display the ADC values from the CAEN fragments in plots thanks to the WFViewer module, and will write the plots out to a Root file whose full name is specified by the "output_histofilename" variable in SpillFileReader.fcl; the "output_rawfilename" variable names a Root file which contains the complete payload. You'll want to change these names from their initial values so they refer to a directory to which you have write access.
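
Before running, it may be worth double-checking (and then editing) those two variables; for example, from the fcl/ directory:

grep -E "output_histofilename|output_rawfilename" SpillFileReader.fcl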

You can open up the histogram file and take a look at the plots; just run

root -l <output_histofilename>

where <output_histofilename> is the output root file of histograms. Inside the root environment, execute

wf.cd()

and then

.ls

to see which histograms have been written to file; they can then be drawn using standard root syntax.

With the payload file, you can take advantage of the fact that Art modules can process data not only directly sent from artdaq but also contained within a root file; simply try

art -c RootFileReader.fcl -s <output_rawfilename>

where <output_rawfilename> is the name of the payload root file you created when you ran SpillFileReader. You'll see basically the same output as you saw when you ran the driver executable with SpillFileReader.fcl .

If you wish to read the actual hardware, first make sure no one is already using it (check who may be using the optical link to the DAQ via the "wholink" command). Once you've done this, simply run the following (though first read below on how to format the WFViewer module's FHiCL code):

driver -c Lariat.fcl

and the plots you'll see on screen will be of CAEN ADC fragments being sent from the V1740 boards, as opposed to being read from binary files. Note that in some cases the program will hang; generally, resetting the V1740 boards via "lariatReset 0 1" should do the trick, although first you'll need to perform a hard kill of the program. Assuming the program has Linux process ID <procid>, this would be:

kill -9 <procid>

Very similar to SpillFileReader.fcl, Lariat.fcl will produce a raw root file and a histogram root file; these can be named and examined in the same manner as with SpillFileReader.fcl.