Supernova Assembly

Here are some rough notes...

To run this, the code needs to either be installed as an official release on all the DAQ machines, or built and installed in your own relocatable UPS directory in ~/development_daq/install/. Either works OK.

The first version of this code is set up with:

$ setup uboonedaq v6_22_00 -q e7:debug

Create a working directory where temporary files (and your ubdaq file) can be created, and cd to that directory.

$ sn-assemble.py --help

There are two ways of requesting data: by absolute timestamp, or by run number and time-in-the-run. The former is what SNEWS alerts will use; the latter is useful for testing readout. For example:

This will attempt to assemble data from run 10432, starting 3.0 seconds into the run and extracting 10.0 seconds' worth of data. The python script then uses the data stored in /home/uboonedaq/RunInfoLogs to try to guess the frame numbers you want:

$ sn-assemble.py --run 10432 --offset 3.0 --dur 10.0

This call will simply build 200 frames starting with frame 100 in that run:

$ sn-assemble.py --run 10432 --framestart 100 --frames 200
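For orientation, the frame arithmetic can be sketched as below, assuming the nominal 1.6 ms readout frame (an assumption; the real script derives frame numbers from the RunInfoLogs files). The "repeat this request" example later in these notes shows 126 frames for a 0.2 s request, one more than the naive count here, presumably to cover partial frames at the window edges.

```python
# Sketch only: how a time offset and duration map to frame numbers,
# assuming a nominal 1.6 ms readout frame.  The real script derives
# frame numbers from the RunInfoLogs files, not from this arithmetic.
FRAME_US = 1600  # nominal frame length in microseconds (assumption)

def frames_for_request(offset_s, duration_s):
    """Return (first_frame, n_frames) for a window inside the run."""
    offset_us = round(offset_s * 1e6)
    duration_us = round(duration_s * 1e6)
    first_frame = offset_us // FRAME_US
    n_frames = -(-duration_us // FRAME_US)  # ceiling division
    return first_frame, n_frames

print(frames_for_request(3.0, 10.0))  # -> (1875, 6250)
```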

This call will build a 0.2 second Supernova file that begins close to 12:43:02.001 PM UTC on June 2 (7:43 AM Fermilab time):

$ sn-assemble.py --timestamp 2017-06-02T12:43:02.001+00:00 --dur 0.2

(This is the ISO 8601 standard time format. Only UTC times are supported - not local time! To get a timestamp in this form, this command is useful:

$ date -u +"%Y-%m-%dT%H:%M:%S.%NZ"
2017-06-02T13:14:35.561194509Z

)
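If you'd rather generate the timestamp from Python (say, inside another script), the standard library produces the same form:

```python
# Python equivalent of the date command above: an ISO 8601 UTC
# timestamp in the form --timestamp accepts.
from datetime import datetime, timezone

stamp = datetime.now(timezone.utc).isoformat()
print(stamp)  # e.g. 2017-06-02T13:14:35.561194+00:00
```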

You can also specify the time in seconds-since-the-epoch with --epoch, which may be more convenient.
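For reference, converting the ISO 8601 form above to epoch seconds is straightforward in Python (datetime.fromisoformat needs Python 3.7+):

```python
# Converting the ISO 8601 timestamp above to seconds-since-the-epoch,
# the form the --epoch option takes.
from datetime import datetime

iso = "2017-06-02T12:43:02.001+00:00"
epoch = datetime.fromisoformat(iso).timestamp()
print(epoch)  # 1496407382.001
```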

PROBLEM: The time-lookup code runs efficiently from ubdaq-prod-evb, but slowly from other machines. That's because it has to trawl through tens of thousands of files in /home/uboonedaq/RunInfoLogs, which is on local disk on evb. On other machines, the glob operation is extremely slow due to NFS lag, and probably burns bandwidth. We could run the sn-assembler from evb, OR we could sync those files someplace, OR we could have a constantly-running job that harvests that file data into a database that is faster to access. Maybe part of the DAQ scripts?
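The database idea could be as simple as a one-time SQLite harvest. A rough sketch follows; the "*.txt" pattern and the parse_start_time callback are placeholders, since the real RunInfoLogs format isn't reproduced here.

```python
# Hypothetical sketch of the "harvest into a database" idea: scan the
# RunInfoLogs directory once and keep (run, start_time) in SQLite so
# later lookups avoid the slow NFS glob.  The "*.txt" pattern and the
# parse_start_time callback are assumptions, not the real log format.
import glob
import os
import sqlite3

def harvest(logdir, dbpath, parse_start_time):
    """parse_start_time(path) -> (run_number, epoch_seconds) or None."""
    db = sqlite3.connect(dbpath)
    db.execute("CREATE TABLE IF NOT EXISTS runs"
               " (run INTEGER PRIMARY KEY, start REAL)")
    for path in glob.glob(os.path.join(logdir, "*.txt")):
        rec = parse_start_time(path)
        if rec is not None:
            db.execute("INSERT OR REPLACE INTO runs VALUES (?, ?)", rec)
    db.commit()
    return db

def run_for_time(db, t):
    """Most recent run that started at or before time t, or None."""
    row = db.execute("SELECT run FROM runs WHERE start <= ?"
                     " ORDER BY start DESC LIMIT 1", (t,)).fetchone()
    return row[0] if row else None
```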

WORKAROUND:

Create your temporary directory. SSH to evb, and on that machine run the time finder, but don't run the assembler:

$ sn-assemble.py --dryrun --timestamp ... --dur ...

One of the final lines of output will look like this:

To repeat this request: sn-assemble.py --run 11578 --framestart 1333316 --frames 126

Then SSH to near2, cd to the same working directory, and run the 'repeat this request' line, which skips the glob operation.

TO DO: Supernova times spanning run boundaries are not yet implemented; only the first run will be taken. This should not be a major issue, since supernova spans shouldn't be much longer than 20 seconds, but the tool might get the run number wrong near a run boundary.

WARNING: Logjams may occur if multiple users are doing this at the same time.

CLEANUP: Occasionally the sn-server jobs don't shut down. You can run

$ sn-kill-servers.sh

to clean up the servers.

This python script does several things:
- It looks through the start-of-run text files in the uboonedaq user area to correlate run numbers to GPS times, to select the best run number.
- It looks through the subrun text files for that run to correlate GPS time to frame numbers. It builds a map of these, and exports it to a JSON file in the working directory.
- It starts up a set of sn-server processes on each of the seb machines, setting them up in the same DAQ configuration as the assembler. These processes pull data from the local seb0X hard drives.
- It starts up an sn-assembler binary that then opens client sockets to all 10 machines, and requests each time frame of data from those servers, which in turn scan through the existing supernova cache to find the requested frames.

The servers run through the binary data, skimming crate headers and looking for the requested record. When they find it, they squirt it back to the assembler, which then writes a ub_EventRecord to a file.

This method is nice because only the requested data is staged on the final disk: there are no intermediate SEB files staged.

The Online Monitor can read these files. To generate plots, run

$ offline-monitor Supernova_xxxx.ubdaq -n <frames to process>

The swizzler is in the works too...

--Nathaniel