Support #13174

Updated by Kyle Knoepfel over 4 years ago

I have run some artdaq tests of the new RootOutput automatic-file-closing features that are available in art v2.x.y, and I would like to share some observations and ideas.

But first, I should apologize for not being more explicit in some of our earlier discussions. In the online environment, it is essential that all open files be closed at EndRun time, independent of whatever other file-closing criteria are in effect for a particular run. I’ve noticed some situations in which this doesn’t happen with the new functionality, and I’ll describe them below. (For reference, I should also point out that multiple online data-taking runs can take place within a single instantiation of art.)

As an example of the desired behavior: if the file-closing condition is set to 100 events and we have two runs of 325 events, we would expect artdaq to create eight files, four per run: three files with 100 events and one file with 25 events.
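The expected file count is just a ceiling division per run, since the partially filled file must also be closed at end-of-run. A quick sketch of that arithmetic, using the numbers from this example:

```python
import math

def expected_files(events_per_run, max_events_per_file):
    """Files needed per run when the writer closes a file after
    max_events_per_file events and always closes the open file
    at end-of-run (so a partial file still counts)."""
    return [math.ceil(n / max_events_per_file) for n in events_per_run]

per_run = expected_files([325, 325], 100)   # two runs of 325 events
print(per_run, sum(per_run))                # [4, 4] 8
```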

My first observation is that the only fileSwitch.boundary value that seems useful online is “Event”. In every case I can think of at the moment, we want the requested condition (e.g., file size <= X kB, or number of events <= 123) to be acted upon immediately, and “fileSwitch.boundary: Event” seems to be the correct way to configure that. If I’ve misunderstood, it would be great to have that clarified.

Additional observations are based on specific test cases:

h2. File size and event count conditions can store events from multiple runs in a single file.

h3. Example 1

Here is the RootOutput configuration:

<pre>
normalOutput: {
  module_type: RootOutput
  fileName: "/tmp/artdaqdemo_r%06r_sr%02s_%#_%to.root"
  maxEventsPerFile : 200
  #maxSize : 500
  fileSwitch : {
    boundary : Event
    #force : true
  }
  compressionLevel: 0
  #tmpDir : "/home/biery"
}
</pre>

Run 708 had 523 events, run 709 had 473 events.
* One issue is that the first events from run 709 were written into the same file as the final events from run 708.
* Another issue is that the last file was not renamed until the system was shut down.

Here is the file list after run 709 had ended but before the system was shut down.

<pre>
[biery@mu2edaq01 tmp]$ ls -altF | head
total 49040
drwxrwxrwt. 151 root root 159744 Jul 8 13:11 ./
drwxrwxrwx 2 biery mu2e 12288 Jul 8 13:08 masterControl/
-rw-r--r-- 1 biery mu2e 127427 Jul 8 13:08 RootOutput-5416-0713-11a5-3290.root
-rw-r--r-- 1 biery mu2e 573430 Jul 8 13:08 artdaqdemo_r000709_sr01_4_20160708T180749.root
-rw-r--r-- 1 biery mu2e 574505 Jul 8 13:07 artdaqdemo_r000708_sr01_3_20160708T180701.root
-rw-r--r-- 1 biery mu2e 573430 Jul 8 13:07 artdaqdemo_r000708_sr01_2_20160708T180641.root
-rw-r--r-- 1 biery mu2e 573430 Jul 8 13:06 artdaqdemo_r000708_sr01_1_20160708T180622.root
drwxrwxrwx 2 biery mu2e 12288 Jul 8 13:06 aggregator/
drwxrwxrwx 2 biery mu2e 12288 Jul 8 13:06 eventbuilder/

</pre>

h3. Example 2

Here is the RootOutput configuration:

<pre>
normalOutput: {
  module_type: RootOutput
  fileName: "/tmp/artdaqdemo_r%06r_sr%02s_%#_%to.root"
  #maxEventsPerFile : 200
  maxSize : 500
  fileSwitch : {
    boundary : Event
    #force : true
  }
  compressionLevel: 0
  #tmpDir : "/home/biery"
}
</pre>


The behavior was similar:

<pre>
[biery@mu2edaq01 tmp]$ ls -altF | head
total 54036
-rw-r--r-- 1 biery mu2e 190415 Jul 8 13:46 RootOutput-ce0b-3fef-1779-ae2b.root
drwxrwxrwx 2 biery mu2e 12288 Jul 8 13:46 masterControl/
drwxrwxrwt. 152 root root 159744 Jul 8 13:46 ./
-rw-r--r-- 1 biery mu2e 1112307 Jul 8 13:45 artdaqdemo_r000711_sr01_4_20160708T184443.root
-rw-r--r-- 1 biery mu2e 1113170 Jul 8 13:44 artdaqdemo_r000710_sr01_3_20160708T184330.root
-rw-r--r-- 1 biery mu2e 1112307 Jul 8 13:43 artdaqdemo_r000710_sr01_2_20160708T184231.root
-rw-r--r-- 1 biery mu2e 1112307 Jul 8 13:42 artdaqdemo_r000710_sr01_1_20160708T184133.root
</pre>



h2. A Run-based boundary produces extra small empty files.

Here is the FHiCL:

<pre>
normalOutput: {
  module_type: RootOutput
  fileName: "/tmp/artdaqdemo_r%06r_sr%02s_%#_%to.root"
  #maxEventsPerFile : 200
  #maxSize : 500
  fileSwitch : {
    boundary : Run
    force : true
  }
  compressionLevel: 0
  #tmpDir : "/home/biery"
}
</pre>

I took two runs, numbers 712 and 713. Here are the files on disk:

<pre>
[biery@mu2edaq01 tmp]$ ls -altF | head
total 56068
drwxrwxrwt. 153 root root 159744 Jul 8 13:50 ./
drwxrwxrwx 2 biery mu2e 12288 Jul 8 13:50 masterControl/
-rw-r--r-- 1 biery mu2e 186172 Jul 8 13:49 artdaqdemo_r-_sr-_4_20160708T184953.root
-rw-r--r-- 1 biery mu2e 1006811 Jul 8 13:49 artdaqdemo_r000713_sr01_3_20160708T184901.root
-rw-r--r-- 1 biery mu2e 186172 Jul 8 13:48 artdaqdemo_r-_sr-_2_20160708T184846.root
-rw-r--r-- 1 biery mu2e 699186 Jul 8 13:48 artdaqdemo_r000712_sr01_1_20160708T184817.root
drwxrwxrwx 2 biery mu2e 12288 Jul 8 13:48 aggregator/
drwxrwxrwx 2 biery mu2e 12288 Jul 8 13:48 eventbuilder/
drwxrwxrwx 2 biery mu2e 12288 Jul 8 13:48 boardreader/
</pre>
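For what it’s worth, the two extra 186 kB files are easy to spot from their names alone, since the run and subrun placeholders were never filled in. A minimal sketch (filenames copied verbatim from the listing above; treating the unsubstituted “r-_sr-” pattern as the marker for an empty file is my assumption):

```python
# Flag output files whose run/subrun placeholders (%06r / %02s) were
# never substituted -- these are the extra, effectively empty files
# produced at each run boundary.
listing = [
    "artdaqdemo_r-_sr-_4_20160708T184953.root",
    "artdaqdemo_r000713_sr01_3_20160708T184901.root",
    "artdaqdemo_r-_sr-_2_20160708T184846.root",
    "artdaqdemo_r000712_sr01_1_20160708T184817.root",
]
empty_files = [f for f in listing if "_r-_sr-_" in f]
print(empty_files)   # the two files that contain no events
```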

