Nu Mu Trigger

The Nu Mu triggers are aimed at selecting muon-like topologies for use in timing, calibration, background studies, etc. They consist of a series of sorter and slicer modules which identify and group spatially and causally correlated hits. These are followed by a tracker module which reconstructs 2D tracks in the X-Z and Y-Z planes. These tracks are then combined into a collection of 3D tracks, which are input to the NuMuTrigger module; this applies a series of hypothesis cuts aimed at selecting muons and rejecting backgrounds.
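
For orientation, here is a sketch of what the corresponding fcl trigger path might look like (the module labels are taken from the timing reports further down this page; the real numutriggerjob.fcl may group and name things differently):

physics:
{
  numupath: [ tdcsort,              # sort hits in time
              timeslice,            # group causally correlated hits
              removenoise,          # drop noise-like slices
              spaceslice,           # group spatially correlated hits
              singletonrejection,   # reject isolated hits
              removeonedslices,     # drop slices with hits in only one view
              track,                # 2D tracking in X-Z and Y-Z (houghtracker in the Hough variant)
              merge2dtracks,        # combine 2D tracks into 3D tracks
              numutrigger ]         # apply the muon hypothesis cuts
  trigger_paths: [ numupath ]
}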

As a general rule, tracking is the latency-defining step, so it is advisable to reject as much background as possible before the tracking module is run.

Quick start guide

The Nu Mu trigger can be run out of the ddt box using the fcl file numutriggerjob.fcl:

setup_novaddt
ddt-filter -c numutriggerjob.fcl

A version tuned for running on the FD for the conditions at the start of first beam is provided in numutriggerjob-FD_first_beam.fcl.
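
Presumably this is run in the same way, just pointing at the tuned configuration:

setup_novaddt
ddt-filter -c numutriggerjob-FD_first_beam.fcl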

Running on offline files

The Nu Mu triggers can be run in the offline world on any file which contains daq hits, using the fcl files numutriggerjob_straightline_offline.fcl and numutriggerjob_tracker_offline.fcl. The first of these implements tracking using TrackFit, the second using HoughTracker. For more information on running triggers on offline files see this page.
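
For example (the input file name here is only a placeholder for any file containing daq hits):

nova -c job/numutriggerjob_straightline_offline.fcl mydaqhits.daq.root
nova -c job/numutriggerjob_tracker_offline.fcl mydaqhits.daq.root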

Assessing rejection and latency of a given setup

Rejection

Unfortunately this is rather complicated at the moment. One must first take an appropriate file recorded online, e.g.:

/nova/ana/trigger/data/DDTData-fardet_r00011047_s05_t02.raw

Then convert this to a ROOT file:

setup_nova
nova -c daq2rawdigitjob.fcl -s /nova/ana/trigger/data/DDTData-fardet_r00011047_s05_t02.raw -o /nova/ana/trigger/data/DDTData-fardet_r00011047_s05_t02.raw.root

Then add in the daq hits:

nova -c rawdigit2daqhitjob.fcl -s /nova/ana/trigger/data/DDTData-fardet_r00011047_s05_t02.raw.root -o /nova/ana/trigger/data/DDTData-fardet_r00011047_s05_t02.raw.daq.root

This will take quite some time to run. A 100-event example can be found here:

/nova/ana/trigger/data/DDTData-fardet_r00011047_s05_t02.100.raw.daq.root

Then install the online modules in an offline release, as described on this page, and run one of the offline fcl files:

nova -c job/numutriggerjob_straightline_offline.fcl /nova/ana/trigger/data/DDTData-fardet_r00011047_s05_t02.100.raw.daq.root

The resultant output will tell you the rejection at each stage of the trigger. The important number is how many events pass the final cut of the Nu Mu trigger:

Number that pass cosmic PID:  52

This tells you that for every 100 events, 52 triggers will be issued. The data in the given file were recorded on the far detector at a per-buffer-node milliblock input rate of 7 Hz, meaning that if we run this trigger our output rate will be ~3.6 Hz per buffer node. Similarly, if this trigger were running on every buffer node then the total output rate would be ~100 Hz (as the full input rate is 200 Hz).
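
Spelled out, this is just the pass fraction multiplied by the input rate:

per buffer node:    (52 / 100) x 7 Hz   = 3.64 Hz  (~3.6 Hz)
all buffer nodes:   (52 / 100) x 200 Hz = 104 Hz   (~100 Hz)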

The Hough tracker trigger, run on the same file using:

nova -c job/numutriggerjob_offline.fcl /nova/ana/trigger/data/DDTData-fardet_r00011047_s05_t02.100.raw.daq.root

yields an output rate of:

Number that pass cosmic PID:  7

This translates to a total trigger rate of 14 Hz using the above maths. The two triggers are roughly as efficient as each other, so on this argument alone we'd run the Hough tracker. However, there are other factors to consider.
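
Again, pass fraction times total input rate:

(7 / 100) x 200 Hz = 14 Hz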

Latency

Due to the current setup of the shared memory, triggers which cannot handle the input rate will cause events to back up in memory, and hence show symptoms of leaking memory. In the above example the input rate was 7 Hz, meaning a trigger chain has to execute in less than 1/7 s (~0.14 s). It is important to keep the latency of the triggers below this (or alternatively to add more buffer nodes so that the available latency increases). The timing service, run offline as part of the above examples, will give you a good idea of how close to this we are, although it won't tell the full story: buffer nodes and offline machines differ in architecture, and online we also have to run the hit producer. The output of the straightline tracker job is:

TimeReport ---------- Module Summary ---[sec]----
TimeReport             per event        per module-run      per module-visit 
TimeReport        CPU       Real        CPU       Real        CPU       Real Name
TimeReport   0.070389   0.070379   0.070389   0.070379   0.070389   0.070379 tdcsort
TimeReport   0.017327   0.017297   0.017327   0.017297   0.017327   0.017297 timeslice
TimeReport   0.016358   0.016247   0.016358   0.016247   0.016358   0.016247 removenoise
TimeReport   0.014808   0.014730   0.014808   0.014730   0.014808   0.014730 spaceslice
TimeReport   0.010148   0.010170   0.010148   0.010170   0.010148   0.010170 singletonrejection
TimeReport   0.001010   0.000981   0.001010   0.000981   0.001010   0.000981 removeonedslices
TimeReport   0.006449   0.006404   0.006449   0.006404   0.006449   0.006404 track
TimeReport   0.005559   0.005606   0.005559   0.005606   0.005559   0.005606 merge2dtracks
TimeReport   0.004129   0.004779   0.004129   0.004779   0.004129   0.004779 numutrigger
TimeReport   0.000470   0.000454   0.000470   0.000454   0.000470   0.000454 TriggerResults
TimeReport        CPU       Real        CPU       Real        CPU       Real Name
TimeReport             per event        per module-run      per module-visit 

T---Report end!

TimeReport> Time report complete in 15.094 seconds
 Time Summary: 
 Min: 0.131247
 Max: 0.428039
 Avg: 0.15094

As you can see, this is close to the 0.14 s target. The Hough tracker gives:

TimeReport ---------- Module Summary ---[sec]----
TimeReport             per event        per module-run      per module-visit 
TimeReport        CPU       Real        CPU       Real        CPU       Real Name
TimeReport   0.070579   0.071667   0.070579   0.071667   0.070579   0.071667 tdcsort
TimeReport   0.017167   0.016984   0.017167   0.016984   0.017167   0.016984 timeslice
TimeReport   0.016018   0.015981   0.016018   0.015981   0.016018   0.015981 removenoise
TimeReport   0.014458   0.014331   0.014458   0.014331   0.014458   0.014331 spaceslice
TimeReport   0.009209   0.009107   0.009209   0.009107   0.009209   0.009107 singletonrejection
TimeReport   0.001140   0.001099   0.001140   0.001099   0.001140   0.001099 removeonedslices
TimeReport   0.069319   0.069286   0.069319   0.069286   0.069319   0.069286 houghtracker
TimeReport   0.010198   0.010103   0.010198   0.010103   0.010198   0.010103 merge2dtracks
TimeReport   0.006439   0.006659   0.006439   0.006659   0.006439   0.006659 numutrigger
TimeReport   0.000430   0.000413   0.000430   0.000413   0.000430   0.000413 TriggerResults
TimeReport        CPU       Real        CPU       Real        CPU       Real Name
TimeReport             per event        per module-run      per module-visit 

T---Report end!

TimeReport> Time report complete in 21.9371 seconds
 Time Summary: 
 Min: 0.17423
 Max: 0.648523
 Avg: 0.219371

This is a bit longer, due to the increased complexity of the Hough tracker. If you find that you're running into latency issues, there are a number of things you can do, as explained below.

Dealing with latency issues

To deal with them we must move rejection earlier in the trigger chain. This can be done by tightening the slicer cuts, e.g. by adding some variant of the following:

physics.filters.timeslice.TimeWindow:                   10
physics.filters.removenoise.MinHits:                    20
physics.filters.spaceslice.MinHits:                     20
physics.filters.removeonedslices.MinHitsPerView:        5

or in the case of the Hough tracker, tightening up the track definition requirements:

physics.producers.houghtracker.minimum_points_to_tag:   20
physics.producers.houghtracker.minimum_hits_per_track:  10
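
One way to apply these overrides (a sketch, assuming the standard fcl #include mechanism; point it at whichever job fcl you are actually running) is a small wrapper configuration:

#include "numutriggerjob.fcl"

# Tighten the slicer cuts so more background is rejected before tracking
physics.filters.timeslice.TimeWindow:                   10
physics.filters.spaceslice.MinHits:                     20

# If the chain uses the Hough tracker, also tighten the track definition
physics.producers.houghtracker.minimum_points_to_tag:   20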

However, all of the above will likely affect the efficiency of the trigger, so be careful not to be too aggressive.