Bug #4605

FarDet DATA reconstruction processing: memory leak

Added by Gavin Davies almost 7 years ago. Updated almost 7 years ago.

Here is an example of a reconstruction job that was aborted after exceeding 4 GB of memory usage during GRID processing.

One can see how the memory usage changes (limited view) over time in the log file here:
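A short script can pull the memory trend out of such a log. This is a minimal sketch, assuming the log contains per-event memory-report lines in the style printed by art's SimpleMemoryCheck; the exact line format here is an assumption and the regex would need adjusting to the real log:

```python
import re

# Hypothetical log excerpt -- the real format of the GRID job log is an
# assumption; adjust the pattern to match the actual memory-report lines.
sample_log = """\
MemoryCheck: event 100 VSIZE 1240.5 RSS 980.2
MemoryCheck: event 200 VSIZE 2110.7 RSS 1650.3
MemoryCheck: event 300 VSIZE 4105.9 RSS 3900.8
"""

# Pull (event, VSIZE-in-MB) pairs out of the log text.
pattern = re.compile(r"event (\d+) VSIZE ([\d.]+)")
usage = [(int(e), float(v)) for e, v in pattern.findall(sample_log)]

for event, vsize in usage:
    print(f"event {event}: VSIZE {vsize:.1f} MB")

# Flag the first event at which the job crosses the 4 GB (4096 MB) GRID limit.
over = [e for e, v in usage if v > 4096]
print("first event over 4 GB:", over[0] if over else None)
```

A steadily rising VSIZE with event number, as in this job, is the usual signature of a per-event leak rather than a one-off allocation.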


The job can be reproduced interactively in development. (I've added a UseMCcalib toggle to Calibrator.fcl to use the MC calibration constants, so it's simply a fcl change.)

setup_nova -b maxopt
cd <test_release>
nova -c recoproductionjob.fcl -s /nova/data/novaroot/FarDet/S13-07-22/000109/10976/cosmic/

Before running the nova command, copy the .fcl into your test release and add the following line:

services.user.Calibrator.UseMCcalib: true

Whoever picks this up will need to use google-perftools. There are some instructions here:

Another useful utility is the SimpleMemoryCheck service:
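Turning SimpleMemoryCheck on is just another fcl change. A minimal sketch, assuming the service is enabled with its defaults (any extra parameter names should be checked against the art release in use):

```
# Enable art's per-module memory reporting for this job.
# Default configuration; tune parameters per the art documentation.
services.SimpleMemoryCheck: {}
```

This reports memory growth per module per event in the job log, which is often enough to localize a leak before reaching for google-perftools.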


#1 Updated by Gavin Davies almost 7 years ago

Use the following input file instead (unpacked in a new tag, but with no changes to daq2rawdigit):


#2 Updated by Gavin Davies almost 7 years ago

The recoproductionjob can run through to completion if I remove the KalmanTrack and KalmanTrackMerge modules, i.e. running the code as it was before Nick's commit 6700.

4.2 s CPU/event, with discretetrack taking the longest at 2.87 s/event.
Total time is 38571 s --> ~10.7 hours!
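The per-event numbers roughly account for the total. A quick arithmetic check; the implied event count is an inference assuming the job is fully CPU-bound, not a figure from the log:

```python
total_s = 38571          # total job time reported above
cpu_per_event_s = 4.2    # average CPU time per event reported above

hours = total_s / 3600
# Implied number of events, assuming CPU time dominates the total.
events = total_s / cpu_per_event_s

print(f"{hours:.1f} hours, ~{events:.0f} events")  # → 10.7 hours, ~9184 events
```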

The log/err/out files are available here:

Memory usage topped out at 1.71GB, well below the 4GB limit that we have been hitting.

Testing now with latest Kalman changes.

#3 Updated by Gavin Davies almost 7 years ago

  • Status changed from New to Resolved

After the recent KalmanTrack changes/face-lifts this "leak" no longer exists, so I will mark this as resolved/closed.

#4 Updated by Gavin Davies almost 7 years ago

  • Status changed from Resolved to Closed
