memory spike at the end of a job
A Mu2e production job that tries to merge our "beam flash" dataset into a single file fails with "bad_alloc", even on a lightly loaded detsim machine with 60 GB of RAM. I ran massif on a subset of the inputs; the output indicates a large memory spike at the end of the job.
Attached are a test fcl file and the formatted massif output.
That was run with Mu2e Offline v5_4_7 (art v1_15_00).
Input data files can be found in
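For anyone wanting to reproduce the profiling step, a run along these lines should show the spike. This is a sketch only: the fcl and data file names are placeholders for the actual attachments, and Mu2e's own art wrapper executable can be substituted for plain art.

    # Profile an art job with valgrind's massif heap profiler.
    # File names below are placeholders, not the actual attachments.
    valgrind --tool=massif --massif-out-file=massif.out.beamflash \
        art -c test.fcl -s beam_flash_subset.root -n 1000
    # Format the raw massif output into a readable snapshot table:
    ms_print massif.out.beamflash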
#1 Updated by Christopher Green over 5 years ago
- Category set to Metadata
- Status changed from New to Resolved
- Target version set to 1.17.00
- % Done changed from 0 to 100
- SSI Package art added
- SSI Package deleted
The new source option and the new output option (each of which defaults to true) are for use only when one is ABSOLUTELY SURE that historical parameter set information will never be needed. The only legitimate case currently known is the production of mixing (overlay) files.
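For reference, here is a minimal fcl sketch of how one might set these options. The parameter names readParameterSets and writeParameterSets are an assumption on my part (the exact spellings are not reproduced in the note above), so check the art release notes for the 1.17.00 target version before relying on them:

    # Sketch only: the two option names below are assumed, not confirmed here.
    source: {
      module_type: RootInput
      fileNames: [ "beam_flash_input.root" ]  # placeholder input file
      readParameterSets: false                # assumed name of the new source option
    }

    outputs: {
      out: {
        module_type: RootOutput
        fileName: "beam_flash_merged.root"    # placeholder output file
        writeParameterSets: false             # assumed name of the new output option
      }
    }

Both options should be left at their default of true for any file whose provenance might be needed later.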
#3 Updated by Christopher Green over 5 years ago
This mitigation has also been pushed to the v1_16-branch, meaning that you should be able to clone this repository on any machine where art v1_16_02 is installed, build it for installation into a private products area, and build Mu2e against it.
Let me know if you wish to do this and need any help or pointers.
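In outline, the suggested workflow would look something like the following. The repository location and build commands follow the usual cetbuildtools pattern and are assumptions on my part, not quoted from this ticket:

    # Assumed repository URL; the branch name is taken from the note above.
    git clone http://cdcvs.fnal.gov/projects/art
    cd art
    git checkout v1_16-branch
    # Typical cetbuildtools flow for building into a private products area
    # (details assumed; adjust qualifiers to match your installed art v1_16_02):
    mkdir ../build && cd ../build
    source ../art/ups/setup_for_development -p
    buildtool -i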