- Table of contents
- Debugging code
- Debugging speed and memory issues
- Checking your code for non-compile time errors with cppcheck
The current default LArSoft setup builds in debug mode, so all debugging symbols are available.
Instructions for gdb command line debugging¶
To see all compiler and linker options during compilation, set the VERBOSE variable when building a package. For example:
gmake VERBOSE=t TrackFinder.all
To run the interactive debugger, start gdb on the lar executable and issue the run command at the (gdb) prompt:
gdb `which lar`
(gdb) run -c example.fcl
Please see the LArSoft instruction page here.
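Once the job is running under gdb, a typical exchange might look like the following sketch (the breakpoint location and the variable name are hypothetical placeholders; substitute your own):

<pre>
(gdb) break MyModule_module.cc:42   # hypothetical file and line number
(gdb) run -c example.fcl
...                                 # the job runs until the breakpoint or a crash
(gdb) backtrace                     # print the call stack
(gdb) frame 2                       # inspect a particular stack frame
(gdb) print fNHits                  # examine a variable (hypothetical name)
(gdb) continue
</pre>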
There are a couple of tools available through Allinea: the DDT debugging tool and the MAP profiling tool. Both can be helpful. DDT helps you locate segfaults and other coding problems, while MAP helps you identify which pieces of code are leaking memory or taking the most time to run.
Debugging speed and memory issues¶
Valgrind can determine which methods are using the most resources in a job. One needs the latest version of valgrind (3.6.1 or higher), which is available neither on the FNAL nodes, in Scientific Linux, nor in the EPEL repository (for those of you with installations at other sites). The simplest thing to do is download and install valgrind in your home directory:
# Download valgrind, then unpack, build, and install it under your home directory
tar -xjvf valgrind-<version>.tar.bz2
cd valgrind-<version>
./configure --prefix=$HOME && make && make install
This will install valgrind in ~/bin and ~/lib of your account. To run valgrind's default memcheck tool on a lar job:
~/bin/valgrind `which lar` -c prodgenie.fcl 2>&1 | tee memcheck.txt
Breaking down the above line:
~/bin/valgrind = run the copy of valgrind you just installed in your home directory
`which lar` = returns the full path of the lar executable, telling valgrind which program to run
-c prodgenie.fcl = typical lar arguments; these can be whatever you want
2>&1 | tee memcheck.txt = merge valgrind's report (which is written to stderr) into stdout, then display the output on screen while also writing it to the file memcheck.txt
Warning: The above command will take many hours or days to execute; valgrind is slow. A better strategy is to edit your .fcl scripts so you're only executing the particular module in question (largeant, simwire, etc.).
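For example, one way to restrict a job to a single producer is a small wrapper fcl file that overrides the path definitions. This is a sketch: the path name "simulate" and the module label "largeant" are assumed to match the standard prodgenie configuration and may differ in yours.

<pre>
#include "prodgenie.fcl"

# Run only the largeant module (labels assumed from the standard setup)
physics.simulate: [ largeant ]
physics.trigger_paths: [ simulate ]
</pre>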
If you're looking for which part of your program is using large chunks of memory, the appropriate tool is massif.
~/bin/valgrind --tool=massif --time-unit=B `which lar` -c prodgenie.fcl | tee massif.txt
This (very slow) process will produce an additional file in the directory from which the command was executed:
massif.out.<pid>, where <pid> is the process id of that run of lar. To interpret the contents of this file, use valgrind's ms_print utility:
~/bin/ms_print massif.out.<pid> | less
If one is looking for which portions of the program are being called most often, the appropriate tool is callgrind.
~/bin/valgrind --tool=callgrind `which lar` -c prodgenie.fcl | tee callgrind.txt
As with massif, the file
callgrind.out.<pid> will be produced. Use callgrind_annotate to interpret this file:
~/bin/callgrind_annotate callgrind.out.<pid> | less
However, callgrind is the slowest of these tools; for example, run on a "vanilla" prodgenie.fcl script, it will probably take a week to execute. Fortunately, one can obtain a snapshot of the information accumulated by a running instance of callgrind. First, find the process id (pid):
ps -u $USER | grep callgrind
The first number is the pid. Then use callgrind_control to get a snapshot:
~/bin/callgrind_control --dump <pid>
ls -larth callgrind.*
One will see a callgrind.out.<pid>.N file on which one can run callgrind_annotate.
Checking your code for non-compile time errors with cppcheck¶
cppcheck is a very handy tool for checking your code for errors that gcc can't catch, such as out-of-bounds array accesses, and it can also be used to find optimization opportunities. The tool is documented here.
It is set up at Fermilab using ups:
setup cppcheck v1_58
You then test the code in directory MyDir by doing:
cppcheck --enable=style MyDir
It will print out details for optimizing or fixing your code.