Adam Aurisano





12:13 PM NOvA-ART Revision 35971 (svn): Fixed debug message to only print the appropriate number of ADC values for a given nanoslice version. This should fix the errors seen in the FD miniproduction tests using single point runs.


12:17 PM NOvA-ART Revision 35715 (svn): This is the ray tracing package I developed to construct the photon collection templates used by ImprovedTransport. Instructions on how to use this package are in the included README file.


03:16 PM NOvA-ART Revision 34900 (svn): Based on discussions at the conveners and DetSim meetings, I've removed the randomization of pixel gains. Instead, each pixel is assigned the mean of the gain distribution for that pixel position. This accounts for the fact that pixels have systematically different gains depending on their position on the APD without injecting excess variation that the current calibration procedure cannot remove.
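As an illustration only (not the actual DetSim code), assigning each pixel the mean of its position's gain distribution might look like the following sketch; the `gain_samples` table and its values are invented for the example.

```python
import statistics

# Hypothetical gain samples per APD pixel position, standing in
# for the measured position-dependent gain distributions.
gain_samples = {
    0: [140.2, 138.9, 141.5],
    1: [145.0, 144.1, 146.3],
    # ... one entry per pixel position
}

def pixel_gain(pixel_position):
    """Deterministic gain: the mean of the distribution for that position,
    rather than a fresh random draw per pixel."""
    return statistics.mean(gain_samples[pixel_position])
```

Using the distribution mean keeps the systematic position dependence while removing event-to-event spread that calibration cannot undo.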


04:17 PM NOvA-ART Revision 34467 (svn): Updated the FD and ND attenuation profiles to match average functions derived from fiber stringing data, as shown in docdb-34702.
04:16 PM NOvA-ART Revision 34466 (svn): Changed Birks constant to 0.01155 as found in bench measurements shown in docdb-34223.
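For context, the constant referenced above is the $k_B$ in Birks' law, which describes the saturation of scintillation light yield with ionization density ($S$ is the scintillation efficiency; the units of $k_B$ follow those used for $dE/dx$ in the bench measurements of docdb-34223):

```latex
\frac{dL}{dx} = \frac{S\,\dfrac{dE}{dx}}{1 + k_B\,\dfrac{dE}{dx}}
```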
04:14 PM NOvA-ART Revision 34465 (svn): Turned on Robert's code that models the time structure of the NuMI beam.
04:13 PM NOvA-ART Revision 34464 (svn): If the flux is dk2nu, SpillData is now filled from the flux metadata. This should allow reco to tell whether a file is FHC or RHC.
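As a sketch of the kind of check this enables (the function name is hypothetical; the real decision comes from the dk2nu flux metadata): for NuMI, forward horn current (FHC) focuses pi+ for a neutrino-enhanced beam, while reverse horn current (RHC) focuses pi- for antineutrinos.

```python
def beam_mode(horn_current_kA):
    """Hypothetical helper: classify NuMI beam mode from the signed
    horn current. Positive current focuses pi+ (neutrino mode, FHC);
    negative current focuses pi- (antineutrino mode, RHC)."""
    return "FHC" if horn_current_kA > 0 else "RHC"
```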


03:08 PM NOvA-ART Revision 34241 (svn): Changed the random gain generation procedure to sample different distributions based on pixel position. Since CMap isn't initialized at BeginRun, gains are now redrawn each event. This shouldn't matter for any practical purpose.


03:04 PM NOvA-ART Revision 33445 (svn): First commit of scripts to tune light levels and Cherenkov parameters.


11:55 AM NOvA-ART Revision 27198 (svn): A simple script to read in a levelDB database and write an equivalent LMDB database. Some caveats: I haven't yet added command line options to specify the input and output paths, so you have to edit the script by hand. The levelDB and LMDB python interfaces must be installed on the machine you are using (they are not on the VMs). Finally, LMDBs require a "map size" to be specified at creation: the maximum size of the memory-mapped file, which the database cannot exceed. I currently have the map size set to 1 TB, which should be enough for most purposes (but beware of this if you are adapting this script for another experiment). Also, Windows and MacOS reportedly reserve the full map size on disk at creation, regardless of the real size of the data, so on those operating systems you may want to make the map size as small as possible.
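A sketch of the conversion loop, assuming the `plyvel` (levelDB) and `lmdb` Python bindings; the names `copy_pairs` and `convert` are mine for the example, not necessarily those in the committed script.

```python
def copy_pairs(pairs, put):
    """Copy (key, value) byte pairs through a put(key, value) callable."""
    n = 0
    for key, value in pairs:
        put(key, value)
        n += 1
    return n

def convert(src_path, dst_path, map_size=1 << 40):
    """Copy a levelDB database into a fresh LMDB environment.

    map_size defaults to 1 TiB; LMDB cannot grow past it, and Windows
    and MacOS reportedly reserve the full amount on disk at creation.
    """
    import lmdb    # third-party: pip install lmdb
    import plyvel  # third-party: pip install plyvel

    src = plyvel.DB(src_path)
    env = lmdb.open(dst_path, map_size=map_size)
    try:
        with env.begin(write=True) as txn:
            # A plyvel DB iterates as (key, value) byte pairs.
            n = copy_pairs(src, txn.put)
    finally:
        env.close()
        src.close()
    return n
```

Splitting the copy loop from the database setup keeps the byte-for-byte transfer logic independent of either library.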
