Matthew, Jonathan, Bruno, Dominick, Susan, Nick, Satish, Chris, Kanika, Jeny, Gareth

News (Matthew)

  • Matthew rewrote the official DS page; please send feedback if you have any.
  • At Friday's all-analysis meeting there was some discussion about bad channels and reco keep-up lag. It was decided to delay keep-up by ~ one day to ensure it runs after bad channels.

FA production status (All)

  • FD data - pid-part & lemsum nearly done, but not much progress has been seen in NuMI since last week. Dominick says that the jobs to finish these off have just been submitted. Chris notes that LEM might take at least a week (Chris only has 500 nodes) due to the loose nu_e preselection (see docdb-11284 for more information). Tightening the preselection will be discussed at the next nu_e meeting. Action item: Matthew, Dominick, and Chris will work out how the LEM processing time compares to the other stages, to motivate preselection changes.
  • FD beam MC 14 DB - awaiting SIM sign off.
  • FD cosmic MC - time to top up? Matthew will contact Nate about this.
  • ND data - no change, awaiting a new tag to fix known issues
  • ND beam MC - as above
  • ND cosmics - Satish will start reco on these now.

Other related discussion included:

  • SpillDQ should be included in all jobs (although it won't do anything for the FD). The high-priority sample for this is the ND data.
  • Post-shutdown data was brought up again. Are we ready to start including this? MC aside, production needs bad channel masks to be in place before we can reconstruct these. Matthew will contact Jon to work out whether these are currently being generated automatically.
  • Dominick asked for some help with ND rapid turnaround. Matthew volunteered Bruno to help out (although Bruno wasn't there to confirm he could do this).

Executive summary of changes since FA14-11-11 (Jonathan)

The new method of summarising changes seems to work really well. We then discussed what to do about freezing reco. It was decided that we'd freeze everything except channel info, as this is seeing active development. The fact that generation is broken was brought up here, but the consensus was that until we know what's going on with nutools and GEANT and where the problem lies, we may as well move forward, as things are already broken in FA14-11-11.

Are we ready to tag? (All)

Everything is in place except the FEB flasher filter, and we don't currently have an ETA for it. Gavin will see what he can find out. It was decided to press forward and tag despite this absence. The tag can then be used to process ND files and start the reco keep-up again. It was noted that it is important to validate spill-level DQ quickly.

RHC simulation validation (Gareth, Nate)

Gareth has run validation on these, and the results are currently being looked over by the validation committee. Gareth and Jim have signed off; we are waiting on Alex, Adam, and Nate. We'll hear either way soon, likely in the next day or so.

Removing CFS/CVMFS redundancy (Kanika)

We currently have two systems for distributing software, NovaCFS and Oasis (CVMFS is a catch-all for both of these). Oasis is visible both externally and internally; CFS is only visible inside FNAL. Why do we still support both? It is unclear who uses NovaCFS. It was decided to put together a proposal for dropping one of them, then circulate it to offline, then to CD. Jonathan will do this.

Longer sub runs (Jeny, Matthew, Paola)

The artdaq files for these runs have been processed. Jeny has changed from using 200 jobs per file to 10 files per job (not because of this subrun length change), and currently the jobs take about an hour for a day's worth of data. Jeny is currently having a problem with files not being delivered to SAM projects; she will open a ticket. Paola's pclist jobs are stuck in the same form of failure as Jeny saw last week, so we can't yet tell whether the sub-run length affects these. She will continue to work on this. We are now ready to run some data reco and PID/CAF on these files. Matthew will update the production tests, and Dominick will try some batch jobs soon.

FTS error handling (Dominick)

Files that land in the drop box that already have a copy in n-store but with a different file size currently just generate error files; is there a better way to handle this? Robert I. doesn't like the idea of deleting files. Chris et al. have previously discussed moving them to their own directory, like "duplicate" or "bad_metadata". It would be good to come up with a proposal to forward to Robert et al. Nate tried to do this six months ago, but it didn't get sorted out. Move them off to the side, sorted by error? Delete but count them? We didn't converge on a plan, and it was decided that maybe we should arrange a phone conversation to work on this.


  • Are most offsite nodes SL5? How to pressure offsite to change?