Best practices for configuring POMS production projects

How to configure larbatch xml files.

About larbatch versions

The version of larbatch used by MicroBooNE has diverged from larsoft as a result of the python 3 migration (larsoft uses python 3 by default; MicroBooNE still uses python 2). The last common larbatch version is v01_51_12. The latest MicroBooNE-specific version is currently v01_51_14. Production projects should generally use the latest MicroBooNE-specific version.

General advice

  • Configure your POMS launch template to use the latest and greatest MicroBooNE-specific larbatch version.
  • In your xml file, <outdir> should point to /pnfs/uboone/scratch.
  • Element <logdir> should point to /pnfs/uboone/scratch. <logdir> can be the same as or different from <outdir>.
  • Element <workdir> should point to /pnfs/uboone/resilient.
  • Element <bookdir> should point to /uboone/data (rarely used for production).
  • Add <dirlevels> and <dirsize> to avoid directory overload, e.g., <dirlevels>1</dirlevels> and <dirsize>100</dirsize>.
  • Include element <check>1</check> at the project level of the xml file.
  • Include element <copy>1</copy> to copy output directly to the FTS dropbox. For testing, if you don't want to save your output, specify <copy>0</copy>.
  • Include option "--subgroup=prod" in <jobsub> and <jobsub_start> to gain access to production priority scheduling.
  • To limit jobs to FermiGrid (even if offsite is included in <resource>), include option "--site=FermiGrid" in <jobsub> and/or <jobsub_start>. It is a good idea to always include this option in <jobsub_start>.
  • To require access to regular cvmfs and stash cache, include option "--append_condor_requirements='(TARGET.HAS_CVMFS_uboone_opensciencegrid_org==true)&amp;&amp;(TARGET.HAS_CVMFS_uboone_osgstorage_org==true)'" in element <jobsub>.
  • To require access to regular cvmfs only, include option "--append_condor_requirements='(TARGET.HAS_CVMFS_uboone_opensciencegrid_org==true)'" in element <jobsub_start>.
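
Putting the advice above together, a minimal sketch of how these elements might fit into a larbatch xml file is shown below. The project name, stage name, fcl file, and path suffixes are hypothetical placeholders; only the element names, top-level directories, and jobsub options follow the recommendations above.

  <?xml version="1.0"?>
  <project name="prod_example">      <!-- hypothetical project name -->
    <check>1</check>                 <!-- project-level output validation -->
    <stage name="reco">              <!-- hypothetical stage name -->
      <fcl>reco_example.fcl</fcl>    <!-- hypothetical fcl file -->
      <outdir>/pnfs/uboone/scratch/users/uboonepro/prod_example/reco/out</outdir>
      <logdir>/pnfs/uboone/scratch/users/uboonepro/prod_example/reco/log</logdir>
      <workdir>/pnfs/uboone/resilient/users/uboonepro/prod_example/reco/work</workdir>
      <dirlevels>1</dirlevels>       <!-- avoid directory overload -->
      <dirsize>100</dirsize>
      <copy>1</copy>                 <!-- copy output directly to the FTS dropbox -->
      <jobsub>--subgroup=prod --site=FermiGrid --append_condor_requirements='(TARGET.HAS_CVMFS_uboone_opensciencegrid_org==true)&amp;&amp;(TARGET.HAS_CVMFS_uboone_osgstorage_org==true)'</jobsub>
      <jobsub_start>--subgroup=prod --site=FermiGrid --append_condor_requirements='(TARGET.HAS_CVMFS_uboone_opensciencegrid_org==true)'</jobsub_start>
    </stage>
  </project>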

About recursive datasets

For more information about using recursive datasets, refer to this wiki article. The short version is as follows.

  • Specify the static input dataset definition using stage element <inputdef>.
  • Specify the recursive input dataset definition using <recurdef>.
  • Use child recursion (<recurtype>child</recurtype>).
  • Include stage element <activebase>. The value of <activebase> should match <recurdef>, or a truncated version of <recurdef>.
  • Include <dropboxwait>3</dropboxwait>.
  • Include <prestart>1</prestart>.
  • Include <filelistdef>1</filelistdef>. Without this element, recursive datasets may be too complicated for sam to handle.
  • The recursive definition specified in <recurdef> does not need to be created beforehand (project.py will create it for you). To force project.py to recreate the recursive definition, change the name of the recursive dataset (remember to also change <activebase>). If for some reason the automatic recursive definition is not adequate, it is possible to create it manually.
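
A hedged sketch of the recursive-dataset stage elements described above. The stage name and dataset definition names are hypothetical placeholders; the real names are whatever you choose for your project.

  <stage name="reco">                                   <!-- hypothetical stage name -->
    <inputdef>prod_example_input</inputdef>             <!-- hypothetical static input definition -->
    <recurdef>prod_example_input_recur_reco</recurdef>  <!-- hypothetical recursive definition (created by project.py) -->
    <recurtype>child</recurtype>                        <!-- use child recursion -->
    <activebase>prod_example_input_recur</activebase>   <!-- matches recurdef, or a truncated version of it -->
    <dropboxwait>3</dropboxwait>
    <prestart>1</prestart>
    <filelistdef>1</filelistdef>                        <!-- keeps the recursive dataset simple enough for sam -->
  </stage>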

About multiple artroot output streams

Information in this section is only relevant if you are using a recursive input dataset.

  • Include a stage element <datastream> for each artroot output stream. The value of this element should match the data_stream sam metadata.
  • Include stage element <endscript>filter_duplicates.py</endscript>. This allows makeup jobs where only some output streams are missing.
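
A sketch of a stage with two hypothetical artroot output streams. The <datastream> values below are placeholders; in a real project they must match the data_stream sam metadata of the corresponding output files.

  <stage name="reco">                              <!-- hypothetical stage name -->
    <datastream>outstream1</datastream>            <!-- hypothetical data_stream values -->
    <datastream>outstream2</datastream>
    <endscript>filter_duplicates.py</endscript>    <!-- enables makeup jobs when only some streams are missing -->
  </stage>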

Merging sparse artroot output streams

  • Consider merging if average artroot output file size is less than about 500 MB.
  • Enable merging by including stage element <merge>1</merge>.

Merging can be used with one or multiple artroot output streams. If one artroot output stream is merged, all artroot output streams will be merged.

Merging plain root output files

  • Consider merging plain root files if all of the following are true.
    • Files have file_format "root".
    • Files contain histograms and ntuples.
    • Files can be merged using hadd.
  • Enable merging by including stage element <anamerge>1</anamerge>.
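
A sketch showing both merging flags described above in a single hypothetical stage; include only the ones that apply to your outputs.

  <stage name="reco">         <!-- hypothetical stage name -->
    <merge>1</merge>          <!-- merge sparse artroot output streams -->
    <anamerge>1</anamerge>    <!-- merge plain root (histogram/ntuple) output using hadd -->
  </stage>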

About CRT (re)merging

  • Stand-alone CRT remerging fcls for different epochs are listed on the MCC9 wiki.
  • Include stage element <startscript> for the correct epoch (refer to the MCC9 wiki).

About prestaging

Always prestage input data. This applies to both main and secondary (CRT) input. Configure automatic prestaging in your xml file as follows.

  • Include stage element <prestagefraction>1</prestagefraction>.

Automatic prestaging is always safe to specify and is generally the best option. It does not preclude manual prestaging.
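
A minimal sketch of the prestaging element in a hypothetical stage, assuming the value is interpreted as a fraction of the input dataset (so 1 means prestage everything).

  <stage name="reco">                          <!-- hypothetical stage name -->
    <prestagefraction>1</prestagefraction>     <!-- wait for the full input dataset to be prestaged -->
  </stage>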

About non-artroot output streams

In general, nothing special needs to be included in xml files to handle non-artroot output. Non-artroot output files can be produced as the only output of a production batch job, or as a side effect of producing artroot output files. Some things to keep in mind are as follows.

  • Non-artroot output files do not participate in recursive input datasets (unless you manually define your recursive input dataset). You can use automatic recursive input datasets, as long as the production job also produces at least one artroot output stream.
  • Do not add <datastream> elements for non-artroot output.
  • In general, non-artroot output streams that are smaller than the recommended enstore file size (that is, most non-artroot output) should be stored in SFA (small file aggregation) areas of dCache.

About copying and streaming

  • The default behavior of the MicroBooNE sam station and art framework is to stream input root files using xrootd.
  • To copy any kind of input files using gridftp, include xml stage element <schema>gsiftp</schema>.
  • To copy root input files using xrdcp, include xml stage or substage element <initsource>copy_xrootd.sh</initsource>.

Current thinking is that the best practice is to copy root input files using xrdcp (last bullet). Non-root input files must be copied using gridftp (second bullet).
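
A sketch of the two copy options in a hypothetical stage; normally you would include one or the other, not both.

  <stage name="reco">                      <!-- hypothetical stage name -->
    <!-- Copy root input files using xrdcp (current recommended practice): -->
    <initsource>copy_xrootd.sh</initsource>
    <!-- Or copy any kind of input file, including non-root files, using gridftp: -->
    <!-- <schema>gsiftp</schema> -->
  </stage>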

Throttling job submission

  • Use element <submitscript> to specify a script whose exit status determines whether additional jobs are submitted.
  • Predefined submit scripts available to uboonepro are maxjobs.sh and maxjobs2.sh.
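
A sketch using one of the predefined submit scripts in a hypothetical stage, assuming (per the description above) that the launch proceeds only when the script exits with status zero.

  <stage name="reco">                        <!-- hypothetical stage name -->
    <submitscript>maxjobs.sh</submitscript>  <!-- exit status decides whether more jobs are submitted -->
  </stage>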