User guide


Overview

Larsoft common batch and workflow tools are contained in ups product larbatch (this redmine), which is built and distributed as part of larsoft. Larbatch tools are built on top of Fermilab jobsub_client batch submission tools. For general information about jobsub_client and the Fermilab batch system, refer to articles on the jobsub wiki and the fife wiki.

No other part of larsoft is dependent on larbatch, and larbatch is not set up as a dependency of the larsoft umbrella ups product. Rather, larbatch is intended to be a dependency of experiment-specific ups products (see this article for instructions on configuring larbatch for a specific experiment).

After setting up ups product larbatch, several executable scripts and python modules are available on the execution path and python path. Here is a list of the more important ones.

  • project.py
    An executable python script that is the main entry point for user interaction. More information can be found below.
  • project_utilities.py
    A python module, imported by project.py, that implements some of the workflow functionality. End users would not normally interact directly with this module. However, a significant aspect of project_utilities.py is that it supplies hooks for providing experiment-specific implementations of some functionality, as described in an accompanying article on this wiki.
  • condor_lar.sh
    The main batch script. Condor_lar.sh is a general purpose script that manages a single invocation of an art framework program (lar executable). Condor_lar.sh sets up the run-time environment, fetches input data, interacts with sam, and copies output data. It is not intended that end users will directly invoke condor_lar.sh. However, one can get a general idea of the features and capabilities of condor_lar.sh by viewing the built-in documentation, either by typing "condor_lar.sh -h" or by reading the file header.

Using project.py

Project.py is used in conjunction with an XML-format project definition file (see below). The concept of a project, as understood by project.py and as defined by the project definition file, is a multistage linear processing chain involving a specified number of batch workers at each stage.

Internal documentation

Refer to the header of project.py or type "project.py --help". The internal documentation is always kept up to date when project.py command line options are changed.

Use cases

In a typical invocation of project.py, one specifies the project file (via option --xml), the stage name (via option --stage), and one or more action options. Here are some use cases for invoking project.py.

  • project.py -h or project.py --help
    Print built-in help (lists all available command line options).
  • project.py -xh or project.py --xmlhelp
    Print built-in xml help (lists all available elements that can be included in project definition file).
  • project.py --xml xml-name --status
    Print global summary status of the project.
  • project.py --xml xml-name --stage stage-name --submit
    Submit batch jobs for specified stage.
  • project.py --xml xml-name --stage stage-name --check
    Check results from specified stage (identifies failed jobs). This action assumes that the art program produces an artroot output file.
  • project.py --xml xml-name --stage stage-name --checkana
    Check results from specified stage (identifies failed jobs). This version of the check action skips some checks done by --check that only make sense if the art program produces an artroot output file. Use this action to check results from an analyzer-only art program.
  • project.py --xml xml-name --stage stage-name --makeup
    Submit makeup jobs for failed jobs, as identified by a previous --check or --checkana action.
  • project.py --xml xml-name --stage stage-name --clean
    Delete output for the specified stage and later stages. This option can be combined with --submit.
  • project.py --xml xml-name --stage stage-name --declare
    Declare successful artroot files to sam.
  • project.py --xml xml-name --stage stage-name --upload
    Upload successful artroot files to enstore.
  • project.py --xml xml-name --stage stage-name --define
    Create sam dataset definition.
  • project.py --xml xml-name --stage stage-name --audit
    Check the completeness and correctness of a processing stage using sam parentage information. For this action to work, input and output files must be declared to sam.

Project File Structure

The project file is an XML file that contains a single root element of type "project" (enclosed in "<project name=project-name>...</project>"). Inside the project element, there are additional subelements, including one or more stage subelements (enclosed in "<stage name=stage-name>...</stage>"). Each stage element defines a group of batch jobs that are submitted together by a single invocation of jobsub_submit.
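
As a rough sketch (the release, stage names, fcl file names, and output paths below are placeholders, not working settings), a complete project file has the following overall shape.

<?xml version="1.0"?>
<!DOCTYPE project [
<!ENTITY release "v02_05_01">
<!ENTITY name "my_project">
]>

<project name="&name;">

  <!-- Run-time environment (larsoft subelement, described below). -->
  <larsoft>
    <tag>&release;</tag>
    <qual>e6:prof</qual>
  </larsoft>

  <!-- Project options. -->
  <numevents>1000</numevents>

  <!-- First stage: batch jobs running the (placeholder) fcl file gen.fcl. -->
  <stage name="gen">
    <fcl>gen.fcl</fcl>
    <outdir>/path/to/output/&name;/gen</outdir>
    <numjobs>10</numjobs>
  </stage>

  <!-- Second stage: by default takes the output of the previous stage as input. -->
  <stage name="reco">
    <fcl>reco.fcl</fcl>
    <outdir>/path/to/output/&name;/reco</outdir>
    <numjobs>10</numjobs>
  </stage>

</project>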

Examples

Example XML project files used by microboone (from the ubutil product) can be found here.

Internal documentation

Refer to header of project.py or type "project.py --xmlhelp". Internal documentation is always kept up to date when XML constructs are added or changed.

XML header section

The initial lines of an XML project file should follow a standard pattern. Here is a typical example header.

<?xml version="1.0"?>
<!DOCTYPE project [
<!ENTITY release "v02_05_01">
<!ENTITY file_type "mc">
<!ENTITY run_type "physics">
<!ENTITY name "prod_eminus_0.1-2.0GeV_isotropic_uboone">
<!ENTITY tag "mcc5.0">
]>

The significance of the header elements is as follows.

  • The XML version
    Copy the above version line exactly, namely,
    <?xml version="1.0"?>
    
  • The document type (DOCTYPE keyword).
    The argument following the DOCTYPE keyword specifies the "root element" of the XML file, and should always be "project."
  • Entity definitions
    Entity definitions, which occur inside the DOCTYPE section, are XML aliases. Any string that occurs repeatedly inside an XML file is a candidate for being defined as an entity. Entities can be substituted inside the body of the XML file by enclosing the entity name inside &...; (e.g. &release;).
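
For example, an entity defined in the DOCTYPE section above can be referenced later in the body of the file, where the XML parser expands it to the defined string.

<!ENTITY release "v02_05_01">    <!-- definition, inside the DOCTYPE section -->
...
<tag>&release;</tag>             <!-- expands to <tag>v02_05_01</tag> -->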

Project Element

Each project definition file should contain a single project element enclosed in "<project name=project-name>...</project>." The name attribute of the project element is required.

The content of the project element consists of other XML subelements, including the following.
  • A single subelement with tag "larsoft," which defines the run-time environment.
  • Project option subelements (described below).
  • One or more stage subelements.

Larsoft subelement

Each project element is required to contain a single subelement with tag "larsoft" (enclosed in "<larsoft>...</larsoft>"). The larsoft subelement defines the batch run-time environment. The larsoft subelement may contain simple text subelements, of which there are currently three:

  • <tag>...</tag>
    Larsoft release version.
  • <qual>...</qual>
    Larsoft release qualifier.
  • <local>...</local>
    Path of user's local test release directory or tarball.

The local subelement is optional. Here is how a typical larsoft subelement might appear in a project definition file.

<larsoft>
  <tag>&release;</tag>
  <qual>e6:prof</qual>
</larsoft>

Note in this example that the larsoft version is defined by an entity "release," which should be defined in the DOCTYPE section.

Project options

Project options are text subelements of the project element with tags other than "larsoft" or "stage." Here are the project options (this was the full list when this wiki page was written); a sketch showing how these options might appear follows the list. The full list of project options (and all defined XML constructs) can always be found by typing "project.py --xmlhelp."

  • <group>...</group>
    Should contain the standard experiment name (for microboone use "uboone"). If missing, environment variable $GROUP is used.
  • <numevents>...</numevents>
    Total number of events to process.
  • <numjobs>...</numjobs>
    Number of parallel worker jobs (default 1). Can be overridden in individual stages.
  • <os>...</os>
    Comma-separated list of allowed batch OSes (e.g. "SL5,SL6"). This option is passed directly to the jobsub_submit command line option --OS. By default, jobsub decides.
  • <resource>...</resource>
    Specify jobsub resources (command line option "--resource-provides=usage_model="). Default is "DEDICATED,OPPORTUNISTIC". For OSG specify "OFFSITE." Can be overridden in individual stages.
  • <server>...</server>
    Specify jobsub server. Expert option, usually not needed.
  • <site>...</site>
    OSG site(s) (comma-separated list). Use with "<resource>OFFSITE</resource>." By default, jobsub decides, which usually means "any site."
  • <filetype>...</filetype>
    Sam file type (e.g. "data" or "mc"). Default none.
  • <runtype>...</runtype>
    Sam run type (e.g. "physics"). Default none.
  • <merge>...</merge>
    Histogram merging program. Default "hadd -T." Can be overridden in each stage.
  • <fcldir>...</fcldir>
    Specify additional directories in which to search for top-level fcl job files. Project.py searches $FHICL_FILE_PATH and the current directory by default.
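
As an illustrative sketch (values are placeholders, and the entities are the ones from the example header above), project options appear as simple text subelements directly inside the project element.

<project name="&name;">
  ...
  <group>uboone</group>
  <numevents>10000</numevents>
  <numjobs>100</numjobs>
  <resource>DEDICATED,OPPORTUNISTIC</resource>
  <filetype>&file_type;</filetype>
  <runtype>&run_type;</runtype>
  <merge>hadd -T</merge>
  ...
</project>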

Stage Subelements

Each project element should contain one or more stage subelements enclosed in "<stage name=stage-name>...</stage>". The name attribute of the stage subelement is required, and should be different for each stage. The stage element should contain stage options in the form of simple text subelements. Here are the stage options (a sketch of a complete stage element appears at the end of this section):

  • <fcl>...</fcl>
    Top-level fcl job file (required). Can be specified as full or relative path.
  • <outdir>...</outdir>
    Output directory full path (required). The output directory should be accessible interactively on the submitting node and grid-write-accessible via ifdh cp from the batch worker.
  • <numjobs>...</numjobs>
    Number of parallel worker jobs. If not specified, inherit from project options.
  • <targetfilesize>...</targetfilesize>
    If specified, this option may override the number of worker jobs (option numjobs) downward, so that the estimated size of each output file reaches the specified target.

The following three options deal with where this processing stage gets its input data. Specify no more than one input option. You can also omit all input options, in which case output data from the previous stage is pipelined to this stage.

  • <inputfile>...</inputfile>
    Specify a single input file full path.
  • <inputlist>...</inputlist>
    Specify input file list (a file containing a list of input files, one per line, full path).
  • <inputdef>...</inputdef>
    Specify input sam dataset definition.

The following options allow job customization via user-written scripts. The script location should be specified as an absolute path or as a path relative to the current directory. Any specified job customization scripts are copied to the work directory and from there are copied to the batch worker.

  • <initscript>...</initscript>
    Worker initialization script (condor_lar.sh --init-script).
  • <initsource>...</initsource>
    Worker initialization source script (condor_lar.sh --init-source).
  • <endscript>...</endscript>
    Worker finalization script (condor_lar.sh --end-script).

Additional options.

  • <defname>...</defname>
    Sam dataset definition name for output files.
  • <merge>...</merge>
    Histogram merging program. If not specified, inherit from project options.
  • <resource>...</resource>
    Specify jobsub resources (command line option "--resource-provides=usage_model="). If not specified, inherit from project options.
  • <lines>...</lines>
    Specify an arbitrary condor command via jobsub_submit --lines= (expert option).
  • <site>...</site>
    OSG site(s) (comma-separated list). If not specified, inherit from project options.
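
Here is a sketch of how a complete stage element might look. The fcl file name, output path, input dataset name, initialization script, and output dataset definition name below are placeholders; which options a stage actually needs depends on the use case.

<stage name="reco">
  <fcl>reco.fcl</fcl>
  <outdir>/path/to/output/&name;/reco</outdir>
  <numjobs>50</numjobs>

  <!-- Input: specify at most one of inputfile, inputlist, inputdef.
       If all are omitted, input is taken from the previous stage. -->
  <inputdef>my_input_dataset</inputdef>

  <!-- Optional job customization and sam-related options. -->
  <initsource>setup_env.sh</initsource>
  <defname>&name;_&tag;_reco</defname>
</stage>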