How to submit and monitor production jobs with POMS

  • The first and most important step is to make sure that your kerberos ID is associated with the icaruspro account. Open and log in with your kerberos ID at this page: and check whether you are able to open the page and edit any of the job types and campaigns. If you don't have permission, you can request access by submitting a ticket to the service desk asking for permission to run production under icaruspro.
  • Create a configuration file. Do this by logging in and setting up the environment necessary to run as icaruspro. This can be done with the following command:
    setup_icaruspro <icaruscode_software_version> <qualifier>

    for example:

    setup_icaruspro v08_37_00 e17

    then cd to the POMS configuration directory as follows:

$ ssh

Last login: Thu Apr 11 12:11:33 2019 from

[01:01:26 ~]$ setup_icaruspro v08_13_02 e17
Setting up LArSoft from "CVMFS":
 - executing '/cvmfs/'
 - appending '/cvmfs/'
Setting up artdaq from "CVMFS":
 - appending '/cvmfs/'
Setting up ICARUS from "CVMFS":
 - prepending '/cvmfs/'

[01:01:47 ~]$ cd /icarus/app/poms_test/cfg/

We currently have all the configuration files needed to run the SBN workshop production in this directory:

[01:02:31 /icarus/app/poms_test/cfg]$ ls -1 *workshop*


Most of the neutrino and single-particle samples have different configuration files created for the gen stage, but because these sample types share a similar production workflow and similar memory requirements, there is a skeleton configuration file, icarus_workshop_standard_singles_neutrino.cfg, that handles the production workflow from the g4 stage through reco. When using this configuration file, each sample is differentiated by a global parameter called global.sample, which tags the directory the output files are written into with the name of the produced sample. To re-run a production sample under a new icaruscode release, you do not have to create a new configuration file; simply change the software version inside the configuration file to the current one. For example, setting

 version = v08_37_00 

will set icaruscode to the v08_37_00 version, and this will be the version used to run the sample production. IMPORTANT: make sure that you always change the software version both inside the configuration file and via the POMS editor. Changing the software version inside the configuration file ensures that the jobs are run with the correct icaruscode version, and having the same software version in the configuration file and the POMS editor ensures the automatic triggering of the next stage in the production workflow. [Other problems can also disrupt automatic triggering and may require manual intervention, but a consistent software version between the configuration file and the POMS editor is the first requirement for automatic triggering.]
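As an illustration, the relevant part of such a configuration file might look like the following sketch. The section layout is an assumption based on the fife_launch-style configuration files used with POMS; only the version, qualifier, and global.sample parameters are taken from this page:

```ini
[global]
; icaruscode release used for every stage of the workflow;
; keep this in sync with the version shown in the POMS editor
version   = v08_37_00
qualifier = e17
; tag used to name the output directory for this sample
sample    = numu_bnb
```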

  • Create a campaign workflow. If you are re-running a previously requested SBN sample, then the campaign is already created.
    If you want to run the same campaign with a different tag, you can do so by using the clone function (click on the clone icon of the respective campaign, the blue highlighted box in the picture below) and then rename the campaign. In the example below, I copied the whole name of the campaign and appended "with_CRTgeomfix" to the end of the campaign name.

This copies the whole campaign production workflow. The next step is to make sure that the new sample is written to a new directory. To do this, click the GUI editor icon of the respective campaign. This opens a GUI editor that displays the production workflow of the campaign. Click on the stage that you want to edit. In this case, edit the Oglobal.sample parameter for each stage of the production workflow with the name of the new tag for the new sample (e.g. "cosmics_muon_3ms_fixedCRTgeom").

If you forget to add this parameter, the files will be written to a “default” directory. Currently, this default directory is /pnfs/icarus/scratch/users/icaruspro/dropbox/mc1/poms_production/MCC1_poms_icarus_prod_numu_bnb_v08_13_02, because the default sample parameter in the configuration file is "numu_bnb". Please remember to use the exact same name for each sample produced within a campaign, regardless of the stage. This keeps all of the files for the different stages of a sample under the same directory.
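As a concrete illustration, the per-stage override might be entered as below. The -O prefix is an assumption based on the fife_launch override convention suggested by the Oglobal.sample spelling above; the sample tag is the example from this page:

```ini
; hypothetical POMS stage-parameter override; enter the identical
; value for every stage so all outputs land in one directory
-Oglobal.sample=cosmics_muon_3ms_fixedCRTgeom
```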
  • Specify the memory needed for each stage. You can do this by changing the parameter Osubmit.memory. For the gen and g4 stages (single-particle/neutrino samples), 1000-2500 MB is usually sufficient to run a job. Cosmic samples usually need much more memory and wall time. You can also see the memory profiling for the different samples in these pages:

These give you an idea of the memory and disk you should request when running a sample. A good rule of thumb is to run a test sample of ~10 jobs, using the new software version, through the whole production flow and record the maximum memory and wall time (Osubmit.expected-lifetime) observed, to use as a baseline when submitting jobs for each stage. This gives you a better estimate of the wall time and memory to request for the production jobs.
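Based on such a test run, you might then pin down per-stage resource requests in the configuration file. This is a hedged sketch assuming a fife_launch-style [submit] section (the parameter names follow the Osubmit.memory and Osubmit.expected-lifetime overrides mentioned above; the values are placeholders):

```ini
[submit]
; placeholder values: request some headroom (e.g. ~20-30%) over the
; maximum memory and wall time observed in the ~10-job test run
memory            = 2500MB
expected-lifetime = 8h
```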

  • (Not required, but it will make your production life less complicated): use the POMS recovery options for jobs that are held due to memory. POMS runs jobs based on the number of jobs we specify at the gen stage. For each stage downstream of that, I have added the following line to the configuration file: n_files_per_job = 1. This ensures that once the files from the previous stage have completed, POMS only runs jobs for the files that were actually located, and when the recovery option runs, it re-submits only the missing files rather than the full number of jobs submitted at the gen stage.
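The line in question, as it might appear in the configuration file for each downstream stage. The surrounding section name is an assumption; the parameter itself is quoted from this page:

```ini
[submit]
; submit one job per completed input file from the previous stage,
; so recovery launches re-submit only the missing files
n_files_per_job = 1
```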
  • Now that you have everything in place, you can start the campaign production by clicking “Launch” or the rocket symbol on POMS.