Submitting Simulation fcls

A lot of work has been done by Gavin, the Computing Sector, and other folks to make running production jobs as painless as possible. This is a good thing, since the underlying systems involved in creating the wrapper scripts for a grid job, sending the job’s relevant files to an available grid machine, monitoring it, setting up the NOvA software environment to get the relevant libraries, and then sending the output files back to Fermilab to be filed on tape, can be VERY complicated (as indicated by all the steps I just listed!). At some point soon you’ll want to learn some of the details of these pieces and how they all fit together, so that you can debug problems as they occur, or at least know to contact the right person. For the moment, however, I’ll just give you a broad overview and tell you how to run the script which Gavin wrote, submit_mc_gen. I’ll also give you the relevant instructions for monitoring your jobs, and for cleaning them up when things go bad.

Part 1: Some basic concepts and terms

Gavin, Adam A, and other members of the production group have written a LOT about how the various elements of our grid submission and production tools fit together. Therefore, I’ll start by linking to a few useful posters and other documents.

First, here’s a quick guide by Gavin on how NOvA has developed our production code to work with the Open Science Grid (OSG):

Second, here’s a poster by Adam A at the recent CHEP conference, describing the basic structure of a job (I’ll explain some of the terms in a minute):

Finally, here’s a poster by Gavin from the same conference, which describes how we transfer novasoft and its libraries to machines on the grid, as well as how we maintain our software releases (useful to know!):

Now, a quick summary. As you probably know, much of NOvA’s file production (including simulations, but also calibration, reconstruction, CAF production, etc.) relies on so-called distributed computing on the Fermilab grid. We have too many files to process on a single machine interactively (i.e., just on the command line, by typing nova -c <job> etc.), so we farm out the jobs to other available computers in “batch”, both at Fermilab and other sites (via the OSG). At the end, we retrieve the final output, in this case, by writing back the files to Fermilab (Enstore/tape for our MC) and also putting them in a SAM dataset.

When you submit a job, you are submitting via a utility called jobsub (or jobsub_client, if we ever upgrade to the newest version). Each job is sent to the grid and assigned to an available machine, and the transfer of input and output data is taken care of as efficiently as possible by IFDH (Intensity Frontier Data Handling) tools. On the machine itself, novasoft is set up using CVMFS, which caches those NOvA libraries which are needed for use by the job. For production jobs, we’re taking files from and writing them back to Fermilab’s Enstore tape storage, organized by dCache (as opposed to NOvA’s BlueArc locations, which you access using regular Unix cp from /nova/data/).

Throughout this entire process, the SAM file handling system both keeps track of files and also facilitates the monitoring of grid jobs. Central to SAM is the idea of Metadata. In the previous lesson, on make_sim_fcl, when the script created a dataset, it was based on the Metadata properties of the files being created. This includes information on which detector, the generator used, the horn current - any information useful for identifying what these files are. If you pop open one of the fcl files created last time, you’ll see that this Metadata is saved as a series of Art configuration parameters. In an output root file, this information is saved using an Art Metadata module. This system allows us to find the correct set of files in storage, and track what process a file came from, what its properties are, etc., as it moves through our production system.
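To make the metadata idea concrete, here’s a rough sketch in Python (the field names are illustrative, not NOvA’s actual metadata schema) of treating a file’s metadata as a key-value map and turning it into a SAM-style dimensions query string:

```python
# Hypothetical metadata for one simulation file (field names are
# illustrative, not the real NOvA schema).
metadata = {
    "detector": "fd",
    "generator": "genie",
    "horn_current": "fhc",
    "swap": "nonswap",
    "data_tier": "fcl",
}

def to_dimensions(md):
    """Build a SAM-style 'field value and field value ...' query string
    from a metadata dict, with fields in a stable (sorted) order."""
    return " and ".join(f"{k} {v}" for k, v in sorted(md.items()))

print(to_dimensions(metadata))
# data_tier fcl and detector fd and generator genie and horn_current fhc and swap nonswap
```

SAM evaluates queries like this against every file’s stored metadata, which is how a dataset definition selects its files without any hard-coded file list.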

When you submit a set of grid jobs (as we’ll describe in the next section), SAM begins a Project. This project requests all files matching the requested SAM dataset. Via samweb (the web service), you can keep track of what’s happening with this project - which jobs have had their input fcl (or root) files transferred and are currently running, which have been successfully finished, and which have ended incomplete or in error. Each job is assigned a “jobid” number, which can be used to fetch log files, kill jobs, etc. As jobs finish, the output files are assigned (based on their Metadata) to a new dataset in a new data “tier”. For instance, if your earlier fcl files went to the “fcl” tier, the output files will now go to the “artdaq” tier (this was the “prod_daq_*” dataset created in the previous guide by make_sim_fcl).

One important feature of SAM and grid submission is the concept of a “draining” dataset. In the last lesson, you saw that make_sim_fcl created four datasets: one fcl one, one daq one, and a “draining” version of each. These draining sets are designed to allow you to easily run recovery jobs. Let’s say, for instance, that you run 1000 jobs, but only 800 of them complete successfully (the other 200 failed due to CVMFS issues, having been submitted to crappy machines, or whatever). It’s important to remember that SAM doesn’t run based on a set filelist, but rather based on datasets - you give it a dataset, and it will choose files from that dataset based on the most efficient access to the data. How, therefore, do you run those last 200 files? The answer is “by using the draining dataset.” Unlike the vanilla fcl dataset, the fcl draining dataset includes additional information based on any so-called “child” files in SAM. Basically, SAM looks into the daq dataset, sees which files can be connected back to a parent fcl, and removes the corresponding fcl file from the fcl draining dataset. So, while “prod_fcl_*” will still contain 1000 total files, “prod_fcl_*_draining” now only has the 200 which haven’t completed successfully yet.
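The draining logic can be sketched as simple set subtraction (pure illustration - SAM actually derives this from file parentage metadata, not from Python sets, and the file names here are made up):

```python
# The full fcl dataset: 1000 input files (made-up names).
fcl_files = {f"sim_{i}.fcl" for i in range(1000)}

# Suppose 800 daq output files exist, each recording its parent fcl file
# in its metadata. These are the parents with a successful child.
daq_parents = {f"sim_{i}.fcl" for i in range(800)}

# The draining dataset is every fcl file with no successful child yet.
draining = fcl_files - daq_parents

print(len(fcl_files))  # 1000: the vanilla fcl dataset is unchanged
print(len(draining))   # 200: only the files still needing a (re)run
```

Because the draining definition is computed from parentage at query time, you can resubmit against it repeatedly and it will keep shrinking as recoveries succeed.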

Long story short: run the “draining” fcl dataset to pick up any jobs which didn’t successfully complete on previous attempts.

Note that like the creation of the fcl files, the output files from your simulation jobs will get copied back to Fermilab and SAM via the FTS service. This has been extremely backed up recently with tens (or hundreds…) of thousands of files, so don’t be surprised if your jobs appear to have completed successfully, but aren’t in their output dataset yet.

Part 2: Running create_mcgen_cfg

As of Jan 2016, there is support for mcgen jobs. General support can be found here [[]]. To enable mcgen support, you must use -c mcgen in your configuration file. A new script, create_mcgen_cfg, exists to help create the configuration file for you. If you're comfortable with writing the configuration file yourself, this script probably won't be too useful for you. It's still useful for looking at some of the default options, though. Additionally, this script is just a heavily modified version of submit_mc_gen, so if you are familiar with the options from submit_mc_gen you can use the same options with create_mcgen_cfg.

create_mcgen_cfg exists in the NovaGridUtils package. For support, please email PaulR.

Here's a list of options one can use in create_mcgen_cfg:

-h/--help This will output the full list of options.

-dt/--datatier This is the output tier you want from your mcgen job. If you want to output g4, you would use -dt g4; for artdaq, -dt artdaq.

-d/--dataset This is the dataset you want to process. Typically generated through make_sim_fcl ahead of time.

-r/--release This is the tag/release you want to run your jobs in.

-j/--jobname : The name of the job you’re creating, for the project status page. The script will add your user name to the beginning and a time stamp at the end. Try to use a name that describes your job, e.g. PROD_ND_CRY_FIRSTATTEMPT or something.

-w/--where : Where you want the jobs to run. This can either be “offsite”, where the jobs will go to the places listed in DEDICATED_OFFSITES, or “fermi”, where the jobs will stay on Fermilab sites. Unlike submit_mc_gen, this script does not support mixed running.

-n Number of files you want to run over. If this number is greater than the number of files in the dataset, the job will wait until that's no longer the case, so make sure you submit a number less than or equal to the number of files in the dataset.

Also, with the integration of, mc jobs now have multifile support. Currently, the default is set to 1 file per job, but you can change that to whatever you desire in the output .cfg file.

Part 3: Running submit_mc_gen (old method, see part 2 for the newest way to submit mc jobs)

Okay, that might all have been a bit confusing. Running the production jobs themselves might make everything a bit more clear.

For this, you’ll want to download a tagged copy of ProductionScripts to your FA14-10-03x.a test release:
addpkg -t ProductionScripts FA14-10-03x.a

Then make it:
gmake ProductionScripts.all

The script you are interested in is called submit_mc_gen, and is located in ProductionScripts/submission. It’s written by Gavin, so if I’m not around or don’t know the answer to a question, he will. Typing submit_mc_gen -h will give you the definitions of various options (though I’ll explain a lot of them here). Try opening it up and looking at the “default” options, starting on line 111. Here you can see some of the basic assumptions of the script. Under BUILD, you can see that we always use the optimized version of the build, for speed. The script defines the type of jobsub used (WHICH_JOBSUB), the user (novapro), the list of Fermilab sites (FNAL_SITES) and the list of offsite OSG sites (DEDICATED_OFFSITES; ignore “ALL_OFFSITES”). DEDICATED_OFFSITES in particular you may need to edit now and then, as we add and subtract usable collections of machines. We also define the OS of the machines we’re looking for as Scientific Linux 5 or 6 (OS) (SL6 is the one we actually use, I believe).

Except for changing this site list, you shouldn’t need to edit the actual content of submit_mc_gen that often. Remember to compile the script again if you do, however, with gmake!

Instead, like make_sim_fcl, most of the work will be done using command-line-arguments. Let’s look at the options!

Here are the options you’ll find yourself using most often for submit_mc_gen:

-u : User. The script will want to send you an email so you can keep track of how job submission went. Put in your FNAL username here. So, for instance, if I submit with “-u rtoner”, I’ll get an email at .

-r : Release. FA14-10-03x.a or whichever you’re using (should be the same as you used with make_sim_fcl).

-dt : Data tier. The data tier of the output files. This can either be artdaq (for regular simulation files) or g4 (for geant4-only files).

-d : The dataset you want the project to call up. So, in your case, the dataset containing your fcl files (regular or draining).

-j : The name of the job you’re creating, for the project status page. The script will append your username to the front, so you can find it easily in samweb’s list of jobs. I usually do something which reminds me exactly what I’m doing, like the dataset name followed by “_somedescriptivestring”.

-v : The iteration of the dataset. I don’t think this actually does anything, but I tend to make it match anyway, just in case, heh.

-w : Where you want the jobs to run. This can either be “offsite”, where the jobs will go to the places listed in DEDICATED_OFFSITES, or “fermi”, where the jobs will stay on Fermilab sites. Or, it can be “mixed”, and 50% of jobs will go to offsite and 50% to FNAL. We’re supposed to be running these jobs offsite, so I’ll use that in my examples, though oftentimes it can be much faster to use fermi.

-n : How many files to run from the dataset.

So, for instance, if I’m running a regular FD MC job, my submission script will look like this:
submit_mc_gen -u rtoner -r FA14-10-03x.a -dt artdaq -d prod_fcl_FA14-10-03x.a_fd_genie_fhc_nonswap -j FDMC_nonswap_rtoner_prod_fcl_FA14-10-03x.a_fd_genie_fhc_nonswap_firstpass -v 1 -w mixed --cvmfs nova -n 1500

So this will use FA14-10-03x.a, with output to the artdaq tier, creating a job of name rtoner_FDMC_nonswap_rtoner_prod_fcl_FA14-10-03x.a_fd_genie_fhc_nonswap_firstpass, for version 1, and using the nova CVMFS server. It will run 1500 files, and submit 750 of these to offsite machines, and 750 to FNAL machines. I will get an email at with the submission information.
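The 50/50 arithmetic of mixed mode can be sketched trivially (one assumption on my part: how the real script rounds an odd count may differ from what I do here):

```python
def split_mixed(n):
    """Split n jobs 50/50 between offsite and FNAL for 'mixed' mode.
    (Odd counts give FNAL the extra job here; the real submit_mc_gen
    may round the other way.)"""
    offsite = n // 2
    return offsite, n - offsite

print(split_mixed(1500))  # (750, 750)
```

So a -n 1500 mixed submission lands as two clusters of 750 jobs each, which is why you’ll see two separate jobids in the next part.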

If only 1400 of those files completed, I would recover the remaining files using the draining dataset (using the new, remaining number of files, and a new project name):
submit_mc_gen -u rtoner -r FA14-10-03x.a -dt artdaq -d prod_fcl_FA14-10-03x.a_fd_genie_fhc_nonswap_draining -j FDMC_nonswap_rtoner_prod_fcl_FA14-10-03x.a_fd_genie_fhc_nonswap_draining1 -v 1 -w mixed --cvmfs nova -n 100

Near detector overlays require some additional options. First, you need to specify the “g4” dataset instead of the “artdaq” one - the output files from generation will initially be geant4 (for combination with the rock singles), before going through the rest of the photon transport, etc. packages (this is all accomplished by the overlay code, which is a separate set of code which I won’t describe here). Second, you also need to specify the overlay dataset to use:
--overlay : Specify the overlay dataset (e.g., the rock singles dataset).

So, for instance, here is my submission code for an initial round of ND rock overlays (only 2000 out of the total 20,000), using our current rock dataset, g4_secondaries_FA14-10-03_nd_genie_fhc_nonswap_geantonly (I think we’re using this until otherwise told):
submit_mc_gen -u rtoner -r FA14-10-03x.a -dt g4 -d prod_fcl_FA14-10-03x.a_nd_genie_fhc_nonswap -j rtoner_prod_fcl_FA14-10-03x.a_nd_genie_fhc_nonswap_firstpass -v 1 -w offsite --cvmfs nova -n 2000 --overlay g4_secondaries_FA14-10-03_nd_genie_fhc_nonswap_geantonly

Check with PaulR before running overlays, since I believe there may have been some changes to the procedure.

Part 4: Monitoring and debugging jobs

Let’s look at what actually happens when you submit a script like the ones above (let’s ignore the overlay ones for now). The output to the screen might initially look like gibberish, but it actually contains some useful information which you can use to monitor your grid jobs.

In this case, I submitted recovery jobs for my ideal FD FHC nonswap jobs. This particular example is a recovery job using the draining dataset:
submit_mc_gen -u rtoner -r FA14-10-03x.a -dt artdaq -d prod_fcl_FA14-10-03x.a_fd_genie_fhc_nonswap_ideal_draining -j FDMC_rtoner_prod_fcl_FA14-10-03x.a_fd_genie_fhc_nonswap_ideal_draining1 -v 1 -w offsite --cvmfs nova -n 38

One of the first things you’ll see (in blue text) is the script checking for the existence of the relevant dataset:
Setting up environment...
Making sure dataset prod_fcl_FA14-10-03x.a_fd_genie_fhc_nonswap_ideal_draining is real...
prod_fcl_FA14-10-03x.a_fd_genie_fhc_nonswap_ideal_draining exists. We're happy!

You’ll next see a lot of text describing the complicated jobsub command that the script is whipping up. See the text under “INFO: JOBSUB command is:”, “INFO: JOBSCRIPT is:”, “INFO: NOVA command is:”, etc.

There’ll be some information confirming where you’re submitting:
INFORMATIONAL: We have chosen to submit via nova
INFORMATIONAL: We have chosen to submit in offsite mode
INFORMATIONAL: Will proceed to submit to the following sites: 50% to FZU,Michigan,Wisconsin,OSC, 50% to FZU,Michigan,Wisconsin,OSC

Now for the useful information! submit_mc_gen will submit these jobs in two batches to two separate clusters, with separate JOBID numbers. So twice you’ll see something like this:
Submitting job(s)...................
19 job(s) submitted to cluster 1895190.
JobsubJobId of first job:

This jobid can be used to refer to either all the jobs in the cluster, or individual ones. To access the jobs in bulk, use (i.e., subtract the .0). To access an individual job, do , where N is an integer ID’ing the individual job (N=0 is the very first job, for instance).
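For illustration, here’s a small sketch of pulling apart a jobsub-style id of the form CLUSTER.PROCESS@SCHEDD (the cluster number and schedd name in the example are made up, and older jobsub versions may format ids differently):

```python
def parse_jobid(jobid):
    """Split a jobsub-style id 'CLUSTER.PROC@SCHEDD' into its pieces.
    (The example id below is made up for illustration.)"""
    job, _, schedd = jobid.partition("@")
    cluster, _, proc = job.partition(".")
    return cluster, proc, schedd

cluster, proc, schedd = parse_jobid("1895190.0@fifebatch1.fnal.gov")
print(cluster, proc, schedd)
# Dropping the ".PROC" part refers to the whole cluster; keeping it
# (".0", ".1", ...) refers to one individual job in that cluster.
```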

Finally, you’ll also get an html address for the station monitor page:

This is just a regular webpage, so paste it into your browser bar! On this page, you can see which jobs are currently running, completed, or failed (including the jobid for each process!). If you’re lucky, after a long time you’ll have a bunch of dark green bars, showing that the jobs have finished running and transferring their output, and you can check for them in the output dataset.

When submitting jobs, you should submit them while logged in as your username. The script handles the part of submitting as novapro for you.

But wait, what happens if things go very wrong? Maybe a few jobs failed, or maybe they all failed, and you want to see why. Maybe you know things are going badly, and you want to cleanly end the job. Here are some useful tools, mostly jobsub commands, which use the JOBID number to access a particular process or all of them.

Here’s the command to get information on what’s happening with a job (in this case, all the jobs for a given ID; for all these commands, add the .N number if you want information only for one job):
jobsub_q --jobid= --debug -G nova

Here’s how you retrieve the log files for those jobs and put them in a directory:
jobsub_fetchlog -G nova --unzipdir=<directory> --role Production --jobid=

If things aren’t starting at all, you can check the availability for an individual job using this command from Jeny Teheran Sierra (I’ve never actually tried it):
condor_q -better -pool -name 1476239.0

This page also gives information on the status of available machines and what’s waiting right now (it will initially only load a few plots; wait 30 or so seconds for it to also load info on individual users and machines):

Those are the commands you’ll mostly use for debugging. What about killing a job?

Use this to kill a set of jobs on a cluster in jobsub (again, add .N for an individual job):
jobsub_rm --role=Production --group nova --jobid=

Sometimes you might want to halt the project itself (e.g., if it’s never actually going to complete). Here’s the command for this; the project name is the one you find on the samweb project page (so remember the username_ part at the beginning in addition to the -j name you gave it):
samweb stop-project <Project name>

Note that the jobs will automatically terminate if things run for more than 24 hours. However, it is recommended to get in the habit of manually cleaning up your jobs after they finish, i.e. stop the project and kill all processes. Otherwise, some stray jobs, like 1 out of 10000 or so, will get stuck and continuously run, wasting slots and CPU hours.
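As a sketch of that cleanup habit, here’s a little Python that just assembles the stop-project and jobsub_rm commands for a finished submission (the project name and jobids are placeholders; nothing is executed here):

```python
def cleanup_commands(project, jobids):
    """Return the shell commands to stop a SAM project and remove its
    grid job clusters. Placeholder project/jobids; this only builds
    the command strings, it doesn't run them."""
    cmds = [f"samweb stop-project {project}"]
    cmds += [f"jobsub_rm --role=Production --group nova --jobid={j}"
             for j in jobids]
    return cmds

# Remember mixed/two-batch submissions have two cluster ids to clean up.
for cmd in cleanup_commands("rtoner_FDMC_example", ["1895190", "1895191"]):
    print(cmd)
```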

Part 5: Examples

Okay, that should be all the information you need to run some jobs of your own. Let’s do some test jobs!

Before you run any of these, run “kx509” on the command line to get your certificate sorted.

Let’s start by running your test fcls from last time, in the dataset prod_fcl_FA14-10-03x.a_fd_cry_all_firstfcltest. There were 10 of these. Let’s send them offsite. Let’s give the job the complicated name <username>_FDCRY_prod_fcl_FA14-10-03x.a_fd_cry_all_firstfcltest_firstpass. You would do this using the following command:
submit_mc_gen -u <your username> -r FA14-10-03x.a -dt artdaq -d prod_fcl_FA14-10-03x.a_fd_cry_all_firstfcltest -j FDCRY_prod_fcl_FA14-10-03x.a_fd_cry_all_firstfcltest_firstpass -v 1 -w mixed --cvmfs nova -n 10

Hopefully, this all works! Write down somewhere which clusters it submitted to (remember there’ll be two jobids), and open the address of the samweb page in your browser. Now WAIT. This may take a while. If it takes more than 24 hours, tell me. Once the processes are done, check prod_daq_FA14-10-03x.a_fd_cry_all_firstfcltest for the output files.