OPOS Submission Instructions¶
NOvA MC Submission instructions¶
For submission of any NOvA Monte Carlo you will use a custom-designed submission script called submit_mc_gen.
It prints a help menu if you give it a bad option, or if you simply run it with no arguments.
It lives in $NOVAGRIDUTILS_DIR/bin/ (defined after setting up the NOvA software).
It can run with either the Production role (-role Production, the default) or the Analysis role (-role Analysis).
This does not need to be set for novapro production-style jobs; it is a convenience for non-production users.
The script first sets up default environment variables and local function variables.
It then parses the command-line options, checking for required arguments and overriding defaults where specified.
Once it has all the options, it proceeds with submission in two phases.
First it checks that the dataset it has been passed (-d <dataset_name>) is real and starts a SAM project, unless you pass
it an already running project (-p <projectname>). The project is named $USER_$JOBNAME_$DATE, where
$USER is the submitting user, $JOBNAME comes from -j <job_name>, and $DATE is taken from the machine date (yymmdd_hhmmss_nanoseconds).
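As a hedged sketch of the naming scheme above (the variable names and commands here are illustrative, not the script's actual internals), the default project name could be assembled like this:

```shell
# Illustrative sketch of the default SAM project name, $USER_$JOBNAME_$DATE.
# Assumes GNU date for the %N (nanoseconds) format; not the script's real code.
USER_NAME="$(whoami)"
JOB_NAME="my_mc_test"                    # would come from -j <job_name>
DATE_STAMP="$(date +%y%m%d_%H%M%S_%N)"   # yymmdd_hhmmss_nanoseconds
SAM_PROJECT_NAME="${USER_NAME}_${JOB_NAME}_${DATE_STAMP}"
echo "$SAM_PROJECT_NAME"
```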
Then, given a choice of where to submit (-w <where>; the most relevant options are offsite, fermi, and mixed), it splits the
dataset definition files 50:50, submitting half to X sites and half to Y sites. This split is of course only needed for mixed,
not for offsite or fermi, but this aspect hasn't been changed yet, so you will get two job IDs for each submission. Changing this is on the TODO list.
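The 50:50 split described above can be sketched as follows (a minimal illustration with dummy file names, not the script's actual logic):

```shell
# Split a list of dataset definition files in half, sending each half
# to a different set of sites (dummy names; illustrative only).
FILES=(f1 f2 f3 f4 f5 f6)
N=${#FILES[@]}
HALF=$(( (N + 1) / 2 ))
FIRST=("${FILES[@]:0:HALF}")    # submitted to site set X
SECOND=("${FILES[@]:HALF}")     # submitted to site set Y
echo "X: ${FIRST[*]}"
echo "Y: ${SECOND[*]}"
```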
Also, you might want to check that all the offsite locations are valid before submitting offsite; they are currently
set in a list where the default variables are defined, and can be updated as necessary.
So we have a project and a destination; the script now constructs the jobsub_submit command with the required arguments.
Most options are fixed, such as the SAM variables and submitting one file per job, following the normal NOvA standards.
You are required to pass it the following:
- -r <tag_version> : e.g. -r S16-01-07
- -dt <data_tier> : e.g. -dt g4 (rock singles use the g4 data tier, as do ND overlay files; otherwise use -dt artdaq)
- -d <dataset_name> : name of dataset that holds the input fcl files
- -j <job_name> : user-specified jobname for own accounting - used as part of SAM_PROJECT_NAME
- -w <where> : where you are submitting: offsite, fermi, mixed, and a few other site-specific options
- -n <num_jobs> : number of jobs to submit
A couple of extra options may be useful.
When testing, you can pass "-t" to steer jobs to a non-pnfs location without the hash directory structure and to run on only 10 events.
It requires the following option:
- -out <output_directory> : steers output to a user-specified location. For non-test jobs it expects the hash directory structure.
Finally, you specify the number of jobs to run with -n <num_jobs>; you don't need this with the "-t" option.
I grappled with a nice way to show the output on the command line while also sending it to a log from within the script,
but it is far easier to just pipe it through tee to a file of your choosing. Apologies for the information spam.
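For example, tee writes the stream to a file while still showing it on screen (echo stands in for the real submit_mc_gen invocation, and the log path is arbitrary):

```shell
# tee duplicates stdout: one copy to the terminal, one to the log file.
# echo is a stand-in for the actual submit_mc_gen command.
echo "submission output would appear here" | tee /tmp/submit_demo.log
```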
ND Rock Singles¶
For ND Rock Singles this is the command you would use for submission.
$NOVAGRIDUTILS_DIR/bin/submit_mc_gen -r S16-01-07 -dt g4 \
  -d fcl_secondaries_S16-01-07_nd_genie_fhc_nonswap_test-rock_secondaries \
  -j <job_name> -w offsite -n <num_jobs> | tee <some_file_name>
You would specify the <job_name> (whatever you like) and <num_jobs> (start with 100).
You need NovaGridUtils v1_50.