
h1. Information about job submission to OSG sites

This page captures some of the known quirks about certain sites when submitting jobs there.

h2. What this page is

Most OSG sites will work with the jobsub default requests of 2000 MB of RAM and 35 GB of disk, but some sites enforce stricter limits. Additionally, some sites only support certain experiments rather than the entire Fermilab VO. Here we list the OSG sites where users can submit jobs, along with all known cases where either the standard jobsub defaults may not work or the site only supports certain experiments. The information here is provided on a best-effort basis and is subject to change without notice.
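
If you need something other than the defaults, jobsub lets you set the memory and disk requests and the target site at submission time. The following is a minimal sketch only; the experiment group, site name, script path, and resource values are placeholders, and the available flags may vary with your jobsub version:

<pre>
# Sketch only: EXPERIMENT, SITENAME, and myscript.sh are placeholders
jobsub_submit -G EXPERIMENT \
  --resource-provides=usage_model=OFFSITE \
  --site=SITENAME \
  --memory=2000MB \
  --disk=35GB \
  file://myscript.sh
</pre>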

h2. What this page is NOT

This page is *NOT* a status board or health monitor for the OSG sites. Just because your submission fits within the guidelines here does not mean that your job will start quickly, and this page does not track downtimes at the remote sites. Its sole purpose is to help you avoid submitting jobs with disk/memory/CPU/site combinations that will never work.

h2. Organization

The following table lists the available OSG sites, their Glidein_site name (what you should put in the --site option), which experiment(s) the site supports, and finally any known limitations on disk, memory, or CPU.

*NOTE:* In many cases you may be able to request more than the jobsub defaults and be fine; we have not done detailed testing at each site to determine what the real limits are. If you do try a site and request resources that exceed the jobsub defaults, running

<pre>
condor_q -better-analyze -pool fifebatchgpvmhead1.fnal.gov -name fifebatchN.fnal.gov -l xxxxxxx.x
</pre>

where N is the fifebatch schedd your job went to and xxxxxxx.x is your job ID number, will sometimes give you useful information about why a job doesn't start running (e.g. it may recommend lowering the disk or memory request to a certain value). Please only run this command for one job at a time.
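
For example (with hypothetical values), if your job 1234567.0 went to the fifebatch1 schedd, the command would look like:

<pre>
condor_q -better-analyze -pool fifebatchgpvmhead1.fnal.gov -name fifebatch1.fnal.gov -l 1234567.0
</pre>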

*NOTE 2:* Under supported experiments, "All" means all experiments except for CDF, D0, and LSST. It does include DES and DUNE.


|_. Site Name |_. name for --site option |_. Supported Experiments |_. Known limitations |
|Brookhaven National Laboratory | BNL | All | jobsub defaults are OK; very few opportunistic slots available |
|Caltech T2 | Caltech | All | jobsub defaults are OK |
|Cornell | Cornell | All | jobsub defaults are OK |
|FNAL CMS Tier 1 | FNAL | All | jobsub defaults are OK |
|Czech Academy of Sciences | FZU | NOvA only | request --disk=20000MB or less |
|Harvard |Harvard | NOvA only | jobsub defaults are OK; SL5 only |
|University of Washington | Hyak_CE | All | disabled now due to short glidein lifetime |
|ATLAS Great Lakes Tier 2 (AGLT2) | Michigan | All | jobsub defaults are OK |
|MIT | MIT | All + CDF | jobsub defaults are OK |
|Midwest Tier2 | MWT2 | All | jobsub 1.1.6 defaults are OK (request --memory=2000 or less) |
|Red | Nebraska | All |jobsub defaults are OK |
|Notre Dame | NotreDame | All |jobsub defaults are OK; aim for short jobs due to preemption |
|Tusker/Crane | Omaha | All | jobsub defaults are OK |
|Ohio Supercomputing Center | OSC | NOvA only |jobsub defaults are OK |
|Southern Methodist University | SMU | NOvA only |jobsub defaults are OK; SL5 only |
|Southern Methodist | SMU_HPC | NOvA only |single core jobs should request --memory=2500 or less |
|Syracuse | SU-OG | All | request --disk=9000MB or less |
|Texas Tech | TTU | All but mu2epro and seaquest | jobsub defaults are OK |
|University of Chicago | UChicago | All | linked with MWT2; request --memory=2000 or less |
|University of California, San Diego | UCSD | All | jobsub defaults are OK |
|Grid Lab of Wisconsin (GLOW) | Wisconsin | All | jobsub defaults are OK |
|Western Tier2 (SLAC) | WT2 | uboone only | jobsub defaults are OK |
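
As an illustration of applying one of the limits above, a NOvA submission to FZU would lower the disk request to stay within the listed limit. This is a sketch only; the group and script name are placeholders, not a tested recipe:

<pre>
# Sketch only: reduces the disk request to respect the FZU limit in the table above
jobsub_submit -G nova \
  --resource-provides=usage_model=OFFSITE \
  --site=FZU \
  --disk=20000MB \
  file://myscript.sh
</pre>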