
h1. Information about job submission to OSG sites

This page captures some of the known quirks about certain sites when submitting jobs there.

h2. What this page is

Most OSG sites will work with the jobsub default requests of 2000 MB of RAM and 35 GB of disk, but some sites have stricter limits. Additionally, some sites support only certain experiments rather than the entire Fermilab VO. Here we list the OSG sites where users can submit jobs, along with all known cases where either the standard jobsub defaults may not work or the site supports only certain experiments. This information is provided on a best-effort basis and is subject to change without notice.

h2. What this page is NOT

This page is *NOT* a status board or health monitor for the OSG sites. Just because your submission fits the guidelines here does not mean that your job will start quickly, and the page does not track downtimes at the remote sites. Its sole purpose is to help you avoid submitting jobs with disk/memory/CPU/site combinations that will never work. Limited offsite monitoring is available at https://fifemon.fnal.gov:3000/dashboard/db/offsite-monitoring

h2. Organization

The following table lists the available OSG sites, their Glidein_site name (what you should put in the --site option), which experiment(s) each site supports, and any known limitations on disk, memory, or CPU.

*NOTE 1:* In many cases you may be able to request more than the jobsub defaults and be fine. We have not done detailed testing at each site to determine what the real limits are. If you do try a site and put in requirements that exceed the jobsub defaults, sometimes a

%{color:red}jobsub_q --better-analyze --jobid=<your job id>%

will give you useful information about why a job doesn't start to run (e.g. it may recommend lowering the disk or memory requirements to a certain value). Where memory requests above 2000 MB have been tested successfully, we list the largest successful request.

*NOTE 2:* Under supported experiments, "All" means all experiments except for CDF, D0, and LSST. It does include DES and DUNE.

*NOTE 3:* The estimated maximum lifetime is just an estimate based on periodic sampling of glidein lifetimes. It may change from time to time, and it does NOT take into account any walltime limitations of the local job queues at the site itself. *It also does not guarantee that resources are available at any given moment to start a job with the longest possible lifetime.* You can modify your requested lifetime with the --expected-lifetime option, as in the sketch below.
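For illustration, a submission that overrides the defaults for a specific site might look like the following. This is a hedged sketch: the experiment group (-G nova), the script path, and the resource values are placeholders, not recommendations for any particular site.

<pre>
# Hypothetical example: submit to Caltech requesting 3000 MB of memory,
# 20000 MB of disk, and an 8-hour expected lifetime.
jobsub_submit -G nova \
  --resource-provides=usage_model=OFFSITE \
  --site=Caltech \
  --memory=3000MB \
  --disk=20000MB \
  --expected-lifetime=8h \
  file:///path/to/your_script.sh
</pre>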

|_. Site Name |_. --site option (sorted) |_. Supported Experiments |_. Known limitations |_. Estimated maximum job lifetime |
| Brookhaven National Laboratory | BNL | All | jobsub defaults are OK | 24 h |
| Caltech T2 | Caltech | All | jobsub defaults are OK; can go up to --memory=3000MB | 25 h |
| Clemson | Clemson | All | jobsub defaults are OK | 24 h |
| Cornell | Cornell | All | jobsub defaults are OK | unknown |
| GPGrid | FNAL | All+CDF+LSST+DES+D0 | new GPGrid accessed via FNAL since Feb 2016, but jobs could end up on the CMS T1 | 96 h |
| Fermi private cloud | | All | memory up to 7500MB
--resource-provides usage_model=FERMICLOUD_PRIV,FERMICLOUD_PP_PRIV | 4 days |
| "Czech Academy of Sciences":http://monitor.farm.particle.cz/total_overview.php | FZU | NOvA only | request --disk=20000MB or less | unknown |
| Harvard | Harvard | NOvA only | jobsub defaults are OK; SL5 only | unknown |
| University of Washington | Hyak_CE | All | available resources vary widely; --memory=1900MB or less is better
This site is very good for short, low-memory jobs
| 3.5 h |
| University of Manchester | Manchester | uboone only | can go up to --memory=8000MB via multicore; disk default is OK
Ask for one CPU for every 2 GB of memory requested (e.g. 2 CPUs for a memory request between 2 and 4 GB).
| 12-24 h |
| ATLAS Great Lakes Tier 2 (AGLT2) | Michigan | All | jobsub defaults are OK; can go up to --memory=2500MB | unknown, but at least 6 h |
| "%{color:red} MIT%":http://www.cmsaf.mit.edu/condor4web/ | MIT | All + CDF |%{color:red} opportunistic access currently disabled%
jobsub defaults are OK | unknown |
| Midwest Tier2 | MWT2 | All | jobsub defaults are OK
single-core jobs will take a very long time to run if requesting more than 1920 MB of memory; up to 7680 MB was possible in testing.
Runs custom 3.x Linux kernels; you probably need to set UPS_OVERRIDE to set up products (see the sketch below the table). | 5 h |
| Red | Nebraska | All | jobsub defaults are OK; tested OK up to --memory=7500MB | 48 h |
| Notre Dame | NotreDame | All | jobsub defaults are OK; can go up to --memory=2500; aim for short jobs due to preemption | 24 h |
| Tusker/Crane | Omaha | All | jobsub defaults are OK; can go up to --memory=3000 | 24 h |
| Ohio Supercomputing Center | OSC | NOvA only | jobsub defaults are OK | 48 h |
| Southern Methodist University | SMU | NOvA only | jobsub defaults are OK; SL5 only | unknown |
| Southern Methodist | SMU_HPC | NOvA only | single core jobs should request --memory=2500 or less | 24 h |
| Syracuse | SU-OG | All | request --disk=9000MB or less and --memory=2500MB or less
%{color:red} 2015/10/15 libXpm.so (may be required by ROOT) not installed on all nodes% | 48 h |
| %{color:red} Texas Tech% | TTU | All but mu2epro and seaquest | jobsub defaults are OK
%{color:red} 2015/11/20 down since OSG software upgrade% | unknown |
| University of Chicago | UChicago | All | linked with MWT2; recommend --memory=1920 or less for single-core jobs; be sure to set the UPS_OVERRIDE environment variable appropriately (see the sketch below the table) | variable |
| University of California, San Diego | UCSD | All | jobsub defaults are OK; can go up to --memory=4000MB | 13 h |
| University of Bern | UNIBE-LHEP | uboone only | 2000 MB memory or less; must request >1 CPU if you need more memory | 48 h |
| Grid Lab of Wisconsin (GLOW) | Wisconsin | All | jobsub defaults are OK; tested up to --memory=8000MB OK | 24 h |
| Western Tier2 (SLAC) | WT2 | uboone only | jobsub defaults are OK; can go up to --memory=2500MB | 10 days |
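The MWT2 and UChicago rows above mention setting UPS_OVERRIDE on nodes running custom 3.x kernels. A minimal sketch of the usual workaround follows; the exact flavor string is an assumption based on common Fermilab practice, not a value tested at these sites, so check with your experiment's software coordinators.

<pre>
# Hypothetical sketch: force UPS to treat the worker node as an SL6-class
# host so that "setup <product>" resolves to the expected builds. Put this
# at the top of your job script, before sourcing your experiment's setup.
export UPS_OVERRIDE="-H Linux64bit+2.6-2.12"
</pre>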