Observation Strategy Validation

Testing the DES Observing System

A description of ObsTac calibration requirements can be found in the DES SV planning document submitted to NOAO in April 2012 (docDB #6255). The following are excerpts from this document.

By carrying out extensive DES-like observations during Science Verification, we will accumulate and analyze
the data needed to calibrate and optimize key elements of the DES Observing System, namely ObsTac and
components of the calibration system.

Calibration of ObsTac

During DES operations, the ObsTac program will automatically queue up the next set of exposures (filter,
pointing, exposure time) and will switch between wide-area and SN observations using a decision tree that
takes into account current observing conditions (photometricity, seeing, moon, and sky brightness), time
of night and of season, and which sky areas have been previously observed with survey quality in a given
tiling. If the physical performance of the telescope, camera, and seeing differ from the assumptions in our
simulations, then we will have to calibrate/optimize ObsTac and perhaps adjust the survey strategy to meet
our goals.

Seeing and the Supernova Trigger
The ObsTac decision to perform the SN program is based on the DECam PSF, which is determined by the
seeing and the camera characteristics. The decision tree is:
  • If the PSF > 1.1", pursue the Supernova Survey.
  • If the PSF ≤ 1.1", pursue the Wide Area Survey,
    • unless there is a gap for a filter-field > 7 days, in which case pursue a Supernova Survey observation of that filter-field.
The simulations of the performance of this trigger are based on a) 5 years of CTIO seeing data reported in Els et al. 2009, b) the assumption that the performance of the DECam optics achieves the design goal of 0.45" FWHM contribution to the PSF (rather than the requirement of 0.55"), and c) the assumption that dome seeing is negligible. The simulations show that a seeing trigger of 1.1" provides enough time to perform a very high quality wide-area survey while also allowing a generous SN observation program.

This 1.1" trigger is a calibrated value: using the delivered and measured PSF in DECam images taken during SV and early operations, we will compare against the statistics of the simulations and decide on adjustments to the trigger value so that the goals of both the wide-area and SN surveys can be achieved. Both the Image Health PSF values stored in the mountain database by SISPI and the DESDM measured PSF values will be useful for this.
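
As a concrete illustration, the decision tree above can be written down directly; the sketch below is an illustrative Python rendering, not ObsTac's actual implementation, and the names (choose_program, PSF_TRIGGER_ARCSEC, days_since_last_visit) are invented for the example.

    # Illustrative sketch of the SN-trigger decision tree described above.
    # Names are for the example only; the trigger value is the quantity to be
    # calibrated against SV data.

    PSF_TRIGGER_ARCSEC = 1.1   # current trigger; may be adjusted after SV
    SN_GAP_DAYS = 7            # maximum tolerated gap for a SN filter-field

    def choose_program(psf_fwhm_arcsec, days_since_last_visit):
        """Decide between the Supernova and Wide Area surveys.

        psf_fwhm_arcsec       -- delivered PSF FWHM in arcseconds
        days_since_last_visit -- dict mapping SN (filter, field) -> days since
                                 that filter-field was last observed
        """
        if psf_fwhm_arcsec > PSF_TRIGGER_ARCSEC:
            return ("SN", None)

        # Otherwise take wide-area data, unless some SN filter-field has gone
        # unobserved for more than SN_GAP_DAYS; then observe those filter-fields.
        stale = [ff for ff, gap in days_since_last_visit.items() if gap > SN_GAP_DAYS]
        if stale:
            return ("SN", stale)
        return ("wide", None)
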
The Slew Model
During the wide-area survey there is an operational trade-off between slewing to new fields in a tile vs. changing to a new filter at the same pointing and subsequently slewing. Our simulations indicate that roughly staying with a fixed filter on a given night and filling out a tiling by slewing between exposures is operationally more efficient than observing g, r (in dark time) or i, z, y (in bright/grey time) at each pointing, as the former maximizes flexibility in achieving high-quality data. This conclusion relies on a) the actual minimum time between successive reads of the camera, currently 20 seconds, of which 17 seconds are readout and 3 seconds are erase, and b) the slew and settle time of the telescope. Our current slew model is:
  • 20 seconds for slew distances s < 3 deg
  • 20 + 1.765(s − 3) seconds for 3 deg ≤ s ≤ 20 deg
  • 50 seconds for s > 20 deg
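
For reference, the piecewise model above can be transcribed as a single function (slew distance s in degrees, returned time in seconds); the function name is ours, not ObsTac's. Note that the linear branch reaches 50 seconds at s = 20 deg, so the model is continuous at both breakpoints.

    def slew_time_seconds(s_deg):
        """Current slew-time model; s_deg is the slew distance in degrees."""
        if s_deg < 3:
            return 20.0                       # floor set by the 20 s inter-exposure time
        if s_deg <= 20:
            return 20.0 + 1.765 * (s_deg - 3)
        return 50.0                           # long slews saturate at ~50 s
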
Tests of the upgraded but not fully tuned Telescope Control System prior to shutdown indicated that slews in hour angle achieved track-slew-track over 2 deg in 20 seconds with 1.0" FWHM jitter, and in 25 seconds with 0.6" FWHM jitter. The goal is 3 deg in 20 seconds with 0.25" FWHM jitter (see Slew and Settle of Blanco - Science Impact, docdb-6196). When the TCS is retuned with the rebalanced telescope, these values are likely to change. We will track them and calibrate ObsTac for the slew performance that the Blanco delivers and that we can accept.

Sky Brightness and the Moon
In principle, ObsTac can use the sky brightness and the position of the moon either to decide where to take images next (in the case of the bluer filters) or to select which filters to use (in the case of z band, whose sky brightness varies widely from night to night). We will want to perform a DECam calibration of the sky-brightness and moon model used by ObsTac (see Sky Brightness Notes, docdb-6123) during SV and/or early operations.
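
One way such a calibration might proceed, sketched below under invented assumptions, is to fit a simple parametric sky-brightness model per filter (here linear in airmass and in moon-target separation) to measured sky levels with a least-squares solve. The functional form, function name, and inputs are placeholders for illustration only, not the actual ObsTac sky/moon model.

    import numpy as np

    def fit_sky_model(airmass, moon_sep_deg, sky_mag):
        """Toy per-filter fit: sky_mag ~ m0 + k_X*(airmass - 1) + k_moon*moon_sep_deg.

        All inputs are equal-length 1-D arrays of measured values; returns the
        fitted coefficients (m0, k_X, k_moon). Illustrative functional form only.
        """
        airmass = np.asarray(airmass, dtype=float)
        moon_sep_deg = np.asarray(moon_sep_deg, dtype=float)
        sky_mag = np.asarray(sky_mag, dtype=float)
        A = np.column_stack([np.ones_like(airmass), airmass - 1.0, moon_sep_deg])
        coeffs, *_ = np.linalg.lstsq(A, sky_mag, rcond=None)
        return coeffs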

DECal, Star flats, and Calibration

DECal, which measures the instrumental system response, should be run when the weather precludes on-sky observations. During commissioning and/or SV, we will also want to establish whether DECal can be operated during the day.
The DECal system can be checked using star flats taken on SDSS Stripe 82. Star flats are very time consuming and require good conditions. We plan to pursue one early in Commissioning/Science Verification; tables of color corrections per CCD can be constructed to compare with the synthetic color corrections from the DECal system. Assuming 30-second exposures, 20-second readout/slews, and the need to place the same star on each of the 62 CCDs in all 5 filters, we compute that roughly 4.5 hours of photometric time are needed. Commissioning of DECam includes performing a star flat. We assume those observations will be of sufficient quantity/quality for DES and therefore do not allocate time for one in Science Verification.
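
For reference, the ~4.5-hour estimate above follows from the quoted numbers:

    # Rough time budget for a star flat, using the numbers quoted above.
    n_ccds = 62          # science CCDs on which to place the same star
    n_filters = 5        # g, r, i, z, Y
    t_exposure = 30.0    # seconds per exposure
    t_overhead = 20.0    # seconds of readout/slew per exposure

    total_hours = n_ccds * n_filters * (t_exposure + t_overhead) / 3600.0
    print(total_hours)   # ~4.3 hours, in line with the ~4.5 h quoted above
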
Throughout the science verification period the DES calibration group will be particularly active. The
calibration group goals are described briefly in Table 5.

Table 5: Calibration group goals during Science Verification.

Milestone                 Where      Reason
Calibrate single night    DESDM      determine instrumental zeropoint
Star flat                 Fermilab   determine CCD system response
DECal                     DESDM      linking system responses to data
Global calibration        DESDM      photometry & astrometry
Rasicam                              integrate
PWV GPS system                       integrate

Sky Brightness Notes (docDB # 6612)

Comments from Jim and Eric

I'd imagine:
    1) turn on obstac
    2) collect information about seeing from imageHealth or DESDM survey table
    3) compare rates of collection of survey and SN data as a function of seeing
        with simulations (a minimal sketch of this comparison follows this list).
    4) Adjust the SN seeing cutoff (1.1" currently).
    5) Make sure that the data taken look like the right spatial pattern.
        Are the dithers right for SN? Is the tiling pattern right for the Main Survey?
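
A minimal sketch of the rate comparison in step 3, assuming the delivered seeing and the program label for each exposure have already been pulled from imageHealth or the DESDM survey table into arrays; the function name and bin choices are illustrative only.

    import numpy as np

    def rates_vs_seeing(seeing_arcsec, program, bins=None):
        """Count exposures per program (e.g. 'wide', 'SN') in bins of seeing.

        The same histograms computed from the survey simulations can then be
        compared bin by bin to decide whether the SN seeing cutoff needs moving.
        """
        seeing_arcsec = np.asarray(seeing_arcsec, dtype=float)
        program = np.asarray(program)
        if bins is None:
            bins = np.arange(0.6, 2.05, 0.1)   # seeing bins in arcsec
        out = {}
        for label in np.unique(program):
            counts, _ = np.histogram(seeing_arcsec[program == label], bins=bins)
            out[label] = counts
        return out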

In the end we need to calibrate the ObsTac data rates against the survey-simulation data rates, so:

Calibrations:
    1) Calibrating the ObsTac simulation seeing model against the delivered seeing,
        which will take ~1 month of data to do. 20 nights?

    2) We need to calibrate the ObsTac sky brightness model as a function
    of airmass, moon separation, moon distance, and filter. I'd imagine not
    taking special data to do this, just normal ObsTac operations on... 7 nights?

    2a) Except that we probably want to know what the sky looks like 10-ish degrees
    from the moon, which is probably a commissioning task.

    3) slew times. Just a check, but we need to calibrate the simulation tool. Probably
    1 or 2 nights of data.
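
For item 3, re-fitting the linear branch of the slew model from logged slews might look like the sketch below; the function name and inputs are invented, and the data would come from telescope/SISPI telemetry.

    import numpy as np

    def refit_slew_model(slew_deg, slew_time_s, lo=3.0, hi=20.0):
        """Fit t = a + b*(s - lo) to measured slews with lo <= s <= hi degrees.

        Returns (a, b) for comparison with the current model's (20, 1.765).
        """
        slew_deg = np.asarray(slew_deg, dtype=float)
        slew_time_s = np.asarray(slew_time_s, dtype=float)
        mask = (slew_deg >= lo) & (slew_deg <= hi)
        b, a = np.polyfit(slew_deg[mask] - lo, slew_time_s[mask], 1)
        return a, b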

Bad data: as Eric says, we want to test the whole cycle of bad data,
from ObsTac automatically declaring data bad, setting the Mountain done column to false,
and re-attempting it, to humans declaring data bad and ObsTac re-attempting it,
to DESDM updating the DM done column with true or false.

This touches image health, Rasicam, the database, and DESDM.
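
As a concrete (and entirely hypothetical) illustration of the bookkeeping this implies, the cycle amounts to flipping 'done' flags in two places and letting ObsTac re-queue anything not marked done; the schema, table, and column names below are invented and do not match the real mountain or DESDM databases.

    import sqlite3

    # Stand-in for the mountain/DESDM bookkeeping; invented schema.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE exposures (id INTEGER PRIMARY KEY, done INTEGER, dm_done INTEGER)")

    def mark_bad_on_mountain(exposure_id):
        """ObsTac (or a human) declares an exposure bad: clear its mountain
        'done' flag so the tiling logic will re-attempt that observation."""
        db.execute("UPDATE exposures SET done = 0 WHERE id = ?", (exposure_id,))

    def record_desdm_verdict(exposure_id, passed_quality):
        """DESDM later records its own verdict, which ObsTac can consult the
        next time it builds an observing queue."""
        db.execute("UPDATE exposures SET dm_done = ? WHERE id = ?",
                   (1 if passed_quality else 0, exposure_id))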