Requirement Categories

Signals & Noise: (Paul, ...)

Determine that survey signals (i.e. magnitude zeropoints) and noise levels (i.e. sky brightness + detector noise) are close to assumptions in DES planning

  1. Measure detector noise
  2. Measure sky background for a range of moon phases, moon separation angles, and airmass for each filter
  3. Photometry of spectrophotometric standards through each filter to determine zeropoints

These data will be used to determine if the signal-to-noise requirements can be met for the planned survey integration times, and if so under what range of conditions (moon, airmass) for each filter. Most of the data needed for this analysis will likely be obtained during commissioning.

Not sure if this belongs in this category or the next, but during commissioning we will be using a pinhole filter to study stray light.

Photometry: (Douglas, Paul, Gary, Jim, ...)

  1. A photometric model (i.e. flat fields and sky-subtraction methodology) is in place which produces relative magnitudes between bright stars that are reproducible to <0.02 mag RMS on different exposures taken (a) at different times during a cloudless night (b) on different cloudless nights up to 3 weeks apart.
  2. Time variability of Y-band fringing has been investigated sufficiently to see whether OBSTAC must be instructed to take a minimum number of consecutive images in Y band, such that a temporally local fringe frame can always be constructed.
  3. Color terms of DECam vs SDSS Stripe-82 photometry for stars with g-r<0.8 are within 0.02 (??? TBC) of values predicted from synthetic photometry using expected component wavelength responses.
  4. Ratio of dome flat to twilight flat is constant to <1% RMS after high-pass filtering with the SExtractor sky-subtraction algorithm. Similarly for night-sky flats in g,r,i bands (where fringing should be negligible). This is a test of whether the small-scale responsivity variations are properly traced by flats, and whether pupil ghosts and other additive sources of contamination are smooth enough to be removed by sky-subtraction algorithms. Failure indicates that better pupil-ghost removal, etc., must be developed, and/or sources of scattered light mitigated.
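As a rough illustration of test 4, the statistic could be computed along these lines. This is a toy numpy sketch: the block-median "background" is a crude stand-in for SExtractor's mesh-based sky estimator, and the flats are synthetic.

```python
import numpy as np

def highpass(img, box=32):
    """Subtract a coarse background mesh (block medians), a crude
    stand-in for SExtractor's mesh-based sky-subtraction high-pass."""
    ny, nx = img.shape
    mesh = np.median(img.reshape(ny // box, box, nx // box, box), axis=(1, 3))
    smooth = np.repeat(np.repeat(mesh, box, axis=0), box, axis=1)
    return img - smooth

def flat_ratio_rms(flat_a, flat_b, box=32):
    """RMS of the flat-to-flat ratio after high-pass filtering (the <1% test)."""
    return np.std(highpass(flat_a / flat_b, box=box))

# toy flats: identical pixel response, smoothly different illumination
rng = np.random.default_rng(0)
ny = nx = 256
y, x = np.mgrid[0:ny, 0:nx]
response = 1.0 + 0.005 * rng.standard_normal((ny, nx))  # 0.5% pixel-to-pixel QE
dome = response * (1.0 + 1e-4 * x)       # smooth illumination gradient
twilight = response * (1.0 - 1e-4 * y)
print(flat_ratio_rms(dome, twilight) < 0.01)  # True: pixel response divides out
```

Because the pixel-to-pixel response is common to both flats, it cancels in the ratio; only unshared smooth (or small-scale) structure survives the high-pass and contributes to the RMS.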

Jim's screed on Photometricity versus Calibration is here:
Photometricity versus Calibration

From that come 7 requirements/measurements on photometricity
and 3 on calibration. Roughly:

Photometricity

  1. Minimize stray light directly w/ pinhole camera
  2. Characterize dominant stray light contribution of the pupil ghost
  3. Measure remaining stray light contribution using star grid observations
  4. Measure whether stray light near the moon and bright stars is a problem
  5. Test whether bandpass variations as a function of focal-plane position are understood
  6. Test instrument stability
  7. Test whether relative photometry behaves

Calibration

  1. Test whether "all-sky photometry" is possible
  2. Test whether massively overlapping images allow a solution for calibration to the accuracy needed
  3. Test whether detailed atmospheric modeling is needed

These are in the form of tests to be done, rather than requirements to
be met, as I think it is easier in this case to start with the testing path
and then turn it into quantitative requirements.

Astrometry: (Gary, ...)

An astrometric calibration model exists that is stable at <100 mas accuracy.

1. The WCS being placed into image headers by SISPI is accurate to <1" across the entire science array (plus any errors in telescope axis pointing received from TCS). [will have to include corrections for lateral color and differential refraction]
2. Repeated, large-dither observations of a given set of bright stars reproduces their relative sky positions with <100 mas median absolute error.
3. Residuals to astrometric map do not show systematic anomalies with respect to array position above 50 mas (e.g. glowing edges are at size expected, residuals are smooth across each CCD, relative CCD positions remain stable, etc.)
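A minimal sketch of check 2, assuming matched star positions from repeated large-dither exposures are already in hand (the data here are synthetic, and the cos(Dec) factor is omitted for brevity):

```python
import numpy as np

def median_absolute_error_mas(positions):
    """positions: (n_exposures, n_stars, 2) array of (RA, Dec) in degrees.
    Median absolute radial residual, in mas, of each measurement about
    that star's mean position (small-angle approximation)."""
    resid = positions - positions.mean(axis=0)       # per-star residuals
    r = np.hypot(resid[..., 0], resid[..., 1])       # radial residual per measurement
    return np.median(r) * 3.6e6                      # degrees -> milliarcseconds

rng = np.random.default_rng(1)
true = rng.uniform(0.0, 1.0, size=(1, 50, 2))               # 50 stars
obs = true + rng.normal(0.0, 20 / 3.6e6, size=(8, 50, 2))   # 8 dithers, 20 mas/axis
print(median_absolute_error_mas(obs) < 100)                 # True: meets the spec
```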

Image quality: (Aaron, Jiangang, ...)

No major degradations of IQ by optical element errors, and operational focus/alignment methods are in place to hold DECam within 50 um [TBC] of best focus across FOV in all exposures.

Pointing and guiding: (Jim, ...)

Pointing and guiding are sufficient to hit desired DECam tilings and not degrade DIQ.

  1. Pointing is accurate to ~1/3 of SN field dithering, rms. This assumes the SN group really wants their stated dithers.
  2. The time from the issuance of a 2 degree slew to the opening of the shutter shall be no more than 25 seconds.
  3. Inside the window of open shutter, residual telescope settling from slew shall not contribute more than 0.1" rms to PSF budget
  4. Guiding errors will contribute no more than 0.1" rms to the PSF budget
  5. Guide stars will be automatically selected everywhere on the sky at airmass > 5 inside DES filters

Requirements 2 and 3 reflect the current (admittedly unfinished) state of the TCS, as seen in slide 47 of docDB #3734.
Recall that 0.2" rms corresponds to 0.5" FWHM, the entire PSF budget, and that the jitter is still occurring at 45 seconds.

Anomalies (Daniel, Jim, ...)

Normal survey operations / reductions mitigate impact of recurring anomalies (e.g. bad pixels, cosmic rays, scattered light, telescope settling) and appropriately notify observers of extreme anomalies (e.g. CCD failures, loss of guide).

Framework: quantify, as a function of an anomaly's per-pixel S/N and size:
  1. masking completeness
  2. masking purity
  3. area lost

The extreme case of a bad pixel / cosmic ray with area ~ 1 and high S/N should always be masked. Scattered light can have S/N well below 1 sigma per pixel but should still be masked for large ghost images. For testing, anomalies can be injected into DESDM.

The following list is an attempt to transform the science requirements into anomaly requirements (r) and goals (g). The fulfillment of a goal always ensures the corresponding requirement is met.

  • ANO-1r: In the complete galaxy catalog of 100 sq. deg. fields, the fraction of anomalies detected and included in the catalogs should be known at the 1% level. This means that if we predict the fraction of anomalies in the galaxy catalog to be X%, it should actually be within X+-1%.
  • ANO-1g: The stricter goal should be that anomalies detected and classified as galaxies should be less than 1% of galaxies when averaged over 100 sq. deg. fields.
  • ANO-2r: In the galaxy catalog of the complete survey area in bins of photometric Deltaz=0.1 between z=0.1...1.5, the fraction of anomalies detected and included in the catalogs should be known at the 1% level. This means that if we predict the fraction of anomalies in the galaxy catalog in any bin to be X%, it should actually be within X+-1% when averaging that bin over the survey area.
  • ANO-2g: The stricter goal should be that in these bins of photometric z=0.1...1.5, anomalies detected and classified as galaxies should be less than 1% of galaxies averaged over the survey area.

I am not sure I am understanding these requirements (from R-30) correctly:

  • ANO-3r: The rms of the fraction of anomaly detections in the final catalogs when comparing square fields of (0.05deg)^2, (0.5deg)^2 and (4deg)^2 size should be less than 2% in any of the grizY bands.
  • ANO-3g: In no square field of (0.05deg)^2, (0.5deg)^2 and (4deg)^2 size should there be more than 2% anomaly detections in the final galaxy catalog.
  • ANO-4r: The rms of the fraction of anomaly detections in the final catalogs and in redshift bins of Delta z=0.1 between z=0.1...1.5 when comparing square fields of (0.05deg)^2, (0.5deg)^2 and (4deg)^2 size should be less than 5% in any of the grizY bands.
  • ANO-4g: In no square field of (0.05deg)^2, (0.5deg)^2 and (4deg)^2 size should there be more than 2% anomaly detections in the final galaxy catalog.
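Whatever the final interpretation of R-30, the rms-across-fields statistic of ANO-3r could be computed along these lines. This is a toy sketch: the catalog (uniform 1% anomaly rate), function name, and binning are all illustrative.

```python
import numpy as np

def anomaly_fraction_rms(ra, dec, is_anomaly, field_deg):
    """Tile the catalog into square fields of side field_deg (degrees),
    compute the anomaly fraction per occupied field, and return the RMS
    of those fractions (ANO-3r-style statistic)."""
    ix = np.floor((ra - ra.min()) / field_deg).astype(int)
    iy = np.floor((dec - dec.min()) / field_deg).astype(int)
    cell = ix * (iy.max() + 1) + iy                  # flat field index per object
    counts = np.bincount(cell)
    anom = np.bincount(cell, weights=is_anomaly.astype(float))
    occ = counts > 0
    return np.std(anom[occ] / counts[occ])

rng = np.random.default_rng(2)
n = 200_000
ra, dec = rng.uniform(0, 4, n), rng.uniform(0, 4, n)  # 4x4 deg toy catalog
flag = rng.random(n) < 0.01                           # uniform 1% anomaly rate
for side in (0.05, 0.5, 4.0):
    print(side, anomaly_fraction_rms(ra, dec, flag, side))
```

Note that even with a perfectly uniform anomaly rate, shot noise alone drives the RMS up at the smallest (0.05 deg)^2 scale, so the requirement implicitly constrains the source density per field as well.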

This one stems from the SN requirement R-33:

  • ANO-5r: The surface density of unmasked anomalies detected with S/N>=5 at the same position in two subsequent exposures of the same region of the sky should be less than 20% of the surface density of supernovae visible in two subsequent frames at S/N>=5.

Operational Readiness: (Brenna, Klaus, Dara, Tom Diehl)

A second goal of the Science Verification period is to demonstrate operational readiness to start the survey. At the end of SV we require that these tasks have been completed successfully:

  • Validate the DES observation strategy tool (obstac)
    • Complete an extended set of observations (>= 1/2 of a night) with obstac in control
    • Calibrate obstac's slew time calculation.
    • Calibrate the offsets between the obtained sky brightness and that of obstac's model.
    • Measure the instrumental contribution to the PSF, so that obstac's model based on DIMM data can be used to estimate data PSFs.
    • Verify that obstac will repeat an exposure if and only if it has been aborted, or declared bad by observers, based on image health or RASICAM metadata, or data management.
    • Verify that exposures on the queue whose latest-start-time value has been passed get skipped (not an obstac function per se, but closely related)
    • Demonstrate a successful switch between the main survey, the SN survey and back
    • Close the loop between DES DM (First Cut) and the exposure/survey table on the mountain.
    • Verify that observers can declare an exposure bad.
    • Demonstrate that Rasicam operates as expected and that the required information is available to obstac. The quality of the Rasicam photometric flag will have been assessed (at least somewhat) during commissioning.
    • Demonstrate that the seeing and sky background values calculated by ImageHealth (IH) are of sufficient quality.
  • Establish the Position of DES Run Manager
    • Define Run Manager duties and responsibilities
    • Establish a daily meeting with DM and the observers (and other parties as needed)
    • The run managers for year 1 of the survey have been named and they have received training for both SISPI and the instrument systems (e.g. cooling, electronics, CCDs)
  • Establish Standard Survey Calibration Procedures
    • Define pre-night observing sequence (bias, dome flats, darks)
    • Define post-night observing sequence
    • Define standard star observing sequence (pre-, post- and potentially during the night)
  • Establish nightly observing procedures
    • Define task list for observers
    • Provide observer's guide and other documentation
      • SISPI user guide
      • Call/Contact list
      • Instructions and "how-to" documentation in case of alarms, failures or other incidents
    • Define data quality assurance and monitoring tasks
      • Define list of online plots to be viewed regularly using the Telemetry Viewer GUI
      • Define tests to be performed using image data on the observer workstation (observer2)
      • Define tests to be performed using QR results.
    • Complete electronic log book procedures and check lists
  • Complete setup for all critical alarms. This includes thresholds, alarm messages, alarm actions (e.g. email to experts) and instructions for the observers. (Doing this for all alarms will be an SV goal)
  • Complete observer training material
  • Assign observing shifts for year 1 of the survey.
  • Define duties and responsibilities of the (DES) technical staff in residence during DES observing.
  • Validate SISPI and SISPI tools
    • Establish that SISPI runs through the night with only a minimal number of restarts required.
    • Verify that all relevant information is presented to the observers by the SISPI GUIs.
    • Test the ScriptsEditor tool (should be done already during commissioning)
    • All tools needed for quality assurance and to assess image quality, such as iraf, have been installed on the observer workstation (should be done already during commissioning)
    • QR is operational and processing images at the expected rate.

Supernova: (John, ...)

Usable templates for all SN fields in place and image subtraction pipeline functional.

See John's document

Goal categories

Galaxy photometry & detection: (Nacho, Huan, Marcelle, Tom)

Test accuracy of DESDM galaxy magnitudes and colors, internally and with reference to truth fields of varying source density. Check detection efficiency.

Test Galaxy-1: Check repeatability of SExtractor output at different seeing conditions.

  • Dataset: Repeated images of some fields taken at significantly different (TBD) seeing conditions. This requires revisiting a few fields to repeat observations as soon as we notice a significant change in seeing (preferably in the same night, but not necessarily).
  • Tools: TBD
  • Galaxy-1-1) Gal-G-1 Match the galaxy catalogs from the different image sets and calculate the relative degradation in completeness, magnitude and color errors as a function of seeing (see tests 2-1, 5-1 and 5-2).

Comments: - We can basically reuse the tools developed for the tests 2-1, 5-1 and 5-2 (see below).

Test Galaxy-2: Check completeness and purity of the galaxy catalogs as a function of magnitude.

  • Dataset: Observations of truth fields (VVDS deep, CFHTLS deep and wide, CDFS)
  • Tools: TBD, 2DPhot
  • Galaxy-2-1) Gal-G-2 Match the DESDM output catalogs with the truth tables and compute the completeness and purity as a function of magnitude.
  • Galaxy-2-2) Gal-G-3 Use the properties of the objects in the catalogs to simulate galaxies and stars, inserting them into the images (via 2DPhot code) and rerunning the catalog-construction step of the pipeline. Compute the completeness and purity as a function of magnitude. Verify that these results agree with those obtained in 2-1

Comments: - Verifying that the completeness and purity measured via the two methods agree is important to make sure that we can use only method 2-2 when there is no truth table available.

Test Galaxy-3: Evaluate galaxy detection and S/G separation in a range of stellar densities.

  • Dataset: Observations at a range of stellar densities, preferably with HST imaging.
  • Tools: TBD
  • Galaxy-3-1) Gal-G-4 Repeat test 2-2 and plot completeness and purity as a function of stellar density.

Comments: - If HST imaging is available, repeat 2-1 too.

Test Galaxy-4: Check the photometry in crowded fields (cluster cores), specifically for deblended BCGs and cluster members.

  • Dataset: Imaging of a few known cluster cores. TBD in the SV region.
  • Tools: TBD
    • Galaxy-4-1) Gal-G-5 Match the output of DESDM to the truth table of cluster members. Verify that the magnitude and colors meet the specification for these objects.

Test Galaxy-5: Check fidelity of DESDM galaxy magnitude and color errors against errors empirically determined from repeat measurements.

  • Dataset: Consecutive pairs of undithered single-epoch exposures of the same spot on the sky, for each of grizY under photometric conditions, plus associated DESDM SExtractor catalogs with model fitting photometry and detmodel colors.
  • Tools: TBD, e.g. IDL or Python scripts
  • Galaxy-5-1) Compute 68-percentile errors (sigma_68) of pairwise magnitude measurements of galaxies, as a function of magnitude. Do so for aperture (1.5", 2", 3" diameter), model, and auto magnitudes, and for detmodel colors.
  • Galaxy-5-2) Compare these sigma_68 values against the average of the corresponding magnitude and color errors from the DESDM catalog and verify that they agree.
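Galaxy-5-1 could be sketched as follows. This is a toy Python illustration assuming matched magnitudes of the same galaxies from the two exposures; the sqrt(2) factor converts the width of the pairwise difference back to a single-epoch error.

```python
import numpy as np

def sigma_68_pairwise(mag1, mag2, bins):
    """68th-percentile half-width of pairwise magnitude differences,
    per magnitude bin.  The difference of two equal-error measurements
    has sqrt(2) times the single-epoch scatter, so divide it back out."""
    diff = mag1 - mag2
    mean_mag = 0.5 * (mag1 + mag2)
    out = []
    for lo, hi in zip(bins[:-1], bins[1:]):
        d = diff[(mean_mag >= lo) & (mean_mag < hi)]
        out.append(np.percentile(np.abs(d - np.median(d)), 68) / np.sqrt(2))
    return np.array(out)

# toy catalog: 0.05 mag single-epoch errors, two repeated measurements
rng = np.random.default_rng(3)
true = rng.uniform(20, 23, 5000)
m1 = true + rng.normal(0, 0.05, true.size)
m2 = true + rng.normal(0, 0.05, true.size)
s68 = sigma_68_pairwise(m1, m2, np.array([20.0, 21.0, 22.0, 23.0]))
print(s68)   # each entry recovers roughly the input 0.05 mag error
```

The same per-bin sigma_68 values would then be compared against the catalog's reported magnitude errors for Galaxy-5-2.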

Comments: - Consecutive and undithered exposure pairs are desired in order to: (1) minimize differences in observing conditions, and (2) avoid any effects due to the dependence of system throughput on position over the focal plane.
- May need to specially request detmodel colors be included as part of DESDM single-epoch (as opposed to coadd) SExtractor runs for these exposures, or do so ourselves, separately from DESDM.

Test Galaxy-6: Check homogeneity of DESDM magnitudes and colors for galaxies over the DECam focal plane.

  • Dataset: - Single-epoch exposures of truth field, e.g., on Stripe 82 or in CFHTLS W1. Undithered between grizY filters, and under photometric conditions.
    - Multiple single-epoch exposures (of any field) taken using standard DES tiling offset pattern (e.g., 10 tilings per filter), under photometric conditions.
    - Corresponding SExtractor catalogs with model fitting photometry and detmodel colors.
  • Tools: TBD, e.g. IDL or Python scripts
  • Galaxy-6-1) From truth field exposures, derive star flat and color term corrections to transform observed galaxy magnitudes on each DECam CCD to a TBD fiducial DES system (e.g, average over all CCDs), using the truth field system (e.g., SDSS) as an intermediary.
  • Galaxy-6-2) For multiple tiling-offset exposures, apply above transformations to derive galaxy magnitudes in the fiducial DES system, then compute residual magnitude offsets relative to the mean for each galaxy, and finally average these residual offsets by CCD.
  • Galaxy-6-3) Verify that the rms of the resulting residual magnitude offsets over the CCDs is < 0.02 mag (motivated by science requirement R-10).
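A toy sketch of Galaxy-6-2/6-3: synthetic per-CCD zeropoint offsets are injected and then recovered from repeated tiling-offset measurements. Names, counts, and noise levels are illustrative.

```python
import numpy as np

def ccd_offset_rms(mag, ccd, gal_id):
    """Residual of each measurement about its galaxy's mean magnitude,
    averaged per CCD; return the rms of those per-CCD offsets
    (requirement: < 0.02 mag)."""
    mag = np.asarray(mag, float)
    gal_mean = np.bincount(gal_id, weights=mag) / np.bincount(gal_id)
    resid = mag - gal_mean[gal_id]                     # per-measurement residual
    csum = np.bincount(ccd, weights=resid)
    ccnt = np.bincount(ccd)
    offsets = csum[ccnt > 0] / ccnt[ccnt > 0]          # mean residual per CCD
    return np.sqrt(np.mean(offsets ** 2))

rng = np.random.default_rng(4)
n_gal, n_tile, n_ccd = 2000, 10, 62
gal_id = np.repeat(np.arange(n_gal), n_tile)           # each galaxy seen 10 times
ccd = rng.integers(0, n_ccd, gal_id.size)              # random CCD per visit
ccd_zp = rng.normal(0, 0.01, n_ccd)                    # injected zeropoint errors
mag = rng.uniform(18, 22, n_gal)[gal_id] + ccd_zp[ccd] + rng.normal(0, 0.02, gal_id.size)
print(ccd_offset_rms(mag, ccd, gal_id))                # roughly the injected 0.01 mag
```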

Comments: - Use of truth field exposures intended as a quick way of constructing star flat and color term corrections, as opposed to more cumbersome way using extensive grid of offset DECam exposures.
- May need to subtract off in quadrature the photometric calibration errors inherent to the truth field data itself (e.g. ~2% for SDSS).
- May need to specially request detmodel colors be included as part of DESDM single-epoch (as opposed to coadd) SExtractor runs for these exposures, or do so ourselves, separately from DESDM.

Test Galaxy-7: Check homogeneity of DESDM colors over a wide field with red sequence galaxies

  • Dataset: - Single-epoch exposures of fields with known clusters with spectroscopic redshifts (e.g. Stripe 82). griz filters, under photometric conditions.
    - Corresponding SExtractor catalogs with model fitting photometry and detmodel colors.
  • Tools: IDL scripts from Eli Rykoff, plus tools from Test Galaxy-6
    • Galaxy-7-1) Similar to Test Galaxy-6, transformations need to be applied to get all fields on the same TBD fiducial DES system.
    • Galaxy-7-2) Calibrate cluster red sequence model from known clusters from 0.1<z<0.7 using redMaPPer empirical calibration (similar to tests performed on DC6B)
    • Galaxy-7-3) Compare derived value for intrinsic scatter of red sequence to model expectations.
    • Galaxy-7-4) Generate composite red sequences in redshift bins for illustration purposes.

Comments: - Use of cluster red sequence galaxies is another way of obtaining a "truth table" without worrying about photometric calibration errors inherent to the truth field data itself. The intrinsic scatter of the g-r color at low redshift should be ~0.05, and r-i and i-z have intrinsic scatter of ~0.02-0.03.
- There are currently ~500(150) redMaPPer clusters with lambda>10(20) in the stripe 82 SV field (14<RA<40) with spectroscopic redshifts between 0.1<z<0.65, which will be sufficient for these tests.

Test Galaxy-8: Homogeneity of detection completeness < 2% up to 4 degrees, <5% for each redshift bin (R-30)
  • Dataset: Same as Test Galaxy-2. Additional, shallower and wider (e.g.,CFHTLS) datasets could be used to check the detection completeness as a function of location, in the magnitude region where it should be nearly 100%.
  • Galaxy-8-1) Measure detection completeness in the reference field in subareas with increasing size. For each scale s, determine RMS and check against 2 % requirement.
  • Galaxy-8-2) Repeat Galaxy-8-1) for redshift bins available with 5% requirement.

Star/Galaxy separation: (Nacho, Peter, Tom, ...)

Test SG separation-1: First glance at star-galaxy classification
  • Dataset: one night of single epoch data in the x bands
  • Tools: SExtractor or ImageHealth: produce a stellarity (CLASS_STAR) vs magnitude plot using the catalog from single-epoch data of one night. There should be a clear two-pronged structure which merges at the magnitude limit (check with noise, MAGERR, or mask, if it has been validated). The stellarity is computed with SExtractor and does not rely on PSF extraction or truth tables. There should be less than 1% of objects with 0.1<CLASS_STAR<0.9 up to m_limit-0.5. This would just tell us the overall health of the images with respect to extracting shape information for star/galaxy separation.
Test SG separation-2: Ratio of stars to galaxies known to 1% (R-28)
  • Dataset: 1 sq.deg. of full-depth, coadded data, at least in i-band, with spectroscopically identified galaxies and stars (O(40000) each??). In addition, or alternatively, a detailed simulation of the area (need Trilegal-like sim).
  • Tools: TBD, python scripts (catalog matching)
  • SG separation-2-1) Divide the area into subareas of 100 sq.arcmin. Compute in each sub-area |R1-R2|, where R1 is N_true_stars/N_true_galaxies from the external catalog and R2 is N_classified_stars/N_classified_galaxies. The average of the distribution should satisfy <|R1-R2|> < 0.01.
  • SG separation-2-2) Make the same test with a simulation as the external catalog. The sub-areas might need to be larger because simulations may not be accurate enough at small scales.
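The <|R1-R2|> statistic of SG separation-2-1 might look like this. A toy sketch: the misclassification rate and sub-area assignment are made up for illustration.

```python
import numpy as np

def mean_ratio_error(true_star, class_star, cell):
    """Per sub-area, compare the true star/galaxy ratio R1 with the
    classifier's ratio R2; return the average of |R1 - R2| over
    sub-areas (requirement: < 0.01)."""
    diffs = []
    for c in np.unique(cell):
        sel = cell == c
        r1 = true_star[sel].sum() / max((~true_star[sel]).sum(), 1)
        r2 = class_star[sel].sum() / max((~class_star[sel]).sum(), 1)
        diffs.append(abs(r1 - r2))
    return np.mean(diffs)

# toy 1 sq.deg. sample split into 36 sub-areas of 100 sq.arcmin
rng = np.random.default_rng(5)
n = 80_000
true_star = rng.random(n) < 0.5
flip = rng.random(n) < 0.002                 # assumed 0.2% misclassification rate
class_star = true_star ^ flip                # classifier output
cell = rng.integers(0, 36, n)
print(mean_ratio_error(true_star, class_star, cell) < 0.01)  # True for this toy
```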
Test SG separation-3: Ratio of stars to galaxies known to 1% for all redshift bins (R-29)
  • Dataset: several sq.deg. of full-depth, coadded data, at least in i-band, with spectroscopically identified galaxies and stars (O(40000) galaxies per bin)
  • Tools: TBD, python scripts
  • SG separation-3-1) Repeat SG separation-2-1) in redshift bins. Using expected N(z) and expected impurity as a function of photo-z, evaluate if further statistics are necessary to reach a good purity determination (therefore needed larger dataset and/or simulation).
Test SG separation-4: Homogeneity of star-galaxy classification < 2% up to 4 degrees, <5% for each redshift bin (R-30)
  • Dataset: At least one field with 4 deg on a side to full depth in each band (as required by R-30). Comparison to combination of COSMOS field to reach the faint end, southern VVDS fields, and Guide Star Catalog II to reach requested area (but only at J < 21.5)
    see also Anomalies above
  • SG separation-4-1) Measure impurity in the reference field in subareas with increasing size. For each scale s, determine RMS and check against 2 % requirement.
  • SG separation-4-2) Repeat SG separation-4-1) for redshift bins available.
Test SG separation-5: Purity of sample better than 1% for i<22, even in crowded fields (R-31)
  • Dataset: HST fields with different (stellar) density. For BCG blending, galaxy cluster fields archival ACS data (from CLASH and older programs)
    (see also Test Galaxy-3)
Test SG separation-6: Contamination of weak-lensing signal < 0.4% (R-17)
  • Dataset: same as SG separation-4 (full-depth HST or spectroscopic survey)
Comments:
  • The 40000 galaxies figure is based on a simple derivation of the number of galaxies needed for an error on the impurity of 0.1%, for a conservative value of the real impurity of 5% (TBC). WL also has additional constraints for a very pure bright-star sample for PSF extraction. We have not considered that test here, as we assume it is included in PSF or WL requirement testing.
  • R-17 is formally harder than R-28 and R-31 (0.4% vs. 1%). But the shape measurements eliminate many stars, since they cannot be deconvolved from the PSF. However, at the faint end pixel noise can make a star appear larger than the PSF, and these stars would then pass into the final shear catalog. Does WL have numbers for how likely this is at the faint end???
  • Simulations used to substitute external datasets should be evaluated against sample datasets, for instance, HST imaging, VVDS (perform R1-R2 test with sim vs dataset). Simulations can adopt simplified galaxy models for all tests apart from SG separation-5, which deals with blended objects.
Visibility:
  • During SV (assumed to be in October) the visible area covers roughly RA = 22h .. 5h, DEC = -60 .. 0
  • HST fields are often only single pointings (~12 arcmin^2), but there are plenty at various galactic latitudes. Archival galaxy cluster fields are available (e.g. MACS0329 or A383 from CLASH)
  • The VVDS field 0226-0430 is accessible during SV (and for not too long into the season), the second one in the south (0332-2748) could be done during the season.
  • The COSMOS field cannot be observed before January.

Calibration: (Douglas, ...)

Refine models for astrometry and relative and absolute stellar magnitude calibration (flats, color terms, nightly variations, etc.).

  1. Repeated, large-dither observations of a given set of bright stars reproduces their relative sky positions with <25 mas median absolute error.
  2. Residuals to astrometric map do not show systematic anomalies with respect to array position above 20 mas (e.g. glowing edges are at size expected, residuals are smooth across each CCD, relative CCD positions remain stable, etc.)
  3. Requirement R-40: The error on the Jacobian of the WCS solution per single CCD image must have a shear <4x10^-4.
  1. A photometric model (i.e. flat fields and sky-subtraction methodology) is in place which produces relative magnitudes between bright stars that are reproducible to <0.01 mag RMS on different exposures taken (a) at different times during a cloudless night (b) on different cloudless nights up to 3 weeks apart.
  2. We have initial measure of what RASICAM outputs are indicative of >0.01 mag degradation of relative photometry across FOV.
  3. Fringe-removing process is in place that leaves no visible traces in SExtractor sky-subtracted output images.
  4. Color terms of DECam vs SDSS Stripe-82 photometry for stars with g-r<0.8 are within 0.01 (??? TBC) of values predicted from synthetic photometry using DECal data and atmospheric models.
  5. Color terms are observed to be stable night-to-night to within ? (Y-band: ?)
  6. Observations of BD+17 are taken in all filters, errors <0.01 on instrumental magnitudes (including errors in shutter timing, non-linearity, and photometry).
  7. Regress inferred zero points on clear-sky exposures of Stripe 82 against seeing as a check on stellar photometry algorithms.

PSF models: (Jiangang, Gary, ...)

The DECam focus and alignment are determined by the donut analysis and BCam. From the image itself, the ultimate calibration of the focus and alignment comes from the distribution of the PSFs across the focal plane. Based on the optics model, as well as some seeing variations, we can build a model for the PSF distribution given different tilt/shift/defocus. For a given image taken after the focus and alignment are set according to the donut analysis and BCam result, we will measure the moments of the PSF by the following steps:

1). The second and third moments of the PSF, altogether 7 quantities. When we do the measurement, we will use Gaussian weights with FWHM = 0.55 and 1.1 arcsec, respectively. As a result, each PSF will have 2 x 7 = 14 measured quantities.

2). On each CCD, we choose ~20 PSFs. Each of the 20 PSFs has the above-mentioned 14 quantities measured. For each of the 14 quantities, we take the median value over the 20 PSFs. As a result, each CCD will have 14 quantities to be analyzed.

3). On the focal plane, each of the 14 quantities from every CCD will be analyzed by fitting to Zernike polynomials. That is, for the second moment M_xx, each CCD has one M_xx that is the median over all of its PSFs. Then, across the focal plane, there are about 62 such M_xx values, one per CCD. We fit these 62 M_xx(x, y) values to Zernike polynomials whose coefficients are denoted a_0, a_1, a_2, ....

4). The coefficients of the Zernike expansion, a_0, a_1, a_2, ..., will be what we are interested in.

5). On the model side, we will generate PSFs by specifying the relevant deformations. Each deformation will have a set of PSFs measured, from which we get the set of Zernike polynomial coefficients as in steps 1)-4). By setting different shift/defocus/tilt values, we can build a mapping between a given defocus/misalignment and the pattern in the Zernike coefficients.

6). By comparing what we obtain with what we expect from the optics model, we can determine whether there is any imperfection in the alignment and focus, which can then be further adjusted with the hexapod.
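Step 3 (fitting per-CCD median moments to Zernike polynomials) can be sketched with a plain least-squares fit. The six-term basis, the toy focal plane, and the injected defocus-like pattern below are illustrative only.

```python
import numpy as np

def zernike_basis(x, y):
    """First six Zernike polynomials (piston, two tilts, defocus, two
    astigmatisms) evaluated at unit-disk focal-plane coordinates."""
    r2 = x**2 + y**2
    return np.column_stack([
        np.ones_like(x),   # Z0 piston
        x,                 # Z1 tilt
        y,                 # Z2 tilt
        2 * r2 - 1,        # Z3 defocus
        x**2 - y**2,       # Z4 astigmatism (0 deg)
        2 * x * y,         # Z5 astigmatism (45 deg)
    ])

def fit_moment_pattern(x, y, moment):
    """Least-squares fit of one per-CCD moment (e.g. the median M_xx of
    each CCD) to the Zernike basis; returns coefficients a_0..a_5."""
    coeffs, *_ = np.linalg.lstsq(zernike_basis(x, y), moment, rcond=None)
    return coeffs

# toy focal plane: 62 CCD centers on the unit disk, M_xx with a defocus term
rng = np.random.default_rng(6)
theta = rng.uniform(0, 2 * np.pi, 62)
r = np.sqrt(rng.uniform(0, 1, 62)) * 0.95
x, y = r * np.cos(theta), r * np.sin(theta)
m_xx = 0.3 + 0.05 * (2 * (x**2 + y**2) - 1) + rng.normal(0, 1e-3, 62)
a = fit_moment_pattern(x, y, m_xx)
print(a[3])   # close to the injected 0.05 defocus coefficient
```

The same fit, applied to all 14 moment quantities, yields the coefficient pattern that step 5 compares against the optics-model mapping.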

Supernova: (John, ...)

Assess performance of subtraction pipeline and false-alarm rates, to reach operational performance level.

See John's document

Photo-z: (Huan, ...)

Assess photo-z performance in fields of spectroscopic surveys

Test Photoz-1: Derive photo-z solution from full-depth imaging of VVDS training set field.

  • Dataset: grizY imaging, to full DES main survey depth or better, of VVDS-Deep 2hr field, plus associated DESDM processed coadded images and SExtractor catalogs with model fitting photometry and detmodel colors.
  • Tools: existing DESDM neural network photo-z module
  • Photoz-1-1) Derive neural network photo-z solution using grizY DESDM coadd catalog outputs and VVDS Deep spectroscopic redshifts.
  • Photoz-1-2) Apply photo-z solution and add photo-z's and photo-z errors to coadd catalog.
Comments: - VVDS-Deep provides the deepest available training set data.
- The VVDS-Deep CDFS field is an alternative, but not as good as the 2hr field due to significantly fewer redshifts.
- Both fields are part of the DES SN area, so imaging to deeper than main survey depths is likely, and useful for deriving a deeper photo-z solution for SN host galaxy purposes.
- Other photo-z methods, as in the DC6B Photo-z Challenge, may also be applied, in addition to the DESDM neural network photo-z module.

Test Photoz-2: Check statistics of photo-z's derived in Photoz-1 test.

  • Dataset: Photo-z output catalog from Photoz-1 test.
  • Tools: existing IDL scripts employed in DC6B Photo-z Challenge
  • Photoz-2-1) Calculate and plot mean bias vs. photo-z in bins of width 0.1.
  • Photoz-2-2) Calculate and plot photo-z sigma and sigma_68 (68 percentile error) vs. photo-z in bins of width 0.1, and check that overall sigma_68 < 0.12 to test science requirement R-8.
  • Photoz-2-3) Calculate and plot 2-sigma and 3-sigma outlier fractions vs. photo-z in bins of width 0.1, and check against requirement R-23 that the 2-sigma fraction < 0.1 and the 3-sigma fraction < 0.015.
Comments: - Additional DC6B Photo-z Challenge tests, based on requirements R-22 and R-23 on uncertainties in photo-z bias, sigma, and outlier fractions, are not really testable given the limited SV data set. These will instead be examined later as more DES data are accumulated; those tests will also require additional spectroscopic follow-up to compile larger training sets.
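The Photoz-2 statistics can be sketched as follows (toy Gaussian photo-z errors; the column order and binning are illustrative, not the DC6B scripts themselves):

```python
import numpy as np

def photoz_metrics(z_spec, z_phot, edges):
    """Per photo-z bin: mean bias, sigma, sigma_68, and the 2-sigma /
    3-sigma outlier fractions of dz = z_phot - z_spec."""
    dz = z_phot - z_spec
    rows = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        d = dz[(z_phot >= lo) & (z_phot < hi)]
        if d.size == 0:
            rows.append((np.nan,) * 5)
            continue
        sigma = d.std()
        s68 = np.percentile(np.abs(d - np.median(d)), 68)
        f2 = np.mean(np.abs(d - d.mean()) > 2 * sigma)
        f3 = np.mean(np.abs(d - d.mean()) > 3 * sigma)
        rows.append((d.mean(), sigma, s68, f2, f3))
    return np.array(rows)

rng = np.random.default_rng(7)
z_spec = rng.uniform(0.1, 1.3, 20_000)
z_phot = z_spec + rng.normal(0, 0.08, z_spec.size)   # toy photo-z scatter
m = photoz_metrics(z_spec, z_phot, np.arange(0.1, 1.4, 0.1))
print(np.nanmean(m[:, 2]) < 0.12)    # R-8: sigma_68 < 0.12  -> True here
print(np.nanmean(m[:, 3]) < 0.10)    # R-23: 2-sigma fraction -> True
print(np.nanmean(m[:, 4]) < 0.015)   # R-23: 3-sigma fraction -> True
```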

Mask: (Nacho, ...)

Test Mask-1: Saturated star mask
Dataset: Single 1/2 night images and catalog (single-epoch), Mangle star-mask, Mangle depth-mask
Tools: TBD, Mangle tools, cutout tool, matching script

  • Mask-1-1) Use a very bright star catalog (TBD which) and inspect images to check that they have been masked out, up to the point of eliminating diffraction spikes. All stars should have been masked out.
  • Mask-1-2) Match very bright star catalog with star-mask polygon positions. All stars should be matched (i.e. all stars are included in the mask).
  • Mask-1-3) Match star-mask polygon positions with the very bright star catalog. Unmatched polygons should cover an area <1% of the total area for the night (i.e. all polygons match a known star).
  • Mask-1-4) Match object catalog with star-mask polygons. <1% objects (roughly <1% area) should be present in the star-mask polygon areas.

Comments: Test Mask-1-3 is not strictly needed: if the polygons mask a 'normal' area, as long as that is accounted for in the catalogs and is not a significant fraction of the survey, we are all right. But it can give information on how well we understand the mask behavior.

Test Mask-2: Survey area must be known to 1% (partially covers R-26)
Dataset: Single 1/2 night images and catalog (single-epoch), Mangle footprint-mask
Tools: TBD, Mangle tools, cutout tool, matching script

Pre-req: Check visually, using the catalog scatter plot and image coverage, whether the footprint-mask and the catalog have roughly similar footprints.
Pre-req: Object catalog should be for bright, star-like objects. E.g. MAG < 19 and a selection in radius a la Weak Lensing or using CLASS_STAR.
  • Mask-2-1) Use Mangle tool polyid or similar matching script and check that more than 99% of the star-masked objects are inside the footprint-mask.
  • Mask-2-2) Create a dense random sample using the footprint-mask. Match this pseudo-catalog with the object catalog. Make a scatter plot of unmatched random objects. The areas with unmatched objects must amount to less than 1% of the total area.
    Comment: The total areas not passing the tests should be less than 1%. One could consider making the cut harsher in the individual tests.

Test Mask-3: The limiting magnitude over each homogeneous patch of the survey known to 0.1 magnitudes (partially covers R-26)
Dataset: Coadded catalog for the central region of a surveyed area (1 sq.deg. at least), Mangle depth-mask
Tools: TBD, Mangle tools, python scripts

  • Mask-3-1) Make a histogram of MAGERR using a cut |MAG-MAGLIM|<0.1 and check that the mean is compatible with 0.11 (~S/N=10). MAG is the 2-arcsec circular aperture magnitude, MAGERR is the estimated error in the magnitude, and MAGLIM is the value of the depth-mask at that position.
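The 0.11 figure follows from standard error propagation, dm = (2.5/ln 10) x (dF/F), so S/N = 10 corresponds to about 0.109 mag:

```python
import math

def mag_err(snr):
    """Magnitude error for a given flux S/N: dm = 2.5/ln(10) * (dF/F)."""
    return 2.5 / math.log(10) / snr

print(round(mag_err(10), 3))   # 0.109, the ~0.11 mag quoted for S/N = 10
```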

Observers’ tools: (Brenna, Klaus, Dara, Ken, Eric, Brian)

A number of software tools have been developed for the DECam project. Most of these packages will have already played a major role during commissioning and SV. Our goal is to have all of these tools fully operational by the end of the SV period. In particular this includes:

  • Quick Reduce (QR) is fully operational with the expected throughput and all quality assurance algorithms have been implemented.
  • Tools such as Jiangang's and Reina's PSF assessment have been "enhanced" so that they can be routinely used by the observers without expert help.
  • The SISPI user interfaces have been refined based on the experience gained during commissioning and SV.
  • The link between DESDM First Cut and the survey table on the mountain has been automated.
  • Define changes to observers’ tools (incl. Quick Reduce) to improve operational efficiency during the first observing season.

Measurements of the sky background as a function of time from twilight, zenith distance, moon distance, and moon phase would be valuable to determine a set of constraints for obstac (based on the signal-to-noise requirements). Much of the data needed to calculate these constraints (although probably not all) will be obtained in the course of other SV (and commissioning) observations. To efficiently use these data, it would be useful to store the following information in a database about each image:

  • date and time
  • exposure time
  • filter
  • RA, DEC
  • weather conditions (at least a photometric/nonphotometric flag)
  • flag if the data are known to be bad for some reason

The following quantities could be stored as well, or calculated after the fact from that information:

  • zenith distance
  • time relative to astronomical twilight end/start (or sunset/rise)
  • moon phase
  • moon angle from field
  • moon zenith distance

Some of this information will already be available to the observers, and the rest would be useful to display in real time in the control room.
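The derived quantities above are simple spherical-astronomy calculations. As a sketch, zenith distance from hour angle and declination, plus plane-parallel airmass, using an approximate CTIO latitude (the constant and function names are illustrative):

```python
import math

CTIO_LAT = math.radians(-30.17)   # Cerro Tololo latitude, approximate

def zenith_distance(ha_deg, dec_deg, lat=CTIO_LAT):
    """Zenith distance (deg) via the spherical cosine rule:
    cos z = sin(lat) sin(dec) + cos(lat) cos(dec) cos(HA)."""
    ha, dec = math.radians(ha_deg), math.radians(dec_deg)
    cos_z = (math.sin(lat) * math.sin(dec)
             + math.cos(lat) * math.cos(dec) * math.cos(ha))
    return math.degrees(math.acos(cos_z))

def airmass(z_deg):
    """Plane-parallel (sec z) approximation, adequate for z < ~70 deg."""
    return 1.0 / math.cos(math.radians(z_deg))

z = zenith_distance(ha_deg=30.0, dec_deg=-45.0)
print(round(z, 1), round(airmass(z), 2))   # roughly 27.8 deg, airmass 1.13
```

Moon phase and moon separation would additionally need an ephemeris; those are best taken from the TCS or an ephemeris library rather than recomputed by hand.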