Daniel Gruen, 02/25/2013 05:33 PM
From DESSV Plan: Map of bad pixels, defects is stable, as seen from dome flats and median sky flats aka DECam TD-16
Explanation: Bad pixels are pixels whose counts are non-linear in the surface brightness of the sky at their position. The hope is that they are stable over time, in the sense that a bad pixel stays "bad" while no new bad pixels appear. However, bad pixels are likely to change in "badness" with thermal cycles (Juan Estrada, private comm.), so some degree of monitoring is required lest they contaminate neighboring pixels.
* *Black spots:* 10 flat exposures were median-combined to eliminate cosmic rays. Any pixel with 20% less response than the median is counted as a black spot.
* *Non-linear:* two sets of 10 exposures, at 50000 e and at 80000 e, were median-combined, and the two resulting images were divided. Any pixel whose response differs by more than 1% is flagged as non-linear; a 4 sigma detection is required.
* *Hot pixels:* 10 dark 400-second exposures were median-combined. Pixels with more than 6300 e/pixel/hour are counted as hot.

All three masks are combined with a logical OR to form the final mask.
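The three cuts and their OR-combination can be sketched as follows. This is a toy numpy example on synthetic stacks; the array size, noise levels and planted defects are illustrative, not DECam data:

```python
import numpy as np

rng = np.random.default_rng(0)
ny, nx = 64, 64  # toy chip section; a real DECam chip is 2048 x 4096

# Synthetic stand-ins for the median-combined calibration stacks.
flat_50k = rng.normal(50000.0, 50.0, (ny, nx))   # median of 10 flats at ~50000 e
flat_80k = flat_50k * 1.6                        # perfectly linear response ...
flat_80k[5, 5] *= 0.97                           # ... except one non-linear pixel
flat_50k[10, 10] = 30000.0                       # one black spot (>20% low)
dark_stack = rng.normal(100.0, 5.0, (ny, nx))    # median of 10 x 400 s darks, in e
dark_stack[20, 20] = 2000.0                      # one hot pixel

# Black spots: >20% below the median response of the 50000 e stack.
black = flat_50k < 0.8 * np.median(flat_50k)

# Non-linear: ratio of the two stacks deviates by >1% from the median ratio.
ratio = flat_80k / flat_50k
nonlinear = np.abs(ratio / np.median(ratio) - 1.0) > 0.01

# Hot: more than 6300 e/pixel/hour, scaled up from the 400 s dark stack.
hot = dark_stack * (3600.0 / 400.0) > 6300.0

# The final mask is the logical OR of the three.
bad = black | nonlinear | hot
print(int(bad.sum()))  # -> 3 (the three planted defects)
```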
We require *sets of dome flat fields at 3 different flux levels*: (1) the shortest reasonable exposure time at which shutter effects are negligible (below the 1% level), (2) approx. 50000 e, and (3) approx. 80000 e, to cover non-linearity in the typical low-surface-brightness regime and in the almost-saturated regime. Of sets (2) and (3) we require 10 flats each; of set (1) we require enough frames for the Poisson rms of the stack to be below 0.2%. The frames should be *bias-subtracted*. They must be *taken at different times during the commissioning procedure*, and subsequently at regular intervals, to check for stability over time. They should *span the DES filters*, since one could imagine a spectral dependence of the non-linearity. Additionally, we require a *"known bad pixel" mask* for each chip from the DECam Imager Tests (https://sites.google.com/site/decamtroubleshooting/8-measurement) to compare with.
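As a back-of-the-envelope for the size of set (1): the flux level of the short exposures is not fixed above, so assuming for illustration ~5000 e per pixel per frame, the number of frames needed for a 0.2% Poisson rms in a mean stack follows from the relative rms being 1/sqrt(N * counts):

```python
import math

# Illustrative only: the flux level of set (1) is not specified;
# assume ~5000 e per pixel per frame as a placeholder.
counts_per_frame = 5000.0
target_rms = 0.002  # 0.2% relative Poisson rms in the stack

# Relative Poisson rms of a mean of N frames: 1 / sqrt(N * counts).
n_frames = math.ceil(1.0 / (target_rms**2 * counts_per_frame))
print(n_frames)  # -> 50
```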
Note that we need sets of dome flats with different exposure times taken *without moving the telescope* or changing the light source in between, so that the gradient stays the same in all images. Small changes in the light-source luminosity will be corrected for. At the lower exposure-time limit we might discover non-linearities due to shutter effects, so we should make sure these are below 1% of the flux, or else we will not be able to discriminate shutter effects from bad pixels.
We also need a set of *10 dark 400-second exposures* to detect hot pixels independently of their non-linearity.
for each filter, for each chip:
* for each day available:
** for each of the 3 flux levels:
*** calculate median flux of each of the frames
*** multiply each frame by the quotient of the desired over the measured median flux level, to bring them all to the same level
*** median-stack frames from the same chip to generate master flats
** for each pair of flux levels (1,2), (2,3)
*** divide the two master stacks
*** note pixels where the ratio deviates from the quotient of the desired flux levels (i.e., which show non-linearity) by more than the expected statistical uncertainty of the master flats
*** compare the list of noted pixels to the list of known bad pixels: are there any new ones?
*** save the list for later use
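The per-day procedure above (rescale, median-stack, divide, flag) could be sketched as follows. The helper names and the simple Poisson error model are illustrative assumptions, not pipeline code:

```python
import numpy as np

def master_flat(frames, target_level):
    """Rescale each frame to the target flux level, then median-stack."""
    scaled = [f * (target_level / np.median(f)) for f in frames]
    return np.median(scaled, axis=0)

def flag_nonlinear(master_lo, master_hi, level_lo, level_hi, n_sigma=4.0):
    """Flag pixels whose flux ratio deviates from the expected level
    quotient by more than n_sigma times the statistical uncertainty."""
    ratio = master_hi / master_lo
    expected = level_hi / level_lo
    # Poisson uncertainty of the ratio, propagated from the two stacks
    # (median of 10 frames approximated as a mean for the error budget).
    n_frames = 10
    sigma = expected * np.sqrt(1.0 / (n_frames * level_lo)
                               + 1.0 / (n_frames * level_hi))
    return np.abs(ratio - expected) > n_sigma * sigma

# Synthetic demo: 10 Poisson flats at each of the two upper flux levels.
rng = np.random.default_rng(1)
lo, hi = 50000.0, 80000.0
frames_lo = [rng.poisson(lo, (32, 32)).astype(float) for _ in range(10)]
frames_hi = [rng.poisson(hi, (32, 32)).astype(float) for _ in range(10)]

master_lo = master_flat(frames_lo, lo)
master_hi = master_flat(frames_hi, hi)
master_hi[3, 3] *= 0.97  # plant a 3% non-linearity for the demo
mask = flag_nonlinear(master_lo, master_hi, lo, hi)
print(bool(mask[3, 3]))  # the planted pixel is flagged
```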
After a few epochs of observation, separated in time, have been taken:
for each pair of flux levels (1,2), (2,3)
* compare the number of noted outliers for each date -> is there an increase?
* compare the bad pixel map created for each date [manually, by blinking the masks] -> is there any visible development?
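The date-to-date comparison might look like this minimal sketch; `mask_stability` and the toy masks are invented for illustration:

```python
import numpy as np

def mask_stability(masks_by_date):
    """Compare bad pixel masks taken on different dates: report the
    outlier count per date and how many pixels changed between
    consecutive dates."""
    dates = sorted(masks_by_date)
    counts = {d: int(masks_by_date[d].sum()) for d in dates}
    changes = {}
    for prev, curr in zip(dates, dates[1:]):
        changed = np.logical_xor(masks_by_date[prev], masks_by_date[curr])
        changes[(prev, curr)] = int(changed.sum())
    return counts, changes

# Toy masks for two dates: one new bad pixel appears in the second epoch.
m1 = np.zeros((8, 8), dtype=bool)
m1[1, 1] = True
m2 = m1.copy()
m2[2, 2] = True
counts, changes = mask_stability({"2013-01-15": m1, "2013-02-10": m2})
print(counts, changes)
# -> {'2013-01-15': 1, '2013-02-10': 2} {('2013-01-15', '2013-02-10'): 1}
```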
If we want to monitor the change of these bad pixels over time, we could save the FITS image or simply a list of divided flux levels (2,3) for each date; this will parametrize the non-linearity.
h3. Cold pixels
Just use the median stack of the 10 flats at 50000 e.
h3. Hot pixels
Just get a median stack of the dark frames.
h3. Noise properties
Good pixels should show proper Poisson-noise variation from one flat-field exposure to another. This can be tested by calculating, for each pixel, the variance of its rescaled counts across the different frames.
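A minimal version of this variance test, on synthetic Poisson frames with one planted noisy pixel; the error model for the sample variance assumes approximately Gaussian counts, which holds well at these flux levels:

```python
import numpy as np

rng = np.random.default_rng(2)
n_frames, level = 20, 50000.0

# Rescaled flat-field frames: good pixels vary with Poisson noise only.
frames = rng.poisson(level, (n_frames, 32, 32)).astype(float)
# Plant one pixel with 5x the expected noise amplitude.
frames[:, 4, 4] = level + rng.normal(0.0, 5 * np.sqrt(level), n_frames)

# Per-pixel sample variance across frames vs. the Poisson expectation.
var = frames.var(axis=0, ddof=1)
expected = level  # Poisson: variance equals the mean count
# Uncertainty of a sample variance from n draws: roughly var * sqrt(2/(n-1)).
z = (var - expected) / (expected * np.sqrt(2.0 / (n_frames - 1)))
noisy = z > 6.0
print(bool(noisy[4, 4]))  # the planted pixel is flagged
```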
h3. Non-linearity
A deviation of more than 1% from the expected flux-level quotient in either of the (1,2), (2,3) quotient frames will flag a pixel as non-linear. Note that the statistical uncertainty of the stacks should be of the order of 0.1%.
h3. Cold pixels
20% less than the median response in the stack of 10 flats at 50000 e will flag a pixel as cold.
h3. Hot pixels
More than 6300 e/pixel/hour in the dark frame stack will flag a pixel as hot.
h3. Noise properties
If the variance is 6 sigma off the expected Poisson variance, the pixel should be considered bad. This will produce approx. one false positive over the whole array.
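The "approx. one false positive" figure can be checked directly: with the 62 science CCDs of 2048 x 4096 pixels on the DECam focal plane and a two-sided Gaussian 6 sigma tail probability, the expectation indeed comes out close to one:

```python
import math

# DECam focal plane: 62 science CCDs of 2048 x 4096 pixels.
n_pixels = 62 * 2048 * 4096

# Two-sided Gaussian tail probability beyond 6 sigma: 2 * (1 - Phi(6)).
p_tail = math.erfc(6.0 / math.sqrt(2.0))

expected_false_positives = n_pixels * p_tail
print(round(expected_false_positives, 2))  # -> 1.03
```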
If significant instability is seen (a change of more than 1% in the bad pixel count over the commissioning / SV time, or more than a 10% change in the non-linearity of single pixels), that might be a problem.
We have verified that:
* the cold pixel mask is reasonably stable over time (PASSED)
* the hot pixel mask shows instability; some columns fluctuate between above and below the threshold at a ~100 ADU level, even in the bias frames. This is something the bad pixel masking is not yet designed for, and it is potentially harmful (*NOT PASS*)
* there are additional defects of two adjacent rows, one with a several-percent flux excess and one with a similar flux decrement (see also DES-WL issue 3500), near the edges of many/all chips, dubbed _funky columns_. They do not appear in the bias, flat or dark frames and are therefore currently neither masked nor corrected for (*NOT PASS*). Their absence from these calibration frames implies that they are flux-dependent defects (neither exposure-time dependent nor constant).
The upshot, therefore, is that the current bad pixel masking does not catch all present classes of bad pixels, and we need to find software solutions for that. This requirement will pass once the hot pixel variability and the funky columns are accounted for, and it is verified that no other unmasked defects remain.