## Suggestions for galaxy-count extinction calibration » History » Version 4

*Basilio Santiago, 05/09/2013 06:10 AM*

h1. Suggestions for galaxy-count extinction calibration

The goal is to determine the mean galaxy counts per CCD as a function of (filter, seeing, sky brightness, extinction) and then invert this to estimate the extinction given measurements of (galaxy counts, seeing, sky brightness). As a byproduct, we can also identify CCDs with highly unlikely galaxy counts (too high or too low), which would indicate either a hardware failure or a bright object contaminating the image.

Based on discussions with Basilio and others, here are suggested steps for this project:

# Choose a standard SExtractor configuration and a catalog cut used to identify galaxies. The cut should be on a signal-to-noise ratio (e.g. the error in one of the magnitudes) rather than an absolute magnitude, since the whole point is to identify extincted images before we have set magnitude zeropoints. It would be best if the SExtractor outputs can be reproduced from the First Cut catalogs at DESDM.
# Selecting images taken on photometric nights at low airmass, characterize the mean and distribution of galaxy counts per CCD as a function of (seeing FLUX_RADIUS, sky brightness) at the extinction factor f=1 that we'll define to be one airmass in clear conditions.
# The galaxy-count calibration can be extended to brighter skies by simply adding noise to the image and re-running it (adding Gaussian noise with variance V ADU^2 is equivalent to observing with a sky that is Vg ADU per pixel brighter, where g is the gain).
# The extension of the galaxy-count prediction to atmospheric transmission f<1 can't be done with cloudy data, because we don't know the extinction for any given exposure (unless we have post-facto photometric solutions in hand). I expect it is better to construct artificially extincted images as follows:
## Take a cloudless, low-airmass image and multiply it by f to reduce the signal.
## We have to be a little careful with the noise. If a pixel had a signal of S ADU, it will now have fS ADU in it. We expect Poisson noise with variance fS/g ADU^2 in this pixel, but after scaling the image we have reduced the variance to f^2 S/g ADU^2. So we need to add noise with variance (fS/g)(1-f) ADU^2 to the pixel for its signal and noise to be consistent. If the read noise R^2 was significant compared to S/g, then we also need to restore the suppressed read noise by adding a further variance R^2(1-f^2) to each pixel.
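The noise bookkeeping in the last two sub-steps can be sketched as follows. This is a minimal illustration, not pipeline code; the function name and the example gain and read-noise values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_extincted_image(image_adu, f, gain, read_noise_adu):
    """Scale a cloudless image by transmission f, then add the noise
    needed to keep signal and noise statistically consistent.

    For a pixel with raw signal S ADU, the scaled image holds fS ADU
    but its Poisson variance has dropped from S/g to f^2 S/g ADU^2,
    while the target is fS/g; the deficit is (fS/g)(1 - f).  The read
    noise variance R^2 is likewise suppressed to f^2 R^2, so a further
    R^2 (1 - f^2) must be added.
    """
    scaled = f * image_adu
    poisson_deficit = (f * image_adu / gain) * (1.0 - f)
    read_deficit = read_noise_adu**2 * (1.0 - f**2)
    var = np.clip(poisson_deficit, 0.0, None) + read_deficit
    return scaled + rng.normal(0.0, np.sqrt(var))
```

For example, on a constant 1000 ADU frame with f=0.5, g=4 and R=5 ADU, the added variance is 62.5 + 18.75 ≈ 81 ADU^2, i.e. about 9 ADU rms per pixel.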

Since galaxy counts are well described as power-law functions of the limiting magnitude, I would expect that in a given filter you'll find the counts well approximated by a power-law function of f.
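As a toy illustration of fitting such a power law N(f) = N1 * f^alpha, using entirely synthetic counts (every number below is invented for the sketch):

```python
import numpy as np

# Hypothetical galaxy counts measured at several transmission factors f
f = np.array([1.0, 0.8, 0.6, 0.4])
ngal = np.array([5000.0, 3600.0, 2300.0, 1150.0])  # illustrative only

# Fit N(f) = N1 * f**alpha by linear least squares in log-log space
alpha, log_n1 = np.polyfit(np.log(f), np.log(ngal), 1)
n1 = np.exp(log_n1)
```

Once alpha is in hand, an observed count ratio N/N1 can be inverted to f = (N/N1)**(1/alpha).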

h1. Alternative idea, using the QR SV database

h2. Procedure to estimate atmospheric transmission based on SV data

The notation used here is the same as in the expression used by Gary for the S/N and the effective exposure time Teff.

1 - Using "clear nights" (cloudless and stable according to the site monitoring tools), and for each filter separately, build a table Ngal(T, FWHM, b, X), where Ngal is the total number of galaxies with S/N >= 10 in a given exposure (all CCDs put together).
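A minimal sketch of building such a table by binning per-exposure records with NumPy. The records, bin edges, and column layout are all invented for illustration (and only two of the four parameters are binned, to keep the toy example small):

```python
import numpy as np

# Hypothetical per-exposure records from clear nights, one filter:
# columns are exposure time T [s], seeing FWHM [arcsec],
# sky brightness b, airmass X, and Ngal (S/N >= 10)
data = np.array([
    [90.0, 0.9, 1.0, 1.0, 5200],
    [90.0, 1.1, 1.0, 1.0, 4400],
    [90.0, 0.9, 1.0, 1.3, 4900],
    [90.0, 1.1, 1.0, 1.3, 4100],
])

fwhm_bins = np.array([0.8, 1.0, 1.2])
x_bins = np.array([0.95, 1.15, 1.35])

# Mean Ngal per (FWHM, X) cell -- T and b are held fixed in this toy case
table = np.full((len(fwhm_bins) - 1, len(x_bins) - 1), np.nan)
for i in range(len(fwhm_bins) - 1):
    for j in range(len(x_bins) - 1):
        sel = ((data[:, 1] >= fwhm_bins[i]) & (data[:, 1] < fwhm_bins[i + 1])
               & (data[:, 3] >= x_bins[j]) & (data[:, 3] < x_bins[j + 1]))
        if sel.any():
            table[i, j] = data[sel, 4].mean()
```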

2 - Perhaps also build the same tables for other S/N thresholds, e.g. S/N = 3.

3 - Model the Ngal dependence on X for fixed (T, FWHM, b).

4 - Based on item 3, correct Ngal to X=1 (zenith pointing), again for each filter separately.

5 - Convert the Ngal table into a mag_lim table using the same procedure that QR regularly uses to determine the limiting magnitude of a given exposure (currently based on the Capak et al. curves from Subaru). Remember to match the Capak et al. number counts to the area of the full DECam focal plane.

6 - Then, for any given exposure characterized by (T, FWHM, b), take the inferred mag_lim (mlimobs) from QR and match it to the corresponding table value (mlimtable). We should be able to interpolate linearly in this table.

7 - Compute eta = 10**[0.4*(mlimobs - mlimtable)] as a measure of the fractional atmospheric transmission, eta. This should measure the transmission regardless of whether the light loss is due to clouds or to airmass (remember, our tabulated Ngal are meant to correspond to X=1).
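Steps 6 and 7 combined, as a small sketch. A one-dimensional seeing grid stands in for the full (T, FWHM, b) table, and all names are hypothetical:

```python
import numpy as np

def transmission_eta(mlim_obs, seeing_grid, mlim_table, fwhm):
    """Linearly interpolate the X=1 limiting-magnitude table at the
    exposure's seeing, then convert the magnitude shortfall into a
    transmission fraction eta = 10**(0.4 * (mlim_obs - mlim_table))."""
    mlim_tab = np.interp(fwhm, seeing_grid, mlim_table)
    return 10.0 ** (0.4 * (mlim_obs - mlim_tab))
```

An exposure whose observed mag_lim falls 0.75 mag short of the table value gives eta ≈ 0.5, i.e. roughly half the light transmitted; an exposure matching the table gives eta = 1.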

8 - Use the background-limited model for S/N proposed by G. Bernstein to compute the effective exposure time.
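The S/N model itself is not reproduced in this document, so the following is only one common background-limited scaling, stated as an assumption: signal scales as eta, noise as sqrt(sky x seeing area), so (S/N)^2 -- and hence the equivalent exposure time -- scales as eta^2 / (FWHM^2 b) relative to reference conditions. The reference values below are illustrative, not adopted ones:

```python
def effective_exposure_time(t_exp, eta, fwhm, sky_b,
                            fwhm_ref=0.9, b_ref=1.0):
    """Background-limited sketch: rescale the exposure time by
    eta^2 * (FWHM_ref/FWHM)^2 * (b_ref/b), so that reference
    conditions (eta=1, FWHM=fwhm_ref, b=b_ref) return t_exp itself.
    fwhm_ref and b_ref are illustrative assumptions."""
    scale = eta**2 * (fwhm_ref / fwhm) ** 2 * (b_ref / sky_b)
    return t_exp * scale
```

With this form, halving the transmission (eta = 0.5) cuts the effective exposure time by a factor of four.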

The assumptions that go into this procedure are:

A - the true galaxy number counts are uniform over 2 sq deg scales on the sky;

B - the chosen parameters assumed to govern Ngal are the most important ones;

C - our model for correcting Ngal to X=1 is correct;

D - the chosen S/N threshold(s) for estimating Ngal are low enough to allow step 8, which assumes the background-limited case.