# Suggestions for galaxy-count extinction calibration

The goal is to determine the mean galaxy counts per CCD as a function of (filter, seeing, sky brightness, extinction) and then invert this to estimate the extinction given measurements of (galaxy counts, seeing, sky brightness). As a byproduct, we should also be able to identify CCDs with highly unlikely galaxy counts (too high or too low), which would indicate either a hardware failure or a bright object contaminating the image.

Based on discussions with Basilio & others, here are suggestions for steps in this project:

- Choose a standard SExtractor configuration and a catalog cut used to identify galaxies. The cut should be on some signal-to-noise ratio (e.g. the error in one of the magnitudes) rather than an absolute magnitude cut since the whole point is to identify images with extinction before we have set magnitude zeropoints. It would be best if the SExtractor outputs can be reproduced from the First Cut catalogs at DESDM.
- Using images from photometric nights at low airmass, measure the mean and distribution of galaxy counts per CCD as a function of (seeing, e.g. FLUX_RADIUS; sky brightness) at the extinction factor f=1, which we define to be one airmass in clear conditions.
- The galaxy-count calibration can be extended to brighter skies by simply adding noise to the image and re-running SExtractor (adding Gaussian noise with variance V ADU^2 is equivalent to observing with a sky that is Vg ADU per pixel brighter, where g is the gain).
- The extension of the galaxy-count prediction to atmospheric transmission f<1 can't be done with cloudy data, because we don't know the extinction for any given exposure (unless we have post-facto photometric solutions in hand). I expect it is better to artificially construct extincted images as follows:
  - Take a cloudless, low-airmass image and multiply it by f to reduce the signal.
  - We have to be a little careful with the noise. If a pixel had a signal of S ADU, it will now have fS ADU in it. We expect Poisson noise with variance fS/g ADU^2 in this pixel, but after scaling the image we have reduced the variance to f^2 S/g ADU^2. So we need to add noise with variance fS/g * (1-f) ADU^2 to the pixel in order to make its signal and noise consistent. If the read noise R^2 was significant compared to S/g, then we also need to restore the suppressed read noise by adding variance R^2(1-f^2) to each pixel.
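The scale-then-add-noise recipe above can be sketched as follows. This is a minimal illustration: the function name and signature are my own, not an existing pipeline routine, and the image is assumed to be a raw signal map S in ADU.

```python
import numpy as np

def simulate_extinction(image_adu, f, gain, read_noise, rng=None):
    """Scale an image by transmission f and restore noise consistency.

    Illustrative sketch (not an existing pipeline routine).
    image_adu  : 2-D pixel values S in ADU
    f          : atmospheric transmission, 0 < f <= 1
    gain       : g in e-/ADU
    read_noise : R in ADU (rms per pixel)
    """
    rng = np.random.default_rng() if rng is None else rng
    scaled = f * image_adu
    # Poisson variance after scaling is f^2 S/g, but a truly extincted
    # exposure would have f S/g, so add the deficit: f S/g * (1 - f).
    poisson_var = np.clip(f * image_adu / gain * (1.0 - f), 0.0, None)
    # Read-noise variance was suppressed from R^2 to f^2 R^2;
    # restore it by adding variance R^2 (1 - f^2).
    read_var = read_noise**2 * (1.0 - f**2)
    return scaled + rng.normal(0.0, np.sqrt(poisson_var + read_var))
```

For f=1 the added variance vanishes and the image is returned unchanged, as it should be.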

Since galaxy counts are well described as power-law functions of the limiting magnitude, I would expect that in a given filter you'll find the counts well approximated by a power-law function of f.
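As a sketch of that idea, one could fit the power law in log-log space and invert it to estimate f from an observed count. The count values below are made up for illustration, not real SV measurements.

```python
import numpy as np

# Hypothetical mean galaxy counts per CCD at several simulated
# transmission factors f (numbers made up for illustration).
f_grid = np.array([1.0, 0.8, 0.6, 0.4])
ngal = np.array([400.0, 290.0, 190.0, 100.0])

# Fit Ngal ~ A * f**alpha as a straight line in log-log space.
alpha, log_a = np.polyfit(np.log(f_grid), np.log(ngal), 1)

def transmission_from_counts(n_obs):
    """Invert the fitted power law to estimate f from observed counts."""
    return (n_obs / np.exp(log_a)) ** (1.0 / alpha)
```

In practice one would fit a separate (alpha, A) per filter and per (seeing, sky brightness) bin.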

# Alternative idea that uses the QR SV database

## Procedure to estimate atmospheric transmission based on SV data

The notation used here is the same as in the expression used by Gary for the S/N and the effective exposure time Teff.

1 - Using "clear nights" (cloudless and stable according to the site monitoring tools), and for each filter separately, build a table of Ngal(T,FWHM,b,X), where Ngal is the total number of galaxies with S/N >= 10 in a given exposure (all CCDs combined).

2 - Optionally, also build the same tables for other S/N thresholds, e.g. S/N=3.

3 - Model the Ngal dependence on X at fixed (T,FWHM,b).

4 - Based on item 3, correct Ngal to X=1 (zenith pointing), again for each filter separately.

5 - Convert the Ngal table into a mag_lim table using the same procedure that QR uses regularly to determine the limiting magnitude for a given exposure (currently based on the Capak et al curves from Subaru). Remember to match the Capak et al number counts to the area of the full DECam focal plane.

6 - Then for any given exposure, characterized by (T,FWHM,b), use the inferred mag_lim (mlimobs) from QR and match it to the corresponding table value (mlimtable). We should be able to interpolate (linearly) in this table.

7 - Compute eta=10**[0.4*(mlimobs - mlimtable)] as an estimate of the fractional atmospheric transmission. This should measure the transmission regardless of whether the light loss is due to clouds or to airmass (remember, our tabled Ngal are meant to correspond to X=1).

8 - Use the background limited model for S/N proposed by G. Bernstein to compute the effective exposure time.
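Steps 7-8 can be sketched as below. The exact form of Gary's Teff expression is not reproduced here; the sketch assumes only the generic background-limited scaling S/N ~ eta*sqrt(T)/(FWHM*sqrt(b)), and the fiducial values and function names are placeholders of my own.

```python
def eta_from_maglim(mlim_obs, mlim_table):
    """Step 7: transmission estimate from the limiting-magnitude offset."""
    return 10.0 ** (0.4 * (mlim_obs - mlim_table))

def t_eff(t_exp, eta, fwhm, sky_b, fwhm_fid=0.9, b_fid=1.0):
    """Step 8 (sketch): background-limited effective exposure time.

    If S/N ~ eta * sqrt(T) / (FWHM * sqrt(b)) in the background-limited
    regime, the exposure time under fiducial conditions that gives the
    same S/N is Teff = T * eta**2 * (fwhm_fid/fwhm)**2 * (b_fid/b).
    The fiducial values here are placeholders, not DES-official numbers.
    """
    return t_exp * eta**2 * (fwhm_fid / fwhm) ** 2 * (b_fid / sky_b)
```

Note that eta=1 (mlimobs equal to the table value) gives Teff=T under fiducial seeing and sky, and a 0.75 mag shallower limit gives eta of about 0.5, i.e. a quarter of the effective exposure time.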

The assumptions that go into this procedure are:

A - the true galaxy number counts are uniform over 2 sq deg scales on the sky.

B - the chosen parameters assumed to be governing Ngal are the most important ones.

C - our model to correct Ngal to X=1 is correct.

D - the chosen S/N value(s) to estimate Ngal is(are) low enough to allow step 8, which assumes the background-limited case.