Testing Astrometry And Selection on Coadds

What is the impact of Fixed Astrometry in SVA1 Finalcut on the coadd photometry? How about image selection (low tau_effective)? And is any of this related to the problems with missing galaxies in redMaPPer that seem to be solved by shifting from MAG_DETMODEL galaxy colors to MAG_AUTO galaxy colors?

-Eli.

The Data

I am looking at the 13 tiles in the SVA1 SPTE coadd testbed chosen by Erin Sheldon for WL test purposes. These comprise 10 full-depth contiguous fields and 3 "outrigger" fields chosen somewhat arbitrarily. All SVA1 finalcut images from these fields were used as input.

New astrometric solutions were taken from Fixed Astrometry in SVA1 Finalcut currently being used successfully in the WL group.

In addition, Robert Gruendl's FIRSTCUT_EVAL table was used to monitor T_EFF, to look at the impact of cutting out "bad" images that would not be considered survey quality in Y1+ observations. The minimum T_EFF was set at 0.2 for the g band and 0.3 for r, i, z.
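
In code, this cut amounts to something like the following sketch (the per-band thresholds are the ones quoted above; the exposure-list handling and column access are illustrative assumptions, not the actual pipeline code):

    # Hedged sketch of the T_EFF survey-quality cut used for Modes 2 and 3.
    # The thresholds are those quoted above; everything else is illustrative.
    T_EFF_MIN = {'g': 0.2, 'r': 0.3, 'i': 0.3, 'z': 0.3}

    def keep_image(band, t_eff):
        """Return True if an input image passes the minimum T_EFF cut."""
        return t_eff >= T_EFF_MIN[band]

    def clean_inputs(images):
        """Filter a list of (expnum, band, t_eff) tuples, e.g. pulled from
        the FIRSTCUT_EVAL table, down to survey-quality inputs."""
        return [img for img in images if keep_image(img[1], img[2])]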

The Runs

There were four runs performed, with the following code versions and key points:

SVA1 settings (Mode 0)

  • swarp 2.36.2+1
  • sextractor 2.18.10+7
  • psfex 3.15.0+6
  • pixscale = 0.270
  • detection threshold for psf creation: 1.6

Y1A1 settings (Mode 1)

  • swarp 2.36.2+2
  • sextractor 2.18.10+14
  • psfex 3.17.0+6
  • pixscale = 0.263
  • detection threshold for psf creation: 5.0

Y1A1 settings, Clean (Mode 2)

  • Same as above in Mode 1
  • Clean out low T_EFF images before coadd
  • Clean out images with very bad astrometric solutions (to match mode 3)

Y1A1 settings, Clean, Fixed Astrometry (Mode 3)

  • Same as above in Mode 2
  • Use new astrometric solutions

Results

Attached at the bottom of the page is a tar file with a giant pile of plots for all the tiles, if anybody is interested. I will highlight here what I think are the interesting plots that show what's going on. The plots are all labeled "0-1", "1-2", or "2-3" to indicate that they show the delta for objects between (e.g.) mode 0 and mode 1.
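
For concreteness, each delta is just the per-object difference in a given magnitude between two runs, for objects matched by position. A minimal sketch of that bookkeeping (the matching radius, column names, and catalog format here are assumptions, not the actual analysis code):

    import numpy as np

    def match_and_delta(cat_a, cat_b, col='MAG_DETMODEL_R', max_sep_arcsec=1.0):
        """Match objects between two coadd runs by position and return the
        per-object magnitude difference (mode A - mode B) in the given column.
        cat_a, cat_b are structured arrays with RA, DEC (deg) and the column;
        the brute-force nearest-neighbor match is purely illustrative."""
        deltas = []
        for obj in cat_a:
            dra = (cat_b['RA'] - obj['RA']) * np.cos(np.radians(obj['DEC']))
            ddec = cat_b['DEC'] - obj['DEC']
            sep = np.sqrt(dra**2 + ddec**2) * 3600.0  # arcsec
            j = np.argmin(sep)
            if sep[j] < max_sep_arcsec:
                deltas.append(obj[col] - cat_b[col][j])
        return np.array(deltas)

The "0-1", "1-2", and "2-3" plots are then histograms or sky maps of these deltas.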

Mode 0 to Mode 1: The Code Upgrade

Changing from Mode 0 to Mode 1 makes a negligible difference for either stars or galaxies, in AUTO, PSF, or DETMODEL colors. This is good! As a comparison to the tiles below, here are a couple of sample plots showing the change in the given quantity between processing modes.

The blue line is zero offset; the magenta line is the median of the stars/galaxies in the plot.

I believe the scatter is caused by the different pixel scale, which gives different psf sampling. However, the changes in all cases are on the order of ~0.2 sigma (it isn't 1 sigma because the same input data is used, so the two runs are highly covariant).
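
To make the covariance point concrete: if the two runs have the same per-object error sigma and correlation coefficient rho (because they share the same input pixels), the expected scatter of the difference is sigma * sqrt(2 * (1 - rho)), so ~0.2 sigma would correspond to rho ~ 0.98. That rho value is a back-of-the-envelope assumption chosen to reproduce the observed scatter, not a measurement. A tiny sketch of the arithmetic:

    import numpy as np

    def delta_scatter(sigma, rho):
        """Scatter of the difference of two measurements with equal error
        sigma and correlation coefficient rho."""
        return sigma * np.sqrt(2.0 * (1.0 - rho))

    print(delta_scatter(1.0, 0.98))  # ~0.2, i.e. ~0.2 sigma for highly covariant runs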

This is generally the case for all the tiles in the testbed. Therefore, the code upgrade does not make a significant difference to the coadd photometry. Okay!

Mode 1 to Mode 2: Cleaning Bad Images

This was a bit of a shocker, but in retrospect maybe shouldn't be.

First, the tile from above (where the code upgrade was negligible):

It is clear that while the MAG_AUTO colors for both stars and galaxies are mostly unaffected, the MAG_DETMODEL colors have a lot of scatter. Note also that the (nominal) errors for MAG_DETMODEL (especially) as well as MAG_PSF are much smaller than the MAG_AUTO errors, so they should not be changing by this much.

Looking at another tile we see something similar:

Note that this effect is not isolated to r-i colors, though these are the most interesting for galaxy clusters at z~0.5.

In general, MAG_AUTO colors are much more consistent when cleaning out the bad images, but not always perfect:

The magenta line has been pushed off zero because there is so much more scatter. However, things are still within the errors for the AUTO colors.

Mode 2 to Mode 3: Fixing the Astrometry (with clean images)

The good news is that there's very little change in the coadd photometry when making the astrometric fix:

Interpretation

Previously, I (Eli) had looked at the differences between the psf homogenized coadds and the sva1 coadds to try to identify any problems that discontinuous psfs cause for psf magnitudes and detmodel magnitudes. The results there were ambiguous. First, it was clear that psf magnitudes are biased depending on the region. Second, it was also apparent that the detmodel colors were much less biased.

However, this current test is a bit different, in that many more variables are controlled, so we can observe more subtle shifts. And it does appear that the large-scatter objects in detmodel colors are spatially correlated.

For example, for one particular tile, I look at the high-scatter outliers in delta r-i for both stars and galaxies. Red points are the 10% highest outliers, and blue are the 10% lowest. If you look at the weight map (not shown), you can see that these regions correspond to changes in depth/psf:

(As an aside: this doesn't actually say which one is "correct". Though I have confirmed that in Mode 1 -- before cleaning -- these stars are mostly on the stellar locus, while in Mode 2 -- after cleaning, with a more discontinuous psf -- they are pushed off the stellar locus.)
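
The outlier selection itself is simple; a hedged sketch of how the red/blue points could be picked out and mapped on the sky (array names and plotting details are illustrative, not the actual plotting code):

    import numpy as np
    import matplotlib.pyplot as plt

    def plot_color_outliers(ra, dec, delta_ri):
        """Mark the positions of the 10% highest (red) and 10% lowest (blue)
        outliers in the change of r-i color between two processing modes."""
        hi = delta_ri >= np.percentile(delta_ri, 90)
        lo = delta_ri <= np.percentile(delta_ri, 10)
        rest = ~(hi | lo)
        plt.scatter(ra[rest], dec[rest], s=2, c='gray')
        plt.scatter(ra[hi], dec[hi], s=4, c='red')
        plt.scatter(ra[lo], dec[lo], s=4, c='blue')
        plt.xlabel('RA [deg]')
        plt.ylabel('DEC [deg]')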

That is, when we remove the (bad) images, we tend to make the overall coadd less homogeneous, with more discontinuities, and thus a more difficult fit for the psf model. (I guess adding a few crappy images is a very poor man's homogenization.) This problem is exacerbated the shallower and more patchy our images are ... which is still a big worry for Y1.

And I think part of the problem isn't quite illustrated here, and that's the effect (or non-effect) on the detmodel errors. Currently, the detmodel magnitude error is based only on the flux values in the detection image; it does not marginalize over uncertainties in the galaxy model or the psf. In the case of the galaxy model, I don't think this is a problem, because the error there is highly covariant among bands (so shifting the model slightly will have essentially the same effect on g and r, leaving the color constant). But in the case of the psf model, the uncertainty is nowhere to be found in the detmodel errors. So even when the bias in detmodel is small (and it often seems to be non-trivial), the error is certainly wrong.
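
Writing out the propagation makes the point: for a color like g-r the variance is sigma_g^2 + sigma_r^2 - 2*Cov(g,r), so an error common to both bands (the galaxy model) largely cancels in the color, while an error that is independent per band (the per-band psf model) adds in quadrature -- and that second term is simply missing from the reported detmodel errors. A small sketch of the arithmetic (the example numbers are made up):

    import numpy as np

    def color_error(sig_g, sig_r, rho):
        """Error on the g-r color given per-band errors and their correlation."""
        return np.sqrt(sig_g**2 + sig_r**2 - 2.0 * rho * sig_g * sig_r)

    # Galaxy-model error, highly covariant between bands: mostly cancels in the color.
    print(color_error(0.05, 0.05, 0.95))  # ~0.016
    # Per-band psf-model error, uncorrelated between bands: adds in quadrature,
    # and is not included in the reported detmodel errors at all.
    print(color_error(0.05, 0.05, 0.0))   # ~0.071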