
Sub-Pixelizing Mangle Masks

-Eli.

Currently, we have the full mangle mask, which gives the 2" aperture depth at every point in the survey, and the healpix nside=4096 approximation of the mangle masks. The full mangle mask is ... unwieldy, so it would be nice to use a pixelized version. Note that nside=4096 is close to the finest practical pixelization before desktop workstations start to struggle with the memory requirements.

However, as currently implemented, the pixelization simply takes the depth at the central point of each pixel. As each pixel at nside=4096 covers 0.75 arcmin^2, this runs into trouble around stars, boundaries, etc.

The Subpixel Method

A few of us (Alex D-W in particular) have been working on seeing what can be done to calculate the depth on a finer scale and then average it. Note that in order for this to work we need two output maps: the first has the average depth over the part of the pixel that was not masked, and the second is the fraction of the pixel area that is not masked. In this way we can create an anti-aliased map that contains most of the information from the fine scales but is more practical to use. (In particular, this format is useful for cluster masked-region corrections.)
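As a concrete sketch of what those two maps contain (not the actual production code): if every subpixel of a coarse pixel carries a depth value, with masked subpixels flagged by a depth of zero (an assumption here, matching the mangle convention of weight 0 outside the polygons), then the two outputs per coarse pixel are just the mean depth over the unmasked subpixels and the unmasked fraction.

    import numpy as np

    def collapse_subpixels(sub_depths):
        """Collapse per-subpixel depths into the two output maps.

        sub_depths : array of shape (n_coarse_pixels, n_subpixels); masked
        subpixels are assumed to carry depth 0.

        Returns (mean_depth, frac_unmasked), one value per coarse pixel.
        """
        sub_depths = np.atleast_2d(sub_depths)
        unmasked = sub_depths > 0.0

        # Map 2: fraction of the pixel area that is not masked.
        frac_unmasked = unmasked.mean(axis=1)

        # Map 1: average depth over the unmasked subpixels only
        # (0 where the coarse pixel is completely masked).
        n_good = unmasked.sum(axis=1)
        depth_sum = np.where(unmasked, sub_depths, 0.0).sum(axis=1)
        mean_depth = np.where(n_good > 0, depth_sum / np.maximum(n_good, 1), 0.0)

        return mean_depth, frac_unmasked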

The reason that Aurelien has been computing a single value per pixel is that the regular mangle code (which is optimized for other things) is too slow for this procedure. Luckily, Erin Sheldon has adapted some super-fast code from Martin White in the pymangle (https://github.com/esheldon/pymangle) python package, which is over 50x faster than the default mangle code for looking up weights ... this is very useful!
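For reference, a minimal pymangle lookup looks roughly like this; the mask filename and coordinates are placeholders, and I am assuming the polygon weight stores the 2" aperture depth, with weight 0 meaning the point is masked.

    import numpy as np
    import pymangle

    # Placeholder filename for an i-band mangle depth mask.
    mask = pymangle.Mangle("iband_maglim_mask.ply")

    # Placeholder coordinates (degrees).
    ra = np.array([342.18, 342.20])
    dec = np.array([-44.53, -44.55])

    # weight() returns the polygon weight at each point; for a depth mask
    # this is the 2" aperture depth, with 0 for points outside the mask.
    depth = mask.weight(ra, dec)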

The Reference

Although doing the full survey at very large nside is unwieldy, we can do a small region at larger nside without too much trouble (though we have to avoid the regular full-sky healpix map functions or we'll blow away our RAM). As a reference, I have started with the i-band mask in the field of RXJ2248, and have run healpix with nside = 65536 (this is 256x more pixels than nside=4096).
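One way to keep the memory under control (a sketch, not necessarily how the reference run was actually done) is to never allocate the full-sky nside=65536 map, and instead enumerate only the pixels inside the field with healpy's query_disc and look up their centers; the field center and radius below are placeholders for the RXJ2248 pointing.

    import numpy as np
    import healpy as hp

    nside_ref = 65536   # 256x more pixels per nside=4096 pixel

    # A full-sky map at this nside would have 12 * 65536**2 ~ 5e10 pixels,
    # so list only the pixels inside the field instead.
    ra_cen, dec_cen, radius_deg = 342.2, -44.5, 1.0   # placeholder pointing

    vec = hp.ang2vec(ra_cen, dec_cen, lonlat=True)
    pix = hp.query_disc(nside_ref, vec, np.radians(radius_deg), nest=True)

    # Centers of just those pixels, ready for the fast mangle weight lookup.
    ra, dec = hp.pix2ang(nside_ref, pix, nest=True, lonlat=True)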

The map looks like this at very high resolution, along with a zoom-in of a section in the center:

The Subpixel Tests

For a first test, I have taken the i-band mask in the field of RXJ2248, and run nside=4096 with 1, 4, 16, 64, and 256 subpixels. In practice, to do all of SVA1 (or the DES footprint), 256 subpixels is "doable" but significantly slower. If we can get away with 16 or 64, that would make things run a lot faster. (The run time for the whole lot on this field was 20 minutes, about half of it for the 256-subpixel run.)
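For completeness, the bookkeeping that relates a subpixel count to a fine resolution is simple in the NESTED ordering: with nsub subpixels (a power of 4), nside_fine = 4096 * sqrt(nsub), and the subpixels of coarse pixel p are the contiguous indices p*nsub through (p+1)*nsub - 1. A sketch (the coarse pixel index is just an example):

    import numpy as np
    import healpy as hp

    nside_coarse = 4096
    nsub = 64                                         # 1, 4, 16, 64, or 256
    nside_fine = nside_coarse * int(np.sqrt(nsub))    # 32768 for 64 subpixels

    # In NESTED ordering the subpixels of a coarse pixel are contiguous.
    p = 1000000                                       # example coarse pixel index
    fine_pix = np.arange(p * nsub, (p + 1) * nsub)

    # Subpixel centers to feed to the mangle weight lookup.
    ra_sub, dec_sub = hp.pix2ang(nside_fine, fine_pix, nest=True, lonlat=True)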

These runs are done at the fine scale and then averaged down to the coarser nside=4096 scale. I look at the output weight and fraction-observed maps, as well as the delta-mag for the weights and the ratio of fractions relative to the high-resolution run. For each of these I calculate the fraction of "bad" pixels, where the weight is misestimated by 0.1 mag or greater, or the observed fraction is off by >5%. These are somewhat arbitrary cuts, but they get at the rate of bad outliers where our sampling was clearly insufficient.
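The outlier statistics quoted below can be computed along these lines (a sketch; I am reading "off by >5%" as an absolute difference in the observed fraction, which is one possible interpretation):

    import numpy as np

    def bad_pixel_fractions(depth, frac, depth_ref, frac_ref,
                            dmag_cut=0.1, dfrac_cut=0.05):
        """Fraction of pixels whose mean depth is off by >= dmag_cut mag, and
        whose observed fraction is off by > dfrac_cut, relative to the
        high-resolution reference run."""
        # Only compare pixels that the reference says are at least partly unmasked.
        use = frac_ref > 0.0

        dmag = depth[use] - depth_ref[use]
        dfrac = frac[use] - frac_ref[use]

        bad_depth = np.mean(np.abs(dmag) >= dmag_cut)
        bad_frac = np.mean(np.abs(dfrac) > dfrac_cut)
        return bad_depth, bad_frac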

256 Subpixels

64 Subpixels

16 Subpixels

4 Subpixels

1 (sub)pixel

Summary

In terms of getting the average depth, things actually converge very quickly. Although the single-pixel run has a significant number of outliers (3%) and some scatter, even at 4 subpixels the scatter is reduced and the outlier fraction is down to 1%. And 64 subpixels is almost as good as 256, with only 0.3% of pixels off by more than 2% in the mean depth.

In terms of getting the masked fraction, things are trickier. First, we're dealing with a quantized value when we only have 16 or 64 subpixels, but that's something we can live with. Even with 16 subpixels, the masked area is misestimated by more than 5% in 5% of the pixels. On the other hand, with 64 subpixels only 2% of the pixels are misestimated by >5%, and most of these are only slight outliers.

Looking at these plots, I think that 64 subpixels is a good compromise between computation time and fidelity.