Sub-Pixelizing Mangle Masks » History » Version 2
Eli Rykoff, 01/29/2014 06:33 PM
h1. Sub-Pixelizing Mangle Masks
Currently, we have the full mangle mask, which gives the 2" aperture depth at every point in the survey, and the healpix nside=4096 approximation of the mangle masks. The full mangle mask is ... unwieldy, so it would be nice to use a pixelized version. Note that nside=4096 is close to the finest practical pixelization before desktop workstations start to struggle with the memory requirements.
However, as currently implemented, the pixelization simply takes the depth at the central point of each pixel. As each pixel at nside=4096 covers roughly 0.74 arcmin^2, this has trouble around stars, boundaries, and anything else that varies on smaller scales.
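For reference, the pixel area quoted above follows directly from the healpix pixel count; a quick check, using only the fact that a healpix map has 12*nside^2 equal-area pixels:

```python
import math

nside = 4096
npix = 12 * nside**2  # total healpix pixels on the sphere

# full sky in arcmin^2: 4*pi steradians, converted via (deg per rad * 60)^2
sky_arcmin2 = 4.0 * math.pi * (180.0 / math.pi * 60.0)**2

pix_area = sky_arcmin2 / npix  # healpix pixels are equal-area by construction
print(f"{pix_area:.3f} arcmin^2 per pixel")  # -> 0.738 arcmin^2 per pixel
```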
h2. The Subpixel Method
A few of us (Alex D-W in particular) have been working on seeing what can be done to calculate the depth on a finer scale and then average it. Note that in order for this to work we need two output maps: the first has the average depth over the part of the pixel *that is not masked*, and the second is the fraction of the pixel area that is not masked. In this way we can create an "anti-aliased":http://en.wikipedia.org/wiki/Spatial_anti-aliasing map that contains most of the information from the fine scales but is more practical to use. (In particular, this format is useful for cluster masked-region corrections.)
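A minimal sketch of the averaging step, in numpy. In production the per-point lookup would be a call into the mangle mask (e.g. pymangle's @Mangle.weight@); here it is stood in for by a toy depth function with a circular "star hole", and the subpixel centers by a flat grid rather than true healpix subpixels — all of that is illustrative, not the actual code:

```python
import numpy as np

def toy_depth(ra, dec):
    """Stand-in for a mangle weight lookup: uniform depth 23.5 mag with a
    circular masked "star hole" of radius 0.01 deg at (0, 0).  A return
    value of 0 means the point is masked."""
    r = np.hypot(ra, dec)
    depth = np.full_like(r, 23.5)
    depth[r < 0.01] = 0.0
    return depth

def subpixel_average(ra_sub, dec_sub):
    """Build both output maps for one coarse pixel: the mean depth over the
    unmasked subpixels, and the fraction of subpixels that are unmasked."""
    w = toy_depth(ra_sub, dec_sub)
    good = w > 0
    frac = good.mean()                        # unmasked area fraction
    mean_depth = w[good].mean() if frac > 0 else 0.0
    return mean_depth, frac

# 256 subpixels on a 16x16 grid across a 0.05 deg "coarse pixel"
# centered on the star hole (a crude stand-in for healpix subpixel centers)
grid = (np.arange(16) + 0.5) / 16 * 0.05 - 0.025
ra_sub, dec_sub = np.meshgrid(grid, grid)
depth, frac = subpixel_average(ra_sub.ravel(), dec_sub.ravel())
print(depth, frac)  # -> 23.5 0.875 (depth is uniform; 1/8 of subpixels fall in the hole)
```

The key point is that the star hole lowers the unmasked-fraction map rather than biasing the depth map, which is exactly the separation the two-map format is designed to provide.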
The reason that Aurelien has been doing a single-pixel value is that the regular mangle code (which is optimized for other things) is too slow for this procedure. Luckily, Erin Sheldon has adapted some super-fast code from Martin White in the "pymangle":https://github.com/esheldon/pymangle python package that is over 50x faster than the default mangle code for looking up weights ... this is very useful!
h2. The Tests
For a first test, I have taken the i-band mask in the field of RXJ2248 and run nside=4096 with 1, 4, 16, 64, and 256 sub-pixels. In practice, to do all of SVA1 (or the full DES footprint), 256 subpixels is "doable" but significantly slower; if we can get away with 16 or 64, things would run a lot faster. (The run time for the whole set on this field was 20 minutes, about half of that for the 256-subpixel run.)
I look at the output weight and fractional-observed maps, as well as the ratios of the weights and fractions to the fiducial 256-subpixel run. For each of these, I calculate the fraction of "bad" pixels, where the weight is misestimated by >2% or the masked fraction is misestimated by >5%. These are somewhat arbitrary cuts, but they capture the rate of bad outliers where the sampling was clearly insufficient. I have also plotted a zoomed-in region around a star mask.
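The outlier statistic amounts to comparing each coarser run against the 256-subpixel fiducial, pixel by pixel. A sketch with made-up maps (the 2% and 5% thresholds are the ones quoted above; treating the fraction cut as an absolute difference is my assumption here):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10000

# toy stand-ins for the fiducial (256-subpixel) and a coarser test run
weight_fid = rng.uniform(23.0, 24.0, size=n)
weight_test = weight_fid * (1.0 + 0.01 * rng.standard_normal(n))
frac_fid = rng.uniform(0.5, 1.0, size=n)
frac_test = np.clip(frac_fid + 0.02 * rng.standard_normal(n), 0.0, 1.0)

# pixels where the weight is off by more than 2% (relative)
bad_weight = np.abs(weight_test / weight_fid - 1.0) > 0.02
# pixels where the unmasked fraction is off by more than 5% (absolute)
bad_frac = np.abs(frac_test - frac_fid) > 0.05

print(f"bad weight: {bad_weight.mean():.1%}, bad frac: {bad_frac.mean():.1%}")
```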
h3. 256 Subpixels
h3. 64 Subpixels
h3. 16 Subpixels
h3. 4 Subpixels
h3. 1 (sub)pixel
In terms of getting the average depth, things actually converge very quickly. Although the single-pixel run has a significant number of outliers (3%) and some scatter, even at 4 subpixels the scatter is reduced and the outlier fraction is down to 1%. And 64 subpixels is almost as good as 256, with only 0.3% of pixels differing by more than 2% in mean depth.
In terms of getting the masked fraction, things are trickier. First, with only 16 or 64 subpixels the fraction is quantized (in steps of 1/16 or 1/64), but that is something we can live with. More importantly, even with 16 subpixels the masked area is misestimated by more than 5% in 5% of the pixels. With 64 subpixels, on the other hand, only 2% of the pixels are misestimated by >5%, and most of those are only slight outliers.
Looking at these plots, I think that 64 subpixels is a good compromise between computation time and fidelity.