Sub-Pixelizing Mangle Masks » History » Version 7

Eli Rykoff, 01/31/2014 09:57 AM

h1. Sub-Pixelizing Mangle Masks
-Eli.
Currently, we have the full mangle mask, which gives the 2" aperture depth at every point in the survey, and the healpix nside=4096 approximation of the mangle masks.  The full mangle mask is ... unwieldy, so it would be nice to use a pixelized version.  Note that nside=4096 is close to the finest practical pixelization before desktop workstations start to struggle with the memory requirements.
However, as currently implemented, the pixelization simply takes the depth at the central point of each pixel.  As each pixel at nside=4096 covers 0.75 arcmin^2, this has trouble around stars, boundaries, etc.
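
The quoted pixel area follows directly from the healpix geometry (12 * nside^2 equal-area pixels covering the full sky); a quick check of the arithmetic:

```python
import math

# A healpix map has 12 * nside^2 equal-area pixels covering 4*pi steradians,
# so the area of one pixel in arcmin^2 is:
def pixarea_arcmin2(nside):
    area_sr = 4.0 * math.pi / (12.0 * nside * nside)
    return area_sr * (180.0 / math.pi) ** 2 * 3600.0

print(round(pixarea_arcmin2(4096), 3))  # ~0.74 arcmin^2 at nside=4096
```

(healpy's @nside2pixarea@ does the same calculation.)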
h2. The Subpixel Method
A few of us (Alex D-W in particular) have been working on seeing what can be done to calculate the depth on a finer scale and then average it.  Note that in order for this to work we need 2 output maps: the first has the average depth in the pixel *where the image was not completely masked*, and the second is the fraction of the pixel area that is not masked.  In this way we can create an "anti-aliased":http://en.wikipedia.org/wiki/Spatial_anti-aliasing map that contains most of the information from the fine scales but is more practical to use.  (In particular, this format is useful for cluster masked-region corrections.)
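
A minimal sketch of the two-map reduction, assuming the per-subpixel depths have already been gathered into one row per coarse pixel, with masked subpixels flagged by a depth <= 0 (both the layout and the flag convention here are illustrative):

```python
import numpy as np

# Reduce per-subpixel depths to the two coarse-pixel maps described above:
# (1) mean depth over the *unmasked* subpixels only, and
# (2) fraction of the pixel area that is unmasked.
def reduce_subpixels(sub_depths):
    sub_depths = np.asarray(sub_depths, dtype=float)  # shape (ncoarse, nsub)
    unmasked = sub_depths > 0
    frac = unmasked.mean(axis=1)
    ngood = unmasked.sum(axis=1)
    total = np.where(unmasked, sub_depths, 0.0).sum(axis=1)
    depth = np.where(ngood > 0, total / np.maximum(ngood, 1), 0.0)
    return depth, frac

# One half-masked pixel and one fully masked pixel:
depth, frac = reduce_subpixels([[24.0, 24.2, 0.0, 0.0],
                                [0.0, 0.0, 0.0, 0.0]])
```

Averaging only over unmasked subpixels is what keeps star holes and boundaries from dragging down the depth estimate; the area lost to masking is carried separately in the fraction map.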
The reason that Aurelien has been computing a single value per pixel is that the regular mangle code (which is optimized for other things) is too slow for this procedure.  Luckily, Erin Sheldon has adapted some super-fast code from Martin White in the "pymangle":https://github.com/esheldon/pymangle python package that is over 50x faster than the default mangle code for looking up weights ... this is very useful!
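
The win comes from pymangle's vectorized lookup: one call covers all the subpixel centers at once.  A sketch of the pattern; the mask filename is a placeholder, and a stand-in weight function replaces the mask file so the snippet is self-contained:

```python
import numpy as np

# With pymangle the real lookup would be:
#   import pymangle
#   m = pymangle.Mangle("iband_mask.ply")  # placeholder filename
#   weights = m.weight(ra, dec)            # ra/dec arrays in degrees
# Stand-in weight function (constant depth, illustrative only):
def fake_weight(ra, dec):
    return np.full(np.shape(ra), 24.1)

ra = np.array([342.18, 342.20])   # degrees
dec = np.array([-44.53, -44.52])
weights = fake_weight(ra, dec)    # one vectorized call for all points
```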
h2. The Reference
Although doing the full survey at very large nside is unwieldy, we can do a small region at larger nside without too much trouble (though we have to avoid the regular healpix map functions, or we'll blow away our RAM).  As a reference, I have started with the i-band mask in the field of RXJ2248, and have run healpix with nside=65536 (this is 256x more pixels than nside=4096).
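
The RAM warning is easy to quantify: a dense full-sky map at nside=65536 is out of reach, which is why only the pixels covering the field are computed and stored.  The arithmetic (float64 values assumed):

```python
# A healpix map has 12 * nside^2 pixels on the full sky.
def healpix_npix(nside):
    return 12 * nside * nside

# Memory for a dense full-sky array at this nside (float64 assumed).
def fullsky_map_gib(nside, bytes_per_pixel=8):
    return healpix_npix(nside) * bytes_per_pixel / 1024.0 ** 3

print(healpix_npix(65536) // healpix_npix(4096))  # 256x more pixels
print(fullsky_map_gib(65536))                     # 384.0 GiB
```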
The map looks like this at very high resolution, along with a zoom-in on a section in the center:
|!{width:400px}nside65536_map.png!|!{width:400px}nside65536_map_zoom.png!|
It's also worth looking at how the map changes as we go from high to low resolution:
|!{width:750px}deres.gif!|
h2. The Subpixel Tests
For a first test, I have taken the i-band mask in the field of RXJ2248 and run nside=4096 with 1, 4, 16, 64, and 256 subpixels.  In practice, to do all of SVA1 (or the DES footprint), 256 subpixels is "doable" but significantly slower.  If we can get away with 16 or 64, that would make things run a lot faster.  (The run time for the whole lot on this field was 20 minutes, about half of that for the 256-subpixel run.)
These runs are done at the fine scale and then averaged down to the coarser nside=4096 scale.  I look at the output weight and fraction-observed maps, as well as the delta-mag of the weights and the ratio of fractions relative to the high-resolution run.  For each of these I calculate the fraction of "bad" pixels, where the weight is misestimated by 0.1 mag or more, or the unmasked fraction is off by >5%.  These are somewhat arbitrary cuts, but they get at the rate of bad outliers where our sampling was clearly insufficient.
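
A sketch of the comparison, assuming maps in NESTED ordering (where the 4^k children of coarse pixel p at a 2^k-finer nside are pixels p*4^k through (p+1)*4^k - 1, so averaging down is just a reshape); the absolute-difference cut on the fraction map is an assumption on my part:

```python
import numpy as np

# Average a NESTED-ordering fine map down by a factor 2**k in nside.
def average_down_nest(fine_map, k):
    fine_map = np.asarray(fine_map, dtype=float)
    return fine_map.reshape(-1, 4 ** k).mean(axis=1)

# Fraction of "bad" pixels: depth off by >0.1 mag, or fraction off by >5%.
def bad_pixel_fractions(depth, depth_ref, frac, frac_ref,
                        dmag_cut=0.1, dfrac_cut=0.05):
    bad_depth = np.abs(np.asarray(depth) - np.asarray(depth_ref)) > dmag_cut
    bad_frac = np.abs(np.asarray(frac) - np.asarray(frac_ref)) > dfrac_cut
    return bad_depth.mean(), bad_frac.mean()
```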
h3. 256 Subpixels
|!{width:400px}nside4096-256_map.png!|!{width:400px}nside4096-256_map_zoom.png!|
|!{width:400px}nside4096-256_fracmap.png!|!{width:400px}nside4096-256_fracmap_zoom.png!|
|!{width:400px}nside4096-256_finescale_hist_map.png!||
h3. 64 Subpixels
|!{width:400px}nside4096-64_map.png!|!{width:400px}nside4096-64_map_zoom.png!|
|!{width:400px}nside4096-64_fracmap.png!|!{width:400px}nside4096-64_fracmap_zoom.png!|
|!{width:400px}nside4096-64_finescale_hist_map.png!|!{width:400px}nside4096-64_relative_hist_frac.png!|
h3. 16 Subpixels
|!{width:400px}nside4096-16_map.png!|!{width:400px}nside4096-16_map_zoom.png!|
|!{width:400px}nside4096-16_fracmap.png!|!{width:400px}nside4096-16_fracmap_zoom.png!|
|!{width:400px}nside4096-16_finescale_hist_map.png!|!{width:400px}nside4096-16_relative_hist_frac.png!|
h3. 4 Subpixels
|!{width:400px}nside4096-4_map.png!|!{width:400px}nside4096-4_map_zoom.png!|
|!{width:400px}nside4096-4_fracmap.png!|!{width:400px}nside4096-4_fracmap_zoom.png!|
|!{width:400px}nside4096-4_finescale_hist_map.png!|!{width:400px}nside4096-4_relative_hist_frac.png!|
h3. 1 (sub)pixel
|!{width:400px}nside4096-1_map.png!|!{width:400px}nside4096-1_map_zoom.png!|
|!{width:400px}nside4096-1_fracmap.png!|!{width:400px}nside4096-1_fracmap_zoom.png!|
|!{width:400px}nside4096-1_finescale_hist_map.png!|!{width:400px}nside4096-1_relative_hist_frac.png!|
h2. Summary
In terms of getting the average depth, things converge very quickly.  Although the single-pixel run has a significant number of outliers (3%) and some scatter, even at 4 subpixels the scatter is reduced and the outlier fraction is down to 1%.  And 64 subpixels is almost as good as 256, with only 0.3% of pixels differing by more than 2% in mean depth.
In terms of getting the unmasked fraction, things are trickier.  First, we're dealing with a quantized value when we only have 16 or 64 subpixels, but that's something we can live with.  Even with 16 subpixels, the masked area is misestimated by more than 5% in 5% of the pixels.  With 64 subpixels, on the other hand, only 2% of the pixels are misestimated by >5%, and most of these are only slight outliers.
Looking at these plots, I think that 64 subpixels is a good compromise between computation time and fidelity.