I won't be able to stay long enough for the discussion of postings, so
I'm sending an idea.
In this posting
http://bmode.caltech.edu/~spuder/keck_analysis_logbook/analysis/20130424_re…
and others, it was shown that there is a large increase in variance when
relative gain deprojection is turned on. This is somewhat surprising because,
naively, not many modes should be removed.
For a single detector pair that is indeed the case. But we have many detector
pairs, and we allow the *relative* gain between the pairs to vary on a pretty fast
time scale. And we observe large scatter in the derived relative gain parameters,
meaning different modes are removed (in the final coadded map). I suspect that as
our integration time increases, and also as our number of pairs increases (from B2
to Keck), the number of modes we remove also increases.
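As a rough sketch of the mode-counting argument (the function and all numbers below are hypothetical illustrations, not from the actual pipeline): if each detector pair gets an independent relative-gain fit per fit interval, and each fit removes roughly one mode per pair, then the total number of removed modes scales with both the number of pairs and the integration time.

```python
# Hypothetical mode-counting sketch; assumes each independent
# relative-gain fit removes ~1 mode per detector pair per interval.

def modes_removed(n_pairs, total_hours, fit_interval_hours):
    """Estimate modes removed by per-pair relative gain deprojection."""
    n_intervals = total_hours / fit_interval_hours
    return n_pairs * n_intervals

# One pair, one long fit interval: very few modes removed.
print(modes_removed(n_pairs=1, total_hours=1, fit_interval_hours=1))      # 1.0

# Many pairs, fast-varying gains, long integration: far more modes.
print(modes_removed(n_pairs=250, total_hours=1000, fit_interval_hours=1))  # 250000.0
```

Under this toy scaling the removed-mode count grows linearly with both the pair count and the observing time, consistent with the suspicion above.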
Chao-Lin