Just FYI, there is an efficiency measurement on the spider integrating
sphere here:
_dPdTIntSphere/index.html
The efficiency with a black-body source is 5%, and would be lower with a
collimated FTS. There is a large mode (throughput) mismatch between the
receiver (20 deg FOV) and the sphere output (2Pi sr).
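For a rough sanity check on that mismatch, the solid-angle ratio alone can be computed directly. This is a sketch, and it assumes the "20 deg FOV" is a cone half-angle, which may not match the actual definition used:

```python
import math

def cone_solid_angle(half_angle_deg):
    """Solid angle (in steradians) subtended by a cone."""
    return 2 * math.pi * (1 - math.cos(math.radians(half_angle_deg)))

# Assumption: treat "20 deg FOV" as the cone half-angle.
omega_rx = cone_solid_angle(20.0)   # receiver acceptance, ~0.38 sr
omega_sphere = 2 * math.pi          # sphere output fills a hemisphere
ratio = omega_rx / omega_sphere     # = 1 - cos(20 deg), roughly 6%
print(f"solid-angle ratio ~ {ratio:.1%}")
```

Under that assumption the geometric ratio comes out around 6%, the same order as the 5% black-body efficiency quoted above.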
Jamie
-----Original Message-----
From: Jamie Bock [mailto:jjb@astro.caltech.edu]
Sent: Wednesday, December 26, 2012 7:46 PM
To: 'Obrient, Roger (Guest)'; 'John Kovac'
Cc: 'keckarray(a)mailman.stanford.edu'; 'bicep2-
list(a)lists.fas.harvard.edu'; 'Kaufman, Jonathan (Guest)'
Subject: RE: [Bicep2-list] South Pole Report for B2 ... FTS
Hi Roger, Grant, and Jon,
We were talking during the SPIDER meeting about putting the FTS output
into a back-to-back Winston to fill the telescope FOV (or area). I
seem to recall there is even a Winston FOV converter that doesn't use a
back-to-back.
Many thanks for your hard work on the FTS analysis, and please let us
know when the posting with the results is up. My opinion, FWIW, is that
the FTS data are useful for diagnosing the gain behavior, but I somehow
doubt we would use them for an absolute correction. Granted, it is a
rich data set. But if the gain variation is caused by band mismatch, I
expect we will trust the CMB more and use el nods to monitor.
Best wishes for the new year & associated pole ceremonies,
Jamie
-----Original Message-----
From: bicep2-list-bounces(a)lists.fas.harvard.edu [mailto:bicep2-list-
bounces(a)lists.fas.harvard.edu] On Behalf Of
Obrient, Roger (Guest)
Sent: Tuesday, December 25, 2012 6:18 AM
To: John Kovac
Cc: keckarray(a)mailman.stanford.edu;
Kaufman, Jonathan (Guest)
Subject: Re: [Bicep2-list] South Pole Report for B2 ... FTS
Hi all,
This is an update on holiday activities at the Pole. First, some coal
in the stocking:

Jon, Roger, and Grant worked very hard on the FTS analysis up to the
Dec 24th deadline to warm up (and continued to do so up to sending off
this report). In the end, our systematics studies suggest an
uncertainty on spectral gain mismatch of 1.7%, which is a lot higher
than people had hoped for. This uncertainty comes from systematic
comparisons of multiple pointings on a tile as well as multiple
dec-rotations (different scan speeds & directions gave more repeatable
results, at the 0.3% level). We suspect that this is a consequence of
not properly filling the beams, and we urge the greater collaboration
to think about how to remedy this for Keck and BICEP-3.
Elaborating further, we have computed spectral gain mismatch by
integrating between 100 and 200 GHz, but we have also repeated this
calculation integrating just from 130 to 170 GHz to test how the
atmospheric lines contribute. In this idealized situation, we find
that the repeatability for these systematic checks falls to 0.3%,
suggesting that the band of the actual instrument may be too wide to
expect such repeatability with the tools on hand. A comparison to
BICEP-1 suggests that their band indeed cuts off more rapidly on the
low end (as you would expect from a waveguide), as seen in the
attached figure. Our measurements suggest that B2 may have some small
fringed response at the oxygen line that is hard to repeat under
different coupling scenarios.
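The sub-band comparison described above can be sketched as follows. The Gaussian passbands here are toy stand-ins, not real B2 spectra; all numbers are invented for illustration:

```python
import numpy as np

def band_gain(freq_ghz, spectrum, lo, hi):
    """Integrate a spectrum over [lo, hi] GHz (uniform grid assumed)."""
    sel = (freq_ghz >= lo) & (freq_ghz <= hi)
    df = freq_ghz[1] - freq_ghz[0]
    return spectrum[sel].sum() * df

# Hypothetical A/B passbands with slightly different centers and widths.
freq = np.linspace(80, 220, 1401)
spec_a = np.exp(-0.5 * ((freq - 148.0) / 18.0) ** 2)
spec_b = np.exp(-0.5 * ((freq - 152.0) / 20.0) ** 2)

# The inferred A/B gain ratio depends on the integration limits, so the
# 100-200 GHz and 130-170 GHz calculations give different answers.
ratio_wide = band_gain(freq, spec_a, 100, 200) / band_gain(freq, spec_b, 100, 200)
ratio_narrow = band_gain(freq, spec_a, 130, 170) / band_gain(freq, spec_b, 130, 170)
print(f"100-200 GHz: {ratio_wide:.4f}; 130-170 GHz: {ratio_narrow:.4f}")
```

The point of the sketch is only that widening the integration band changes the mismatch you infer when the band edges are imperfectly measured.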
I'm happy to talk with anyone about this; regrettably, we're still
trying to get the post up.
Immanuel and Kirit have finally found a setting (max power) for their
amplified noise source that shows evidence of side-lobes in Keck.
They are still actively analyzing these. They are open to suggestions
about how to combine this map, where the main lobe is probably
saturated, with lower-power maps where it may not be.
A final side-lobe schedule just finished tonight (other polarization),
so we will remove rx1 from the mount tomorrow; Grant will supervise
this since he has a lot of experience doing that. We could use some
guidance on what to do with rx1 after the in-lab noise tests that
should happen on Thursday. We could:
1) do spectroscopy (subject to the same challenges we had with
BICEP-2);
2) open and replace heat straps and re-cool (to test if the strap
design influences skewness);
3) open and prep for the new tile Martin will bring.

Jon will supervise the opening of BICEP-2 tomorrow (we've backfilled a
little to speed the warmup) and Sarah will do checks on the FPU shorts
issues. We already have rx3 open and ready for D2.
best,
Roger
-----Original Message-----
From: John Kovac [mailto:jmkovac@cfa.harvard.edu]
Sent: Sat 12/22/2012 6:14 PM
To: Obrient, Roger (Guest)
Cc: Kaufman, Jonathan A-039-S; Kaufman, Jonathan (Guest);
keckarray(a)mailman.stanford.edu; bicep2-list(a)lists.fas.harvard.edu
Subject: Re: [Bicep2-list] South Pole Report for B2 ... FTS
Hi Roger, thanks for the reply. Responses are in-line below. Sounds
like this is on track to converge, hopefully very soon now.
If Jon will update his posting tomorrow I will try to review it asap
to give any final feedback, but:
Roger, based on this email exchange I think that you understand my
points and that we're in agreement on goals for these tests, so when
you are satisfied with the results posted, you may make the call to go
ahead with the warm-up.
John
On 12/21/12 10:38 PM, Obrient, Roger (Guest) wrote:
> We are working on analysis of the 9-pointing data and will also use
> the 16-pointing sets to test this idea. I agree that the above quote
> is unlikely to be true in the end.

Good. I think consistency of A/B spectral match and bandcenters under
multiple pointings / DKs is still the main thing we haven't seen
completed--so that should be the main focus to try to wrap this up.
>> Fig 4: shows repeatable structure at 200-300 GHz--if real, we'd
>> want to convince ourselves that is OK (I actually expect some level
>> of out-of-band response from island coupling if nothing else, but
>> that's probably well-matched A/B). So: how repeatable are those
>> features under dk rotation, scan speed change, signal strength
>> change, etc?

> So my first concern is how big an effect this has on the spectral
> gain mismatch. We are going to try to investigate that in the data
> on hand, but if it is sufficiently small (i.e. the 0.2% standard)
> then I would think the extra tests may not add much to the final
> picture. Let us know if you agree.

These aren't new tests--you already have all that data. Jon can answer
those questions just by looking at it.
>> Fig 5: left vs right slope is almost surely an analysis artifact.
>> I'm not worried about it per se, but it needs to be understood to
>> interpret other kinds of systematic repeatability.

> Why do we think this is an analysis artifact?

It goes negative, so a phase issue seems likely.

> We will study center frequencies of multiple pointings (three)
> starting on one tile (Tile 4). If we see the gradient pattern
> change, then we will conclude that it is from the FTS, and probably
> won't repeat this exercise for other tiles.

If the gradient is from the FTS then you should ensure that you have
multiple FTS pointings on each tile from which to derive final
spectra. I think you said you are doing that.
>> Fig 11, BW repeatability under DK rot: looks very poor; judging
>> from the UR and LL figures I'd say we can't trust your BWs to
>> better than 10-20%. Makes me wonder about the robustness of your
>> algorithm. You are not just picking out 50% crossing points, are
>> you? An integral will be more robust. You don't have a link or a
>> description of your algorithm in this posting.

> Yes, Jon is just using the -3dB points. We've discussed better ways
> to do this, such as peak-normalizing and integrating, or a
> least-squares fit to a top-hat model. This will be done during the
> warm-up.

Fine--I agree all the BW stuff can be fixed later. It is important to
understand how well we've measured the band edges, but perhaps
applying all the repeatability testing to the A/B atm relgain mismatch
will achieve this.
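The integral-based bandwidth suggested above can be sketched as an equivalent-width estimate. The band shape here is a hypothetical soft-edged top hat, not a measured spectrum:

```python
import numpy as np

def equiv_bandwidth(freq_ghz, spectrum):
    """Equivalent width: integral of the peak-normalized spectrum.
    Uses all the in-band data, so it is more robust to noise on the
    band edges than picking out the -3 dB crossing points."""
    s = spectrum / spectrum.max()
    df = freq_ghz[1] - freq_ghz[0]
    return s.sum() * df

# Toy band: soft-edged top hat from 130 to 170 GHz.
freq = np.linspace(100, 200, 2001)
band = 0.5 * (np.tanh((freq - 130.0) / 2.0) - np.tanh((freq - 170.0) / 2.0))
bw = equiv_bandwidth(freq, band)
print(f"equivalent bandwidth ~ {bw:.1f} GHz")
```

For this toy band the equivalent width recovers the nominal 40 GHz, and unlike the -3dB method it does not hinge on two noisy edge samples.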
>> The pattern in Fig 13 UL seems telling. Can I assume that this,
>> and all your other plots that don't say otherwise, is constructed
>> from data taken with a single pointing per tile? The systematic
>> divergence is greatest around the tile edges--this looks like
>> direct evidence for systematic skewing of spectra from pixels that
>> are off-axis through the FTS.
> Yes, it is a single pointing per tile. We expect that this effect
> is a pointing issue, and we are testing that in analysis with the
> multi-pointing data sets.

Good.
>> Scan speed:
>>
>> Good to see generally nice agreement, at least for the aspects of
>> the spectra you are plotting. The different (small) offsets in the
>> frequency axis for the different tiles (Fig 15 UL, LL) are telling
>> you something about a non-repeatable systematic error in your
>> frequency axis for each run--you should understand this, but
>> fortunately it appears to be small. Are you using the encoder data
>> in your analysis (if not, will you add it?)

> He is using the encoder, but only in a crude way. He computes the
> moving-arm velocity from the sample rate and the *average*
> difference in the encoder positions. We will redo some spectra
> using the specific encoder positions and check to see if this makes
> a difference.
It looks like this should be understandable, but as this effect is
small, full investigation can be deferred to post-warmup.
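Using the per-sample encoder positions amounts to resampling the interferogram onto a uniform optical-path grid before the FFT. A minimal sketch, with an invented jitter model standing in for the real arm motion:

```python
import numpy as np

def resample_interferogram(encoder_pos, signal):
    """Interpolate a time-sampled interferogram onto a uniform grid of
    encoder (optical-path) positions, rather than assuming a constant
    arm velocity from the average encoder step."""
    uniform = np.linspace(encoder_pos[0], encoder_pos[-1], len(signal))
    return uniform, np.interp(uniform, encoder_pos, signal)

# Toy run: the arm sweeps non-uniformly, but the fringe is fixed in
# optical path, so constant-velocity analysis would smear the line.
t = np.linspace(0.0, 1.0, 1000)
pos = t + 0.01 * np.sin(12 * np.pi * t)   # jittery (still monotonic) sweep
sig = np.cos(2 * np.pi * 40.0 * pos)      # 40 fringes across the sweep
grid, clean = resample_interferogram(pos, sig)
# np.abs(np.fft.rfft(clean)) now concentrates in a single sharp bin.
```

Note that `np.interp` requires a monotonically increasing position array, which holds here as long as the arm never reverses within a sweep.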
>> How do the out-of-band features respond to the different scan
>> speeds? I'm talking about the inset of Fig 4.

> Again, should we check the spectral gain mismatch on this first
> before diving into other systematic checks?
Yes, I am interested in the A/B mismatch in these out-of-band
features--but also you already have all the data to see if they look
similar at different scan speeds, polarity, etc.--that will tell you
if they are real. I'm not proposing taking any additional data.
>> Nine Pointings
>>
>> This is probably the analysis you should have started with. The
>> question you need to ask here is how repeatable the spectra, and
>> quantities derived from them, are for a given pixel that enters
>> the FTS at different angles and/or different orientations. As we
>> discussed yesterday, you can make focal-plane coordinate plots of
>> the mismatch in e.g. band center or (more interestingly) A/B
>> relgain mismatch for pointings that closely overlap vs. those that
>> are offset by a larger amount. See if you can draw conclusions
>> like: "spectra taken for multiple pointings where the FTS central
>> ray is within a 3 pixel radius vary by < 1.5 GHz, but pointings
>> further off-axis give larger discrepancies." Make a similar
>> statement for A/B relgain mismatch.
> Yes, that's a good suggestion and will dovetail with ongoing
> efforts to analyze the multiple pointings. And again, we have a
> 16-er in the works, and a pair of dec angles.
Great.
>> Spectral Gain Mismatch
>>
>> Please fold these results into the appropriate sections above--it
>> is more confusing to have them separated out at the end like this.

> I agree, and I've already gotten Jon started on reorganizing the
> post accordingly.
>> Fig 23, scan direction: Are these the exact same data as were
>> processed in Fig 7? The pattern of inconsistency does not appear
>> similar to me, so it would appear that this consistency check is
>> probing a different kind of divergence in the left vs right
>> spectra than the simple slope of Fig 5...assuming that is what is
>> behind Fig 7. Odd. Worth looking at the A vs B spectra to
>> understand this.

> Yes, they're the same data. We can plot A vs B.
>> Fig 24, DK rotation:
>> A LOT of seemingly random variation. As an aside, please fix your
>> histograms. As usual the scatter plots are very informative.
>> There is some correlation here but a very disappointing level of
>> spread.

> Yes, I've had Jon fix up the histograms. The next version we send
> up north won't be as silly looking and will be more informative.
>> You make a statement: "The spread looks pretty large but if you
>> look at the dxi_df (which is integrated to get spectra gain
>> mismatch) for each detector, they are clearly sub-percent."
>> Sub-percent compared to what? You sound like you are trying to
>> reassure us this is a small effect, but it does not appear to be.
>> There is a hint in the Fig 24 LL scatter plot that actual spectral
>> gain mismatches of order 1% are being measured in these spectra
>> with S/N of no better than 1. I think that is worse than the
>> BICEP1 results, but I'm not sure--can you make a direct
>> comparison? Obviously other consistency checks need to be added as
>> well.

> I didn't really understand Grant and Jon's comments about the
> differential spectral gain, and I've asked them to remove this
> argument.
>> It would be helpful to see plots that illustrate the A and B
>> spectra times the atm model separately, and then their difference
>> (which is what I think you are plotting here...but you need to
>> define dxi_df or link to a definition). How does the B2 average
>> (not differential) spectral response in the region of the 120 GHz
>> and 180 GHz atm lines compare with the B1 150 GHz spectra? Are our
>> B2 detectors hitting those lines harder than B1 did?

> I see no reason why we can't do this, and add some comments on the
> integrals over sub-ranges to see how those two lines contribute.
> This isn't much different computationally than what I proposed for
> checking the out-of-band response contribution to spectral gain
> mismatch.
That sounds great. I think you are on the right track to wrap this up.
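The atmosphere-weighted A/B comparison discussed in the last exchange could be sketched as below. Everything here is a toy model: the line shapes near 120 and 180 GHz, the passbands, and the relation to dxi_df are assumptions, not the actual B2 analysis:

```python
import numpy as np

freq = np.linspace(100, 200, 1001)   # GHz, uniform grid
df = freq[1] - freq[0]

# Toy atmospheric brightness: a floor plus lines near 120 and 180 GHz
# (shapes and amplitudes are made up for illustration).
atm = (0.1
       + 0.9 * np.exp(-0.5 * ((freq - 120.0) / 3.0) ** 2)
       + 0.5 * np.exp(-0.5 * ((freq - 180.0) / 4.0) ** 2))

# Hypothetical A/B passbands offset by 2 GHz in band center.
spec_a = np.exp(-0.5 * ((freq - 149.0) / 17.0) ** 2)
spec_b = np.exp(-0.5 * ((freq - 151.0) / 17.0) ** 2)

# Atmosphere-coupled gain per detector and their fractional mismatch:
# a few-GHz band-center offset already yields a percent-level number.
gain_a = (spec_a * atm).sum() * df
gain_b = (spec_b * atm).sum() * df
mismatch = 2.0 * (gain_a - gain_b) / (gain_a + gain_b)
print(f"A/B atm relgain mismatch = {mismatch:.3%}")
```

The sign and size of the toy mismatch track which detector's band sits closer to the stronger atmospheric line, which is the mechanism behind the el-nod gain variation being discussed.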
> Good luck,
> John