Dear quanta,

Tomorrow at 11am we will have our usual group meeting (6-310) with Seth Lloyd telling us about his and others' recent work on quantum-inspired classical algorithms.

At 1:30pm we'll have a seminar by Shruti Puri from Yale. This is in the usual place: 6C-442 (which I wrote incorrectly in my first announcement).

Next week (Dec 7) we'll have Annie Wei speaking about quantum walks in the 11am group meeting, and we'll have Srinivasan Arunachalam giving the afternoon seminar. 

Seminar titles and abstracts are below.

-aram

Tomorrow
Title: Quantum Error Correction with Biased Noise Cat-Qubits

Abstract

Many physical systems exhibit error channels that are strongly biased, that is, one type of error
dominates the channel. Naively, in this case error correction becomes much easier and more
hardware-efficient. However, maintaining the bias while performing gates which do not
commute with the dominant error is not possible with two-dimensional systems. In this talk, I
will demonstrate a way to circumvent this using the non-trivial topology of a continuous family
of Schrödinger cat-state qubits. These bosonic qubits can be experimentally realized in a
driven non-linear (Kerr) oscillator and exhibit a phase-flip rate that is exponentially suppressed
relative to the bit-flip rate. The phase of the drive provides a continuous parameter that permits
topologically-protected realization of CNOT gates in a bias-preserving manner. I will present all
the bias-preserving operations possible with these cat qubits and discuss how these can lead
to efficient quantum error correction codes.


Next week
Title: Strengths and weaknesses of quantum examples for learning

Abstract: There has been a lot of recent interest in quantum
algorithms for machine learning. These try to improve over classical
algorithms for generalizing or otherwise learning from given data (the data
itself could also be quantum).

I consider the following setting: Let C be a concept class of Boolean
functions f:{0,1}^n -> {-1,1}. In the classical learning model, a learner is
given samples (x, f(x)) where x is drawn according to a distribution D.
Quantumly an example would be a *superposition* over samples. The goal of
the learner is to output a hypothesis with error at most eps. In this talk,
I describe a few instances when quantum examples help and when they don't.
No prior knowledge of quantum computing or quantum physics is necessary to
understand this talk.

a) In the classical PAC model, D is unknown to the learner. Fundamental
results in classical learning show that the classical sample complexity of
a concept class C in the PAC model is tightly determined by the so-called
VC-dimension of C. Our main result is to show that exactly the same formula
gives the quantum sample complexity, showing that quantum examples do not
help in the PAC model of learning (nor in the related agnostic model).
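(An illustration of mine, not part of the abstract.) The tight classical bound referred to above has the asymptotic form m = Theta((d + log(1/delta)) / eps) for VC-dimension d, error eps, and failure probability delta; the result says the same formula, constants aside, governs quantum examples. A constant-free sketch:

```python
import math

def pac_sample_bound(vc_dim, eps, delta):
    """Order-of-magnitude PAC sample complexity, Theta((d + log(1/delta)) / eps).

    Constants omitted; per the result above, the same formula also governs
    the quantum sample complexity.
    """
    return (vc_dim + math.log(1 / delta)) / eps

# Example: halving the target error eps doubles the required samples,
# since the bound is linear in 1/eps.
m1 = pac_sample_bound(vc_dim=10, eps=0.1, delta=0.05)
m2 = pac_sample_bound(vc_dim=10, eps=0.05, delta=0.05)
```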

b) Let D be the uniform distribution and eps=0 (i.e., we need to *identify* f),
and suppose C contains Boolean functions having at most k non-zero Fourier
coefficients; then how many classical examples suffice to identify f? This
question has been studied for over a decade under the name of sparse
recovery and Fourier sampling, having applications in compressed sensing
and the data stream model. Regev and Haviv (CCC'15) showed that Theta(nk)
classical samples are necessary and sufficient to solve this problem. In this
work, we solve this problem using O(k^{1.5}) quantum samples and show that
Omega(k) quantum samples are necessary (importantly, this bound is independent
of n).
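(A toy example of mine, not the speaker's.) To make "at most k non-zero Fourier coefficients" concrete: a parity function is maximally sparse, with k = 1. Here the full spectrum hat{f}(S) = E_x[f(x) * (-1)^{S.x}] is computed by brute force for n = 3:

```python
from itertools import product

def fourier_coefficients(f, n):
    """All 2^n Fourier coefficients hat{f}(S) = E_x[f(x) * (-1)^{S.x}]
    of f: {0,1}^n -> {-1,+1}, indexed by subsets S given as 0/1 tuples."""
    points = list(product([0, 1], repeat=n))
    coeffs = {}
    for S in points:
        total = 0
        for x in points:
            chi = (-1) ** sum(s * xi for s, xi in zip(S, x))  # character chi_S(x)
            total += f(x) * chi
        coeffs[S] = total / 2 ** n
    return coeffs

# The parity f(x) = (-1)^(x1 + x2) has a single non-zero coefficient,
# sitting at S = (1, 1, 0): Fourier sparsity k = 1.
f = lambda x: (-1) ** (x[0] + x[1])
coeffs = fourier_coefficients(f, 3)
nonzero = [S for S, c in coeffs.items() if abs(c) > 1e-9]
```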

c) I also consider the coupon-collector problem and show how quantum
examples give a logarithmic improvement over the classical algorithms that
solve it.
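(A baseline check of mine, not from the abstract.) Classically, collecting all n coupon types from uniform draws takes about n*H_n ~ n ln n draws in expectation; this is the n ln n baseline that the quantum examples improve on logarithmically. A quick simulation:

```python
import random

def draws_to_collect_all(n, rng):
    """Number of uniform draws from {0, ..., n-1} until every value is seen."""
    seen = set()
    draws = 0
    while len(seen) < n:
        seen.add(rng.randrange(n))
        draws += 1
    return draws

rng = random.Random(0)  # fixed seed for reproducibility
n = 50
trials = 2000
avg = sum(draws_to_collect_all(n, rng) for _ in range(trials)) / trials
# Expectation is n * H_n, roughly n * (ln n + 0.577); for n = 50 that's about 225.
```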

Based on joint work with Ronald de Wolf and others.