Research
My research is in two main areas: quantum information and quantum metrology. I have invented algorithms for solving differential equations and simulating quantum systems, as well as techniques for the most accurate possible phase measurements.
1. Quantum Information
Quantum algorithms
Quantum computers would employ the power of quantum superposition to perform many calculations in parallel, and thereby surpass even the largest classical computers. Rather than using bits, which can only be in the state 0 or 1, quantum computers use quantum bits, or qubits, which can be in any superposition of 0 and 1. The number of possible states in this superposition increases exponentially with the number of qubits, so, for example, with only 30 qubits one can have over a billion states in the superposition. Moreover, quantum computers would be able to perform calculations on each of these states simultaneously, giving rise to massive parallelism far beyond the reach of even the most powerful classical computers available today.
For example, the most powerful computer in the world is currently China's Tianhe-2 supercomputer. It contains 3,120,000 processor cores for a performance of 33.86 petaflops. In comparison, a quantum computer using a superposition of only 22 qubits would be able to perform more parallel calculations: 2^22 = 4,194,304. It is not quite this easy, because it is not possible to directly access the results of each of these parallel calculations. This is where quantum algorithms come in. Quantum algorithms use interference to extract useful information without needing to know the detailed results of each of the individual calculations. My work provides a basis for the most important applications for quantum computing – the “killer apps”.
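As a back-of-the-envelope illustration of this comparison (a minimal sketch only; the core count is simply the Tianhe-2 figure quoted above, and nothing here models actual hardware), the following Python snippet shows how the number of states in a superposition grows with the number of qubits:

```python
# Illustrative only: size of an n-qubit state space versus a classical core count.
classical_cores = 3_120_000  # Tianhe-2 processor cores (figure quoted above)

for n_qubits in (22, 30, 50):
    n_states = 2 ** n_qubits  # number of basis states in the superposition
    print(f"{n_qubits} qubits -> {n_states:,} states "
          f"({n_states / classical_cores:,.1f}x the core count)")
```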
The first killer app is simulation of quantum systems. I have developed the most powerful known algorithms for this task. In 2007, I showed how to use quantum computers to simulate quantum systems far more efficiently than was previously known [CMP 270, 359 (2007)]. I have developed a way to apply these efficient simulation methods not only to static systems, but also to time-dependent systems [JPA 44, 445308 (2011)]. Using quantum walks, I have improved the efficiency of these algorithms even further, to scale at the limits of what is theoretically possible in terms of the simulation time [QIC 12, 29 (2012)].
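To convey the flavour of these simulation algorithms, here is a minimal numerical sketch (using NumPy and SciPy) of the product-formula idea on which such algorithms build: the evolution under a sum of Hamiltonians is approximated by alternating short evolutions under each term. The matrices, time, and step count here are purely illustrative; the cited algorithms use higher-order decompositions and, later, quantum walks.

```python
# Minimal sketch of the product-formula (Trotter) idea behind
# Hamiltonian-simulation algorithms: exp(-i(A+B)t) is approximated by
# alternating short evolutions under A and B.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [1.0, 0.0]])   # Pauli X (illustrative Hamiltonian term)
B = np.array([[1.0, 0.0], [0.0, -1.0]])  # Pauli Z (does not commute with X)
t, r = 1.0, 100                           # total evolution time, number of steps

exact = expm(-1j * (A + B) * t)
step = expm(-1j * A * t / r) @ expm(-1j * B * t / r)
trotter = np.linalg.matrix_power(step, r)

print("approximation error:", np.linalg.norm(exact - trotter))  # shrinks as r grows
```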
Simulation of physical systems is an extremely important application, and much of the time of the world’s most powerful computers is dedicated to this task. Fast simulation of quantum systems can lead to an enormous range of advances. For example, simulation of proteins and other biomolecules gives better understanding of their structure and behaviour, and can lead to development of new pharmaceuticals, or enzymes for the more efficient production of biofuels.
The second killer app is solving differential equations. I have developed an algorithm to solve linear differential equations [arXiv:1010.2745]. This algorithm represents a great advance over an earlier algorithm for solving nonlinear differential equations [arXiv:0812.4423], which scaled exponentially in the number of time steps. The importance of solving differential equations cannot be overstated, because this includes most of the other applications of supercomputers. For example, it can be used to simulate global climate models for predicting weather and climate change, to model plasma flow for fusion reactor design, to simulate fluid dynamics for designing aircraft, and many other applications.
The quantum algorithm for differential equations does have a number of limitations, however. It is limited to linear differential equations, and also gives the solution encoded as a quantum state, rather than explicitly.
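As a rough indication of why linearity matters here (a schematic only, using a simple Euler discretisation for illustration; the algorithm in [arXiv:1010.2745] uses a more careful encoding), a linear system dx/dt = Ax + b can be discretised in time and rewritten as one large system of linear equations, which a quantum linear-systems algorithm can then solve with the answer encoded in a quantum state:
\[
x_{j+1} = x_j + h\,(A x_j + b)
\quad\Longleftrightarrow\quad
\begin{pmatrix} I & & & \\ -(I+hA) & I & & \\ & \ddots & \ddots & \\ & & -(I+hA) & I \end{pmatrix}
\begin{pmatrix} x_0 \\ x_1 \\ \vdots \\ x_m \end{pmatrix}
=
\begin{pmatrix} x(0) \\ hb \\ \vdots \\ hb \end{pmatrix}.
\]
A nonlinear equation cannot be written as a fixed linear system in this way, which is one reason the linear case is so much more tractable.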
The other killer apps for quantum computing are a direct result of my work in [CMP 270, 359 (2007)]. There I showed not only how to simulate physical systems, but also how to simulate quantum evolution under sparse Hamiltonians. This has enabled a wide range of other algorithms. In particular, it is the basis for new quantum algorithms to solve systems of linear equations [PRL 103, 150502 (2009)], and to evaluate NAND trees [Theory of Computing 5, 119 (2009)].
The solution of systems of linear equations is an important area, because it includes a wide variety of problems, and itself can form the basis for new algorithms (such as my algorithm for solving differential equations). The evaluation of NAND trees solves an extremely general class of problems. This corresponds to a situation where two adversaries alternately make moves; for example, chess is a problem of this type. The algorithm for NAND tree evaluation could potentially solve the game of chess – something that was once thought to be completely infeasible.
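To make the connection to games concrete, here is a small classical sketch of NAND-tree evaluation (illustrative code only, not the quantum algorithm; the quantum algorithm evaluates the same tree with far fewer queries to the leaves than is possible classically):

```python
# Classical sketch of NAND-tree evaluation and its game interpretation.
# Leaves are booleans: True if that terminal position is a win for the
# player whose turn it is there.
def nand_tree(node):
    if isinstance(node, bool):   # leaf: terminal position
        return node
    left, right = node           # internal node: the two possible moves
    # The player to move wins iff some move leads to a position that is
    # NOT winning for the opponent: win = NAND(win(left), win(right)).
    return not (nand_tree(left) and nand_tree(right))

# Example: a depth-2 game tree with four terminal positions.
tree = ((True, False), (True, True))
print(nand_tree(tree))  # True: the player to move has a winning strategy
```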
Photon efficiency
A leading alternative for constructing quantum computers is using optics, and many small-scale demonstrations have been achieved. A major problem with optical demonstrations is that of photon loss. One of the main locations for photon loss is at the source; that is, the photon source does not always emit a photon. It is therefore vital to understand how the photon efficiency – the probability that a single photon is present – behaves under linear optical processing, and whether it can be improved.
In 2004 I showed that it is indeed possible to increase the efficiency [PRA 69, 031806(R) (2004)], and I expanded greatly on these results in [NJP 6, 93 (2004)]. Then in [Optics Letters 31, 107 (2006)], [JOSA B 24, 189 (2007)] I showed how to treat photon sources with coherence between the zero- and single-photon components. These results showed what the corresponding efficiency is for these sources, which previously could not be quantified. In work with Alex Lvovsky, I have recently solved many of the outstanding problems in this area [PRL 105, 203601 (2010)].
Anyon simulation
In order to improve the resource efficiency and loss tolerance of optical quantum computing, it is vital to find new paradigms for performing the computation. One alternative is provided by anyons. Anyons can potentially provide enormous resistance against error, but are extremely difficult to produce. A promising solution is therefore to simulate anyons using another physical system. A demonstration of simulation of abelian anyons was achieved using photons [PRL 102, 030502 (2009)], but abelian anyons are difficult to use for quantum computation. Therefore, I invented a method to simulate non-abelian anyons using photons [NJP 12, 053011 (2010)]. This is a major step towards an alternative paradigm for quantum computing using optics.
Bell inequalities
A major task in the foundations of quantum mechanics is to show that quantum systems really are in superpositions of different states, rather than just having some definite state which is unknown (a “hidden variable”). To prove this, Bell inequalities are used, which would be satisfied by any system that is not in a superposition and obeys the usual rules of probability. Demonstrating violations of Bell inequalities shows that quantum systems are in a superposition. A significant problem in Bell inequality experiments, particularly optical experiments, is that of loss. To address this problem, typically the fair sampling assumption is used, which is that the loss is (at least) independent of the measurement settings. I showed that this assumption is actually unnecessary [PRA 81, 012109 (2010)]. In fact, one can use a much weaker assumption, showing that Bell inequalities are more robust against loss than previously believed.
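For concreteness, the best-known example is the CHSH form of Bell inequality (a standard textbook statement, not specific to the cited paper): for two measurement settings a, a' on one side and b, b' on the other, each with outcomes ±1, any description in terms of pre-existing hidden variables satisfies
\[
S = E(a,b) + E(a,b') + E(a',b) - E(a',b'), \qquad |S| \le 2,
\]
whereas quantum mechanics allows |S| to reach 2√2. It is in handling lost (undetected) events when estimating the correlations E that the fair sampling assumption normally enters.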
2. Quantum Metrology
The other key quantum technology is quantum metrology.
Measurement is the basis of quantitative science, and the most precise forms of measurement are based on interferometry. Normally interferometry gives precision scaling as 1/N^(1/2) in the number of resources N. In contrast, quantum metrology using entangled states has the potential to provide precision scaling as 1/N, giving vastly greater precision. For example, the current state of the art in distance measurement is LIGO (the Laser Interferometer Gravitational-Wave Observatory), which uses a 35 W laser beam to achieve distance measurements with precision of one part in 10^21. In comparison, using entangled states, the same precision could be achieved using only nanowatts.
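A toy numerical illustration of the difference between these two scalings (a minimal sketch with idealised, independent single-photon probes and illustrative parameters; it is not a model of LIGO or of any of the schemes described below):

```python
# Toy illustration of the standard quantum limit: estimating a phase from
# N independent probes gives an error scaling as 1/sqrt(N), whereas the
# Heisenberg limit scales as 1/N.
import numpy as np

rng = np.random.default_rng(0)
true_phase = 1.0
for N in (100, 10_000, 1_000_000):
    # Each probe gives outcome 1 with probability cos^2(phi/2) (idealised model).
    p = np.cos(true_phase / 2) ** 2
    counts = rng.binomial(N, p, size=1000)          # 1000 repetitions of the experiment
    estimates = 2 * np.arccos(np.sqrt(counts / N))  # invert the model to estimate the phase
    print(f"N = {N:>9,}: std of estimate ~ {estimates.std():.2e}, "
          f"1/sqrt(N) = {1/np.sqrt(N):.2e}, 1/N = {1/N:.2e}")
```

Running this shows the spread of the estimate tracking 1/N^(1/2); beating that scaling and approaching 1/N is what the entangled-state schemes discussed below are designed to do.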
It has long been a goal to achieve improved precision using nonclassical light in gravitational wave detectors. This was first proposed by
Carl Caves in 1981. A squeezed light source has now been installed in the GEO600 gravitational wave detector, and has enabled it to achieve its best ever sensitivity. In addition, a squeezed light source was tested in LIGO in 2011, prior to its conversion into Advanced LIGO. These squeezed sources provide a constant-factor improvement in sensitivity, rather than improving the scaling to 1/N. Nevertheless, they have the potential to finally increase sensitivity to the point where gravitational waves can be detected.
Even if states with the potential to yield 1/N scaling could be generated, there is the problem that the quantity that needs to be measured is the phase, but this cannot be measured directly. It is necessary to devise a measurement algorithm that can extract this enhanced phase information. I made the key discoveries in the theory of measurement algorithms, showing how to achieve the promised improvement in precision.
The central difficulty in performing measurements is that if no
information about the phase is used, then the measurement itself
usually introduces an additional uncertainty scaling as 1/N^(1/2),
so it is not possible to obtain the nonclassical improvements. To
obtain 1/N scaling for the uncertainty it is typically necessary to use
an estimate of the phase with precision 1/N. Moreover, for a
self-contained measurement, this phase information must be obtained
during the measurement. Designing these adaptive measurements is
a highly nontrivial task. Prior to my work, it was thought that
one simply needed to use the best available estimate of the phase to
adjust the interferometer to its operating point. I discovered
that this approach does not achieve the desired scaling,
and in fact it is necessary to displace the interferometer from its
operating point.
In particular, in 2000 I pioneered the theory of adaptive phase
measurements based on Bayesian analysis [PRL 85, 5098 (2000)].
This method is the foundation for the recent experimental work
[NJP 11, 073023 (2009)], [Nature 450, 393 (2007)], [Nature Photonics 5, 43 (2011)]. This work showed that small phase uncertainty was obtained by a radically new approach – selecting a feedback phase that minimises the expected phase uncertainty after the next detection. In addition, I showed how to perform the most
accurate possible phase measurements using a local oscillator [PRA 63, 013813 (2001)].
This significantly improved on the accuracy of previous proposals, and
achieved the theoretical bound for measurements on a state with a
single time mode. This work again showed the remarkable result
that the phase estimate that is used in adjusting the measurement
should not be the best estimate, and one needs to use a much more
sophisticated measurement algorithm.
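The flavour of this kind of Bayesian adaptive measurement can be conveyed by a minimal single-photon sketch (an illustrative model and parameters only, not the optimised protocols of the cited papers): after each detection the phase distribution is updated by Bayes' rule, and the next feedback phase is chosen to maximise the expected sharpness of the distribution, i.e. to minimise the expected phase uncertainty after the next detection.

```python
# Minimal sketch of a Bayesian adaptive phase measurement with single photons.
import numpy as np

rng = np.random.default_rng(1)
true_phase = 1.0
grid = np.linspace(0, 2 * np.pi, 720, endpoint=False)  # candidate phase values
posterior = np.ones_like(grid) / grid.size             # flat prior over the phase

def sharpness(weights):
    """|<e^{i phi}>| of a (possibly unnormalised) distribution; larger = less uncertain."""
    return np.abs(np.sum(weights * np.exp(1j * grid)))

for _ in range(200):
    # Choose the feedback phase maximising the expected sharpness after the next detection.
    thetas = np.linspace(0, np.pi, 32)
    scores = []
    for theta in thetas:
        like1 = np.cos((grid - theta) / 2) ** 2   # P(detector 1 | phase)
        like0 = 1 - like1                         # P(detector 0 | phase)
        scores.append(sharpness(posterior * like1) + sharpness(posterior * like0))
    theta = thetas[int(np.argmax(scores))]
    # Simulate one photon and update the posterior by Bayes' rule.
    click = rng.random() < np.cos((true_phase - theta) / 2) ** 2
    likelihood = np.cos((grid - theta) / 2) ** 2 if click else np.sin((grid - theta) / 2) ** 2
    posterior = posterior * likelihood
    posterior /= posterior.sum()

estimate = np.angle(np.sum(posterior * np.exp(1j * grid))) % (2 * np.pi)
print(f"true phase: {true_phase:.3f}  estimate: {estimate:.3f}")
```

Note that the chosen feedback phase is generally not simply the current best estimate, which is the point made above.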
This work was limited to achieving phase uncertainty scaling as (log N)^(1/2)/N, because this is the best that can be achieved with a single time mode. Then in 2007 I showed how to use multiple time modes to circumvent this bound and achieve measurements with uncertainty scaling at the limit of 1/N. My experimental collaborators, led by Geoff Pryde, demonstrated this protocol in the laboratory, and this work was published in Nature [Nature 450, 393 (2007)]. This work received commentary in both Nature and Science, and attracted national media attention.
Since then, I have developed even more powerful techniques to achieve the 1/N scaling with measurements that are not adaptive. This is an extraordinary achievement, because measurements that are not adaptive normally introduce an uncertainty of 1/N^(1/2). Again my experimental collaborators achieved these measurements in the laboratory, and this work was published in New Journal of Physics [NJP 11, 073023 (2009)]. I developed an extensive theoretical basis for this work, introducing many innovations, such as considering the phase uncertainty for an equivalent state in a single time mode, and rigorously showed how to count resources in interferometry [PRA 80, 052114 (2009)].
The advanced measurement schemes that I developed in [PRA 80, 052114 (2009)], [NJP 11, 073023 (2009)], [Nature 450, 393 (2007)] can be applied to both entangled “NOON” states and multipass interferometry, and were demonstrated with multipass interferometry in [NJP 11, 073023 (2009)], [Nature 450, 393 (2007)].
The next challenge is to develop measurement schemes that can be
performed with the entangled states that are available in practice, and
thereby demonstrate the rapid measurements possible with entangled
states (multipass interferometry increases the measurement time).
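The equivalence between the NOON-state and multipass approaches, and the reason multipass takes longer, can be seen from how the phase enters (standard expressions, shown here only for illustration). A NOON state picks up the phase N times in a single pass,
\[
\tfrac{1}{\sqrt{2}}\bigl(|N,0\rangle + |0,N\rangle\bigr) \;\to\; \tfrac{1}{\sqrt{2}}\bigl(|N,0\rangle + e^{iN\phi}|0,N\rangle\bigr),
\]
while a single photon sent through the phase shift N times acquires the same factor e^{iNφ}. In both cases the interference fringes oscillate N times faster, which is what permits the 1/N scaling; the multipass version pays for this with a longer measurement time.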
I made a major step in this direction by developing a measurement
scheme that uses the actual states of a given photon number produced by
the experimental apparatus. In contrast, previous work used
postselection on only those measurement results corresponding to a
4-photon NOON state. My experimental collaborators have now
achieved this measurement scheme in the laboratory, and this has been published in Nature Photonics [Nature Photonics 5, 43 (2011)].
Another major problem in phase measurement is that of tracking a
fluctuating phase. In 2002 [PRA 65, 043803 (2002)],
as well as in 2006 [PRA 73, 063824 (2006)] with corrections in 2013
[PRA 87, 019901(E) (2013)],
I developed extensive theory for tracking a
fluctuating phase using coherent and squeezed states. This theory
showed that the usual scaling laws that are expected for measurements
of a fixed phase are dramatically changed. In addition, it again
showed that the measurements used should not be based on the best phase
estimate, and the measurement algorithm that is needed is far more
subtle.
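A minimal discretised sketch of this kind of phase tracking (an assumed, illustrative model with illustrative parameters; the cited papers use optimal filtering and smoothing rather than the simple proportional filter below): the phase diffuses randomly, the local oscillator is kept near the current estimate plus π/2, and the homodyne current is fed back to update the estimate.

```python
# Toy simulation of tracking a diffusing phase with adaptive homodyne detection.
import numpy as np

rng = np.random.default_rng(2)
dt, steps = 1e-4, 50_000
kappa = 10.0           # phase diffusion rate (illustrative)
alpha = 50.0           # coherent-state amplitude (illustrative)
gain = np.sqrt(kappa)  # filter gain balancing diffusion against measurement noise

phi = 0.0              # true, fluctuating phase
phi_est = 0.0          # running estimate; also sets the local-oscillator phase
errors = []
for _ in range(steps):
    phi += np.sqrt(kappa * dt) * rng.normal()   # phase performs a random walk
    # Homodyne current with the local oscillator near phi_est + pi/2, so the
    # mean signal is approximately proportional to the tracking error.
    current = 2 * alpha * np.sin(phi - phi_est) * dt + np.sqrt(dt) * rng.normal()
    phi_est += gain * current                   # simple proportional feedback
    errors.append(phi - phi_est)

print("rms tracking error (rad):", np.std(errors))
```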
In the newest results, we have shown that the scaling found for adaptive measurements is in fact the best possible for arbitrary measurements on Gaussian states [arXiv:1306.1279]. These results will be needed for interferometers with highly squeezed inputs.
I have been closely collaborating with the experimental group headed by
Elanor Huntington, as well as the experimental group of Akira Furusawa,
on the demonstration of these schemes. Initial work with coherent
states was published in Physical Review Letters in 2010
[PRL 104, 093601 (2010)]. This work also used the theory of quantum
smoothing, which uses data from both before and after the time of
interest, to obtain improved accuracy. This work has been
highlighted as a Physical Review Letters editor’s suggestion, as well
as with a synopsis on Physics (which spotlights exceptional
papers from Physical Review journals), and has been reported by the
science reporting website physorg.
We have now had success in achieving these measurement techniques using squeezed
states, and this work was published in Science
[Science 337, 1514 (2012)].