Friday, June 8, 2007

An experimental test of non-local realism

By Hein Teunissen
Quantum mechanics (QM) seriously challenges classical, intuitive ideas of how nature works. Einstein’s objections to QM were based on his belief that any physical theory must obey the concept of ‘local realism’. Local realism assumes that the results of measurements on a system localized in space-time are fully determined by pre-existing properties carried along by that system (its physical reality) and cannot be instantaneously influenced by a distant event (locality). But in a famous thought experiment on entangled particles, Einstein, Podolsky and Rosen argued that QM conflicts with this condition (the EPR paradox).

In quantum mechanics, entangled particles are groups of particles that cannot be described independently of one another, even when they are physically isolated from each other. Together they form a single state described by one quantum mechanical wavefunction, which means that measurements on them yield perfectly correlated quantum numbers, and this correlation persists when the particles are separated (in sharp contrast to the apparent randomness of individual microscopic measurements).
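To make the ‘single wavefunction’ statement concrete, here is a minimal numerical sketch (my own illustration, not taken from the paper) of a two-photon polarization state of the form (|HH> + |VV>)/sqrt(2): whichever polarization one photon is found in, the other is always found in the same one.

    import numpy as np

    # Single-photon polarization basis states.
    H = np.array([1.0, 0.0])
    V = np.array([0.0, 1.0])

    # Two-photon entangled state (|HH> + |VV>)/sqrt(2): one wavefunction
    # for the pair, not a product of two independent one-photon states.
    psi = (np.kron(H, H) + np.kron(V, V)) / np.sqrt(2)

    # Probabilities of the four joint outcomes HH, HV, VH, VV.
    probs = np.abs(psi) ** 2
    print(dict(zip(["HH", "HV", "VH", "VV"], probs.round(3))))
    # -> HH and VV each occur with probability 0.5; HV and VH never occur,
    #    so the two polarization measurements are perfectly correlated.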

In the ‘Copenhagen interpretation’ of QM, the act of measurement on a quantum system causes an instantaneous collapse of the wavefunction of the system, so a measurement on one of the entangled particles must instantaneously alter the state of the other particle. According to Einstein, this was ‘spooky action at a distance’, which defies locality. In an attempt to preserve locality, it was proposed that the description of reality given by the quantum mechanical wavefunction was incomplete. It was thought that a more complete theory could be formulated based on local realism, in which the physical reality of a system is fully described by a set of ‘local hidden variables’; QM would then appear probabilistic only because these variables remain hidden from it.

Many experiments have been performed on entangled particles, for which quantum mechanics provides very accurate predictions. In 1964, John Bell showed that these predictions cannot be reproduced by any alternative theory based on local realism (Bell’s theorem). The observed accuracy of QM in so-called Bell test experiments has led to the broad acceptance that local realism is violated.

Because the argument put forth by Einstein and his colleagues is very powerful, scientists have made numerous attempts to find out more about the ‘correct’ interpretation of QM. In the paper ‘An experimental test of non-local realism’ (Nature, April 2007), Gröblacher et al. try to determine which of the two ingredients of local realism may be the problem: ‘locality’ or ‘realism’. In the same vein, Anthony Leggett had previously proposed a broad class of theories that give up the concept of locality. These theories both explain all Bell-type experiments and model the perfect correlations of entangled states, while still being based on a plausible type of realism.

In the recent Nature article, Gröblacher and his co-workers show, both theoretically and experimentally, that the class of non-local realistic theories proposed by Leggett is incompatible with experimentally observable quantum correlations between entangled photons. The authors start from an inequality derived by Leggett for his proposed class of non-local realistic theories, and extend this inequality to make it applicable to real experimental situations and to allow simultaneous tests of all local hidden-variable models. The experimental results thus allow one both to (once more) exclude all local hidden-variable models, via the CHSH inequality, and to test the mentioned class of non-local hidden-variable theories, via the Leggett inequality.
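As a rough illustration of the CHSH side of such a test (a toy calculation, not the authors’ analysis): quantum mechanics predicts a polarization correlation E(a, b) = cos 2(a - b) for a maximally entangled photon pair, and the CHSH combination of four such correlations reaches 2*sqrt(2), above the local-realist bound of 2. The measurement angles below are a standard illustrative choice, not those of the experiment.

    import numpy as np

    # Quantum prediction for the polarization correlation of a maximally
    # entangled photon pair: E(a, b) = cos 2(a - b).
    def E(a, b):
        return np.cos(2 * (a - b))

    # Illustrative analyzer angles (radians): 0 and 45 degrees on one side,
    # 22.5 and 67.5 degrees on the other.
    a1, a2 = 0.0, np.pi / 4
    b1, b2 = np.pi / 8, 3 * np.pi / 8

    S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
    print(f"S = {S:.3f}")   # 2*sqrt(2) ~ 2.828, above the local-realist bound of 2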

To experimentally test the obtained inequalities, Gröblacher and his colleagues generate pairs of photons whose polarization states are entangled. They modified an experiment that had previously been used to test Bell’s inequality, in which entangled photons are generated and then separated. In the experiment to test Leggett’s inequality, the polarization of one of the photons is transformed into an elliptical state. Each photon is then passed through a medium that absorbs the photon depending on its polarization. The inequalities are tested by counting the number of times both entangled photons are transmitted. They found that the experimental results are accurately predicted by quantum mechanics and that both the CHSH inequality and Leggett’s inequality are significantly violated.
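To give a feel for how correlations are extracted from such counts (a toy Monte Carlo under quantum statistics, not the authors’ data analysis), one can sample joint transmitted/absorbed outcomes behind a pair of polarization analyzers and form the standard correlation estimator from the counts:

    import numpy as np

    rng = np.random.default_rng(0)

    def estimate_correlation(a, b, n=200_000):
        # For a maximally polarization-entangled pair, quantum mechanics gives
        # P(the two photons behave the same way) = cos^2(a - b) for analyzer angles a, b.
        p_same = np.cos(a - b) ** 2
        same = rng.random(n) < p_same
        n_same = same.sum()
        n_diff = n - n_same
        return (n_same - n_diff) / n

    # Example: relative analyzer angle of 22.5 degrees.
    print(round(estimate_correlation(0.0, np.pi / 8), 3))   # close to cos(pi/4) ~ 0.707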

The authors state:

Our result suggests that giving up the concept of locality is not sufficient to be consistent with quantum experiments, unless certain intuitive features of realism are abandoned… We believe that the experimental exclusion of this particular class [introduced by Leggett] indicates that any non-local extension of quantum theory has to be highly counterintuitive… We believe that our results lend strong support to the view that any future extension of quantum theory that is in agreement with experiments must abandon certain features of realistic descriptions.


The history of quantum mechanics is very interesting. Many notable physicists have worked in this field to resolve complications regarding the interpretation of QM, and its history is characterized by a mixture of physical test cases and philosophical arguments. The claim that follows from Bell’s theorem is very powerful, namely that no physical theory based on both locality and realism can ever match the accuracy of quantum mechanical predictions. The claim that follows from the violation of Leggett’s inequality is more modest: realism combined with a certain type of non-locality is incompatible with QM. Useful in its own right, this research also contributes to a better understanding of phenomena like entanglement, which may one day enable a new technological revolution: quantum computation.

Thursday, May 31, 2007

Beyond the diffraction limit

By Olivier Rekers
Due to the diffraction limit, it is hard to look at things that are smaller than the wavelength of the imaging light. But with the help of a 'hyperlens', it is possible to produce magnified images of such sub-wavelength objects. Hyperlenses and their close cousins, superlenses, have received a lot of attention recently because they have the potential to provide detailed information on living biological systems, unlike other high-resolution imaging techniques such as scanning tunneling microscopy.

How do hyperlenses avoid the diffraction limit? To understand that, we need to look at what happens to the light when it strikes an object smaller than its wavelength. When light hits any object, information about the object is encoded in the light through changes in its amplitude, phase, and direction of travel. When the object is smaller than the wavelength of the light, the part of the light that carries the fine-detail information doesn't propagate like normal light; instead it dies away exponentially, largely fading within a distance of about one wavelength from the object. These so-called evanescent waves are the key to breaking through the diffraction limit. By controlling the refractive index, it is possible to create a situation in which the evanescent waves do not vanish but propagate like normal light, and they can then be used to image the very small object. Two papers published in Science contain experimental details of hyperlenses that do just this.
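A rough way to see this (a back-of-the-envelope sketch, not taken from either paper): a field component carrying detail of size d smaller than the wavelength has a transverse spatial frequency kx = 2*pi/d larger than the free-space wavevector k0 = 2*pi/wavelength, so the wavevector component pointing away from the object becomes imaginary and the amplitude falls off exponentially, faster for finer detail.

    import numpy as np

    wavelength = 500e-9            # illustrative visible wavelength (m)
    k0 = 2 * np.pi / wavelength

    for feature in [400e-9, 200e-9, 100e-9]:   # sub-wavelength feature sizes
        kx = 2 * np.pi / feature               # transverse spatial frequency
        kappa = np.sqrt(kx**2 - k0**2)         # decay constant of the evanescent wave
        decay = np.exp(-kappa * wavelength)    # amplitude left after one wavelength
        print(f"{feature*1e9:.0f} nm feature: amplitude x{decay:.1e} after one wavelength")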

A 'hyperlens' is made out of a cylindrical, layered structure whose dielectric constants have different signs across the layers (the radial axis) and along the layers (the tangential axis). Both half-cylindrical and full cylindrical lenses are possible; the papers summarized here report on one of each.
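To see why the opposite signs matter (a sketch with made-up permittivity values, not those of the actual devices): in such an anisotropic medium the wavevector components satisfy k_t^2/eps_r + k_r^2/eps_t = (w/c)^2, and when eps_r and eps_t have opposite signs this curve is a hyperbola rather than an ellipse, so there is no upper limit on the tangential wavevector, and hence on the fineness of detail, that can propagate through the lens.

    import numpy as np

    # Illustrative (made-up) permittivities with opposite signs, as in a hyperlens medium.
    eps_tangential = 3.0      # along the layers
    eps_radial = -2.0         # across the layers
    k0 = 1.0                  # free-space wavevector, in arbitrary units

    # Dispersion relation for the extraordinary wave:
    #   k_t**2 / eps_radial + k_r**2 / eps_tangential = k0**2
    # Solve for k_r at increasingly large tangential wavevectors k_t.
    for kt in [1, 5, 20, 100]:
        kr_squared = eps_tangential * (k0**2 - kt**2 / eps_radial)
        print(f"k_t = {kt:>3}: k_r^2 = {kr_squared:8.1f}  (positive -> still a propagating wave)")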

The two groups differ in their methods for achieving the necessary strong anisotropy in their 'hyperlens' medium. The group of Liu used a curved, periodic stack of silver and aluminum oxide layers, deposited on a concave quartz substrate. The object to be imaged is placed in contact with the lens, in this case a chromium layer inscribed with a pattern. Using a conventional lens, it is then possible to project a magnified image of the sub-wavelength structure.
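For wavelengths much larger than the layer thickness, such a metal/dielectric multilayer behaves like a single anisotropic medium. A minimal effective-medium sketch, using approximate permittivity values that I have assumed for silver and aluminum oxide in the near-UV (losses ignored; not numbers from the paper):

    # Effective-medium estimate for a periodic silver / aluminum-oxide stack.
    eps_metal = -2.4          # silver, approximate real part near 365 nm (assumed)
    eps_dielectric = 3.2      # aluminum oxide, approximate (assumed)
    f = 0.5                   # metal filling fraction (equal layer thicknesses)

    eps_tangential = f * eps_metal + (1 - f) * eps_dielectric        # along the layers
    eps_radial = 1.0 / (f / eps_metal + (1 - f) / eps_dielectric)    # across the layers

    print(f"eps_tangential ~ {eps_tangential:.2f}")   # ~ +0.40 (positive)
    print(f"eps_radial     ~ {eps_radial:.2f}")       # ~ -19.2 (negative)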

The group of Smolyaninov combines the idea of a hyperlens with the earlier concept of a superlens. A superlens is made of a single layer of a metamaterial with a negative refractive index. In this case, the negative refractive index is created by depositing concentric rings of poly(methyl methacrylate) on a gold film surface. The evanescent wave impinging on the gold surface excites a surface plasmon polariton wave, which experiences the ring structure as a medium with a negative refractive index. Snell's law then ensures that the superlens magnifies a sub-wavelength sample in a ring; the magnification increases as you travel outwards from the center point. Near the edge of the superlens, the magnification is sufficient that a conventional microscope objective can image the sample.

Both lenses have significant advantages over conventional microscopy, which is bound by the diffraction limit and cannot resolve features smaller than roughly 200 nm. Viruses, proteins, DNA molecules, and many other samples that are impossible to visualize clearly with a regular microscope may therefore soon be accessible to visible-light microscopy. Used in combination with labeling or spectroscopic techniques that enable the observer to identify different structures, this could become a very useful tool for identifying molecular pathways.
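The figure of roughly 200 nm follows from the Abbe criterion, d = wavelength / (2 NA); a quick check with typical numbers (my own illustration, not from either paper):

    # Abbe diffraction limit: smallest resolvable feature d = wavelength / (2 * NA).
    wavelength_nm = 550        # green light, middle of the visible spectrum
    numerical_aperture = 1.4   # a good oil-immersion objective

    d_nm = wavelength_nm / (2 * numerical_aperture)
    print(f"resolution limit ~ {d_nm:.0f} nm")   # ~ 196 nm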

Wednesday, May 23, 2007

Molecular Voyeurism: coming to a microscope near you

The development of the atomic force microscope (AFM) and the scanning tunneling microscope (STM) provides resolution that no other imaging technique can touch. These microscope techniques have progressed to the point that if an atom is dropped onto a specially prepared, atomically flat surface, the microscope will provide you with a picture of the atom: the proverbial red blob on a black background. The same is true of small molecules, where each atom is imaged and the molecular structure can be clearly seen. However, getting a picture out takes time, so a complete picture tends to be a time average of the molecule. What that really means is that researchers only get clear pictures of molecules that sit still.

In the Brevia section of Science, researchers report imaging the step-wise motion of a single molecule. To achieve this, the scientists fixed carbon nanotubes of various widths to a surface and then deposited hydrocarbon molecules inside them. This confines the molecules to motion in one dimension, which is important for keeping the molecule within the field of view of the microscope. They show that some hydrocarbons stick, so no motion is observed, while others move slowly along the carbon nanotube (the movies are very blurry).

So is this a big advance in microscopy? No, but it is a nice demonstration of single molecule imaging. To put this in context, researchers in the nanotechnology area want to make small molecules that move in a controlled way along a surface. To do this, it will help to be able to observe how such motion works in one dimension—giving chemists more knowledge with which to choose functional groups from their catalog.