Monday, December 17, 2007

Gold nanoparticles for cancer diagnosis and treatment

Rolf Loch
Although there are many techniques for cancer diagnosis and treatment, there is still a need for techniques that are more accurate and/or less invasive to the body. One promising scheme, which is useful for diagnosis as well as therapy, is to attach gold nanoparticles to tumor cells and illuminate them with infrared laser light. This technique is much less invasive than chemotherapy, X-ray therapy or surgery. Magnetic resonance imaging is noninvasive and capable of detecting cancer tumors when they are very small, but the equipment and its operation are very expensive, meaning that it is often used at a later stage, when the cancer is already more advanced. Ultrasound therapy offers a much cheaper alternative, but the high-intensity sound waves that are necessary for treatment cause tissue heating and cavitation (the creation of small pockets of gas in the bodily fluids or tissues that expand and contract/collapse). The long-term influence of these side effects is still not known.

Because of their extremely small size, nanoparticles restrict the motion of electrons in one or more directions. This restriction, called quantum confinement, allows the properties of the particles to be modified by changing their size, in contrast to bulk material, whose properties are independent of size. In particular, the properties of the surface become dominant and, in the case of noble metals, resonant electromagnetic radiation will induce large surface electric fields that enhance their radiative properties. This means that the particles absorb much more light than would normally be expected, and the light that is not absorbed is scattered much more strongly than expected. This absorption and scattering is typically orders of magnitude stronger than that of the most strongly absorbing molecules and organic dyes. It has been found that gold nanorods and nanocages exhibit strong infrared (IR) absorption and biological compatibility, making them good candidates for use in biological systems. Huang et al. grew gold nanorods of different sizes and showed that different aspect ratios (the ratio of rod length to diameter) resulted in different absorption spectra. This showed that it is possible to produce biologically compatible nanoparticles with different optical properties. For further investigation, they chose nanoparticles with an aspect ratio of 3.9, because their absorption band overlaps 800 nm, the output wavelength of a commercial Ti:sapphire laser. Furthermore, this wavelength lies in a region where the light extinction of human tissue is at a minimum, resulting in a penetration depth of up to 10 cm, which means that almost the whole human body is accessible.
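To get a feel for how the aspect ratio tunes the absorption, the plasmonics literature often quotes an approximately linear empirical relation between a gold nanorod's aspect ratio and its longitudinal plasmon peak. The sketch below uses one commonly cited fit (roughly valid in water); the coefficients are approximate and medium-dependent, so treat this as an illustration rather than as Huang et al.'s actual data:

```python
# Illustrative sketch: longitudinal plasmon peak of a gold nanorod vs. aspect
# ratio, using a commonly quoted empirical linear fit (roughly valid in water).
# The coefficients are approximate and depend on the surrounding medium.

def plasmon_peak_nm(aspect_ratio):
    """Approximate longitudinal surface-plasmon peak wavelength in nm."""
    return 95.0 * aspect_ratio + 420.0

for ar in (2.0, 3.0, 3.9, 5.0):
    print(f"aspect ratio {ar}: peak ~ {plasmon_peak_nm(ar):.0f} nm")
# aspect ratio 3.9 gives ~790 nm, close to the 800 nm Ti:sapphire line
```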

Due to their strong scattering, gold nanorods have excellent potential as optical contrast agents for molecular imaging. Furthermore, the strongly absorbed IR radiation can be converted into heat efficiently, making them promising photothermal therapeutic agents. In photothermal therapy, optical radiation is absorbed and transformed into heat. The heat causes proteins and DNA to denature, irreversibly damaging the cell and, consequently, causing its death. Usually, photothermal therapy is done with visible light, which is absorbed by the agent as well as the tissue. The use of IR radiation is favourable because cell tissue is largely transparent to IR light, making it possible to diagnose and treat tumor cells deeper in the body. Gold nanostructures are also readily bioconjugated (bound to biomolecules), which improves target selectivity: they can be made to stick to particular proteins, which makes it possible to target cancer cells with the nanoparticles and ensure that unhealthy cells receive most of the energy during therapy. As a result, the photothermal destruction of surrounding healthy tissue is minimized and the damage is much less than that caused during X-ray therapy.

Many cancer cells have copies of a protein, called Epidermal Growth Factor Receptor (EGFR), at their surface, which normal healthy tissue cells either lack or carry in far fewer copies. The anti-EGFR antibody will naturally attach itself to this protein, which means that nanorods with anti-EGFR on their surface will end up attached to the surfaces of cancer cells. In the article by Huang et al., a synthetic method is described for attaching gold nanorods to anti-EGFR antibodies. The process begins by coating the nanorods in a bilayer of cetyltrimethylammonium bromide (CTAB). The coated rods are then exposed to polystyrenesulfonate (PSS) before being mixed with an antibody solution. The antibodies are probably bound to the PSS-coated nanorods by a mechanism called electrostatic physisorption, or physical adsorption, in which molecules adhere to the surface of a solid only through weak intermolecular interactions (Van der Waals forces).

Subsequently, Huang et al. cultured (grew under controlled conditions) nonmalignant and malignant cells and immersed them in the anti-EGFR-conjugated nanorod solution for 30 minutes. They showed, by using surface plasmon resonant absorption spectroscopy (a standard technique to measure adsorption on the surfaces of nanoparticles) and light scattering imaging, that the malignant cells are easily distinguished from the noncancerous cells due to the larger amount of EGFR on the malignant cells and the consequent high concentration of nanorods.

By irradiating the samples with a continuous-wave Ti:sapphire laser at different power densities, and then staining them with a blue dye that only dead cells accumulate, it was shown that the intensity required to cause the destruction of malignant cells (10 W/cm2) is half the value necessary to cause the death of normal healthy cells (20 W/cm2). Thus, the researchers have demonstrated the potential for gold nanorods to improve the efficacy of photothermal therapy. In another study, by Chen et al., it was found that gold nanocages further lower the intensity threshold required to destroy cancerous cells, to 1.5 W/cm2.

In conclusion, the combination of nanotechnology and lasers, in the form of IR-irradiated bioconjugated gold nanoparticles, can potentially be used to effectively and safely diagnose and treat cancer, even in deeper parts of the human body. Preliminary fluorescence tests with mice were successfully performed by H. Wang et al., who injected nanorods and imaged them as they flowed through blood vessels, before the nanoparticles were, presumably, filtered out of the blood by the kidneys. This promising, minimally invasive and relatively cheap technique has thus shown great success in early trials. It has the potential to diagnose and treat many types of cancer and, if proved safe, could become available for routine use in hospitals in the near future.

Monday, December 3, 2007

An innovative containment system for nuclear reactor accidents

Martijn Hendrikx
In response to safety concerns over nuclear Pressurized Water Reactors (PWRs), researchers have been investigating innovative ways to prevent and mitigate accidents. A recent development in this research is the concept of containing the molten reactor material, called corium, that may form during an accident.

To understand and control the formation of corium, a detailed understanding of the thermodynamics of PWR reactors is required. A PWR reactor produces heat through the fission of nuclear fuel (primarily uranium). The fuel is shaped in rods, which are interspersed with rods made from a material that absorbs neutrons strongly to control the fission process. Heat is removed by water flowing along the rods. To prevent the water from boiling, it is kept under high pressure in the reactor vessel, which is made from steel. These elements constitute the core of the nuclear reactor.

Corium is a very hot mixture of molten nuclear fuel and molten reactor components, and may appear similar to lava. It is highly radioactive and dangerous, and is only formed when the operators lose control of the nuclear reaction and the vessel becomes hot enough to melt various components of the core. In the context of reactor safety, special attention is devoted to the phenomenon of the 'heat knife.' A heat knife is a localized peak in heat transfer rate within the vessel wall, capable of inducing vessel damage. Its formation is the result of melt stratification due to density differences between the oxide and metallic compounds present in the corium melt bath. It is evident that the release of corium into the environment, which is especially probable if a heat knife forms, should be prevented at all costs.

Since the control of an accident depends on the specific thermodynamic properties of the reactor core, the researchers have focused their attention on a particular type of reactor: the Russian VVER reactor, the power of which is indicated by a numerical suffix (in Megawatts).

The safety systems presented here prevent the release of corium into the environment. In order to meet modern requirements, they must be based on natural physical forces. Two approaches, based on the treatment and localization of the corium melt (which is called 'catching'), have been proposed. The first approach aims at retaining the corium inside the vessel (in-vessel catchers), whereas the second is based on ameliorating the situation after a substantial amount of corium has been released from the containing vessel (ex-vessel catchers).

In the unfortunate event of corium formation inside the reactor vessel, one should attempt to prevent it from escaping into the environment. Escape is only possible when the hot corium has melted through a substantial layer of the vessel wall material, allowing it to penetrate the wall. A straightforward way to prevent this is the concept of in-vessel catchers, introduced in 1994, which relies on externally cooling the vessel to protect its integrity.

The cooling mechanism relies on natural circulation of the large amount of water inside the reactor as well as on the flow of boiling water along its exterior surface (boiling water can act as a coolant here due to temperature differences). The heat transfer rate should be large enough to reduce, or at least stabilize, the temperature of the corium inside the vessel, but, on the other hand, it should not exceed some upper limit from which the intense energy flow damages the vessel wall.

The importance of using the correct heat transfer rate has been demonstrated in experimental and theoretical studies. The heat transfer rate is governed by the mass flow rate and the vapor quality, and hence, an appropriate hydraulic circuit can easily satisfy the heat flow requirements. Moreover, it was shown that the thickness of the vessel wall that is unaffected by the corium also determines whether the cooling is effective in retaining the corium.
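As a rough illustration of how the mass flow rate and vapor quality together set the heat removal rate, the following minimal energy-balance sketch treats the water jacket as a single boiling channel. All numbers are illustrative placeholders, not VVER design data:

```python
# Minimal energy-balance sketch for a boiling-water cooling jacket:
# Q = m_dot * (c_p * dT_subcool + x_out * h_fg), i.e. sensible heating of the
# incoming water plus partial evaporation (outlet vapor quality x_out).
# All values are illustrative placeholders, not VVER design data.

C_P_WATER = 4200.0  # J/(kg*K), liquid water near saturation
H_FG = 2.26e6       # J/kg, latent heat of vaporization near 1 atm

def jacket_heat_removal(m_dot, dT_subcool, x_out):
    """Heat removal rate in W for mass flow m_dot (kg/s), inlet subcooling
    dT_subcool (K), and outlet vapor quality x_out (dimensionless, 0..1)."""
    return m_dot * (C_P_WATER * dT_subcool + x_out * H_FG)

# Example: 100 kg/s of water entering 20 K subcooled, leaving at 10% quality
print(f"Q = {jacket_heat_removal(100.0, 20.0, 0.10) / 1e6:.0f} MW")  # -> 31 MW
```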

In the following text, the feasibility of implementing the water jacket safety concept is discussed for various types of reactors. It is already known that for VVER-440 reactors, the physical parameters mentioned (heat transfer rate, unaffected wall thickness) allow the water jacket concept to function. However, due to the inferior commercial competitiveness of this reactor type, it is important to look at more powerful reactors (640 MW, 1000 MW and 1500 MW) as well.

For the case of the VVER-640 reactor, calculations show that the heat flux (the heat flow per unit area) at the inner wall depends on two parameters. First, it depends on the melt bath composition, which is subject to continuous change as the core melts. Second, it depends on the thickness of the molten steel layer of the melt bath. When this layer is 30 mm thick, the flux attains its maximum value. Due to the spread of heat over the thick vessel wall, the exterior surface flux is lower than the interior flux. Calculations show that cooling the exterior surface of the VVER-640 reactor vessel is an effective way to reduce the core-melting rate and to retain the corium melt. This makes the implementation of a water jacket in-vessel catcher an attractive option for the VVER-640 reactor.

For the VVER-1000 and VVER-1500 reactors, however, water jacket cooling is insufficient. For these larger reactors, the heat flux levels will exceed the safety limits, inevitably resulting in the melt penetrating the vessel wall. Furthermore, new experimental results show that the corium melt is more complicated than previously thought.

In 2004, the Aleksandrov Research Institute of Technology (NITI) produced new experimental data on the corium melt composition. It turned out that inverse stratification of the melt could occur as a result of interactions between its various components. The swapped configuration of melt bath layers excludes heat knife development and may, thus, seem beneficial at first sight. Unfortunately, it also enables the subsequent formation of a three-layer structure (steel-oxide-steel), which does support heat-knife formation. This means that two counteracting mechanisms have been identified, and it remains uncertain which mechanism will be dominant. Furthermore, experiments at OIVT RAN (a Russian research institute) show that a homogenized melt bath with globules of iron may result from corium-steel interaction. Consequently, the heat flux distribution and heat knife formation probability remain uncertain. This has made it impossible to introduce the concept of in-vessel catchers into reactors with capacities above 440 MW. Instead, mitigation has concentrated on containing the corium once it leaves the vessel.

An ex-vessel catcher essentially consists of a basket containing a large volume of sacrificial material (iron and aluminum oxides). This basket is located below the reactor vessel and is combined with a heat exchanger. When corium leaks from the vessel, it flows into this basket and interacts with the sacrificial material.

The interaction between corium and sacrificial material has many favorable effects. It reduces the heat flow rate thanks to its endothermic nature, excludes heat knife formation (due to inverse stratification), enables water delivery for cooling, prevents the hazardous release of hydrogen, stops nuclear chain reactions in the melt due to its neutron absorbing properties, and, finally, reduces the release of aerosols and gases. The high efficiency of these mechanisms is persistent, as it cannot be affected by the production of more corium.

Ex-vessel catchers can, thus, be implemented in larger reactors (VVER-640, VVER-1000, VVER-1500). The main drawback remains, however, that the treatment of corium only starts after the penetration of the reactor vessel, which is undesirable. Now, however, researchers have started to look at placing ex-vessel corium catchers inside the vessel.

It has been found that sacrificial material, placed in an isolated compartment at the bottom of the reactor vessel (which is extended in length by several meters for this purpose), can act as an effective in-vessel catcher. In an accident scenario, the corium will enter this compartment and interact endothermically with the material. Outside the vessel, a water jacket is used to remove excess heat. The corium treatment is similar to that of the ex-vessel approach described above, the difference being that catching of the corium happens inside the vessel rather than below it. This novel approach, which clearly justifies the name 'in-vessel catcher', circumvents the main drawback of the ex-vessel catcher.

Assessments show that when the amount of sacrificial material matches the uranium dioxide mass in the core, melt stratification will occur, preventing heat knife formation. Furthermore, the radiation heat flux emanating from the hot melt bath (at around 2100 K) can be reduced by water flowing across the melt bath surface. Unfortunately, the required vessel extension is rather expensive. However, the costs are comparable to those of the ex-vessel catcher. Hence, from a financial point of view, the two alternatives are equivalent.
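The quoted radiative flux can be estimated directly from the Stefan-Boltzmann law, q = εσT⁴. The emissivity below is an assumed placeholder, since the effective emissivity of a corium melt surface depends on its composition and any crust:

```python
# Sketch: radiative heat flux from a melt surface at ~2100 K via the
# Stefan-Boltzmann law, q = eps * sigma * T^4. The emissivity is an assumed
# placeholder; real corium emissivity varies with composition and crusting.

SIGMA = 5.670e-8  # W/(m^2*K^4), Stefan-Boltzmann constant

def radiative_flux(T_kelvin, emissivity=0.8):
    """Radiated heat flux in W/m^2 from a surface at temperature T_kelvin."""
    return emissivity * SIGMA * T_kelvin**4

print(f"q ~ {radiative_flux(2100.0) / 1e6:.2f} MW/m^2 at 2100 K")  # ~0.88 MW/m^2
```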

The concept of in-vessel catchers is a promising technology that may improve the safety of nuclear reactors. It enables corium treatment inside the reactor vessel, keeping the environment safe from radioactive contamination. Further analysis is recommended.

Wednesday, October 24, 2007

Resonant energy transfer

Willem Beeker
Two important concepts in physics are energy and forces. Therefore, to understand the world around us we need a thorough understanding of these concepts and how they interact with each other. This article will focus on a system with two balanced forces and a third perturbing force. This is the area of physics that deals with oscillators and resonances, which can commonly be found in the world around us. New technologies being developed in this area include, for instance, wireless power transfer, but more fundamental research also takes place, for instance the search for metamaterials. The concepts of energy, force, and resonance will be explored in this article. First, I will illustrate the concepts of force and energy transfer by discussing a couple of example systems.

A force is needed to change the amount of energy of a certain object. The two forces most encountered in "real life" are gravity and the electromagnetic (EM) force. Once a single force acts upon an object, the energy of the object will either increase or decrease with time. When two forces act upon an object in exactly opposite directions, the object's energy neither increases nor decreases. This equilibrium is either stable or unstable, which becomes apparent after the application of a third force that disturbs the equilibrium (this is often called a perturbing force). If the system was in a stable equilibrium, it will return to its original state once the disrupting force ceases to act. On the other hand, if the system does not return to its equilibrium state, the system was unstable, and very small perturbing forces are sufficient to cause it to leave the equilibrium state. A simple example is a pendulum with a rigid rod, where the two forces exactly cancel at both the lowest point (towards the ground) and the highest point. From experience, one already knows that the highest point is unstable. Once disturbed, the pendulum will swing down from this unstable position without returning.

If the perturbed system is located near a stable equilibrium point and is not heavily damped, the object will oscillate around the equilibrium point with a certain frequency, called the resonance frequency. If the perturbing force is periodic, then the nature of the system's response can be categorized by comparing the frequency of the force with the natural frequency of the system. There are three categories: below resonance, at resonance, and above the resonance frequency. Driving systems at resonance is of interest because energy can then be transferred efficiently from the driving force to the system.
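These three regimes follow from the steady-state response of a driven, damped harmonic oscillator. The sketch below evaluates the standard amplitude formula A(ω) = (F0/m)/sqrt((ω0² − ω²)² + (γω)²) with arbitrary illustrative parameters:

```python
# Steady-state amplitude of a driven, damped harmonic oscillator:
# A(w) = (F0/m) / sqrt((w0^2 - w^2)^2 + (gamma*w)^2).
# Parameters are arbitrary illustrative values (natural frequency w0 = 1).

import math

def amplitude(w, w0=1.0, gamma=0.1, F0_over_m=1.0):
    return F0_over_m / math.sqrt((w0**2 - w**2)**2 + (gamma * w)**2)

for w in (0.5, 1.0, 2.0):  # below, at, and above resonance
    print(f"drive frequency {w}: amplitude {amplitude(w):.2f}")
# the response peaks sharply at w = w0, where energy transfer is most efficient
```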

A system driven at its resonance frequency is often used to maximize energy transfer. For example, this is useful in broadcasting technology, where the amount of power received by an antenna can be maximized. This is achieved by shaping antennas so that the current flow in the antenna element is driven resonantly by the incident electromagnetic field. Recently, researchers presented a new approach to energy transfer that focuses on the magnetic resonance of a system consisting of two helically shaped conducting coils. The researchers made the electrons in a conducting source coil oscillate at the resonance frequency, thereby producing a magnetic field through the coil. At a distance of up to 2 meters, they placed a second, identical coil to pick up the magnetic field, which then generates a similar current in the second coil. This current was then sent through a conventional light bulb, and a power transfer of 60 W was measured. This example illustrates the most important feature of resonant coupling, namely maximum energy transfer. The maximum efficiency of a resonantly coupled system is limited by the coupling constant and the damping coefficient (energy loss through other mechanisms, including the load to be powered by the energy transfer).
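One way to make this limit concrete is a standard coupled-mode-theory result for two coupled resonators (a textbook relation, not a quote from the coil experiment): the best achievable efficiency depends only on the figure of merit U, the coupling rate divided by the geometric mean of the two loss rates:

```python
# Sketch (coupled-mode theory): maximum transfer efficiency of a resonantly
# coupled source/receiver pair as a function of the figure of merit
# U = kappa / sqrt(Gamma_s * Gamma_d). This is a standard textbook relation,
# not data from the coil experiment described above.

import math

def max_efficiency(kappa, gamma_source, gamma_device):
    U = kappa / math.sqrt(gamma_source * gamma_device)
    return U**2 / (1.0 + math.sqrt(1.0 + U**2))**2

for U in (0.5, 1.0, 3.0, 10.0):  # loss rates normalized to 1
    print(f"U = {U}: best efficiency {max_efficiency(U, 1.0, 1.0):.0%}")
# weak coupling (U << 1) transfers almost nothing; strong coupling approaches 100%
```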

The main advantage of using magnetic resonance is that human tissues do not respond resonantly to this type of radiation; therefore, it can be used safely in everyday life. Small devices like cellular phones could be charged without having to be plugged into an adapter. Small robots with specific tasks, such as cleaning, will probably become commonplace in households in the next 10 years. These robots could get their energy through such radiation, increasing their applicability and effective working time. In fact, any device that uses batteries could be charged by such wireless power transfer, reducing the need to change batteries. A drawback is that existing designs will need to be changed to implement this technology; successful implementation will therefore depend on the proper introduction of standards. So we will need to wait and see whether major companies will start to make use of it.

Tuesday, September 4, 2007

Tunable nanowire nonlinear optical probe

Martijn Tesselaar
Developments in nanotechnology have recently expanded to include the search for nanometer-scale optical elements. This search has already yielded such building blocks as nanometer-sized light-emitting diodes, lasers, photodetectors, and waveguides. A new step forward is the discovery of a nanometer-sized, tunable, coherent visible light source, described in a recent article in Nature by Nakayama et al. This light source consists of a tiny needle made from a nonlinear material that will start to emit light by means of a nonlinear process when irradiated by a laser beam.

The light source needle consists of potassium niobate (KNbO3), which has a large second-order susceptibility (χ(2)). The material is prepared for this experiment in the form of nanowires by means of self-assembly, where a modified crystal growth process is stopped shortly after induction (when the first seed crystals appear) to obtain very small crystals. These naturally take the form of nanowires, ranging in width from 40 to 400 nm and in length from 1 to 20 µm, depending on when the growth was stopped. The nanowires were shown to be single-crystalline with an orthorhombic crystal lattice by means of X-ray powder diffraction measurements.

The nanowires have further been shown to exhibit nonlinearity through their response to high-intensity laser light. When subjected to the light from a single laser emitting femtosecond pulses, the nanowires start to emit light at twice the incident frequency (in this case, blue light is generated) in a process called second harmonic generation (SHG). When subjected to two laser beams with two different input frequencies, the nanowires generate light at the sum of the two frequencies, a process called sum frequency generation (SFG). These nonlinear processes produce light at a frequency different from that of the incident laser beam, making it easily distinguishable in the applications demonstrated subsequently.
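The frequency bookkeeping of these two processes is simple enough to check directly: SHG halves the wavelength, and for SFG the inverse wavelengths add. The 1000 nm case anticipates the optical-tweezers experiment described below; the SFG input wavelengths are arbitrary examples:

```python
# Wavelength bookkeeping for second harmonic generation (SHG) and
# sum frequency generation (SFG). Frequencies add, so inverse wavelengths add.

def shg_wavelength(lam_nm):
    """SHG: output at twice the frequency, i.e. half the wavelength."""
    return lam_nm / 2.0

def sfg_wavelength(lam1_nm, lam2_nm):
    """SFG: 1/lambda_out = 1/lambda_1 + 1/lambda_2."""
    return 1.0 / (1.0 / lam1_nm + 1.0 / lam2_nm)

print(shg_wavelength(1000.0))         # 500.0 nm, the trap-laser case below
print(sfg_wavelength(800.0, 1064.0))  # ~456.7 nm from two arbitrary inputs
```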

When a nanowire is trapped in an optical tweezers instrument, which traps small particles in a focused laser beam, the nanowire generates frequency-doubled light internally as a side effect. Since the nanowire also functions as a waveguide, the generated light is emitted from the ends of the wire. A remarkable feature of the experiment is that the nanowire spontaneously aligns itself with the optical axis of the trap/pump laser, thereby accomplishing alignment without the need for manual orientation of the crystal. Most optical tweezers use a laser beam with a wavelength of about 1000 nm because most biological specimens are transparent at this wavelength. Consequently, the light emitted from both ends of the nanowire by means of SHG in this setup has a wavelength of around 500 nm.

One possible application of this trapped nanowire arrangement is in optical imaging systems, where it could be used to dramatically increase the image's spatial resolution. Since the nanowire's end is, in effect, a nanometer scale illumination aperture, it can be used to perform transmission microscopy with a very high resolution. To do this, a sample is scanned through the beam emanating from one end of the nanowire while a detector registers the amount of transmitted light. This 'nanowire scanning microscopy' can be used to image objects with sufficient resolution to identify nanometer-sized features.

Another application of the trapped nanowire arrangement is pinpoint excited fluorescence, which could be used in fluorescence microscopy. For this application the loose end of the trapped nanowire is brought in contact with a fluorescent dye, which causes the dye to emit radiation in turn by means of fluorescence. Again, because of the extremely small size of the light source, use of this technique should result in a considerable increase in resolution.

One remaining problem with the trapped nanowire arrangement is that the optical tweezers do not hold the nanowire absolutely still. The nanowire is presumed to move laterally because of thermal fluctuations of the optical potential by distances of about 10 nm, while displacement in the longitudinal direction is presumed to be even larger.

Tuesday, July 3, 2007

Fluorescence and Raman microscopy

Roel Arts
Biomedical applications have been a driving force in microscopy innovations since the inception of the microscope. In order to better understand the complex biochemical processes that occur in any kind of living material, better imaging techniques are always needed. The two questions that any biomedical microscopy experiment aims to answer are: "what's in there?" and "where is it?" The more information that can be obtained from a sample, the better.

Broadly speaking, there are two major ways to obtain specific biochemical information. One strategy is to somehow label the molecules of interest and then look for these labels. This is what happens in fluorescence microscopy: a fluorophore—a molecule that glows when illuminated—or "dye" is made to bind to specific molecules. After the dye is added to the sample and the sample is washed, the only fluorophores left under the microscope are the ones bound to the molecule of interest. In addition, these fluorophores each have specific colors they emit and specific colors that cause them to glow (the "excitation wavelength"). By controlling the color of the excitation light and filtering the emitted light, the positions of the fluorophores—and hence the molecules of interest—can be obtained.

Another method for obtaining specific molecular information is a more direct one. A large number of spectroscopic techniques are available for identifying molecules. Most of these techniques rely on measuring the vibrational energy levels of a molecule, which together form a "fingerprint" of it. One technique for measuring these vibrational levels makes use of Raman scattering, an inelastic light-scattering process.

Raman scattering begins when an incoming photon is absorbed by the molecule, which brings it to a so-called "virtual" energy level. Normally, the molecule would then emit a photon with exactly the same frequency (Rayleigh scattering). However, it was shown in the 1920s that this is not always the case; sometimes, a slightly lower-frequency (lower-energy or "Stokes shifted") photon is emitted. The difference in energy between the incoming and outgoing photons matches the energy difference between two of the molecule's states: part of the photon's energy has excited the molecule to a higher vibrational energy level. If the molecule happened to already be in a higher vibrational energy level when absorbing the photon, it can also fall back to a lower vibrational energy level than before. In this case, the molecule will emit a photon with a higher energy than the one it absorbed ("anti-Stokes shifted"). So by shining monochromatic light on a sample, measuring the newly generated colors, and subtracting the original color, the energy differences between vibrational levels of molecules in the sample can be measured. This is called Raman spectroscopy.
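In practice, Raman shifts are quoted in wavenumbers (cm-1), and converting a Stokes shift into the scattered wavelength is a one-line calculation. The sketch below uses the 647.1 nm excitation from the experiments described later in this post:

```python
# Convert a Stokes Raman shift (in wavenumbers, cm^-1) into the scattered
# wavelength: 1/lambda_out = 1/lambda_in - shift.

def stokes_wavelength_nm(lambda_in_nm, shift_cm1):
    nu_in_cm1 = 1.0e7 / lambda_in_nm        # excitation wavenumber in cm^-1
    return 1.0e7 / (nu_in_cm1 - shift_cm1)  # scattered wavelength in nm

for shift in (300.0, 1800.0):
    out = stokes_wavelength_nm(647.1, shift)
    print(f"{shift:.0f} cm^-1 shift of 647.1 nm -> {out:.0f} nm")
# ~660 nm and ~732 nm: the 'fingerprint region' quoted later in this post
```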

The big downside to this technique is that most of the incoming light will not be Raman-scattered. Most molecules have a small cross-section for Raman scattering, which means that the number of photons generated by this process will be low, and the signal will hence be weak and hard to detect.

For this reason, fluorescent labeling and Raman microscopy are difficult to combine; the (much) larger signal emitted by fluorescent labels will effectively drown out the Raman signal ("cross-talk"). A recent publication by the Biophysical Engineering Group (BPE) of the University of Twente claims that they have managed to combine the two techniques. This was achieved by making sure that the colors generated by Raman scattering and the colors emitted by the fluorophores are far enough apart. They used semiconductor quantum dots (QDs) as the labeling agent rather than standard molecular dyes. QDs have been described as "artificial atoms" because they are designed to confine a small number of electrons spatially, which causes them to occupy a discrete set of energy levels, as in an atom. Tailoring these energy levels, and hence their emission spectrum, can be done by carefully designing the symmetry as well as the dimensions of the quantum dots. BPE's letter in Nano Letters demonstrates two hybrid fluorescence/Raman microscopy experiments in which QDs are used as labels. Despite the QDs' high luminescent yield, the researchers were able to effectively perform resonant and non-resonant Raman microscopy on QD-stained samples.
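A minimal particle-in-a-box estimate shows why shrinking a quantum dot pushes its energy levels apart (and its emission toward the blue). Real QDs require effective masses and exciton corrections, so this is a qualitative sketch only:

```python
# Qualitative sketch: an electron confined to a 1D box of width L has levels
# E_n = n^2 * h^2 / (8 * m * L^2), so smaller boxes have larger level spacings
# (bluer emission). Real quantum dots need effective masses and exciton
# corrections; this free-electron estimate is for illustration only.

H = 6.626e-34    # Planck constant, J*s
M_E = 9.109e-31  # free-electron mass, kg
EV = 1.602e-19   # joules per electronvolt

def box_level_ev(n, L_nm):
    L = L_nm * 1e-9
    return n**2 * H**2 / (8.0 * M_E * L**2) / EV

for L_nm in (2.0, 5.0, 10.0):
    gap = box_level_ev(2, L_nm) - box_level_ev(1, L_nm)
    print(f"box width {L_nm} nm: E2 - E1 = {gap:.2f} eV")
```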

Combined resonance Raman (RR) / one-photon excitation (OPE) imaging was demonstrated using neutrophils (white blood cells). In RR imaging, the sensitivity is improved compared to non-resonance Raman (NR) scattering because the energy of the incoming beam is chosen to make certain vibrational transitions more likely; this will increase the amount of light that gets Raman scattered. OPE is just the "regular" fluorescence process; one photon gets absorbed and a lower-energy photon gets emitted.

A fluorescent image was obtained by labeling specific parts of the cells with 15-20 nm diameter QDs. In addition, the researchers looked for the resonance Raman spectrum of flavocytochrome b558, which is well known. The sample was illuminated with 413.1 nm radiation, which resulted in the QDs emitting 605 nm radiation and the RR spectrum of flavocytochrome b558 being emitted in the 420-445 nm range. These two emitters were well separated and could be imaged independently of each other. The RR spectrum was not found to be affected by the presence of the quantum dots, but the QDs were shown to bleach (become inactive) after a certain amount of illumination, which makes it necessary to obtain the OPE signal before measuring the RR signal.

Combined non-resonance Raman / two-photon excitation (TPE) imaging was done on macrophages, another type of white blood cell. TPE differs from OPE in that two lower-energy photons are absorbed, so the emitted photon has a higher energy than each of the absorbed photons. In this experiment, a slightly different QD type was used. The sample was illuminated with 647.1 nm radiation, resulting in the QDs emitting 585 nm radiation. The NR spectrum was then measured in the 660-730 nm region (Stokes shifts of 300-1800 cm-1, the "fingerprint region") as well as at 770-870 nm (2500-4000 cm-1, the high-frequency region). These spectral regions allowed the researchers to identify certain proteins and lipids.

Although again no cross-talk was observed, the cells were found to be damaged after illumination, presumably due to localized heating caused by the presence of the quantum dots. This unexpected complication limits the amount of excitation power that can be delivered to the sample without wrecking all those interesting white blood cells.

The significance of this work is not that it shows new fundamental advances, but rather that it demonstrates the value of being able to tailor the luminescent properties of QDs. In this case, it is the application that is important; by using a QD labeling agent, areas of interest within a sample can be identified, after which Raman scattering can be used to get a detailed view of the molecules and molecular processes involved. Because of the highly tunable optical response of quantum dots, this technique can be extended to any kind of sample where QD labeling is viable.

Tuesday, June 12, 2007

The role played by surface structure in breaking molecules

Herokazu Ueta
Many industrial chemicals are made by breaking other molecules up and sticking them back together in different ways. The problem is that the precursor chemicals—the ones that are used to make the end product—are quite stable, so a lot of energy is needed to break them up. To overcome this problem, catalysts are used. Often, these are metallic surfaces, which lower the energy required for the desired reaction. However, catalysts, and their interactions with the chemicals they react with, are often poorly understood. Thus, much theoretical and experimental research effort is devoted to understanding catalyst systems.

Some recent work has focused on the interaction between methane and nickel, which is important for the steam reforming of natural gas (methane is reacted with water to generate hydrogen and carbon monoxide). Recent experimental work has shown that the efficiency of this reaction depends not only on the kinetic energy of the methane but also on how the methane molecule is vibrating as it hits the nickel surface. Unfortunately, this is a difficult problem to model because the quantum mechanical description becomes quite complex once the surface, the molecule, and its vibrations are all included.

However, in a recent issue of Physical Review Letters, researchers report the effect of nickel lattice motion and surface reconstruction on methane dissociation. These theoretical calculations used a combination of approaches, where the nickel surface atoms are described exactly by their quantum mechanical state and the methane is approximated as a quasi-diatomic—CH3-H, where CH3 is treated as a single atomic entity. The simulations show that the atoms in the nickel surface rearrange themselves during methane bond breaking. They also show that the reconstruction changes the local environment of the methane and reduces the barrier, allowing the reaction to proceed. Their results indicate that not only the excitation state of the methane but also the configuration of the surface atoms plays a role in methane dissociation.

These results need to be followed up with experiments utilizing molecules with a well-defined vibrational state. Such experiments can be used to observe the behavior of the surface during the interaction with the molecule.

Friday, June 8, 2007

An experimental test of non-local realism

By Hein Teunissen
Quantum mechanics (QM) seriously challenges classical, intuitive ideas of how nature works. Einstein’s objections to QM were based on his belief that any physical theory must obey the concept of ‘local realism’. Local realism assumes that the results of measurements on a system localized in space-time are fully determined by pre-existing properties carried along by that system (its physical reality) and cannot be instantaneously influenced by a distant event (locality). But a famous thought experiment on entangled particles, by Einstein, Podolsky and Rosen, showed that QM does not fulfill this condition (the EPR paradox).

In quantum mechanics, entangled particles are groups of particles that cannot be described independently even though they may be physically isolated from each other. Entangled particles form a state described by a single quantum mechanical wavefunction, which means that they have perfectly correlated quantum numbers, and this remains so when the particles are separated (in contrast to the apparent randomness of microscopic phenomena).

In the ‘Copenhagen interpretation’ of QM, the act of measurement on a quantum system causes an instantaneous collapse of the wavefunction of the system, so measurement on one of the entangled particles must instantaneously alter the state of the other particle. According to Einstein, this was ‘spooky action at a distance’, which defies locality. In an attempt to preserve locality, it was proposed that the description of reality given by the quantum mechanical wavefunction was incomplete. It was thought that a more complete theory could be formulated based on local realism, in which the physical reality of a system could be fully described by a set of ‘local hidden-variables’ (a limitation of QM being that not all variables are known).

Many experiments have been performed on entangled particles, for which quantum mechanics provides very accurate predictions. In 1964, John Bell showed that the excellent predictions of QM could not be reproduced by any alternative theory based on local realism (Bell’s theorem). The observed accuracy of QM in so-called Bell test experiments has led to the acceptance that local realism is violated.

Because the argument put forth by Einstein and his colleagues is very powerful, scientists have made numerous attempts to find out more about the ‘correct’ interpretation of QM. In the paper ‘An experimental test of non-local realism’ (Nature, April 2007), Gröblacher et al. try to determine which of the separate concepts of local realism may be the problem: ‘locality’ or ‘realism’. In the same vein, Anthony Leggett had previously proposed a broad class of theories that give up the concept of locality. These theories provide both an explanation of all Bell-type experiments and model the perfect correlations of entangled states, while still being based on a plausible type of realism.

In the recent Nature article, Gröblacher and his co-workers show, both theoretically and experimentally, that the class of non-local realistic theories proposed by Leggett is incompatible with experimentally observable quantum correlations between entangled photons. The authors start from an inequality derived by Leggett for his proposed class of non-local realistic theories, and extend this inequality to make it applicable to real experimental situations and to allow simultaneous tests of all local hidden-variable models. Thus, the experimental results allow one to (once more) exclude all local hidden-variable models (via the CHSH inequality), and to test the mentioned class of non-local hidden-variable theories (via the Leggett inequality).
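The CHSH part of the test can be sketched in a few lines using the quantum prediction for a maximally entangled polarization state, E(a, b) = cos 2(a − b). Any local hidden-variable model obeys |S| ≤ 2, while quantum mechanics reaches 2√2 at the standard analyzer settings:

```python
# Sketch of the CHSH inequality: for a suitable maximally entangled
# polarization state, quantum mechanics predicts correlations
# E(a, b) = cos(2*(a - b)). Local realism requires |S| <= 2; quantum
# mechanics reaches 2*sqrt(2) at the analyzer angles below.

import math

def E(a_deg, b_deg):
    return math.cos(2.0 * math.radians(a_deg - b_deg))

a, a2, b, b2 = 0.0, 45.0, 22.5, 67.5  # standard optimal settings (degrees)
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(f"S = {S:.3f} (local-realist bound 2, quantum limit {2 * math.sqrt(2):.3f})")
```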

To experimentally test the obtained inequalities, Gröblacher and his colleagues generate pairs of photons whose polarization states are entangled. They modified an experiment that was previously used to test Bell’s inequality, where entangled photons are generated and then separated. In the experiment to test Leggett’s inequality, the polarization of one of the photons is altered to an elliptical state. Each photon is then passed through a medium which will absorb the photon depending on its polarization. The inequalities are tested by counting the number of times both entangled photons are not absorbed. It was found that the experimental results are accurately predicted by quantum mechanics and that both the CHSH inequality and Leggett’s inequality are significantly violated.

The authors state:

Our result suggests that giving up the concept of locality is not sufficient to be consistent with quantum experiments, unless certain intuitive features of realism are abandoned…We believe that the experimental exclusion of this particular class [introduced by Leggett] indicates that any non-local extension of quantum theory has to be highly counterintuitive…We believe that our results lend strong support to the view that any future extension of quantum theory that is in agreement with experiments must abandon certain features of realistic descriptions.


The history of quantum mechanics is very interesting. Many notable physicists have worked in this field to resolve complications regarding the interpretation of QM, and its history is characterized by a mixture of physical test cases and philosophical arguments. The claim following from Bell’s theorem is very powerful, namely that no physical theory based on both locality and realism can ever parallel the accuracy of quantum mechanical predictions. The claim following from the violation of Leggett’s inequality is more modest: that realism combined with a certain type of non-locality is incompatible with QM. Nevertheless, this research contributes to a better understanding of phenomena like entanglement, which may lead to a new technological revolution, called ‘quantum computation’.

Thursday, May 31, 2007

Beyond the diffraction limit

By Olivier Rekers
Due to the diffraction limit, it is hard to look at things that are smaller than the wavelength of the imaging light. But with the help of a 'hyperlens,' it is possible to produce magnified images of such objects. Hyperlenses and their close cousins, superlenses, have received a lot of attention recently because they have the potential to provide detailed images of living biological systems, unlike other high-resolution imaging techniques such as scanning tunneling microscopy.

How do hyperlenses avoid the diffraction limit? To understand that, we need to understand what happens to light when it strikes an object smaller than its wavelength. When light hits any object, information about the object is encoded in the light through changes in its amplitude, phase, and direction of travel. When the object is smaller than the wavelength of the light, the part of the light that carries this information doesn't propagate like normal light; instead, it vanishes within a short distance of the object—after traveling one wavelength it is already half gone. These so-called evanescent waves are the key to breaking through the diffraction limit. By controlling the refractive index, it is possible to create a situation where evanescent waves do not vanish but rather propagate like normal light, and they can then be used to image the very small object. Two papers published in Science contain experimental details of hyperlenses that do just this.
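How quickly these waves vanish can be quantified with a short sketch: a feature smaller than the wavelength scatters light into transverse wavevectors larger than the free-space wavevector, so the field falls off exponentially with distance. The wavelength and feature size below are illustrative:

```python
# Evanescent decay sketch: a sub-wavelength feature of size d scatters light
# into transverse wavevectors k_x ~ 2*pi/d > k_0 = 2*pi/lambda. Such components
# cannot propagate and decay as exp(-kappa*z), kappa = sqrt(k_x^2 - k_0^2).
# The wavelength and feature size are illustrative values.

import math

def remaining_field(z_nm, lam_nm=500.0, feature_nm=100.0):
    k0 = 2.0 * math.pi / lam_nm        # free-space wavevector, 1/nm
    kx = 2.0 * math.pi / feature_nm    # transverse wavevector from the feature
    kappa = math.sqrt(kx**2 - k0**2)   # decay constant, 1/nm
    return math.exp(-kappa * z_nm)

for z in (50.0, 100.0, 500.0):
    print(f"after {z:.0f} nm: field reduced to {remaining_field(z):.1e}")
```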

A 'hyperlens' is made from a cylindrical, layered object whose dielectric constants have different signs across the layers (radial axis) and along the layers (tangential axis). Both half-cylindrical and fully cylindrical lenses are possible; the papers summarized here report on one of each.

The two groups differ in their methods for achieving the necessary strong anisotropy in their 'hyperlens' medium. The group of Liu used a curved, periodic stack of silver and aluminum oxide, deposited on a concave quartz substrate. The object to be imaged is placed in contact with the lens—in this case a chromium layer inscribed with a pattern. With the use of a conventional lens, it is then possible to project the magnified image of a sub-wavelength structure.

The group of Smolyaninov combines the idea of a hyperlens with the earlier concept of a superlens. A superlens is made of a single layer of a metamaterial with a negative refractive index. In this case, the negative refractive index is created by depositing concentric rings of poly(methyl methacrylate) on a gold film. An evanescent wave impinging on the gold surface excites a surface plasmon polariton wave, which experiences the structure as a medium with a negative refractive index. Snell's law then ensures that the superlens magnifies a sub-wavelength sample in a ring pattern—the magnification increases as you travel outwards from the center point. Near the edge of the superlens, the magnification is sufficient that a conventional microscope objective can image the sample.

Both lenses have significant advantages. Conventional microscopy is limited by the diffraction limit, which makes it impossible to see things smaller than about 200 nm. Thus, viruses, proteins, DNA molecules, and many other samples that are impossible to visualize clearly with a regular microscope may soon be accessible to visible light microscopy. Used in combination with labeling or spectroscopic techniques that enable the observer to identify different structures, this could become a very useful tool for identifying molecular pathways.

Wednesday, May 23, 2007

Molecular Voyeurism: coming to a microscope near you

The development of atomic force microscopes (AFM) and scanning tunneling microscopes (STM) has provided resolution that no other imaging technique can touch. These microscopy techniques have progressed to the point that if an atom is dropped onto a specially prepared, atomically flat surface, the microscope will provide you with a picture of the atom—the proverbial red blob on a black background. The same is true of small molecules, where each atom is imaged and the molecular structure can be clearly seen. However, getting a picture out takes time, so a complete picture tends to be a time average of the molecule. What that really means is that researchers only get clear pictures of molecules that sit still.

In the Brevia section of Science, researchers report imaging the step-wise motion of a single molecule. To achieve this, the scientists fixed carbon nanotubes of various widths to a surface and then deposited hydrocarbon molecules inside the carbon nanotubes. This confines the molecules to motion in one dimension, which is important for keeping the molecule within the field of view of the STM. They show that some hydrocarbons stick, so that no motion is observed, while others move slowly along the carbon nanotube (the movies are very blurry).

So is this a big advance in microscopy? No, but it is a nice demonstration of single-molecule imaging. To put this in context, researchers in the nanotechnology area want to make small molecules that move in a controlled way along a surface. To do this, it will help to be able to observe how such motion works in one dimension—giving chemists more knowledge with which to choose functional groups from their catalog.