In one view of the beginnings of life, carbon monoxide molecules condense on hot mineral surfaces underground to form fatty acids, above, which are then expelled from geysers.

The New York Times, nytimes.com, June 18, 2009, by Nicholas Wade  —  Some 3.9 billion years ago, a shift in the orbits of the Sun’s outer planets sent a surge of large comets and asteroids careening into the inner solar system. Their violent impacts gouged out the large craters still visible on the Moon’s face, heated Earth’s surface into molten rock and boiled off its oceans into an incandescent mist.

Yet rocks that formed on Earth 3.8 billion years ago, almost as soon as the bombardment had stopped, contain possible evidence of biological processes. If life can arise from inorganic matter so quickly and easily, why is it not abundant in the solar system and beyond? If biology is an inherent property of matter, why have chemists so far been unable to reconstruct life, or anything close to it, in the laboratory?

The origins of life on Earth bristle with puzzle and paradox. Which came first, the proteins of living cells or the genetic information that makes them? How could the metabolism of living things get started without an enclosing membrane to keep all the necessary chemicals together? But if life started inside a cell membrane, how did the necessary nutrients get in?

The questions may seem moot, since life did start somehow. But for the small group of researchers who insist on learning exactly how it started, frustration has abounded. Many once-promising leads have led only to years of wasted effort. Scientists as eminent as Francis Crick, the chief theorist of molecular biology, have quietly suggested that life may have formed elsewhere before seeding the planet, so hard does it seem to find a plausible explanation for its emergence on Earth.

In the last few years, however, four surprising advances have renewed confidence that a terrestrial explanation for life’s origins will eventually emerge.

One is a series of discoveries about the cell-like structures that could have formed naturally from fatty chemicals likely to have been present on the primitive Earth. This lead emerged from a long argument among three colleagues over whether a genetic system or a cell membrane came first in the development of life. They eventually agreed that genetics and membranes had to have evolved together.

The three researchers, Jack W. Szostak, David P. Bartel and P. Luigi Luisi, published a somewhat adventurous manifesto in Nature in 2001, declaring that the way to make a synthetic cell was to get a protocell and a genetic molecule to grow and divide in parallel, with the molecules being encapsulated in the cell. If the molecules gave the cell a survival advantage over other cells, the outcome would be “a sustainable, autonomously replicating system, capable of Darwinian evolution,” they wrote.

“It would be truly alive,” they added.

One of the authors, Dr. Szostak, of the Massachusetts General Hospital, has since managed to achieve a surprising amount of this program.

Simple fatty acids, of the sort likely to have been around on the primitive Earth, will spontaneously form double-layered spheres, much like the double-layered membrane of today’s living cells. These protocells will incorporate new fatty acids fed into the water, and eventually divide.

Living cells are generally impermeable and have elaborate mechanisms for admitting only the nutrients they need. But Dr. Szostak and his colleagues have shown that small molecules can easily enter the protocells. If they combine into larger molecules, however, they cannot get out – just the arrangement a primitive cell would need. If a protocell is made to encapsulate a short piece of DNA and is then fed with nucleotides, the building blocks of DNA, the nucleotides will spontaneously enter the cell and link into another DNA molecule.

At a symposium on evolution at the Cold Spring Harbor Laboratory on Long Island last month, Dr. Szostak said he was “optimistic about getting a chemical replication system going” inside a protocell. He then hopes to integrate a replicating nucleic acid system with dividing protocells.

Dr. Szostak’s experiments have come close to creating a spontaneously dividing cell from chemicals assumed to have existed on the primitive Earth. But some of his ingredients, like the nucleotide building blocks of nucleic acids, are quite complex. Prebiotic chemists, who study the prelife chemistry of the primitive Earth, have long been close to despair over how nucleotides could ever have arisen spontaneously.

Nucleotides consist of a sugar molecule, like ribose or deoxyribose, joined to a base at one end and a phosphate group at the other. Prebiotic chemists discovered with delight that bases like adenine will easily form from simple chemicals like hydrogen cyanide. But years of disappointment followed when the adenine proved incapable of linking naturally to the ribose.

Last month, John Sutherland, a chemist at the University of Manchester in England, reported in Nature his discovery of a quite unexpected route for synthesizing nucleotides from prebiotic chemicals. Instead of making the base and sugar separately from chemicals likely to have existed on the primitive Earth, Dr. Sutherland showed how under the right conditions the base and sugar could be built up as a single unit, and so did not need to be linked.

“I think the Sutherland paper has been the biggest advance in the last five years in terms of prebiotic chemistry,” said Gerald F. Joyce, an expert on the origins of life at the Scripps Research Institute in La Jolla, Calif.

Once a self-replicating system develops from chemicals, this is the beginning of genetic history, since each molecule carries the imprint of its ancestor. Dr. Crick, who was interested in the chemistry that preceded replication, once observed, “After this point, the rest is just history.”

Dr. Joyce has been studying the possible beginning of history by developing RNA molecules with the capacity for replication. RNA, a close cousin of DNA, almost certainly preceded it as the genetic molecule of living cells. Besides carrying information, RNA can also act as an enzyme to promote chemical reactions. Dr. Joyce reported in Science earlier this year that he had developed two RNA molecules that can promote each other’s synthesis from the four kinds of RNA nucleotides.

“We finally have a molecule that’s immortal,” he said, meaning one whose information can be passed on indefinitely. The system is not alive, he says, but performs central functions of life like replication and adapting to new conditions.

“Gerry Joyce is getting ever closer to showing you can have self-replication of RNA species,” Dr. Sutherland said. “So only a pessimist wouldn’t allow him success in a few years.”

Another striking advance has come from new studies of the handedness of molecules. Some chemicals, like the amino acids of which proteins are made, exist in two mirror-image forms, much like the left and right hand. In most naturally occurring conditions they are found in roughly equal mixtures of the two forms. But in a living cell all amino acids are left-handed, and all sugars and nucleotides are right-handed.

Prebiotic chemists have long been at a loss to explain how the first living systems could have extracted just one kind of the handed chemicals from the mixtures on the early Earth. Left-handed nucleotides are a poison because they prevent right-handed nucleotides from linking up in a chain to form nucleic acids like RNA or DNA. Dr. Joyce calls the problem “original syn,” a play on the chemist’s terms syn and anti for the structures in the handed forms.

The chemists have now been granted an unexpected absolution from their original syn problem. Researchers like Donna Blackmond of Imperial College London have discovered that a mixture of left-handed and right-handed molecules can be converted to just one form by cycles of freezing and melting.

With these four recent advances – Dr. Szostak’s protocells, self-replicating RNA, the natural synthesis of nucleotides, and an explanation for handedness – those who study the origin of life have much to be pleased about, despite the distance yet to go. “At some point some of these threads will start joining together,” Dr. Sutherland said. “I think all of us are far more optimistic now than we were five or 10 years ago.”

One measure of the difficulties ahead, however, is that so far there is little agreement on the kind of environment in which life originated. Some chemists, like Günther Wächtershäuser, argue that life began in volcanic conditions, like those of the deep sea vents. These have the gases and metallic catalysts in which, he argues, the first metabolic processes were likely to have arisen.

But many biologists believe that in the oceans, the necessary constituents of life would always be too diluted. They favor a warm freshwater pond for the origin of life, as did Darwin, where cycles of wetting and evaporation around the edges could produce useful concentrations and chemical processes.

No one knows for sure when life began. The oldest generally accepted evidence for living cells is fossil bacteria 1.9 billion years old from the Gunflint Formation of Ontario. But rocks from two sites in Greenland, containing an unusual mix of carbon isotopes that could be evidence of biological processes, are 3.83 billion years old.

How could life have gotten off to such a quick start, given that the surface of the Earth was probably sterilized by the Late Heavy Bombardment, the rain of gigantic comets and asteroids that pelted the Earth and Moon around 3.9 billion years ago? Stephen Mojzsis, a geologist at the University of Colorado who analyzed one of the Greenland sites, argued in Nature last month that, contrary to the general belief, the Late Heavy Bombardment would not have killed everything. In his view, life could have started much earlier and survived the bombardment in deep sea environments.

Recent evidence from very ancient rocks known as zircons suggests that stable oceans and continental crust had emerged as long as 4.404 billion years ago, a mere 150 million years after the Earth’s formation. So life might have had half a billion years to get started before the cataclysmic bombardment.

But geologists dispute whether the Greenland rocks really offer signs of biological processes, and geochemists have often revised their estimates of the composition of the primitive atmosphere. Leslie Orgel, a pioneer of prebiotic chemistry, used to say, “Just wait a few years, and conditions on the primitive Earth will change again,” said Dr. Joyce, a former student of his.

Chemists and biologists are thus pretty much on their own in figuring out how life started. For lack of fossil evidence, they have no guide as to when, where or how the first forms of life emerged. So they will figure life out only by reinventing it in the laboratory.

From the kitchen to the backyard, WebMD uncovers common household activities that could affect your health.  

By Lisa Zamosky
Reviewed by Louise Chang, MD

They say that home is where the heart is. But what you may not know is that it’s also where 65% of colds and more than half of food-borne illnesses are contracted. The things we do around the house every day have a big impact on both our long- and short-term health.  Here are six common household activities that may be making you sick.


  • Using a Sponge

The dirtiest room in everybody’s home is the kitchen, says Phillip Tierno, PhD, director of clinical microbiology and diagnostic immunology at the New York University Langone Medical Center and author of The Secret Life of Germs. “That’s because we deal with dead animal carcasses on our countertops and in the sink.” Raw meat can carry E. coli and salmonella, among other pathogens.

Most people clean their countertops and table after a meal with the one tool found in almost all kitchens: the sponge. In addition to sopping up liquids and other messes, the kitchen sponge commonly carries E. coli and fecal bacteria, as well as many other microbes. “It’s the single dirtiest thing in your kitchen, along with a dishrag,” says Tierno.

Ironically, the more you attempt to clean your countertops with a sponge, the more germs you’re spreading around. “People leave [the sponge] growing and it becomes teeming with [millions of] bacteria, and that can make you sick and become a reservoir of other organisms that you cross-contaminate your countertops with, your refrigerator, and other appliances in the kitchen,” Tierno explains.

  • Solution:

Tierno suggests dipping sponges into a solution of bleach and water before wiping down surfaces. “That is the best and cheapest germicide money can buy — less than a penny to make the solution — so that you can clean your countertops, cutting boards, dishrags, or sponges after each meal preparation.”

In addition, once you’ve used your sponge, be sure to let it air-dry. Dryness kills off organisms. Another way to keep bacteria from building up in your sponge is to microwave it for one to two minutes each week. “Put a little water in a dish and put the sponge in that,” Tierno advises. “That will boil and distribute the heat evenly [throughout the sponge] and kill the bacteria.”


  • Opening Your Windows

When the weather turns nice, many of us throw open our windows to breathe in the fresh spring air. But that may be an unhealthy move, considering the combination of seasonal allergies and poor air quality in many cities throughout the U.S. According to a recent report by the American Lung Association, 60% of Americans are breathing unhealthy air. And the pollution inside our homes may be worse than outdoors. The Environmental Protection Agency lists poor indoor air quality as the fourth largest environmental threat to our country. Bacteria, molds, mildew, tobacco smoke, viruses, animal dander, house dust mites, and pollen are among the most common household pollutants.

  • Solution:

Shut the windows and run the air conditioner. All air-conditioning systems have a filter that protects the mechanical equipment and keeps it clean of debris.

“Pollen and mold spores that have made their way indoors will be run through the air-conditioning system and taken out of the air as they go through the duct work,” says David MacIntosh, an environmental health scientist.

But much as with a vacuum cleaner, these filters can capture only the largest particles. “The conventional filters just pick up big things, such as hair or cobwebs,” says MacIntosh. “Filters intended to remove the inhalable particles, which are very small, exist on the market and some are very effective.”

They may also be worth the investment. A recent study published in The New England Journal of Medicine showed that cleaner air might add as much as five months to a person’s life.

Tierno says that air purification systems are important, particularly in a bedroom, where bacteria are teeming.

  • Vacuuming

Conventional vacuum cleaners are intended to pick up and retain big pieces of dirt, like the dust bunnies we see floating about on our floors. But it’s the tiny dust particles that pass right through the porous vacuum bags and up into the air. So, while our floors may look cleaner after running a vacuum over them, plenty of dust, which can exacerbate allergies and asthma, remains.

Pet allergens and the hazardous materials in indoor dust – heavy metals, lead, pesticides, and other chemicals – are found at their highest concentrations in the smallest dust particles, explains David MacIntosh, MD. He is principal scientist at Environmental Health & Engineering (EH&E), an environmental consulting and engineering services firm based in Needham, Mass.

“The everyday habit of cleaning with a conventional vacuum cleaner results in a burst of particles in the air and then they settle back down over the course of hours,” says MacIntosh.

  • Solution:

Look for a vacuum cleaner with a high efficiency particulate air (HEPA) filter. Unlike those in conventional vacuums, HEPA filters are able to retain the small particles and prevent them from passing through and contaminating the air you breathe in your home.

  • Sleeping With Pillows and a Mattress

The average person sheds about 1.5 million skin cells per hour and perspires one quart every day even while doing nothing, says Tierno. The skin cells accumulate in our pillows and mattresses, where dust mites settle in and multiply.

If that’s not gross enough for you, Tierno explains that a mattress doubles in weight every 10 years because of the accumulation of human hair, bodily secretions, animal hair and dander, fungal mold and spores, bacteria, chemicals, dust, lint, fibers, dust mites, insect parts, and a variety of particulates, including dust mite feces. After five years, 10% of the weight of a pillow is dust mites. This is what you’re inhaling while you sleep.  “What you’re sleeping on can exacerbate your allergies or your asthma,” says Tierno.

  • Solution:

Cover your mattress, box springs, and pillows with impervious outer covers.

“Allergy-proof coverings seal the mattress and pillow, preventing anything from getting in or out, which protects you,” Tierno says. He also suggests that you wash your sheets weekly in hot water, between 130 and 150 degrees Fahrenheit.
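For washers with Celsius dials, that recommended range converts as follows (a quick sketch; the 130-150 °F figures come from the text, and the formula is the standard Fahrenheit-to-Celsius conversion):

```python
# Convert the recommended hot-water wash range from Fahrenheit to Celsius.
def f_to_c(f: float) -> float:
    return (f - 32) * 5 / 9

print(f"{f_to_c(130):.0f}-{f_to_c(150):.0f} °C")  # prints "54-66 °C"
```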

  • Grilling Meat

So much for the summertime staple: Barbecuing meat creates the cancer-causing compounds polycyclic aromatic hydrocarbons (PAHs) and heterocyclic amines (HCAs). When fat drips from the meat onto the hot grill, catches fire, and produces smoke, PAHs form. That’s what’s contained in that delicious-looking charred mark we all look for on our burger. HCAs form when meat is cooked at a high temperature, which can occur during an indoor cooking process as well.

  • Solution:

“Limiting your outdoor cooking, using tin foil, or microwaving the meat first is a sensible precaution,” says Michael Thun, MD. He is emeritus vice president for epidemiology and surveillance research with the American Cancer Society.

Wrapping meat in foil with holes poked in it allows fat to drip off, but limits the amount of fat that hits the flames and comes back onto the meat, Thun tells WebMD. Some of the excess fat can also be eliminated by first microwaving meat and choosing cuts of meat that are leaner.


A federal study identified chicken as the most common source of food poisoning in 2006.  

The New York Times, June 12, 2009, by Gardiner Harris  —  Poultry was the most commonly identified source of food poisoning in the United States in 2006, followed by leafy vegetables and fruits and nuts, according to a report released Thursday by the Centers for Disease Control and Prevention.

The report is the first effort by federal researchers to identify how most people in the United States become sickened by contaminated foods. Its findings, while not surprising, were welcomed by food-safety advocates.

“It’s a nice first step,” said Donna Rosenbaum, executive director of the nonprofit Safe Tables Our Priority. “The problem is that it’s based on a very small data set.”

After a concerted campaign by the federal Department of Agriculture to improve the safety of chickens, the number of people sickened by contaminated poultry in 2006 declined compared with an average of the previous five years, according to C.D.C. researchers.

But problems persist. Most of the poultry-related illnesses, the centers found, were associated with Clostridium perfringens, a bacterium that commonly causes abdominal cramping and diarrhea usually within 10 to 12 hours after ingestion. The spores from this bacterium often survive cooking, so keeping poultry meat at temperatures low enough to prevent contamination during processing and storage is critical.

Researchers counted leafy vegetables, fungi, root vegetables, sprouts and vegetables from vines or stalks as separate categories. Caroline Smith DeWaal, director of food safety at the Center for Science in the Public Interest, an advocacy group, noted that if all of the produce categories were combined, outbreaks associated with vegetables would have far exceeded those in poultry.

“We’re very glad that C.D.C. is finally coming out with good food attribution data,” Ms. DeWaal said. “It clearly shows the need for improvements, not only at F.D.A. but at U.S.D.A.’s food safety programs as well.”

A bill that would substantially reform the food safety program at the Food and Drug Administration edged a step closer to a vote on Wednesday during a markup session at the House Energy and Commerce Subcommittee on Health. A companion measure is being considered in the Senate. Margaret A. Hamburg, the F.D.A. commissioner, said last week that she supported the legislation, although she had asked for some changes.

While poultry is the most common source of illnesses among the 17 different foods tracked by federal officials, the C.D.C. found that two-thirds of all food-related illnesses traced to a lone ingredient were caused by viruses, which are often added to food by restaurant workers who fail to wash their hands. Such viruses often cause what many people refer to as a “stomach flu,” one to two days of nausea and vomiting that is unrelated to the flu virus.

Salmonella, the bacteria found in nationwide outbreaks of contaminated peanut butter, spinach and tomatoes, was the second-leading cause of sole-source food illnesses, the centers found.

While dairy products accounted for just 3 percent of traceable food-related outbreaks, 71 percent of these cases were traced to unpasteurized milk, the researchers found.
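Those two percentages can be combined into a single back-of-envelope figure (a rough sketch only, since the article mixes “outbreaks” and “cases”):

```python
# Rough share of all traceable food-related outbreaks attributable to
# unpasteurized milk, combining the article's two figures (approximate,
# since the article mixes "outbreaks" and "cases").
dairy_share = 0.03       # dairy: 3% of traceable outbreaks
raw_milk_share = 0.71    # 71% of those traced to unpasteurized milk

overall = dairy_share * raw_milk_share
print(f"{overall:.1%}")  # prints "2.1%"
```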

The findings resulted from an analysis of reports of food-related illnesses submitted to the C.D.C. by state and local health departments. Although the system is the best available, it is far from perfect. Most of the estimated 76 million cases of food-related illnesses a year go unreported in the United States. And of those that are reported, most are not thoroughly investigated.


Photo: David Royal/Monterey County Herald

Leafy vegetables, including spinach, are another common source of food-related illness.

Vital Signs

The New York Times, nytimes.com, June 2009, by Eric Nagourney  —  Millions of people may suffer from inner-ear disorders that affect their balance but not be aware that they have a problem, a new study has found.

Writing in The Archives of Internal Medicine, researchers noted the connection between balance problems and falls, especially among the elderly.

The findings of the study, they said, suggest that doctors should make balance tests a routine part of checkups. This is especially true in nursing and assisted-living homes, they said.

“The big deal here really is falls,” the lead author, Dr. Yuri Agrawal of Johns Hopkins, said in an e-mail message, adding that a serious fall can be the beginning of the end for an older patient.

The researchers drew on data from a federal study in which more than 5,000 people age 40 and over were surveyed about their history of falls and balance problems. They were then given examinations to determine how well they could maintain their balance in a variety of situations, including with their eyes closed.

More than a third of the subjects, the researchers found, had the balance disorder known as vestibular dysfunction – a figure that would translate to 69 million Americans.
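As a sanity check on that extrapolation, the two published numbers imply the base population behind the estimate (a sketch; the article gives only “more than a third” and “69 million,” so the 35% prevalence here is an assumed round figure, not one stated in the text):

```python
# Back out the 40-and-over population implied by the article's numbers.
# "More than a third" is taken as roughly 35% (an assumption).
prevalence = 0.35
national_estimate = 69e6  # affected Americans, per the article

implied_base = national_estimate / prevalence
print(f"{implied_base / 1e6:.0f} million")  # prints "197 million"
```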

They also found that 32 percent of the volunteers who did not report problems with dizziness showed evidence of balance problems. Though they did not experience symptoms, they were still at higher risk for falls, the study said.

For doctors, Dr. Agrawal said, detecting balance problems in a patient is not very complicated. And treatment is available, including exercises that help people compensate for inner-ear problems that lead to poor balance.

The cost of the treatment, they said, would most likely be less than medical costs associated with falls.

Vital Signs

The New York Times, nytimes.com, June 2009, by Eric Nagourney  —  Many women entering menopause say their brains do not seem to work as well as they used to.

A new study suggests that they may be right, but it also appears the lapses are temporary.

Researchers studied more than 2,300 women, ages 42 to 52, over four years.

Some women in the study, which appears in Neurology, were still menstruating regularly. Others had completed menopause, and the remainder were in so-called perimenopause – that is, they were still having some periods but their bodies were experiencing changes as they neared menopause.

The researchers, led by Dr. Gail Greendale of the University of California, Los Angeles, gave the women a series of tests to determine cognitive skills. The tests measured memory and how quickly they processed information.

When it came to processing speed, the study found, all the women except those in the late perimenopausal stage improved their scores when they took the test repeatedly. The researchers made similar findings with the tests for verbal memory.

The differences between the women were less a matter of some scoring worse than others than of some failing to improve as much as others over time. And women with lower scores as they entered menopause did better after it.

NEUROLOGY 2009;72:1850-1857
© 2009 American Academy of Neurology

G. A. Greendale, MD, M-H Huang, DrPH, R. G. Wight, PhD, T. Seeman, PhD, C. Luetters, MS, N. E. Avis, PhD, J. Johnston, PhD and A. S. Karlamangla, PhD, MD

From the Division of Geriatrics, David Geffen School of Medicine (G.A.G., M.-H.H., T.S., A.S.K.), and Department of Community Health Sciences, School of Public Health (R.G.W.), University of California, Los Angeles; Irvine School of Medicine (C.L.), University of California, Irvine; Division of Public Health Sciences (N.E.A.), Wake Forest University School of Medicine, Winston-Salem, NC; and Alaska Native Epidemiology Center (J.J.), Alaska Native Tribal Health Consortium, Anchorage.

Address correspondence and reprint requests to Dr. Gail A. Greendale, Division of Geriatrics, David Geffen School of Medicine at UCLA, 10945 Le Conte Avenue, Suite 2339, Los Angeles, CA 90095-1687 ggreenda@mednet.ucla.edu

Background: There is almost no longitudinal information about measured cognitive performance during the menopause transition (MT).

Methods: We studied 2,362 participants from the Study of Women’s Health Across the Nation for 4 years. Major exposures were time spent in MT stages, hormone use prior to the final menstrual period, and postmenopausal current hormone use. Outcomes were longitudinal performance in three domains: processing speed (Symbol Digit Modalities Test [SDMT]), verbal memory (East Boston Memory Test [EBMT]), and working memory (Digit Span Backward).

Results: Premenopausal, early perimenopausal, and postmenopausal women scored higher with repeated SDMT administration (p ≤ 0.0008), but scores of late perimenopausal women did not improve over time (p = 0.2). EBMT delayed recall scores climbed during premenopause and postmenopause (p ≤ 0.01), but did not increase during early or late perimenopause (p ≥ 0.14). Initial SDMT, EBMT-immediate, and EBMT-delayed tests were 4%-6% higher among prior hormone users (p ≤ 0.001). On the SDMT and EBMT, compared to the premenopausal referent, postmenopausal current hormone users demonstrated poorer cognitive performance (p ≤ 0.05) but performance of postmenopausal nonhormone users was indistinguishable from that of premenopausal women.

Conclusions: Consistent with transitioning women’s perceived memory difficulties, perimenopause was associated with a decrement in cognitive performance, characterized by women not being able to learn as well as they had during premenopause. Improvement rebounded to premenopausal levels in postmenopause, suggesting that menopause transition-related cognitive difficulties may be time-limited. Hormone initiation prior to the final menstrual period had a beneficial effect whereas initiation after the final menstrual period had a detrimental effect on cognitive performance.

Abbreviations: CVD = cardiovascular disease; DSB = Digit Span Backward; EBMT = East Boston Memory Test; FMP = final menstrual period; MT = menopause transition; SDMT = Symbol Digit Modalities Test; SWAN = Study of Women’s Health Across the Nation.

Supplemental data at www.neurology.org

The Study of Women’s Health Across the Nation (SWAN) has grant support from the NIH, DHHS, through the National Institute on Aging (NIA), the National Institute of Nursing Research (NINR), and the NIH Office of Research on Women’s Health (ORWH) (Grants NR004061; AG012505, AG012535, AG012531, AG012539, AG012546, AG012553, AG012554, AG012495).

Disclosure: The authors report no disclosures.

Disclaimer: The content of this manuscript is solely the responsibility of the authors and does not necessarily represent the official views of the NIA, NINR, ORWH, or the NIH.

Received October 6, 2008. Accepted in final form February 25, 2009.

FierceBiotech.com, June 13, 2009, by John Carroll  —  For the biotech industry, New York has always been the place to go to get money, make a presentation and eat out at a great restaurant. But base your biotech company in the city, or even the state? Fuhgeddaboutit. Too expensive. Too far from a major cluster. And no ready access to subsidized incubator space.

But New York state (and city) have been trying to change all that.

Inspired by the Bush administration’s restrictive policies on ESC work, lawmakers passed a $600 million initiative back in 2007. The money has been flowing to top institutions like the Albert Einstein College of Medicine, Columbia University, Cornell and Mt. Sinai School of Medicine.

New York recently funneled $118.3 million to researchers involved in embryonic stem cell work in the state. And an initiative underway now would open up half of the money for stem cell research to private companies.

The Qualified Emerging Technology Company capital tax credit was designed to help fledgling biotech companies with an infusion of more than a million dollars over several years. And the city is looking to duplicate the program while the state moves closer to doubling the cap to $2.5 million.

To help keep the city’s young scientific talent at home in New York, city government started to create subsidized space for small and mid-sized biotechs at the Brooklyn Army Terminal and the East River Science Park, scheduled to be opened soon.

To get into the big time, New York has to compete with clusters like Boston, San Francisco and San Diego, which are all astronomically expensive to do business in. So maybe it’s not too far a stretch to suggest that New York has been leveling the playing field to a point where start-ups could make a good case for setting up shop in a city that never sleeps.

New York has scientific talent to spare. With the right kind of support, it can also create a surge of new biotech businesses to commercialize the scientists’ discoveries.

FDI (foreign direct investment)

FDIMagazine.com, June 12, 2009, by Jacqueline Hegarty  —  New York, San Francisco, Tampa and Greenville are the top cities in the 2009/10 North American Cities of the Future competition.

Although New York did not feature in the top rankings of the previous competition in 2007, this year it has pipped Chicago to the top spot to become fDi Magazine’s North American City of the Future 2009/10.

This is primarily due to the high economic potential of New York attributed to the large number of FDI projects into and out of the city and the number of megaprojects associated with the city, according to fDi Markets’ database, as well as its large base of post-secondary students.

Although Chicago has been pushed down to second place in the major city category, it still maintains the top position in terms of FDI promotion strategy and it scored highly in the categories of economic potential and infrastructure.

Mexico City is the highest-ranking major Mexican city, in 12th position. Although it has performed well in cost effectiveness, with the cheapest office rental costs according to Regus, and in the human resources category, with the largest number of post-secondary students, its scores for quality of life and infrastructure have held it back.

fDi Magazine’s North American Cities of the Future 2009/10 shortlists, which took more than six months to research and involved collecting data on nearly 400 North American cities, rank San Francisco, California, as the top large city of the future, followed closely by Austin, Texas. Of the large cities surveyed, San Luis Potosí in Mexico ranks top for cost effectiveness, while Charlotte, North Carolina, ranks top for FDI strategy according to the judging panel.

The small cities category comprised the most cities surveyed this year, with Tampa, Florida, coming out in top position, followed by Minneapolis, Minnesota. The Canadian city of Halifax received the highest small-city score for FDI strategy, while Raleigh, North Carolina, ranks in the number one position for economic potential.

Richmond, Virginia, ranks ninth overall in the small cities category, primarily due to a high scoring in terms of its FDI promotion strategy.

The top micro North American City of the Future 2009/10 is Greenville, South Carolina, due to its strong economic potential, good human resources and high scores in business friendliness. The top Canadian city in the small city category is Victoria, British Columbia, ranking third overall, helped by high scores in both infrastructure and FDI promotion strategy.


In April 2008, the Financial Times Ltd acquired fDi Markets and fDi Benchmark. fDi Markets is an independent database that tracks global FDI on a real-time basis, while fDi Benchmark is an independent database that benchmarks global locations on how appealing they are to foreign investors. This division compiled the majority of the data for the Cities of the Future competition, with the exception of the FDI promotion strategy category, which was submitted by individual cities and assessed by the judging panel. These changes have made the competition even more objective.


fDi Cities of the Future shortlists are created from data collected independently by fDi Benchmark across nearly 400 North American cities. This information was grouped under six categories: economic potential, human resources, cost effectiveness, quality of life, infrastructure and business friendliness. A seventh category was added to the scoring: FDI promotion strategy. In this category, 128 North American cities submitted details about their promotion strategy, which was judged and scored by the independent judging panel as well as fDi‘s research team.

This year, cities were categorized by their city population only – not the metro area – in order to make the data collected more comparative across city levels. The categories were as follows:

  • Major cities have a population of more than 1,000,000.
  • Large cities have a population greater than 500,000 and smaller than 1,000,000.
  • Small cities have a population greater than 100,000 and smaller than 500,000.
  • Micro cities have a population of less than 100,000.
  • Cities could score up to a maximum of 10 points for each individual criterion; scores were weighted by importance to give the overall totals.
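The weighting scheme described above can be sketched as a simple weighted average. The category weights below are hypothetical illustrations — fDi does not publish its exact weights in this article — but the mechanics match the description: each city scores up to 10 points per category, and importance weights combine the categories into an overall score.

```python
# Sketch of an fDi-style weighted scoring scheme.
# The weights are illustrative assumptions, not fDi's actual values.
CATEGORIES = {
    "economic_potential": 0.25,
    "human_resources": 0.15,
    "cost_effectiveness": 0.15,
    "quality_of_life": 0.10,
    "infrastructure": 0.15,
    "business_friendliness": 0.10,
    "fdi_promotion_strategy": 0.10,
}

def overall_score(scores: dict) -> float:
    """Combine per-category scores (0-10 each) into a weighted overall score."""
    assert abs(sum(CATEGORIES.values()) - 1.0) < 1e-9  # weights must sum to 1
    return sum(CATEGORIES[cat] * scores[cat] for cat in CATEGORIES)

# Example: a hypothetical city strong in economic potential and infrastructure.
city = {
    "economic_potential": 9.0,
    "human_resources": 8.0,
    "cost_effectiveness": 6.0,
    "quality_of_life": 7.0,
    "infrastructure": 8.5,
    "business_friendliness": 7.5,
    "fdi_promotion_strategy": 8.0,
}
print(round(overall_score(city), 2))
```

With weights like these, a city that dominates the heavily weighted economic-potential category (as New York did) can take the top spot even without leading every category.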


The judging panel:

Carlos Angulo-Parra
Partner, Baker & McKenzie Abogados
Don Holbrook
Board member, International Economic Development Council
Daniel Malachuk
View from America columnist, fDi; and an adviser on global direct investment strategies
Miguel Noyola
Partner, Baker & McKenzie LLP
Mark O’Connell
Chief executive officer, OCO Global

The company hopes to develop powerful, lightweight lithium-air batteries.

MIT Technology Review, June 17, 2009, by Katherine Bourzac  —  IBM Research is beginning an ambitious project that it hopes will, within the next five years, lead to the commercialization of batteries that store 10 times as much energy as today’s. The company will partner with U.S. national labs to develop a promising but controversial technology that uses energy-dense but highly flammable lithium metal to react with oxygen in the air. The payoff, says the company, will be a lightweight, powerful, and rechargeable battery for the electrical grid and the electrification of transportation.

Lithium metal-air batteries can store a tremendous amount of energy: in theory, more than 5,000 watt-hours per kilogram. That’s more than 10 times as much as today’s high-performance lithium-ion batteries, and more than another class of energy-storage devices: fuel cells. Instead of containing a second reactant inside the cell, these batteries react with oxygen in the air, pulled in as needed, making them lightweight and compact.

IBM is pursuing the risky technology instead of lithium-ion batteries because it has the potential to reach high enough energy densities to change the transportation system, says Chandrasekhar Narayan, manager of science and technology at IBM’s Almaden Research Center, in San Jose, CA. “With all foreseeable developments, lithium-ion batteries are only going to get about two times better than they are today,” he says. “To really make an impact on transportation and on the grid, you need higher energy density than that.” One of the project’s goals, says Narayan, is a lightweight 500-mile battery for a family car. The Chevy Volt can go 40 miles before using the gas tank, and Tesla Motors’ Model S line can travel up to 300 miles without a recharge.
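The gap Narayan describes can be put in rough numbers. The figures below are illustrative assumptions — a typical electric-car consumption rate and round pack-level specific energies, not values from IBM or the article — but they show why a roughly tenfold jump in energy density matters for a 500-mile family car.

```python
# Back-of-envelope sketch: battery pack mass needed for a 500-mile range.
# All numbers are illustrative assumptions, not figures from IBM.
MILES = 500
WH_PER_MILE = 300  # assumed consumption of a family-sized electric car

def pack_mass_kg(specific_energy_wh_per_kg: float) -> float:
    """Mass of a pack storing enough energy for MILES at WH_PER_MILE."""
    return MILES * WH_PER_MILE / specific_energy_wh_per_kg

# At an assumed ~150 Wh/kg for today's lithium-ion packs:
print(round(pack_mass_kg(150)))    # 1000 kg -- impractical for a family car
# At even a fraction of lithium-air's 5,000 Wh/kg theoretical limit:
print(round(pack_mass_kg(1000)))   # 150 kg
```

The 150 kWh of stored energy is fixed by the range target; only a much higher specific energy brings the pack down to a mass a family car can carry.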

One of the main challenges in making lithium metal-air batteries is that “air isn’t just oxygen,” says Jeff Dahn, a professor of materials science at Dalhousie University, in Nova Scotia. Where there’s air there’s moisture, and “humidity is the death of lithium,” says Dahn. When lithium metal meets water, an explosive reaction ensues. These batteries will require protective membranes that exclude water, let in oxygen, and remain stable over time.

IBM does not currently have battery research programs in place. However, Narayan says that IBM has the expertise needed to tackle the science problems. In addition to Oak Ridge, IBM will partner with Lawrence Berkeley, Lawrence Livermore, Argonne, and Pacific Northwest national labs. The company and its collaborators are currently working on a proposal for funding from the U.S. Department of Energy under the Advanced Research Projects Agency-Energy.

Research on lithium-metal batteries stalled about 20 years ago. In 1989, Canadian company Moli Energy recalled its rechargeable lithium-metal batteries, which used not air but a more traditional cathode, after one caught fire; the incident led to legal action, and the company declared bankruptcy. Soon after, Sony brought to market the first rechargeable lithium-ion batteries, which were safer, and research on lithium-metal electrodes slowed nearly to a halt. (After restructuring, Moli Energy refocused its research efforts and is now selling lithium-ion batteries under the name Molicel.) Only a handful of labs around the world, including those at PolyPlus Battery, in Berkeley, CA, Japan’s AIST, and St. Andrews University, in Scotland, are currently working on lithium-air batteries.

Safety problems with lithium-metal batteries can arise when they’re recharged. “When you charge and discharge, you have to electroplate and strip the metal over and over again,” says Dahn, who is not a contributor to the IBM project. Over time, just as in a lithium-ion battery, the lithium-metal surface becomes rough, which can lead to thermal runaway, when the battery literally burns until all the reactants inside are used up. But Narayan says that lithium-air batteries are inherently safer than previously developed lithium-metal batteries as well as today’s lithium-ion batteries because only one of the reactants is contained in the cell. “A lithium-air cell needs air from outside,” says Narayan. “You will never get a runaway reaction because air is limited.”

PolyPlus Battery has been working on lithium metal-air technology for about six years and has some dramatic evidence of the technology’s viability: floating among clownfish in an aquarium tank at the company’s headquarters, a lithium-metal battery pulls in oxygen from the salt water to power a green LED. The company has also developed a prototype battery that pulls oxygen from ambient air. But Steven Visco, founder and vice president of research at the company, says that lithium metal-air batteries are “still a young technology that’s not ready to be commercialized.”

IBM’s Narayan points to two remaining major problems with lithium metal-air technology. First, the design of the cathode needs to be optimized so that the lithium oxide that forms when oxygen is pulled inside the battery won’t block the oxygen intake channels. Second, better catalysts are needed to drive the reverse reaction that recharges the battery.

Narayan says that it won’t be clear how much money and how much time the project will take until about a year and a half from now, after research has begun. He estimates that the company will devote about five years to the project. IBM will probably not make the batteries but will license the technology to manufacturers.