Science Daily, Aug. 12, 2007 — As people lose the ability to walk unaided, they tend to suffer further deterioration that can interfere with other activities of daily living. As the U.S. population ages, it becomes increasingly important to identify and target interventions for people who are at risk for further disability and illness.

In a paper published in the September issue of the American Journal of Preventive Medicine, researchers closely examined the factors that affected health-related quality of life (HRQOL) for a group of older Americans. The study revealed that mobility is a key factor affecting quality of life for older adults.

The Lifestyle Interventions and Independence for Elders Pilot study (LIFE-P) was a randomized, controlled trial that compared a physical activity intervention to a non-exercise educational intervention in 424 older adults at risk for disability. Baseline information included demographics, medical history, the Quality of Well-Being Scale (QWB-SA), a timed 400 m walk, and the Short Physical Performance Battery (SPPB). Using these data, the authors looked for the factors that affected HRQOL.

The mean QWB-SA score for a sample of older adults considered at risk for disability was 0.634, below the 0.704 score found for healthy older adults. According to Erik J. Groessl, PhD, of the VA San Diego Healthcare System and University of California San Diego, the difference of 0.070 is “more than the amount attributed to a variety of diseases including colitis, migraine, arthritis, stroke, ulcer, asthma, and anxiety…. Surprisingly, however, mobility was a stronger correlate of HRQOL than an index of comorbidity, suggesting that interventions addressing mobility limitations may provide significant health benefits to this population…. Taken together with past research, which has demonstrated that loss of mobility predicts loss of independence, mortality, and nursing home admission, it is clear that interventions that can preserve or improve mobility in older adults could produce increases in both quantity and quality of life.”

These results highlight the need to develop effective interventions for older adults at risk for disability.

Computers will make more than half of all stock purchases by the year 2010, according to Boston-based consulting firm Aite Group LLC. The perfect system would combine the best human investors (like Warren Buffett) with computer systems that can calculate without emotions like greed or fear (like the fictional HAL 9000 from 2001: A Space Odyssey). Last year (2006), just one-third of all trades were driven by automated stock-picking programs.

(HAL-Buffett 9000 Computer [concept])

Computerized analysis can generate a quantifiable advantage. According to a November 2005 study, big-cap U.S. stock funds run using quantitative methods beat those run by ordinary mortals.

Mathematicians and brokers labor in secret at Wall Street firms like Lehman Brothers Holdings, Inc. Their algorithms spot trading advantages in the patterns of world markets. At the Stern School of Business, Vasant Dhar is trying to program a computer to factor in the impact of unexpected events (like the death of an executive).

The AIs of the future will take advantage of a variety of techniques that are in development now:

* Extracting trends from massive data sets (machine learning)
* Understanding human language (natural language processing, or NLP), which will open huge new sources of information such as emails, blogs and even recorded conversations (one start-up, Collective Intellect, uses basic NLP programs to look through 55 million blogs for hedge fund tips)
* Finding patterns in volatility and prices: Lehman Brothers has used machine-learning programs to examine millions of bids, offers, specific prices and buy/sell orders.

Advanced computer trading programs may one day tell human investors to relax and let them do the work, just like HAL did:

By Jason Kelley

(Bloomberg News) — Way up in a New York skyscraper, inside the headquarters of Lehman Brothers Holdings Inc., computer scientist Michael Kearns is trying to teach a computer to do something other machines can’t: think like a Wall Street trader.

In his cubicle overlooking the trading floor, Kearns, 44, consults with Lehman Brothers traders as Ph.D.s tap away at secret software. The programs they’re writing are designed to sift through billions of trades and spot subtle patterns in world markets.

Kearns, a computer scientist who has a doctorate from Harvard University, says the code is part of a dream he’s been chasing for more than two decades: to imbue computers with artificial intelligence, or AI.

His vision of Wall Street conjures up science fiction fantasies of HAL 9000, the sentient computer in “2001: A Space Odyssey.” Instead of mindlessly crunching numbers, AI-powered circuitry one day will mimic our brains and understand our emotions — and outsmart human stock pickers, he says.

“This is going to change the world, and it’s going to change Wall Street,” says Kearns, who spent the 1990s researching AI at Murray Hill, New Jersey-based Bell Laboratories, birthplace of the laser and the transistor.

As finance Ph.D.s, mathematicians and other computer-loving disciples of quantitative analysis challenge traditional traders and money managers, Kearns and a small band of AI scientists have set out to build the ultimate money machine.

For decades, investment banks and hedge fund firms have employed quants and their computers to uncover relationships in the markets and exploit them with rapid-fire trades.


Quants seek to strip human emotions such as fear and greed out of investing. Today, their brand of computer-guided trading has reached levels undreamed of a decade ago. A third of all U.S. stock trades in 2006 were driven by automatic programs, or algorithms, according to Boston-based consulting firm Aite Group LLC. By 2010, that figure will reach 50 percent, according to Aite.

AI proponents say their time is at hand. Vasant Dhar, a former Morgan Stanley quant who teaches at New York University’s Stern School of Business in Manhattan’s Greenwich Village, is trying to program a computer to predict the ways in which unexpected events, such as the sudden death of an executive, might affect a company’s stock price.

Uptown, at Columbia University, computer science professor Kathleen McKeown says she imagines building an electronic Warren Buffett that would be able to answer just about any kind of investing question.

“We want to be able to ask a computer, ‘Tell me about the merger of corporation A and corporation B,’ or ‘Tell me about the impact on the markets of sending more troops to Iraq,’” McKeown, 52, says.

Kubrick’s Dream

Some executives and scientists would rather not talk about AI. It recalls dashed hopes of artificially intelligent machines that would build cities in space and mind the kids at home. In “2001,” the novel written by Arthur C. Clarke and made into a movie directed by Stanley Kubrick in 1968, HAL, a computer that can think, talk and see, is invented in the distant future — 1997.

Things didn’t turn out as ’60s cyberneticians predicted. Somewhere between sci-fi and sci-fact, the dream fell apart. People began joking that AI stood for “Almost Implemented.”

“The promise has always been more than the delivery,” says Brian Hamilton, chief executive officer of Raleigh, North Carolina-based software maker Sageworks Inc., which uses computer formulas to automatically read stock prices, company earnings and other data and spit out reports for investors.

Hamilton, 43, says today’s AI-style programs can solve specific problems within a given set of parameters.

Chess vs. Markets

Take chess. Deep Blue, a chess-playing supercomputer developed by International Business Machines Corp., defeated world champion Garry Kasparov in 1997. The rules of chess never change, however. Players have one goal: to capture the opponent’s king. There are only so many moves a player can make, and Deep Blue could evaluate 200 million such positions a second.

Financial markets, on the other hand, can be influenced by just about anything, from skirmishes in the Middle East to hurricanes in the Gulf of Mexico. In computerspeak, chess is a closed system and the market is an open one.

“AI is very effective when there’s a specific solution,” Hamilton says. “The real challenge is where judgment is required, and that’s where AI has largely failed.”

AI researchers have made progress over the years. Peek inside your Web browser or your car’s cruise control, and you’ll probably find AI at work. Meanwhile, computer chips keep getting more powerful. In February, Santa Clara, California-based Intel Corp. said it had devised a chip the size of a thumbnail that could perform a trillion calculations a second.

AI Believers

Ten years ago, such a computational feat would have required 10,000 processors.

To believers such as Dhar, Kearns and McKeown, all of this is only the beginning. One day, a subfield of AI known as machine learning, Kearns’s specialty, may give computers the ability to develop their own smarts and extract rules from massive data sets. Another branch, called natural language processing, or NLP, holds out the prospect of software that can understand human language, read up on companies, listen to executives and distill what it learns into trading programs.

Collective Intellect Inc., a Boulder, Colorado-based startup, already employs basic NLP programs to comb through 55 million Web logs and turn up information that might make money for hedge funds.

“There’s some nuggets of wisdom in the sea,” says Collective Intellect Chief Technology Officer Tim Wolters.

Another AI area, neural networking, involves building silicon versions of the cerebral cortex, the part of our brain that governs reason.

‘It’s Here’

The hope is that these systems will ape living neurons, think like people and, like traders, understand that some things are neither black nor white but rather in varying shades of gray.

Stock analyst Ralph Acampora, who caused a stir in 1999 by correctly predicting that the Dow Jones Industrial Average would top 10,000, says investment banks are racing to profit from advanced computing such as AI.

“It’s here, and it’s growing,” says Acampora, 65, chief technical analyst at Knight Capital Group Inc. in Jersey City, New Jersey. “Everybody’s trying to outdo everyone else.”

The computers have done well. A November 2005 study by Darien, Connecticut-based Casey, Quirk & Associates, an investment management consulting firm, says that from 2001 to ’05, big-cap U.S. stock funds run by quants beat those run by nonquants.

Quants Rise

The quants posted a median annualized return of 5.6 percent, while nonquants returned an annualized 4.5 percent. Both groups beat the Standard & Poor’s 500 Index, which returned an annualized negative 0.5 percent during that period.

Rex Macey, director of equity management at Wilmington Trust Corp. in Atlanta, says computers can mine data and see relationships that humans can’t. Quantitative investing is on the rise, and that’s bound to spur interest in AI, says Macey, who previously developed computer models at Marietta, Georgia-based American Financial Advisors LLC to weigh investment risk and project clients’ wealth.

“It’s all over the place and, greed being what it will, people will try anything to get an edge,” Macey, 46, says. “Quant is everywhere, and it’s seeping into everything.”

AI proponents are positioning themselves to become Wall Street’s hyperquants. Kearns, who previously ran the quant team within the equity strategies group at Lehman Brothers, splits his time between the University of Pennsylvania in Philadelphia, where he teaches computer science, and the New York investment bank, where he tries to put theory into practice.

Inside Lehman

Neither he nor Lehman executives would discuss how the firm uses computers to trade, saying the programs are proprietary and that divulging information about them would cost the firm its edge in the markets.

On an overcast Monday in late January, Kearns is at work in his cubicle on the eighth floor at Lehman Brothers when a few members of his team drop by for advice. At Lehman, Kearns is the big thinker on AI. He leaves most of the actual programming to a handful of Ph.D.s, most of whom he’s recruited at universities or computer conferences.

Kearns himself was plucked from Penn. Ian Lowitt, who studied with Kearns at the University of Oxford and is now co-chief administrative officer of Lehman Brothers, persuaded him to come to the firm as a consultant in 2002.

Kearns hardly looks the part of a professor. He has closely cropped black hair and sports a charcoal gray suit and a crisp blue shirt and tie. At Penn, his students compete to design trading strategies for the Penn-Lehman Automated Trading Project, which uses a computerized trading simulator.

‘Catastrophic Risk’

Tucking into a lunch of tempura and sashimi at a Japanese restaurant near Lehman Brothers, Kearns says AI’s failure to live up to its sci-fi hype has created many doubters on Wall Street. He says people should be skeptical: Trading requires institutional knowledge that is difficult, if not impossible, to program into a computer.

AI holds perils as well as promise for Wall Street, Kearns says. Right now, even sophisticated AI programs lack common sense, he says.

“When something is going awry in the markets, people can quickly sense it and stop trading,” he says. “If you have completely automated something, it might not be able to do that, and that makes you subject to catastrophic risk.”

The dream of duplicating human intelligence may be as old as humanity itself. The intellectual roots of AI go back to ancient myths and tales such as Ovid’s story of Pygmalion, the sculptor who fell so in love with his creation that the gods brought his work to life. In the 19th century, English mathematician and proto-computer scientist Charles Babbage originated the idea of a programmable computer.

Turing Test

It wasn’t until 1950, however, that British mathematician Alan Turing proposed a test for a machine’s capability for thought. In a paper titled “Computing Machinery and Intelligence,” Turing, a computer pioneer who’d worked at Bletchley Park, Britain’s World War II code-breaking center, suggested the following:

A human judge engages in a text-only conversation with two parties, one human and the other a machine. If the judge can’t reliably tell which is which, the machine passes and can be said to possess intelligence.

No computer has ever done that. Turing committed suicide in 1954. Two years later, computer scientist John McCarthy coined the phrase artificial intelligence to refer to the science of engineering thinking machines.
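
As a toy illustration, the structure of the test can be sketched in a few lines of Python; the reply and judging functions here are invented stand-ins, not anything from Turing’s paper:

```python
import random

# A toy sketch of the imitation game's structure. The "human" and
# "machine" below are deliberately easy to tell apart, just so the
# sketch runs end to end.

def imitation_game(questions, human_reply, machine_reply, judge):
    # Hide the two parties behind anonymous labels A and B.
    labels = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:
        labels = {"A": machine_reply, "B": human_reply}

    # Text-only conversation: every question goes to both parties.
    transcript = [(q, labels["A"](q), labels["B"](q)) for q in questions]

    # The judge returns "A" or "B" as a guess at which is the machine.
    guess = judge(transcript)
    return labels[guess] is machine_reply   # True = machine was caught

human = lambda q: "Hmm, let me think about that."
machine = lambda q: "DOES NOT COMPUTE."
judge = lambda t: "A" if "COMPUTE" in t[0][1] else "B"

print(imitation_game(["Are stocks cheap?"], human, machine, judge))
```

A machine “passes” when the judge’s guesses are no better than chance over many such rounds.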

The Turing Test, as it’s now known, has fueled almost six decades of controversy. Some computer scientists and philosophers say human-like interaction is essential to human-like intelligence. Others say it’s not. The debate still shapes AI research and raises questions about whether traders’ knowledge, creativity, intuition and appetite for risk can ever be programmed into a computer.

Wall Street Smarts

During the 1960s and ’70s, AI research yielded few commercial applications. As Wall Street firms deployed computer-driven program trading in the ’80s to automatically execute orders and allow arbitrage between stocks, options and futures, the AI world began to splinter. Researchers broke away into an array of camps, each focusing on specific applications rather than on building HAL-like machines.

Some scientists went off to develop computers that could mimic the human retina in its ability to see and recognize complex images such as faces. Some began applying AI to robotics. Still others set to work on programs that could read and understand human languages.

Thomas Mitchell, chairman of the Machine Learning Department at Carnegie Mellon University in Pittsburgh, says many AI researchers have decided to reach for less and accomplish more.

“It’s really matured from saying there’s one big AI label to being a little more refined and realizing there are some specific areas where we really have made progress,” Mitchell, 55, says.


Financial service companies have already begun to deploy basic machine-learning programs, Kearns says. Such programs typically work in reverse to solve problems and learn from mistakes.

Like every move a player makes in a game of chess, every trade changes the potential outcome, Kearns says. Machine-learning algorithms are designed to examine possible scenarios at every point along the way, from beginning to middle to end, and figure out the best choice at each moment.

Kearns likens the process to learning to play chess. “You would never think about teaching a kid to play chess by playing in total silence and then saying at the end, ‘You won’ or ‘You lost,’” he says.

As an exercise, Kearns and his colleagues at Lehman Brothers used such programs to examine orders and improve how the firm executes trades, he says. The programs scanned bids, offers, specific prices and buy and sell orders to find patterns in volatility and prices, he says. Using this information, they taught a computer how to determine the most cost-effective trades.

Language Barrier

The program worked backward, assessing possible trades and enabling trader-programmers to evaluate the impact of their actions. By working this way, the computer learns how to execute trades going forward.
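
Lehman’s actual code is proprietary, but the “work backward” idea the article describes resembles textbook backward induction. Here is a minimal sketch under an assumed quadratic impact-cost model, with invented numbers: split a parent order across a few intervals, score every remaining-shares state from the final interval backward, then replay the best choices forward.

```python
from functools import lru_cache

T = 4         # trading intervals
TOTAL = 10    # shares to execute (tiny, to keep the state space small)
IMPACT = 0.1  # hypothetical market-impact coefficient

def step_cost(shares):
    return IMPACT * shares ** 2   # assumed cost of one interval's trade

@lru_cache(maxsize=None)
def best_cost(t, remaining):
    """Cheapest way to finish `remaining` shares using intervals t..T-1,
    computed from the final interval backward."""
    if t == T - 1:
        return step_cost(remaining)   # last interval: must finish
    return min(step_cost(s) + best_cost(t + 1, remaining - s)
               for s in range(remaining + 1))

# Replay the optimal choices forward to recover the schedule.
schedule, remaining = [], TOTAL
for t in range(T - 1):
    s = min(range(remaining + 1),
            key=lambda s: step_cost(s) + best_cost(t + 1, remaining - s))
    schedule.append(s)
    remaining -= s
schedule.append(remaining)

print(schedule)   # an even split such as [2, 2, 3, 3] minimizes impact
```

Because the assumed penalty grows with the square of the trade size, the backward pass rediscovers the intuition that large orders are cheapest when spread evenly over time.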

Language represents one of the biggest gulfs between human and computer intelligence, Dhar says. Closing that divide would mean big money for Wall Street, he says.

Unlike computers, human traders and money managers can glimpse a CEO on television or glance at news reports and sense whether news is good or bad for a stock. In conversation, a person’s vocal tone or inflection can alter — or even reverse — the meaning of words.

Let’s say you ask a trader if he thinks U.S. stocks are cheap and he responds, “Yeah, right.” Does he mean stocks are inexpensive or, sarcastically, just the opposite? What matters is not just what people say, but how they say it. Traders also have a feel for what other investors are thinking, so they can make educated guesses about how people will react.

‘Acid Test’

For Dhar, the markets are the ultimate AI lab. “Reality is the acid test,” says Dhar, a 1978 graduate of the Indian Institutes of Technology, or IIT, whose campuses are India’s best schools for engineering and computer science. He collected his doctorate in artificial intelligence from the University of Pittsburgh.

Dhar, a professor of information systems at Stern, left the school to become a principal at Morgan Stanley from 1994 to ’97, where he founded the data-mining group and focused on automated trading and the profiling of asset management clients. He still builds computer models to help Wall Street firms predict markets and figure out clients’ needs. Since 2002, his models have correctly predicted month-to-month stock price movements 61 percent of the time, he says.

‘Next Frontier’

Dhar says AI programs typically start with a human hunch about the markets. Let’s say you think that rising volatility in stock prices may signal a coming “breakout,” Wall Street-speak for an abrupt rise or fall in prices. Dhar says he would select market indicators for volatility and stock prices, feed them into his AI algorithms and let them check whether that intuition is right. If it is, the program would look for market patterns that hold up over time and base trades on them.
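
A bare-bones version of that hypothesis check might look like the following Python sketch; the price series is synthetic and the volatility signal is a plain rolling standard deviation, both assumptions made purely for illustration:

```python
import random, statistics

# Intuition to test: does rising volatility precede unusually large
# moves ("breakouts")? The prices are synthetic random-walk data,
# just to keep the example self-contained.

random.seed(0)
prices = [100.0]
for _ in range(500):
    prices.append(prices[-1] * (1 + random.gauss(0, 0.01)))

returns = [(b - a) / a for a, b in zip(prices, prices[1:])]

WINDOW = 20
signal, outcome = [], []
for t in range(WINDOW, len(returns)):
    signal.append(statistics.stdev(returns[t - WINDOW:t]))  # trailing vol
    outcome.append(abs(returns[t]))                         # next move size

# Compare next-move size after high-volatility vs. low-volatility days.
cutoff = statistics.median(signal)
high = [o for s, o in zip(signal, outcome) if s > cutoff]
low = [o for s, o in zip(signal, outcome) if s <= cutoff]
print("avg |move| after high vol:", statistics.mean(high))
print("avg |move| after low vol: ", statistics.mean(low))
```

If the intuition held on real data, the first number would be consistently larger; a quant would then test whether the edge persists out of sample before basing trades on it.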

Surrounded by stacks of papers and books in his Greenwich Village office, Dhar, wearing jeans and a black V-neck sweater, says many AI scientists are questing after NLP programs that can understand human language.

“That’s the next frontier,” he says.

At Columbia, McKeown leads a team of researchers trying to make sense of all the words on the Internet. When she arrived at the university 25 years ago, NLP was still in its infancy. Now, the Internet has revolutionized the field, she says. Just about anyone with a computer can access news reports, blogs and chat rooms in languages from all over the world.

Information Flow

Rather than flowing sequentially, from point A to point B, information moves around the Web haphazardly. So, instead of creating sequential rules to instruct computers to read the information, AI specialists create an array of rules and try to enable computers to figure out what works.

McKeown, who earned her doctorate from Penn, has spent the past 10 years developing a program called NewsBlaster, which collects and sorts news and information from the Web and draws conclusions from it.

Sitting in her seventh-floor office in a building tucked behind Columbia’s Low Library, McKeown describes how NewsBlaster crawls the Web each night to produce summaries on topics from politics to finance. She decided to put the system on line after the terrorist attacks of Sept. 11, 2001, to monitor the unfolding story.

What If?

NewsBlaster, which isn’t available for commercial use, can “read” two news stories on the same topic, highlight the differences and describe what’s changed since it last scanned a report on the subject, McKeown says. The program can be applied to market-moving topics such as corporate takeovers and interest rates, she says.
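
NewsBlaster’s own models are far more sophisticated, but the simplest form of “what’s changed” tracking can be sketched as a sentence-level comparison of two reports; the snippets below are invented:

```python
# Split two versions of a story into sentences and surface the ones
# that are new in the update. Real systems would also paraphrase-match
# and summarize; this only shows the skeleton of the idea.

def sentences(text):
    return {s.strip() for s in text.replace("?", ".").split(".") if s.strip()}

earlier = ("Acme agreed to buy Widgetco. "
           "The deal is valued at $2 billion.")
update = ("Acme agreed to buy Widgetco. "
          "The deal is valued at $2 billion. "
          "Regulators said they will review the merger.")

for s in sorted(sentences(update) - sentences(earlier)):
    print("Changed since last scan:", s)
```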

McKeown is trying to upgrade her program so it can answer broad “what-if” questions, such as, “What if there’s an earthquake in Indonesia?” Her hope is that one day, perhaps within a few years, the program will be able to write a few paragraphs or pages of answers to such open-ended questions.

Dhar says computer scientists eventually will stitch together advances in machine learning and NLP and set the combined programs loose on the markets.

A crucial step will be figuring out the types of data AI programs should employ. The old programmer principle of GIGO — garbage in, garbage out — still applies. If you tell a computer to look for relationships between, say, solar flares and the Dow industrials and base trades on the patterns, the computer will do it. You might not make much money, however.
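
The point can be made concrete with a few lines of Python; the “solar flare” series here is just random numbers, which is exactly the point:

```python
import random, statistics

# GIGO in practice: a program will happily "find" a relationship
# between an irrelevant input and market returns in a short sample.

random.seed(42)
flares = [random.random() for _ in range(30)]     # stand-in flare counts
returns = [random.gauss(0, 0.01) for _ in range(30)]

def pearson(xs, ys):
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(f"in-sample correlation: {pearson(flares, returns):+.2f}")
# Some nonzero number almost always appears; on fresh data it
# evaporates. What you feed the algorithm matters more than how
# clever the algorithm is.
```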

Courting Hedge Funds

“If I give an NLP algorithm ore, it might give me gold,” Dhar says. “If I give it garbage, it’ll give me back garbage.”

Collective Intellect, financed by Denver-based venture capital firm Appian Ventures Inc., is trying to sell hedge funds and investment banks on NLP technology.

Wolters says traders and money managers simply can’t stay on top of all the information flooding the markets these days.

Collective Intellect seeds its NLP programs with the names of authors, Web sites and blogs that its programmers think might yield moneymaking information. Then, the company lets the programs search the Web, make connections and come up with lists of sources they can monitor and update. Collective Intellect is pitching the idea to hedge funds, Wolters says.
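
Collective Intellect’s pipeline is proprietary; as a rough sketch of the general idea, seeded sources can be scanned for tickers and opinion words and then ranked by how often they yield usable signals. Every source name, ticker and word list below is invented:

```python
# Seed a scanner with sources, scan their posts for tickers plus simple
# opinion words, and rank which sources keep producing usable signals.

SEED_SOURCES = {
    "marketmusings": ["XYZ is wildly undervalued after the selloff."],
    "tickertalk":    ["Avoid ABC, management is bleeding cash.",
                      "XYZ looks strong going into earnings."],
    "catblog":       ["My cat slept all day."],
}
TICKERS = {"XYZ", "ABC"}
POSITIVE = {"undervalued", "strong", "buy"}
NEGATIVE = {"avoid", "bleeding", "sell"}

def score_post(post):
    tokens = [w.strip(".,") for w in post.split()]
    tickers = set(tokens) & TICKERS
    words = {w.lower() for w in tokens}
    tone = len(words & POSITIVE) - len(words & NEGATIVE)
    return [(t, tone) for t in tickers]

# Rank sources by how many ticker-bearing, opinionated posts they yield.
hits = {src: [s for post in posts for s in score_post(post) if s[1] != 0]
        for src, posts in SEED_SOURCES.items()}
for src, found in sorted(hits.items(), key=lambda kv: -len(kv[1])):
    print(src, "->", found)
```

The ranked output is the kind of monitor list a production system would keep updating automatically as it discovers new sources.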

Technology has upended the financial services industry before. Just think of automated teller machines. Michael Thiemann, CEO of San Diego-based hedge fund firm Investment Science Corp., likens traditional Wall Street traders to personal loan officers at U.S. banks back in the ’80s. Many of these loan officers lost their jobs when banks began assigning scores to customers based on a statistical analysis of their credit histories. In the U.S., those are known as FICO scores, after Minneapolis-based Fair Isaac Corp., which developed them.

Wall Street’s Future

Computers often did a better job of assessing risk than human loan officers, Thiemann, 50, says.

“And that is where Wall Street is going,” he says. Human traders will still provide insights into the markets, he says; more and more, however, those insights will be based on data rather than intuition.

Thiemann, who has a master’s degree in engineering from Stanford University and an MBA from Harvard Business School, knows algorithms. During the ’90s, he helped HNC Software Inc., now part of Fair Isaac, develop a tracking program called Falcon to spot credit card fraud.

Falcon, which today watches over more than 450 million credit and debit cards, uses computer models to evaluate the likelihood that transactions are bogus. It weighs that risk against customers’ value to the credit card issuer and suggests whether to let the charges go through or terminate them.


“If it’s a customer with a questionable transaction and you don’t mind losing them as a customer, you just deny it,” Thiemann says. “If it’s a great customer and a small transaction, you let it go through, but maybe follow up with a call a day or so later.”
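
Falcon’s internals are not public, but the trade-off Thiemann describes (expected fraud loss weighed against the customer’s value) can be sketched in a few lines; all thresholds and numbers here are invented:

```python
# Weigh the expected loss from a possibly fraudulent charge against
# what the customer is worth to the issuer, then pick an action.

def decide(fraud_probability, amount, customer_value):
    expected_loss = fraud_probability * amount
    if expected_loss > customer_value:
        return "deny"                      # losing the customer costs less
    if fraud_probability > 0.5:
        return "approve + follow-up call"  # let it through, then verify
    return "approve"

print(decide(0.80, 500.0, customer_value=50.0))    # deny
print(decide(0.60, 20.0, customer_value=5000.0))   # approve + follow-up call
print(decide(0.02, 80.0, customer_value=1000.0))   # approve
```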

Thiemann says he’s taking a similar approach with a trading system he’s building. He calls his program Deep Green. The name recalls IBM’s Deep Blue — and money.

Deep Green evaluates market data, learns from it and scores trading strategies for stocks, options and other investments, he says. Thiemann declines to discuss his computerized hedge fund, beyond saying that he’s currently investing money for friends and family and that he plans to seek other investors this year.

“This is hard, like a moon launch is hard,” Thiemann says of the task ahead of him.

Searching for HAL

As AI invades Wall Street, even the quants will have to change with the times. The kind of conventional trading programs that hunt out arbitrage opportunities between stocks, options and futures, for example, amount to brute-force computing. Such programs, much like Deep Blue, merely crunch a lot of numbers quickly.

“They just have to be fast and comprehensive,” Thiemann says. AI systems, by contrast, are designed to adapt and learn as they go.

Dhar says he doubts thinking computers will displace human traders anytime soon. Instead, the machines and their creators will learn to work together.

“This doesn’t get rid of the role of human creativity; it actually makes it more important,” he says. “You have to be in tune with the market and be able to say, ‘I’m smelling something here that’s worth learning about.’”

At Collective Intellect, Vice President Darren Kelly, a former BMO Nesbitt Burns Inc. stock analyst, says tomorrow’s quants will rely on AI to spot patterns that no one has imagined in the free-flowing type of information that can be found in e-mails, on Web pages and in voice recordings. After all, such unstructured information accounts for about 80 percent of all the info out there.

“The next generation of quant may be around unstructured analytics,” Kelly says.

After more than 50 years, the quest for human-level artificial intelligence has yet to yield its HAL 9000. Kearns says he’d settle for making AI pay off on Wall Street.

“We’re building systems that can wade out in the human world and understand it,” Kearns says. Traders may never shoot the breeze with a computer at the bar after work. But the machines just might help them pay the bill.

The new system reliably produced 3-D, nanometer-scale silicon oxide nanostructures through a process called anodization nanolithography. (Credit: Image courtesy of Duke University)

Duke University, News Release — In an assist in the quest for ever smaller electronic devices, Duke University engineers have adapted a decades-old computer-aided design and manufacturing process to reproduce nanosize structures with features on the order of single molecules.

The new automated technique for nanomanufacturing suggests that the emerging nanotechnology industry might capitalize on skills already mastered by today’s engineering workforce, according to the researchers.

“These tools allow you to go from basic, one-off scientific demonstrations of what can be done at the nanoscale to repetitively engineering surface features at the nanoscale,” said Rob Clark, Thomas Lord Professor and chair of the mechanical engineering and materials science department at Duke University’s Pratt School of Engineering.

The feat was accomplished by using the traditional computing language of macroscale milling machines to guide an atomic force microscope (AFM). The system reliably produced 3-D, nanometer-scale silicon oxide nanostructures through a process called anodization nanolithography, in which oxides are built on semiconducting and metallic surfaces by applying an electric field in the presence of tiny amounts of water.
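
The paper’s actual command format isn’t reproduced here; as a schematic sketch, a CAD-style square can be translated into milling-machine-like toolpath commands in which the bias voltage plays the role of the cutting tool. The command names and units below are invented for illustration:

```python
# Turn a square "design" into G-code-like moves for an AFM tip:
# VOLTAGE ON grows oxide along the path, VOLTAGE OFF just travels.

def square_path(x0, y0, side_nm):
    corners = [(x0, y0), (x0 + side_nm, y0),
               (x0 + side_nm, y0 + side_nm), (x0, y0 + side_nm), (x0, y0)]
    program = [f"MOVE X{corners[0][0]} Y{corners[0][1]}",  # travel, tip idle
               "VOLTAGE ON"]                               # start anodization
    program += [f"MOVE X{x} Y{y}" for x, y in corners[1:]]
    program.append("VOLTAGE OFF")
    return program

# Two squares of different sizes, echoing the published demonstration.
for line in square_path(0, 0, 500) + square_path(800, 0, 250):
    print(line)
```

The appeal of this approach is exactly what Clark describes: the same toolpath abstraction that drives a macroscale mill can drive the microscope, so existing engineering skills transfer.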

“That’s the key to moving from basic science to industrial automation,” Clark said. “When you manufacture, it doesn’t matter if you can do it once. The question is: Can you do it 100 million times, and what’s the variability over those 100 million times? Is it consistent enough that you can actually put it into a process?”

Clark and Matthew Johannes, who recently received his doctoral degree at Duke, will report their findings in the August 29 issue of the journal Nanotechnology (now available online) and expect to make their software and designs freely available online. The work was supported by the National Science Foundation.

Atomic force microscopes (AFMs), which can both produce images and manipulate individual atoms and molecules, have been the instrument of choice for researchers creating localized, two-dimensional patterns on metals and semiconductors at the nanoscale. Yet those nanopatterning systems have relied on the discrete points of a two-dimensional image for laying out the design.

“Now we’ve added another dimension,” Johannes said.

The researchers showed they could visualize 3-D structures–including a series of squares that differed in size, and a star–in a computerized design environment and then automatically build them at the nanoscale. The structures they produced were measured in nanometers–one billionth of a meter–about 80,000 times smaller than the diameter of a human hair.

Johannes had to learn to carefully control the process by adjustments to the humidity, voltage, and scanning speed, relying on sensors to guide the otherwise invisible process.

The new technique suggests that the nanotechnology factories of the future might not operate so differently from existing manufacturing plants.

“If you can take prototyping and nanomanufacturing to a level that leverages what engineers know how to do, then you are ahead of the game,” Clark said. “Most engineers with conventional training don’t think about nanoscale manipulation. But if you want to leverage a workforce that’s already in place, how do you set up the future of manufacturing in a language that engineers already use to communicate? That’s what we’re focused on doing here.”

Daniel Cole of the University of Pittsburgh was a collaborator on the study.

Note: This story has been adapted from a news release issued by Duke University.

HHMI, August 10, 2007 — Throughout human history, mother’s milk has been regarded as the perfect food. Rich, nutritious and readily available, it is the drink of choice for tens of millions of human infants, not to mention all mammals from mice to whales.

But even mother’s milk can turn toxic if the molecular pathways that govern its production are disrupted, according to a new study by Howard Hughes Medical Institute (HHMI) researchers at The Salk Institute for Biological Studies.

Writing in the August 2007 issue of the journal Genes & Development, a group led by HHMI investigator Ronald M. Evans reports that female mice deficient in the protein PPAR gamma produce toxic milk. Instead of nourishing, the milk causes inflammation, growth retardation and loss of hair in nursing mouse pups.

“We all think of milk as the ultimate food, the soul food for young animals,” said Evans. “The quality of that milk is also something that is genetically predetermined.”

In essence, the new finding reveals a genetic program for ensuring that mother’s milk is the wonder food it is hailed to be: “We stumbled onto a hidden quality control system. Milk has to be a very clean product. It seems there is a whole process the body uses so that milk is scrubbed and doesn’t have anything toxic in it.”

Evans said the finding was unanticipated, discovered when his group engineered mice to be deficient in PPAR gamma, a protein that helps regulate the body’s sugar and fat stores. Mouse pups developed growth retardation and hair loss when they nursed on mothers who lacked the gene to produce PPAR gamma in blood cells and cells that line the interior of blood and lymph vessels.

“It’s one of those unexpected observations,” Evans explained. “It tells you the mother can transmit quite a bit more than nutrition through the milk.”

Evans’s group found they could reverse the toxic effects of the milk by letting the affected mouse pups nurse on a mother without the PPAR gamma deficiency.

Further studies showed that the mouse mothers with the PPAR-gamma deficiency produced milk with oxidized fatty acids, toxic substances that can prompt inflammation.

Evans and his colleagues showed that they could reverse the toxic effects of the milk by administering aspirin or other anti-inflammatory agents. “If you suppress the inflammation, the hair grows back,” said Evans.

PPARs are a widely studied family of nuclear receptors, proteins that are responsible for sensing hormones and other molecules. They work in concert with other proteins to switch genes on or off and are intimately connected to the cellular metabolism of carbohydrates, fats and proteins.

Although their discovery came as a surprise, Evans said it should have been obvious that there would be a mechanism in place to ensure the quality of milk.

“We should have realized there is something very special about it,” he said. “The reason we haven’t heard about toxic milk is because there is a system that keeps it clean. It is logical and should have been anticipated.”

In Evans’s view, PPAR gamma’s role in ensuring the quality of mother’s milk is likely to be a fundamental feature of evolution.

Lactating mothers, he noted, are not protected from inflammation, yet the milk they produce must be a pristine product: “Healthfulness in the body or products of the body is due to a (genetic) program, a process designed over the course of evolutionary history to maintain health.”

PPAR gamma’s role in cleansing milk is “a very straightforward variation on how this system controls both lipid metabolism and inflammation. It’s the secret of keeping them apart. That may be the reason the whole system exists,” Evans said.

In the human population, there are variants in the genetic program that governs PPAR gamma; these variants alter the fate of sugar and fat in the body. The system is already the target of anti-inflammatory drug therapy used to manage conditions such as diabetes.

Co-authors of the new Genes & Development article include Yihong Wan, Ling-Wa Chong and Chun-Li Zhang, all of The Salk Institute; and Alan Saghatelian and Benjamin F. Cravatt of The Scripps Research Institute.

Ryan Pyle for The New York Times
Liu Minghong, a farmer in Sichuan Province, said he lost 70 pigs to a virulent disease, leaving him with just a few.

The New York Times, August 16, 2007

CHENGDU, China, Aug. 9 — A highly infectious swine virus is sweeping China’s pig population, driving up pork prices and creating fears of a global pandemic among domesticated pigs.

Animal virus experts say Chinese authorities are playing down the gravity and spread of the disease.

So far, the mysterious virus — believed to cause an unusually deadly form of an infection known as blue-ear pig disease — has spread to 25 of this country’s 33 provinces and regions, prompting a pork shortage and the strongest inflation in China in a decade.

More than that, China’s past lack of transparency — particularly over what became the SARS epidemic — has created global concern.

“They haven’t really explained what this virus is,” says Federico A. Zuckermann, a professor of immunology at the University of Illinois College of Veterinary Medicine. “This is like SARS. They haven’t sent samples to any international body. This is really irresponsible of China. This thing could get out and affect everyone.”

There are no clear indications that blue-ear disease — if that is what this disease is — poses a threat to human health.

In Gu Yi, a village in Sichuan Province, a veterinarian’s banner claims he can cure blue-ear disease, but the virus still spreads.

MOLNDAL, Sweden, Aug. 7 (UPI) — A Swedish-led team of scientists is developing a “tool kit” for personalized medicine based on a person’s genetic characteristics.

Fredrik Nyberg, Gyorgy Marko-Varga, Atsushi Ogiwara and colleagues at AstraZeneca’s research and development center in Molndal, Sweden, note cancer therapy already is moving toward individualized treatments selected according to tumor cell type and patients’ predicted responses to different kinds of anti-cancer drugs.

The researchers have developed a system of state-of-the-art proteomic profiling, in which blood tests are used to analyze single proteins and multiple “fingerprint” protein patterns, including proteins that can serve as biomarkers for disease.

The aim is to create a “tool kit” that physicians could use in everyday medicine, including rapid methods for identifying proteins in the blood and processing the resulting data.

The project is presented in the current issue of the Journal of Proteome Research.

Target Health is working closely with Dr. Sam Weinstein and a device company on the development of a novel device to be used in cardiac surgery. Target Health provided full clinical, regulatory, data management, statistical and medical writing services for this program. Dr. Jules Mitchel, President of Target Health, was at Montefiore Hospital working with Dr. Weinstein on the day the article was released.

WHITE PLAINS, N.Y. (AP) — A top New York heart surgeon who was doing a mercy-mission operation on an 8-year-old boy in El Salvador had to scrub out in the middle of the procedure so he could donate his own rare-type blood to the patient.

Dr. Samuel Weinstein said he had his blood drawn, ate a Pop-Tart, returned to the operating table and watched as his blood helped the boy survive the complex surgery.

“It was a little bit surreal,” Weinstein said by phone Friday from the Children’s Hospital at Montefiore Medical Center in the Bronx, where he is chief of pediatric cardio-thoracic surgery. He said that on his charity trips with Heart Care International, “We don’t sleep a lot, we don’t eat a lot, and we were working very hard, and here it was 11 o’clock at night and they hung my blood and he was getting my blood.”

In the May 11 operation, which had begun 12 hours earlier at Bloom Hospital in San Salvador, the boy’s failing aortic valve was replaced with his pulmonary valve and the pulmonary valve was replaced with an artificial valve.

“The surgery had been going well, everything was working great, but he was bleeding a lot and they didn’t have a lot of the medicines we would use to stop the bleeding,” Weinstein said. “After a while they said they couldn’t give him blood because they were running out and he had a rare type.”

“We realized he might bleed to death, so I asked what blood type he was and they said he was B-negative, and I said, ‘You know, I’m B-negative.’”

Dr. Robert Michler, founder of the group, was standing next to him and said, “I support you.”

Weinstein, who said he was an occasional blood donor, “but never like this,” said the interruption lasted about 20 minutes.

“It’s not like I was going to lie down and have cookies,” he said.

But after he gave his pint, “They gave me a couple of bottles of water and a cardiologist who has more important things to do came out to check on me and gave me a Pop-Tart. Yeah, I think surreal is the right word.”

The American Red Cross says 2 percent of the population has B-negative blood. Only AB-negative is rarer.

The patient, Francisco Calderon Anthony Fernandez of San Salvador, came off the ventilator the next day and had some lunch with Weinstein. He has since gone home from the hospital, said Weinstein, who is 43 and lives in Chappaqua.

“His mother was very happy with me and she said to me, ‘Does this mean that he’s going to grow up and become an American doctor?”’

Along the same lines, his colleagues told him the boy “has developed a craving for smoked fish, which they know I happen to like.”

“Because it all worked out well, they had fun with it,” he said.

Spokeswomen at the American Medical Association and the American College of Surgeons both said they knew of no similar case and no statistics were kept on doctor-patient blood donations.

Weinstein said he has gone abroad more than a dozen times with Heart Care International, which flies in about 50 doctors, nurses and respiratory technicians “to work with local physicians, help teach them and advance their techniques while helping at the same time to provide care for children who might not otherwise have the resources.”

He and most of the others give up vacation time for the trips, he said.

“It’s a real team effort,” he said. “I’m getting the attention because I’m the one who gave the blood, but there wasn’t anybody on the team-I mean anybody, the nurses, the clerks-who wouldn’t have done it.”


The New York Times, August 14, 2007

Until I talked to Nick Bostrom, a philosopher at Oxford University, it never occurred to me that our universe might be somebody else’s hobby. I hadn’t imagined that the omniscient, omnipotent creator of the heavens and earth could be an advanced version of a guy who spends his weekends building model railroads or overseeing video-game worlds like The Sims.

But now it seems quite possible. In fact, if you accept a pretty reasonable assumption of Dr. Bostrom’s, it is almost a mathematical certainty that we are living in someone else’s computer simulation.

This simulation would be similar to the one in “The Matrix,” in which most humans don’t realize that their lives and their world are just illusions created in their brains while their bodies are suspended in vats of liquid. But in Dr. Bostrom’s notion of reality, you wouldn’t even have a body made of flesh. Your brain would exist only as a network of computer circuits.

You couldn’t, as in “The Matrix,” unplug your brain and escape from your vat to see the physical world. You couldn’t see through the illusion except by using the sort of logic employed by Dr. Bostrom, the director of the Future of Humanity Institute at Oxford.

Dr. Bostrom assumes that technological advances could produce a computer with more processing power than all the brains in the world, and that advanced humans, or “posthumans,” could run “ancestor simulations” of their evolutionary history by creating virtual worlds inhabited by virtual people with fully developed virtual nervous systems.

Some computer experts have projected, based on trends in processing power, that we will have such a computer by the middle of this century, but it doesn’t matter for Dr. Bostrom’s argument whether it takes 50 years or 5 million years. If civilization survived long enough to reach that stage, and if the posthumans were to run lots of simulations for research purposes or entertainment, then the number of virtual ancestors they created would be vastly greater than the number of real ancestors.

There would be no way for any of these ancestors to know for sure whether they were virtual or real, because the sights and feelings they’d experience would be indistinguishable. But since there would be so many more virtual ancestors, any individual could figure that the odds made it nearly certain that he or she was living in a virtual world.
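
The arithmetic behind that “nearly certain” is short enough to write out; the counts below are arbitrary placeholders, not estimates from Dr. Bostrom:

```python
# If simulations are run at all, simulated minds vastly outnumber real
# ones, and an observer who can't tell which she is should bet on the
# bigger pool.

real_ancestors = 100e9            # rough order of all humans ever born
simulations_run = 1000            # assume posthumans run many histories
virtual_ancestors = simulations_run * real_ancestors

p_simulated = virtual_ancestors / (virtual_ancestors + real_ancestors)
print(f"chance you are simulated: {p_simulated:.4%}")   # ~99.9%
```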

The math and the logic are inexorable once you assume that lots of simulations are being run. But there are a couple of alternative hypotheses, as Dr. Bostrom points out. One is that civilization never attains the technology to run simulations (perhaps because it self-destructs before reaching that stage). The other hypothesis is that posthumans decide not to run the simulations.

“This kind of posthuman might have other ways of having fun, like stimulating their pleasure centers directly,” Dr. Bostrom says. “Maybe they wouldn’t need to do simulations for scientific reasons because they’d have better methodologies for understanding their past. It’s quite possible they would have moral prohibitions against simulating people, although the fact that something is immoral doesn’t mean it won’t happen.”

Dr. Bostrom doesn’t pretend to know which of these hypotheses is more likely, but he thinks none of them can be ruled out. “My gut feeling, and it’s nothing more than that,” he says, “is that there’s a 20 percent chance we’re living in a computer simulation.”

My gut feeling is that the odds are better than 20 percent, maybe better than even. I think it’s highly likely that civilization could endure to produce those supercomputers. And if owners of the computers were anything like the millions of people immersed in virtual worlds like Second Life, SimCity and World of Warcraft, they’d be running simulations just to get a chance to control history — or maybe give themselves virtual roles as Cleopatra or Napoleon.

It’s unsettling to think of the world being run by a futuristic computer geek, although we might at last dispose of that classic theological question: How could God allow so much evil in the world? For the same reason there are plagues and earthquakes and battles in games like World of Warcraft. Peace is boring, Dude.

A more practical question is how to behave in a computer simulation. Your first impulse might be to say nothing matters anymore because nothing’s real. But just because your neural circuits are made of silicon (or whatever posthumans would use in their computers) instead of carbon doesn’t mean your feelings are any less real.

David J. Chalmers, a philosopher at the Australian National University, says Dr. Bostrom’s simulation hypothesis isn’t a cause for skepticism, but simply a different metaphysical explanation of our world. Whatever you’re touching now — a sheet of paper, a keyboard, a coffee mug — is real to you even if it’s created on a computer circuit rather than fashioned out of wood, plastic or clay.

You still have the desire to live as long as you can in this virtual world — and in any simulated afterlife that the designer of this world might bestow on you. Maybe that means following traditional moral principles, if you think the posthuman designer shares those morals and would reward you for being a good person.

Or maybe, as suggested by Robin Hanson, an economist at George Mason University, you should try to be as interesting as possible, on the theory that the designer is more likely to keep you around for the next simulation. (For more on survival strategies in a computer simulation, go to

Of course, it’s tough to guess what the designer would be like. He or she might have a body made of flesh or plastic, but the designer might also be a virtual being living inside the computer of a still more advanced form of intelligence. There could be layer upon layer of simulations until you finally reached the architect of the first simulation — the Prime Designer, let’s call him or her (or it).

Then again, maybe the Prime Designer wouldn’t allow any of his or her creations to start simulating their own worlds. Once they got smart enough to do so, they’d presumably realize, by Dr. Bostrom’s logic, that they themselves were probably simulations. Would that ruin the fun for the Prime Designer?

If simulations stop once the simulated inhabitants understand what’s going on, then I really shouldn’t be spreading Dr. Bostrom’s ideas. But if you’re still around to read this, I guess the Prime Designer is reasonably tolerant, or maybe curious to see how we react once we start figuring out the situation.

It’s also possible that there would be logistical problems in creating layer upon layer of simulations. There might not be enough computing power to continue the simulation if billions of inhabitants of a virtual world started creating their own virtual worlds with billions of inhabitants apiece.

If that’s true, it’s bad news for the futurists who think we’ll have a computer this century with the power to simulate all the inhabitants on earth. We’d start our simulation, expecting to observe a new virtual world, but instead our own world might end — not with a bang, not with a whimper, but with a message on the Prime Designer’s computer.

It might be something clunky like “Insufficient Memory to Continue Simulation.” But I like to think it would be simple and familiar: “Game Over.”

The Beam of Light That Flips a Switch That Turns on the Brain

Kim Thompson, Viviana Gradinaru and Karl Deisseroth/Stanford University
In an optical switch in a mammalian neuron, red marks synapses and green shows photosensitive protein on the cell membrane.

The New York Times, August 14, 2007

It sounds like a science-fiction version of stupid pet tricks: by toggling a light switch, neuroscientists can set fruit flies a-leaping and mice a-twirling and stop worms in their squiggling tracks.

STOPPING ON YELLOW A genetically modified C. elegans worm stopped in response to yellow light that inhibits its neural activity.

But such feats, unveiled in the past two years, are proof that a new generation of genetic and optical technology can give researchers unprecedented power to turn on and off targeted sets of cells in the brain, and to do so by remote control.

These novel techniques will bring an “exponential change” in the way scientists learn about neural systems, said Dr. Helen Mayberg, a clinical neuroscientist at Emory University, who is not involved in the research but has seen videos of the worm experiments.

“A picture is worth a thousand words,” Dr. Mayberg said.

Some day, the remote-control technology might even serve as a treatment for neurological and psychiatric disorders.

These clever techniques involve genetically tinkering with nerve cells to make them respond to light.

Thor Swift for The New York Times
Karl Deisseroth and fiber-optic wires with laser light.
Raag Airan and Karl Deisseroth/Stanford University
Light stimulation every 200 milliseconds generates electrical activity, right, in an area of the brain associated with depression.

One of the newest, fastest strategies co-opts a photosensitive protein called channelrhodopsin-2 from pond scum to allow precise laser control of the altered cells on a millisecond timescale. That speed mimics the natural electrical chatterings of the brain, said Dr. Karl Deisseroth, an assistant professor of bioengineering at Stanford.

“We can start to sort of speak the language of the brain using optical excitation,” Dr. Deisseroth said. The brain’s functions “arise from the orchestrated participation of all the different cell types, like in a symphony,” he said.

Laser stimulation can serve as a musical conductor, manipulating the various kinds of neurons in the brain to reveal which important roles they play.

This light-switch technology promises to accelerate scientists’ efforts in mapping which clusters of the brain’s 100 billion neurons warble to each other when a person, for example, recalls a memory or learns a skill. That quest is one of the greatest challenges facing neuroscience.

The channelrhodopsin switch is “really going to blow the lid off the whole analysis of brain function,” said George Augustine, a neurobiologist at Duke University in Durham, N.C.

Dr. Deisseroth, who is also a psychiatrist who treats patients with autism or severe depression, has ambitious goals. Brain cells in those disorders show no damage, yet something is wrong with how they talk to one another, he said.

“The high-speed dynamics of the system are probably off,” Dr. Deisseroth said. He wants to learn whether, in these neuropsychiatric diseases, certain neurons falter or go haywire, and then to find a way to tune patients’ faulty circuits.

A first step is establishing that it is possible to tweak a brain circuit by remote control and observe the corresponding behavioral changes in freely moving lab animals. On a recent Sunday at Stanford, Dr. Deisseroth and Feng Zhang, a graduate student, hovered over a dark brown mouse placed inside a white plastic tub. Through standard gene-manipulating tricks, the rodent had been engineered to produce channelrhodopsin only in one particular kind of neuron found throughout the brain, to no apparent ill effect.

Mr. Zhang had implanted a tiny metal tube into the right side of the mouse’s partly shaved head.

Now he carefully threaded a translucent fiber-optic cable not much wider than a thick human hair into that tube, positioned over the area of the cerebral cortex that controls movement.

“Turn it on,” Dr. Deisseroth said.

Mr. Zhang adjusted a key on a nearby laser controller box, and the fiber-optic cable glowed with blue light. The mouse started skittering in a left-hand spin, like a dog chasing its tail.

“Turn it off, and then you can see him stand up,” Dr. Deisseroth continued. “And now turn it back on, and you can see it’s circling.”

Because the brain lacks pain receptors, the mouse felt no discomfort from the fiber optic, the scientists said, although it looked a tad confused. Scientists have long known that using electrodes to gently zap one side of a mouse’s motor cortex will make it turn the opposite way. What is new here is that for the first time, researchers can perturb specific neuron types using light, Dr. Deisseroth said.

Electrode stimulation is the standard tool for rapidly driving nerve cells to fire. But in brain tissue, it is unable to target single types of neurons, instead rousing the entire neural neighborhood.

“You activate millions of cells, or thousands at the very least,” said Ehud Isacoff, a professor of neurobiology at the University of California, Berkeley. All variety of neurons are intermixed in the cortex, he said.

Neuroscientists have long sought a better alternative than electrode stimulation. In the past few years, some have jury-rigged ways to excite brain cells by using light; one technique used at Yale made headless fruit flies flap away. But these methods had limitations. They worked slowly, they could not target specific neurons or they required adding a chemical agent.

More recently, Dr. Isacoff, with Dirk Trauner, a chemistry professor at the University of California, Berkeley, and other colleagues engineered a high-speed neural switch by refurbishing a channel protein that anchors in the cell membrane of most human brain cells. The scientists tethered to the protein a light-sensitive synthetic molecular string that has glutamate, a neurotransmitter, dangling off the end.

Upon absorbing violet light, the string plugs the glutamate into the protein’s receptor and sparks a neuron’s natural activation process: the channel opens, positive ions flood inside, and the cell unleashes an electrical impulse.

In experiments published in May in the journal Neuron, the Berkeley team bred zebrafish that carried the artificial glutamate switch within neurons that help sense touch.

“If I were a fish, and somebody poked me in the side” (in this case, with a fine glass tip), Dr. Isacoff said, “I would escape.” But when the translucent fish were strobed with violet light, the overstimulated creatures no longer detected being prodded. Blue-green light reversed the effect.

One advantage of the Berkeley approach, Dr. Isacoff said, is that it can be adapted for many types of proteins so they could be activated by light. But for the method to work, scientists must periodically douse cells with the glutamate string.

In contrast, Dr. Deisseroth’s laboratory at Stanford has followed nature’s simpler design, borrowing a light-sensitive protein instead of making a synthetic one.

In 2003, Georg Nagel, a biophysicist then at the Max Planck Institute of Biophysics in Frankfurt, and colleagues characterized channelrhodopsin-2 from green algae. This channel protein lets positive ions stream into cells when exposed to blue light. It functioned even when inserted into human kidney cells, the researchers showed.

Neuroscientists realized that this pond scum protein might be used to hot-wire a neuron with light. In 2005, Edward Boyden, then a graduate student at Stanford, Mr. Zhang and Dr. Deisseroth, joining with the German researchers, demonstrated that the idea worked. And in separate research published last spring, Mr. Zhang and Dr. Boyden, now at the Massachusetts Institute of Technology, each found a way to also silence neurons: a bacterial protein called halorhodopsin, when placed in a brain cell, can cause the cell to shut down in response to yellow light.

The Stanford-Germany team put both the “on” and “off” toggles into the motor neurons or muscle cells of transgenic roundworms. Blue light made the creatures contract their muscles and pull back; yellow let them relax their muscles and inch forward.

Dr. Augustine and associates at Duke next collaborated with Dr. Deisseroth to create transgenic mice with channelrhodopsin in different brain cell populations. By quickly scanning with a blue laser across brain tissue, they stimulated cells containing the switch. They simultaneously monitored for responses in connecting neurons, by recording from an electrode or using sensor molecules that light up.

“That way, you can build up a two-dimensional or, in principle, even a three-dimensional map” of the neural circuitry as it functions, Dr. Augustine said.

Meanwhile, other researchers are exploring light-switch technology for medical purposes. Jerry Silver, a neuroscientist at Case Western Reserve University in Cleveland, and colleagues are testing whether they can restore the ability to breathe independently in rats with spinal cord injuries, by inserting channelrhodopsin into specific motor neurons and pulsing the neurons with light.

And in Detroit, investigators at Wayne State University used blind mice lacking photoreceptors in their eyes and injected a virus carrying the channelrhodopsin gene into surviving retinal cells. Later, shining a light into the animals’ eyes, the scientists detected electrical signals registering in the visual cortex. But they are still investigating whether the treatment actually brings back vision, said Zhuo-Hua Pan, a neuroscientist.

At Stanford, Dr. Deisseroth’s group has identified part of a brain circuit, in the hippocampus, that is underactive in rats with some symptoms resembling depression. The neural circuit’s activity — and the animals’ — perked up after antidepressant treatment, in findings reported last week in the journal Science. Now the team is examining whether they can lift the rats’ low-energy behavior by using channelrhodopsin to rev up the sluggish neural zone.

But human depression is complex, probably involving several brain areas; an easy fix is not expected. The light-switch technologies are not likely to be used for depression or other disorders in people any time soon. One concern is making sure that frequent light exposure does not harm neurons.

Another challenge — except in eye treatments — is how to pipe light into neural tissue. Dr. Deisseroth’s spinning mouse demonstration suggests that fiber optics could solve that issue. Such wiring would be no more invasive, he said, than deep brain stimulation using implanted electrodes, currently a treatment for Parkinson’s disease.

An even bigger obstacle, however, is that gene therapy, a technology that is still unproven, would be needed to slip light-switch genes into a patient’s nerve cells. Clinical trials are now testing other gene therapies against blindness and Parkinson’s in human patients.

But even if those succeed, introducing a protein like channelrhodopsin from a nonmammal species could set off a dangerous immune reaction in humans, warned Dr. Howard Federoff, a neuroscientist at Georgetown University and chairman of the National Institutes of Health committee that reviews all gene-therapy clinical trial protocols in the United States.

In the near term, Dr. Deisseroth predicts that the remote-control technology will lead to new insights from animal studies about how diseases arise, and help generate other treatment ideas.

Such research benefits could extend beyond the realm of neuroscience: The Stanford group has sent DNA copies of the “on” and “off” light-switch genes to more than 175 researchers eager to try them in all stripes of electrically excitable cells, from insulin-releasing pancreas cells to heart cells.
