Target Health Inc. is a New York corporation, concerned with the crisis on Wall Street and interested in financial reform.

Commentary: How to prevent the next Wall Street crisis

* Joseph Stiglitz: Fed pumped too much money, aiding housing bubble
* New-fangled instruments hid overuse of borrowing, Stiglitz says
* Executives followed short-term interests and magnified risks, he says
* Stiglitz: Widespread changes needed to prevent future crises

By Joseph Stiglitz
Special to CNN

Editor’s note: Joseph E. Stiglitz, professor at Columbia University, was awarded the Nobel Prize in Economics in 2001 for his work on the economics of information and was on the climate change panel that shared the Nobel Peace Prize in 2007. Stiglitz, a supporter of Barack Obama, was a member and later chairman of the Council of Economic Advisers during the Clinton administration before joining the World Bank as chief economist and senior vice president. He is the co-author, with Linda Bilmes, of “The Three Trillion Dollar War: The True Cost of the Iraq Conflict.”

NEW YORK (CNN) — Many seem taken aback by the depth and severity of the current financial turmoil. I was among several economists who saw it coming and warned about the risks.

There is ample blame to be shared; but the purpose of parsing out blame is to figure out how to make a recurrence less likely.

President Bush famously said, a little while ago, that the problem is simple: Too many houses were built. Yes, but the answer is too simplistic: Why did that happen?

One can say the Fed failed twice, both as a regulator and in the conduct of monetary policy. Its flood of liquidity (money made available to borrow at low interest rates) and lax regulations led to a housing bubble. When the bubble broke, the excessively leveraged loans made on the basis of overvalued assets went sour.

For all the new-fangled financial instruments, this was just another one of those financial crises based on excess leverage, or borrowing, and a pyramid scheme.

The new “innovations” simply hid the extent of systemic leverage and made the risks less transparent; it is these innovations that have made this collapse so much more dramatic than earlier financial crises. But one needs to push further: Why did the Fed fail?

First, key regulators like Alan Greenspan didn’t really believe in regulation; when the excesses of the financial system were noted, they called for self-regulation — an oxymoron.

Second, the macro-economy was in bad shape with the collapse of the tech bubble. The tax cut of 2001 was not designed to stimulate the economy but to give largesse to the wealthy — the group that had been doing so well over the last quarter-century.

The coup de grâce was the Iraq War, which contributed to soaring oil prices. Money that used to be spent on American goods now got diverted abroad. The Fed took seriously its responsibility to keep the economy going.

It did this by replacing the tech bubble with a new bubble, a housing bubble. Household savings plummeted to zero, to the lowest level since the Great Depression. It managed to sustain the economy, but the way it did it was shortsighted: America was living on borrowed money and borrowed time.

Finally, at the center of blame must be the financial institutions themselves. They — and even more their executives — had incentives that were not well aligned with the needs of our economy and our society.

They were amply rewarded, presumably for managing risk and allocating capital, which was supposed to improve the efficiency of the economy so much that it justified their generous compensation. But they misallocated capital; they mismanaged risk — they created risk.

They did what their incentive structures were designed to do: focusing on short-term profits and encouraging excessive risk-taking.

This is not the first crisis in our financial system, not the first time that those who believe in free and unregulated markets have come running to the government for bail-outs. There is a pattern here, one that suggests deep systemic problems — and a variety of solutions:

1. We need first to correct incentives for executives, reducing the scope for conflicts of interest and improving shareholder information about dilution in share value as a result of stock options. We should mitigate the incentives for excessive risk-taking and the short-term focus that has so long prevailed, for instance, by requiring bonuses to be paid on the basis of, say, five-year returns, rather than annual returns.

2. We need to create a financial product safety commission, to make sure that products bought and sold by banks, pension funds, etc. are safe for “human consumption.” Consenting adults should be given great freedom to do whatever they want, but that does not mean they should gamble with other people’s money. Some may worry that this may stifle innovation. But that may be a good thing considering the kind of innovation we had — attempting to subvert accounting and regulations. What we need is more innovation addressing the needs of ordinary Americans, so they can stay in their homes when economic conditions change.

3. We need to create a financial systems stability commission to take an overview of the entire financial system, recognizing the interrelations among the various parts, and to prevent the excessive systemic leveraging that we have just experienced.

4. We need to impose other regulations to improve the safety and soundness of our financial system, such as “speed bumps” to limit borrowing. Historically, rapid expansion of lending has been responsible for a large fraction of crises and this crisis is no exception.

5. We need better consumer protection laws, including laws that prevent predatory lending.

6. We need better competition laws. The financial institutions have been able to prey on consumers through credit cards partly because of the absence of competition. But even more importantly, we should not be in situations where a firm is “too big to fail.” If it is that big, it should be broken up.

These reforms will not guarantee that we will not have another crisis. The ingenuity of those in the financial markets is impressive. Eventually, they will figure out how to circumvent whatever regulations are imposed. But these reforms will make another crisis of this kind less likely, and, should it occur, make it less severe than it otherwise would be.

The opinions expressed in this commentary are solely those of the writer.

September 17, 2008, PharmaLive.com – GlaxoSmithKline and Roche are both expected to win approval for new drugs in the all-important U.S. market in the coming days, providing a potential boost for their shares.

The two medicines — Promacta and Actemra — should expand Glaxo and Roche’s businesses into new territories, although industry analysts say both are likely to start off as niche treatments.

Glaxo’s Promacta is designed to treat a rare clotting disorder that can cause dangerous bleeding, while Roche’s Actemra is the first in a new class of injectable medicines for rheumatoid arthritis.

In both cases, the U.S. Food and Drug Administration is due to give its approval decision by Sept. 19.

The two drugs have already received positive recommendations from FDA advisory panels, but investors are cautious about the final green light in view of some surprising recent decisions by the U.S. watchdog not to approve new drugs.

Morgan Stanley analysts believe Promacta will be approved for the treatment of chronic idiopathic thrombocytopenic purpura (ITP) but its sales may be capped by restrictions on the label limiting its use to only a small minority of patients.

Promacta, a pill, will compete with Amgen’s Nplate injection, which won U.S. marketing clearance last month. Both medicines help stimulate the production of blood platelets.

Citigroup analysts forecast sales of Promacta reaching a modest 286 million pounds ($509.6 million) by 2012, equivalent to just 1 percent of group revenue.

Roche’s Actemra, meanwhile, should provide a new therapeutic option for hard-to-treat patients who fail to respond adequately to the existing class of anti-TNF medicines.

As such, it will compete against Roche’s own established medicine Rituxan, which is already given to anti-TNF drug non-responders.

Actemra, which Roche sees as a potential billion-dollar seller, is approved in Japan, where it is sold by Roche’s partner Chugai Pharmaceutical.

Citi sees sales of the product reaching 714 million Swiss francs ($638 million) by 2012.

FORBES.COM – WEDNESDAY, Sept. 17 (HealthDay News) — A single cell can repopulate damaged skeletal muscle in mice, says a Stanford University School of Medicine study that’s the first to confirm that muscle stem cells can be found in so-called satellite cells encircling muscle fibers.

Being able to identify and isolate such muscle stem cells in humans may prove important in the treatment of disorders such as muscular dystrophy, muscle injury, or muscle wasting due to aging, disuse or disease, according to the study authors.

For this study, the Stanford researchers developed a method of making certain satellite cells glow in order to track them in living mice. The results showed stem cell-like behavior.

The findings were published online Sept. 17 in the journal Nature.

“We were able to show at the single-cell level that these cells are true, multipotent stem cells. They fit the classic definition: They can both self-renew and give rise to specialized progeny,” study senior author Helen Blau, a professor of pharmacology and director of the Baxter Laboratory of Genetic Pharmacology, said in a Stanford news release.

“We are thrilled with the results,” added study first author Alessandra Sacco, senior research scientist in Blau’s laboratory. “It’s been known that these satellite cells are crucial for the regeneration of muscle tissue, but this is the first demonstration of self-renewal of a single cell.”

The researchers now plan to identify similar muscle stem cells in humans.

More information

The U.S. National Institute of Neurological Disorders and Stroke has more about muscular dystrophy.

eCRO Target Health Inc. creates its own software and is interested in the cutting-edge software of others.

A Face-Finding Search Engine

Fuzzy faces: A new face-recognition system from researchers at Carnegie Mellon works even on low-resolution images.
Credit: Pablo Hennings-Yeomans

A new approach to face recognition is better at handling low-resolution video.

By Kate Greene, September 17, 2008, MIT Technology Review – Today there are more low-quality video cameras (surveillance and traffic cameras, cell-phone cameras, and webcams) than ever before. But modern search engines can’t identify objects very reliably in clear, static pictures, much less in grainy YouTube clips. A new software approach from researchers at Carnegie Mellon University could make it easier to identify a person’s face in a low-resolution video. The researchers say that the software could be used to identify criminals or missing persons, or it could be integrated into next-generation video search engines.

Today’s face-recognition systems actually work quite well, says Pablo Hennings-Yeomans, a researcher at Carnegie Mellon who developed the system, provided that researchers can control the lighting, angle of the face, and type of camera used. “The new science of face recognition is dealing with unconstrained environments,” he says. “Our work, in particular, focuses on the problem of resolution.”

In order for a face-recognition system to identify a person, explains Hennings-Yeomans, it must first be trained on a database of faces. For each face, the system uses a so-called feature-extraction algorithm to discern patterns in the arrangement of image pixels; as it’s trained, it learns to associate some of those patterns with physical traits: eyes that slant down, for instance, or a prominent chin.
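
To make that training step concrete, here is a minimal sketch in Python that uses principal component analysis (the classic “eigenfaces” technique) as a stand-in for the feature-extraction algorithm. The article does not say which algorithm the Carnegie Mellon system uses, so the choice of PCA, the function names, and the image sizes below are illustrative assumptions only.

# Illustrative sketch only: PCA ("eigenfaces") standing in for the
# feature-extraction step; not the Carnegie Mellon algorithm itself.
import numpy as np

def train_feature_extractor(face_images, n_features=50):
    """face_images: array of shape (n_faces, height, width), one image per face."""
    n_faces = face_images.shape[0]
    X = face_images.reshape(n_faces, -1).astype(float)   # flatten each face to a pixel vector
    mean_face = X.mean(axis=0)
    # The top singular vectors capture the pixel patterns that vary most
    # across the training faces -- the "patterns" described above.
    _, _, components = np.linalg.svd(X - mean_face, full_matrices=False)
    return mean_face, components[:n_features]

def extract_features(image, mean_face, components):
    """Project one (height, width) image onto the learned pattern directions."""
    return components @ (image.reshape(-1).astype(float) - mean_face)

# Example with random stand-in data: 100 "faces" of 32 x 32 pixels.
faces = np.random.rand(100, 32, 32)
mean_face, components = train_feature_extractor(faces)
probe_features = extract_features(faces[0], mean_face, components)   # 50-dimensional feature vector

Matching then reduces to comparing these feature vectors, for example by nearest-neighbor distance, against the vectors stored for each face in the training database.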

The problem, says Hennings-Yeomans, is that existing face-recognition systems can identify faces only in pictures with the same resolution as those with which the systems were trained. This gives researchers two choices if they want to identify low-resolution pictures: they can either train their systems using low-resolution images, which yields poor results in the long run, or they can add pixels, or resolution, to the images to be identified.

The latter approach, which is achieved by using so-called super-resolution algorithms, is common, but its results are mixed, says Hennings-Yeomans. A super-resolution algorithm makes assumptions about the shape of objects in an image and uses them to sharpen object boundaries. While the results may look impressive to the human eye, they don’t accord well with the types of patterns that face-recognition systems are trained to look for. “Super-resolution will give you an interpolated image that looks better,” says Hennings-Yeomans, “but it will have distortions like noise or artificial [features].”

Together with B. Vijaya Kumar, a professor of electrical and computer engineering at Carnegie Mellon, and Simon Baker of Microsoft Research, Hennings-Yeomans has tested an approach that improves upon face-recognition systems that use standard super-resolution. Instead of applying super-resolution algorithms to an image and running the results through a face-recognition system, the researchers designed software that combines aspects of a super-resolution algorithm and the feature-extraction algorithm of a face-recognition system. To find a match for an image, the system first feeds it through this intermediary algorithm, which doesn’t reconstruct an image that looks better to the human eye, as super-resolution algorithms do. Instead, it extracts features that are specifically readable by the face-recognition system. In this way, it avoids the distortions characteristic of super-resolution algorithms used alone.
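
As a rough illustration of that idea, the sketch below continues the hypothetical PCA example above (reusing mean_face, components, faces, and extract_features) and adds a simple block-averaging model of how a camera loses resolution. Instead of first building a sharper picture, it solves directly for the feature coefficients that best explain the low-resolution pixels. The camera model, the ridge regularizer, and every name here are assumptions for illustration; the published method is more sophisticated.

# Continues the earlier sketch; the block-averaging camera model and the
# ridge regularizer are illustrative assumptions, not the published method.
import numpy as np

def downsample_matrix(height, width, factor):
    """Linear operator that block-averages a (height, width) image by `factor`."""
    h_lo, w_lo = height // factor, width // factor
    D = np.zeros((h_lo * w_lo, height * width))
    for i in range(h_lo):
        for j in range(w_lo):
            for di in range(factor):
                for dj in range(factor):
                    row = i * w_lo + j
                    col = (i * factor + di) * width + (j * factor + dj)
                    D[row, col] = 1.0 / factor**2
    return D

def features_from_lowres(y_lowres, D, mean_face, components, ridge=1e-3):
    """Find feature coefficients c so that downsampling the reconstruction
    mean_face + components.T @ c best explains the low-resolution pixels."""
    A = D @ components.T                       # maps feature space to low-res pixels
    b = y_lowres.reshape(-1) - D @ mean_face
    c = np.linalg.solve(A.T @ A + ridge * np.eye(A.shape[1]), A.T @ b)
    return c                                   # directly comparable to gallery features

# Example: match an 8 x 8 probe against the gallery of training faces.
D = downsample_matrix(32, 32, 4)
probe_lowres = (D @ faces[3].reshape(-1)).reshape(8, 8)
c = features_from_lowres(probe_lowres, D, mean_face, components)
gallery = np.array([extract_features(f, mean_face, components) for f in faces])
best_match = int(np.argmin(np.linalg.norm(gallery - c, axis=1)))   # index of the closest gallery face

The point of the exercise is the one the article makes: the fitted coefficients are never turned into a nicer-looking picture; they are only features the recognizer can compare. With real face images and a well-chosen feature space, the closest gallery entry would ideally be the same person; with the random stand-in data here the match is merely illustrative.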

In prior work, the researchers showed that the intermediary algorithm improved face-matching results when finding matches for a single picture. In a paper being presented at the IEEE International Conference on Biometrics: Theory, Systems, and Applications later this month, the researchers show that the system works even better, in some cases, when multiple images or frames, even from different cameras, are used.

The approach shows promise, says Pawan Sinha, a professor of brain and cognitive sciences at MIT. The problem of low-resolution images and video “is undoubtedly important and has not been adequately tackled by any of the commercial face-recognition systems that I know of,” he says. “Overall, I like the work.”

Ultimately, says Hennings-Yeomans, super-resolution algorithms still need to be improved, but he doesn’t think it would take too much work to apply his group’s approach to, say, a Web tool that searches YouTube videos. “You’re going to see face-recognition systems for image retrieval,” he says. “You’ll Google not by using text queries, but by giving an image.”


A bright idea: Bacteria that are genetically engineered to glow a specific color in response to a particular chemical help researchers spot contaminants more quickly and cheaply than traditional tests do. In this image, magnified 1,000 times, bacteria that normally glow pink glow green when polyaromatic hydrocarbons are present.
Credit: Olivier Binggeli and Robin Tecon, University of Lausanne

Color-coded bacteria light the way to oil spills at sea

By Jocelyn Rice, September 17, 2008, MIT Technology Review – Last spring, on a research vessel cruising through the North Sea, Swiss scientists examined tiny vials of bacteria mixed with seawater for hints of fluorescent light. By analyzing how brightly the bacteria glowed, and with which colors, they were able to diagnose and characterize the early aftermath of an oil spill.

“We were actually very happy that we could do this, and that it turned out so well,” says Jan Van der Meer, an environmental microbiologist at the University of Lausanne, in Switzerland. He announced his team’s results last week at the Society for General Microbiology’s autumn meeting in Dublin.

Living biosensors like these bacteria, which are engineered to glow a particular color in response to a given chemical, have graced petri dishes in research laboratories for decades. But only recently have they been put to practical use, as scientists adapt and deploy them to test for environmental contaminants. Sensor bacteria give faster and cheaper, if somewhat less precise, results than traditional chemical tests do, and they may prove increasingly important in detecting pollutants in seawater, groundwater, and foodstuffs.

In preparation for their research expedition, Van der Meer and his team created three different strains of bacteria, each tailored to sense a particular kind of toxic chemical that leaches into seawater from spilled oil. They began with different strains of bacteria that naturally feast upon these chemicals, each of which releases specialized enzymes when it comes into contact with its chemical of choice. By hooking up the gene for a fluorescent or bioluminescent protein to the cellular machinery that makes those enzymes, the scientists effectively created a living light switch: whenever the chemical was present, the bacteria would glow.

For each class of toxic chemical, Van der Meer used a different color protein, so that he could easily determine which chemicals were present based on the wavelength of emitted light. And whenever possible, he transferred the entire switch mechanism into another strain of bacteria more suited to a highly controlled lab life than its exotic, oil-eating cousins.

The research team, working in concert with several other European labs, obtained permission from the Dutch government to create a small, artificial oil spill in the waters of the North Sea. They sampled seawater at various time points after the spill, using a luminometer to measure whether sensor bacteria added to each sample had detected the corresponding chemical. Unlike traditional chemical analyses, which can take weeks and require large, expensive instruments, the biosensor test could be performed on site in a matter of minutes.
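
The readout logic lends itself to a very small piece of software. The sketch below is a hypothetical illustration of it in Python: each emission channel is tied to one chemical class, and a channel counts as a detection when the sample reads several times brighter than a blank seawater control. The specific wavelengths, chemical classes, and threshold are assumptions for illustration, not values from the study.

# Hypothetical readout sketch: the channel wavelengths, chemical classes, and
# detection threshold below are illustrative assumptions, not the study's values.
# Emission channel (nm) -> chemical class sensed by that reporter strain.
REPORTER_CHANNELS = {
    510: "alkanes",
    560: "monoaromatics (e.g., toluene)",
    610: "polyaromatic hydrocarbons",
}

def detect_spill(sample_reading, blank_reading, fold_threshold=3.0):
    """Return chemical classes whose channel is at least `fold_threshold` times
    brighter in the seawater sample than in the blank control."""
    detected = {}
    for wavelength, chemical in REPORTER_CHANNELS.items():
        fold = sample_reading[wavelength] / max(blank_reading[wavelength], 1e-9)
        if fold >= fold_threshold:
            detected[chemical] = round(fold, 1)   # rough relative signal, not a concentration
    return detected

# Example: luminometer readings in relative light units.
blank  = {510: 120, 560: 95, 610: 80}
sample = {510: 2100, 560: 110, 610: 760}
print(detect_spill(sample, blank))   # {'alkanes': 17.5, 'polyaromatic hydrocarbons': 9.5}

As Van der Meer notes, the bacteria give a fast, rough overview of which chemical classes are present; exact concentrations still call for conventional chemical analysis.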

“Analytical methods can potentially take a long time and a lot of processing,” says Ruth Richardson, a bioenvironmental engineer at Cornell University. “It certainly isn’t something you can do remotely.”

Van der Meer adds that bacterial sensing, which is inexpensive compared with chemical methods, could be particularly useful for routine monitoring. “The extreme simplicity of this is that the heart of the sensor is the bacterial cell, and that the cell is a multiplying entity,” says Van der Meer. “It’s extremely simple to reproduce them, and then you have enough for thousands of tests.”

Catching an oil leak in its earliest stages is critical for directing appropriate cleanup efforts, says Van der Meer. A spill may not leave a visible trace, in the form of tar, until long after its most toxic effects have come and gone. By allowing for quick and easy detection of spills very soon after they occur, biosensor bacteria may make possible an earlier, more effective intervention.

Chemical testing will still likely be necessary, however. The bacterial sensors can give a rough estimate of the relative amounts of each chemical class, but only rigorous chemical analysis can determine exactly how much of each substance is present. “We tried to develop this method to be relatively quick, and to give you an overview,” says Van der Meer, adding that biosensors could perhaps identify areas where more-extensive testing is warranted.

Van der Meer ultimately hopes to incorporate the glowing bacteria into buoy-based devices, which would continuously monitor seawater for hints of an oil spill and relay pertinent information back to a laboratory. His group is developing microfluidic systems that could maintain a constant, contained population of sensor bacteria to periodically test the waters.

Such a device would be subject to the vagaries of living organisms: its usefulness would be entirely dependent on whether the bacteria were alive and thriving. A negative reading could mean that no toxins are present, but it could also mean that the bacteria have died. “If they’re not healthy,” says Richardson, “the system is broken.” Deploying living sensors also raises the risk of releasing genetically altered organisms into the environment. In this case, the chemical-sensing bacteria are theoretically harmless and unlikely to survive long in the harsh open environment.

Beyond detecting oil spills, Van der Meer’s group has developed and tested a bacterial strain that detects arsenic in rice. Other potential applications include testing for pollutants in soil and groundwater, and for antibiotics in meat and milk. But for now, his vision for the future of biosensor bacteria remains largely aquatic.

“Why not have a robotic fish that swims through the water,” he speculates, “and if it detects something, it could send out a signal by GPS? Technically, I think these things are possible.”

By Lisa LaMotta, September 17, 2008, FORBES.COM – Ranbaxy Laboratories responded with shock Wednesday to the U.S. Food and Drug Administration’s actions to limit the importation of its drugs into the U.S.

“Ranbaxy is very disappointed in the action FDA has taken. The company has responded to each concern FDA has raised during the past two years and had thought that progress was being made,” the company said.

The U.S. Food and Drug Administration warned the American public on Tuesday that two factories run by India-based generic drug-maker Ranbaxy Laboratories did not meet safety and contamination standards for manufacturing drugs.

The regulatory agency has been in discussions with Ranbaxy Laboratories for several months concerning perceived deficiencies exhibited at the facilities in Dewas and Paonta Sahib. After much discussion and what it deemed inadequate responses from the drugmaker, the FDA decided to issue the warning and to encourage the confiscation of products coming from the two facilities into the United States.

Ranbaxy added that it had “just received the warning letters that FDA has issued and has not had the opportunity to review those concerns that FDA has determined are unresolved.”

Ranbaxy is one of the world’s largest providers of generic drugs, a multibillion-dollar market that has become of increasing interest to large U.S. pharmaceutical companies as their blockbuster compounds lose patent protection.

Ranbaxy makes about 30 different generic drugs for the U.S. market at the two facilities. The factories allegedly were not taking the proper precautions to prevent cross-contamination between products and to assure adequate sterilization procedures. The FDA said that it does not expect shortages of any of the drugs due to the confiscation, and has determined that other suppliers should be able to meet market demand for the products. Ranbaxy has three facilities in New York and New Jersey under the name Ohm Laboratories where 59 other drugs are produced. The FDA said there were no problems with any of Ranbaxy’s other sites.

One product, Ganciclovir Sodium, is not on the confiscation list because it is only produced by Ranbaxy and not by any of its competitors. Ganciclovir Sodium is the active ingredient in an antiviral medication that treats retinitis, the inflammation of the retina in the eye. The FDA said it will conduct additional oversight over the shipments of this medication.

“This is a preventive action taken to protect the quality of the drugs used each day by millions of Americans by assuring that the process used to make these drugs adheres to the FDA standards of quality manufacturing,” said Dr. Doug Throckmorton, deputy director of the FDA’s Center for Drug Evaluation and Research.

The FDA assured the public that it has no evidence that any harm has been caused by any drugs that have come from the two facilities, and it recommends that patients continue taking their medications and consult their doctor before making any changes.

Shares of Ranbaxy were down 26.80 rupees (58 cents), or 6.6%, to close at 379.10 rupees ($8.14), in Mumbai on Wednesday after the announcement.