The New York Times, February 20, 2009, by Roni Caryn Rabin — One of the largest clinical trials to compare stent therapy with traditional heart bypass surgery in patients with severe heart disease has found that those receiving stents were not at higher risk for having a heart attack or dying and were less likely to suffer strokes.

But patients receiving stents were more likely to need some sort of additional treatment, the study found.

Although the study’s authors concluded that coronary artery bypass graft surgery, or C.A.B.G., remains the gold standard treatment for patients with severe coronary artery disease, the new report paints a complex and nuanced picture of the pros and cons of each therapy. Stenting, the insertion of tiny metal “scaffolds” designed to keep arteries open, may be a good option for some patients with severe disease who have traditionally been referred to bypass surgery, the authors said.

The study was published online Tuesday in The New England Journal of Medicine.

“This gives patients more information about the choices they have as they make their own decisions about what therapy they would like to be treated with,” said Dr. David R. Holmes, professor of medicine at the Mayo Graduate School of Medicine and one author of the multicenter study.

“In the past, we really only had data saying that C.A.B.G. was really the only thing that could be done,” he added. “Now we know patients can be offered stenting.”

The data also provide additional details about the advantages and disadvantages of each approach, Dr. Holmes said. The study is one of the largest international multicenter controlled trials to randomly assign patients to either bypass surgery or percutaneous coronary intervention with a drug-eluting stent, which is coated with a medication designed to keep the treated artery from renarrowing.

The 1,800 patients were treated at 85 medical centers in the United States and in Europe, and all had severe, untreated, three-vessel disease or left main coronary artery disease. But the participants were mostly men, and they were tracked for just a year.

Patients were randomly assigned to undergo either C.A.B.G. or stenting, and then monitored for adverse events including deaths, strokes, heart attacks or repeat revascularization procedures.

Over all, the stent patients had a higher risk of adverse outcomes, with 17.8 percent suffering an adverse outcome, compared with 12.4 percent for C.A.B.G. patients.

The two groups had similar risks for deaths and heart attacks, but C.A.B.G. patients were more likely to have strokes, with 2.2 percent suffering a stroke compared with 0.6 percent of stent patients.

Stent patients were more likely to need repeat procedures, the researchers found. Some 13.5 percent needed repeat revascularization, compared with 5.9 percent of bypass patients.

C.A.B.G. was determined to be a better treatment over all, because the guidelines of the clinical trial called for evaluating the risk of all adverse events combined. But some experts questioned whether an increased risk for an additional procedure, associated with stenting, should carry the same weight as the increased risk of stroke, associated with C.A.B.G.
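
One way to see why the weighting matters: the trial counted every adverse event equally in its composite endpoint. The short Python sketch below recomputes a composite score using the stroke and repeat-revascularization rates quoted above; the death/heart-attack figure is a placeholder (the article gives no number, only "similar"), and the alternative weights are purely hypothetical, chosen only to illustrate how down-weighting repeat procedures narrows the gap between the two arms.

```python
def weighted_composite(event_rates, weights):
    """Return a weighted sum of per-event rates (in percent)."""
    return sum(weights[event] * rate for event, rate in event_rates.items())

# Stroke and repeat-revascularization rates are from the article; the death/
# heart-attack rate was reported only as "similar", so one placeholder value
# is used for both arms (it cancels out of the comparison).
similar_death_mi = 5.0  # placeholder, not a figure from the study
arms = {
    "stent": {"death_or_mi": similar_death_mi, "stroke": 0.6, "repeat_revasc": 13.5},
    "cabg":  {"death_or_mi": similar_death_mi, "stroke": 2.2, "repeat_revasc": 5.9},
}

equal_weights = {"death_or_mi": 1.0, "stroke": 1.0, "repeat_revasc": 1.0}
hypothetical  = {"death_or_mi": 1.0, "stroke": 1.0, "repeat_revasc": 0.25}  # assumed weights

for label, weights in [("equal weights", equal_weights),
                       ("repeat procedures down-weighted", hypothetical)]:
    scores = {arm: round(weighted_composite(rates, weights), 1)
              for arm, rates in arms.items()}
    print(label, scores)
```

Under equal weights the stent arm scores worse, mirroring the trial's composite result; with the hypothetical down-weighting, the two arms come out nearly even, which is essentially the objection Dr. Hillis raises below.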

“Having another procedure is not optimal, but it’s a heck of a lot better than dying or having a stroke,” said Dr. L. David Hillis, chairman of the department of medicine at University of Texas Medical School in San Antonio, who co-wrote an editorial accompanying the study.

“What they’re telling us is that these procedures are similar in many respects,” he added. “For individual patients, one is often better than the other. For a patient who can have either one, there are pluses or minuses to each one.”

For about two-thirds of the patients, bypass surgery was preferable, a subsequent analysis found, while for about one-third of the patients, stenting was preferable, said Dr. Patrick W. Serruys, the principal investigator of the study and a professor of medicine at Erasmus University Medical Center in Rotterdam.

“For the most severe left main coronary artery and three-vessel disease,” he said, “surgery is probably most appropriate.”

The study was supported by Boston Scientific, which manufactures drug-eluting stents.

Jeffrey Coolidge/Getty Images. In the quest to cut spiraling health care costs, what happens to the doctor-patient bond?

The New York Times, Tara Parker-Pope — As the medical system struggles with spiraling costs, one solution is pay-for-performance. Instead of giving doctors flat fees, insurers pay doctors based on whether they have met quality goals — such as helping patients get their diabetes or blood sugar under control.

But inherent in pay-for-performance systems is a push to reduce costs. That, asks Dr. Pauline Chen, in her latest Doctor and Patient column, raises questions about how pay-for-performance will affect the doctor-patient relationship.

Dr. Chen recently spoke with a colleague about his new pay-for-performance contract.

“I do worry about how this will affect my relationship with patients,” he said. “If my patient comes in with a headache and wants a CAT scan, but I don’t order it because I think it’s not medically indicated, will that patient think I’m just trying to save money?”

Unfortunately, as Dr. Chen reports, very little has been done to explore the impact pay-for-performance can have on patient trust. What do you think of your doctor’s pay being tied to performance?

…………………………………………………………………………………………………….

Are Insurance Companies Really Interested in Doctors’ & Their Patients’ Outcomes?

The New York Times, February 20, 2009, by Pauline W. Chen MD — During medical school, I learned about randomized clinical trials. Every experimental drug went through three types of clinical trials before approval. There were Phase I clinical trials which tested for toxicity and dosing. Phase II trials then examined optimal dosing and efficacy. Finally, Phase III trials compared the efficacy of the new drug with the current “gold standard” treatment.

I’m now learning that health care policy doesn’t always undergo the same kind of rigorous study.

I met up recently with a friend who is a primary care physician. His practice has just signed a contract with the state’s largest insurer that reimburses not according to the traditional fee-for-service, which pays doctors a set price for each visit, test or procedure they do, but according to a newer standard known as “pay-for-performance.” The insurance company will give his practice a budget for each patient; the doctors in the practice can earn more by cutting costs and by meeting certain quality goals, like controlling blood sugar or high blood pressure in patients.
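
For readers unfamiliar with the mechanics, here is a rough sketch of how such a contract differs from fee-for-service. All of the dollar figures, the quality threshold and the bonus rate below are hypothetical; they are not taken from the actual contract described above.

```python
def fee_for_service(visits, fee_per_visit):
    """Traditional model: practice revenue scales with the number of visits,
    tests and procedures performed."""
    return visits * fee_per_visit

def pay_for_performance(budget_per_patient, actual_cost_per_patient,
                        n_patients, quality_score, bonus_rate=0.10):
    """Toy pay-for-performance model: the practice keeps savings relative to a
    per-patient budget and earns a bonus when quality goals are met."""
    savings = max(0.0, budget_per_patient - actual_cost_per_patient) * n_patients
    bonus = bonus_rate * budget_per_patient * n_patients if quality_score >= 0.80 else 0.0
    return savings + bonus

# Hypothetical numbers for illustration only
print(fee_for_service(visits=4000, fee_per_visit=90))            # 360000
print(pay_for_performance(budget_per_patient=450,
                          actual_cost_per_patient=420,
                          n_patients=1000, quality_score=0.85))  # 75000
```

The point of the toy model is only that, under the second scheme, income depends on spending less and meeting quality targets rather than on doing more, which is exactly the tension this column explores.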

I asked my friend if he was happy about the new contract.

“I guess so,” he replied with some hesitation. “I’m not sure how else we are going to stop spiraling health care costs.” But then he added, “I do worry about how this will affect my relationship with patients. If my patient comes in with a headache and wants a CAT scan, but I don’t order it because I think it’s not medically indicated, will that patient think I’m just trying to save money?”

I thought about his concerns, and at first could not see any downside to linking quality to financial incentives. When doctors are working on a fee-for-service plan, there’s just not that much incentive for them to take time to promote healthy living or strong doctor-patient relationships; the payment system rewards high turnover and pits doctors against the clock.

I knew that other industries, like business corporations and education, had successfully used pay-for-performance to improve quality. And health care quality, which is a secondary concern with fee-for-service incentives, could definitely use a boost. A 2003 New England Journal of Medicine study showed that only about half of patients received the standard, nationally recognized care for preventive health issues and acute and chronic conditions.

I felt, too, that my own area of training, liver transplantation, could benefit from linking financial incentives to quality, not quantity. A surgeon must, for example, weigh the relative importance of multiple, sometimes counterbalancing factors when deciding to accept a donor liver for a waiting patient. Will the donor liver be too big or too small for the patient? Is the donor liver cirrhotic or too fatty or too old? Is the recipient too sick to receive this particular organ or able to accept the risk of waiting for the next one?

These are vital questions, because they help liver transplant surgeons avoid some of the worst complications of transplantation — body-wide infections, bleeding so profuse that staff cannot change the bed sheets quickly enough, and a coma so deep that it can only end in death.

Yet every liver transplant surgeon has seen these devastating complications. And as multiple studies have shown, despite a surgeon’s best predictions, from 5 to 10 percent of all transplanted livers will not work for reasons that none of us know or understand.

Nonetheless, in a fee-for-service payment system that reimburses hospitals anywhere from $400,000 to $500,000 per liver transplant, it’s hard not to suspect less-than-savory intentions from any surgeon or hospital with high transplant numbers and consistently more complications than the accepted norms. Last fall, in a piece titled “Doing a Volume Business in Liver Transplants,” The Wall Street Journal reported on one such case involving a surgeon hired six years ago by the University of Pittsburgh Medical Center. Allegations against this surgeon and the medical center include using questionable donor livers in relatively healthy patients in order to increase the number of transplants performed.

I could not help but think that if the medical center’s reimbursement had been based on pay-for-performance — and had been subject to the inherent quality checks and transparency of such a plan — perhaps fewer patients and families would have been so terribly affected for so long. And perhaps the disastrous repercussions on trust between potential liver transplant patients and their surgeons in the wake of this case would have been averted.

Still, I wondered how pay-for-performance might affect the doctor-patient relationship in daily interactions, as my primary care doctor friend mentioned. He is one of the best clinicians I know, a devoted patient advocate.

So I began searching for clinical trials on pay-for-performance plans. In this era of evidence-based medicine, I was certain I would find plenty of well-designed studies focusing on cost, quality and the doctor-patient relationship.

What I found was this: waves of new pay-for-performance payment plans across the country, a relatively modest number of articles on the subject and, most disturbingly, very few high quality studies on efficacy. Looking for a few good studies, it turned out, was like searching for a needle in a massive haystack of social experimentation.

But what an impressive haystack. Since early this decade, an ever-increasing number of employer groups and health plans have chosen to adopt pay-for-performance initiatives. For example, more than half of the private sector health maintenance organizations (H.M.O.’s) now have pay-for-performance programs. California adopted a voluntary pay-for-performance program in 2003 and now has the largest such program in the country, involving 225 participating physician organizations, some 35,000 physicians and over 6 million state residents. And in the Deficit Reduction Act of 2005, Congress mandated that the Centers for Medicare and Medicaid Services adopt a pay-for-performance plan into Medicare.

The details of these myriad plans vary. Reimbursements and bonuses can be based on the work of, and thus are paid to, individual doctors, groups of doctors, or groups of providers who are part of a “medical home.” Some plans, such as HealthPartners in Minnesota, refuse to pay for so-called “never events,” rare and preventable complications like giving a mismatched blood transfusion, making a major medication error or operating on the wrong body part. Other plans transfer the responsibility for traditional insurance risks such as unknown genetics, unpredictable events or simply bad luck away from insurance companies and over to doctors. Still others, such as the Prometheus Payment model now being tested in four communities across the United States, attempt to make doctors responsible for only the risks they can control with high quality care by adjusting payments to reflect that risk alone.

But in this profusion of programs, I found it nearly impossible to find the kind of randomized control study I have come to trust when evaluating experimental drugs or even new devices or surgical therapy. And the few studies that have been published are only mixed or guardedly positive in their conclusions regarding pay-for-performance plans. Only one study has focused on cost-effectiveness.

Even more concerning, however, are the unintended consequences that some of these studies have uncovered. Some pay-for-performance plans have resulted in increased administrative costs. Certain physicians and hospitals have found ways to “work” the system, avoiding patients who require more costly care or who skew quality records and exaggerating the severity of patient status in order to document the kind of dramatic improvement that might result in a bonus.

And none of the studies focused on the effect of pay-for-performance on the relationship between patients and their doctors.

In other words, we are continuing to charge ahead with pay-for-performance plans without stopping to look at what we’ve already done. And what we’ve already done may or may not be as promising as we believe or would like to hope.

Wondering if I had missed something, I called Dr. Laura A. Petersen, lead author of the most recent review of studies on pay-for-performance plans and chief of the section of health services research at Baylor College of Medicine in Houston.

Dr. Petersen had had similar concerns when she first began sifting through clinical studies for her review. “Pay-for-performance was being implemented everywhere, so I thought that there had to be a lot of evidence,” she told me. “But I was shocked. There was only this tiny group of studies. I called everyone I knew, contacted people around the country; but there just was really nothing. I found it fascinating that a widespread policy intervention like this could spread like wildfire on the basis of no evidence.”

She mentioned the appeal of the idea behind pay-for-performance as a possible reason. “We are having runaway costs because the incentive is for individual doctors to ramp up their volume to increase income,” she said. “Pay-for-performance is a really interesting policy intervention because it’s an attempt to switch to incentives based on quality rather than on volume and intensity of services. We want to be able to pay people for good quality.”

I asked Dr. Petersen about some of the unintended negative consequences, like the avoidance of certain patients and increased administrative costs. “I think the answer is to try to design pay-for-performance schemes where those negative consequences are minimized or accounted for,” she said. “And all the efforts of the current administration to implement electronic medical records have the potential to help with administrative costs and to make this more doable.”

I mentioned my primary care physician friend and his concerns. Did she know about the effects of pay-for-performance on the doctor-patient relationship? Could it pit one against the other?

“There’s potential for that if the programs aren’t designed well and if we don’t think through all the incentives properly,” she said. “But if we align incentives, if we change from being volume-driven to somehow paying for patients being satisfied, we can ameliorate the relationship. I hope that by getting off this volume treadmill, we will ultimately do more for the doctor-patient relationship.”

“Actually,” she then deadpanned, “it’s worse than a treadmill. It’s like one of those gerbil wheels.”

I asked Dr. Petersen about her current research. Last year with funding from the National Institutes of Health and the Veterans’ Administration, she started a 20-month study looking at pay-for-performance in 12 different Veterans’ Administration hospitals across the country. She and her co-investigators plan to examine the effects on primary care physicians, clinical staff, administrative staff, hospital leaders and patients. They hope to improve understanding of the relationships among incentives, costs, quality and the doctor-patient relationship.

“My passion is health care quality, improving and figuring out ways we can structure and get high quality,” Dr. Petersen said. “Even in the best hospitals, you find that things happen that shouldn’t. And it is not that people are bad or dumb, but that the system is not ensuring the best care. I’m interested in those forcing functions that make it almost impossible to do the wrong thing.”

She added, “I am pretty optimistic right now. I think our current crisis of payment is going to stimulate action.”

And, it appears, the kind of long overdue research we need in order to make the best choices as doctors and patients.

Target Health Inc. is pleased to have Ferring Pharmaceuticals as a client.

For this approved product we…………………

European Commission grants Ferring Pharmaceuticals approval of FIRMAGON® (degarelix) for treatment of prostate cancer

New gonadotrophin-releasing hormone (GnRH) receptor antagonist demonstrates rapid, long-term suppression of testosterone

Saint-Prex, Switzerland, 19 February 2009 – Ferring Pharmaceuticals announced today that it has received marketing authorisation from the European Commission for FIRMAGON® (degarelix), a new GnRH receptor antagonist indicated for patients with advanced, hormone-dependent prostate cancer.

In Phase III studies, degarelix produced a significant reduction in testosterone levels[1][2] within three days in more than 96% of study patients.[3] Testosterone plays a major role in the growth and spread of prostate cancer cells.

The data show that degarelix provided an extremely fast effect on testosterone levels, close to the immediate effect achieved with surgery (orchidectomy).[2][3]

“We are delighted with the approval of FIRMAGON® (degarelix), which demonstrated in clinical trials both an immediate onset of action and a profound long-term suppression of testosterone and PSA” commented Dr. Pascal Danglas, Executive Vice President Clinical & Product Development at Ferring Pharmaceuticals. “We will work with local authorities to ensure the launch of FIRMAGON to patients across European Union countries as soon as possible.”

The European Commission approval for FIRMAGON® (degarelix) follows approval from the FDA in the US in December 2008.

– ENDS –

Notes to Editors:

About Prostate Cancer
Prostate cancer is the most common form of cancer in men, and the second leading cause of cancer death. In the US, an estimated 218,890 new cases and 27,050 deaths occurred in 2007. In 2005, 127,490 new cases were diagnosed in the five biggest European countries and 18,310 in Japan.

About degarelix
Degarelix is a GnRH receptor antagonist indicated for advanced prostate cancer. Ferring plans to communicate a range of information about the treatment at the European Association of Urology (EAU) congress in Stockholm in March.

About Ferring
Ferring is a Swiss-headquartered, research driven, speciality biopharmaceutical group active in global markets. The company identifies, develops and markets innovative products in the areas of urology, endocrinology, gastroenterology, gynaecology, and fertility. In recent years Ferring has expanded beyond its traditional European base and now has offices in over 45 countries. To learn more about Ferring or our products please visit www.ferring.com.

……………………………………………………………………………………

FDA Approves Drug for Patients with Advanced Prostate Cancer

The U.S. Food and Drug Administration recently approved the injectable drug degarelix, the first new drug in several years for prostate cancer.

Degarelix is intended to treat patients with advanced prostate cancer. It belongs to a class of agents called gonadotropin releasing hormone (GnRH) receptor inhibitors. These agents slow the growth and progression of prostate cancer by suppressing testosterone, which plays an important role in the continued growth of prostate cancer.

Hormonal treatments for prostate cancer may cause an initial surge in testosterone production before lowering testosterone levels. This initial stimulation of the hormone receptors may temporarily prompt tumor growth rather than inhibiting it. Degarelix doesn’t do this.

“Prostate cancer is the second leading cause of cancer death among men in the United States and there is an ongoing need for additional treatment options for these patients,” said Richard Pazdur, M.D., director of the Office of Oncology Drug Products, Center for Drug Evaluation and Research, FDA.

Prostate cancer is one of the most commonly diagnosed cancers in the United States. In 2004, the most recent year for which statistics are currently available, nearly 190,000 men were diagnosed with prostate cancer and 29,000 men died from the cancer.

Several treatment options exist for different stages of prostate cancer including observation, prostatectomy (surgical removal of the prostate gland), radiation therapy, chemotherapy, and hormone therapy with agents that affect GnRH receptors.

The efficacy of degarelix was established in a clinical trial in which patients with prostate cancer received either degarelix or leuprolide, a drug currently used for hormone therapy in treating advanced prostate cancer. Degarelix treatment did not cause the temporary increase in testosterone that is seen with some other drugs that affect GnRH receptors.

In fact, nearly all of the patients on either drug had suppression of testosterone to levels seen with surgical removal of the testes.

The most frequently reported adverse reactions in the clinical study included injection site reactions (pain, redness, and swelling), hot flashes, increased weight, fatigue, and increases in some liver enzymes.

Degarelix is manufactured for Ferring Pharmaceuticals Inc., Parsippany, N.J., by Rentschler Biotechnologie GmbH, Laupheim, Germany.

#



The New York Times, February 23, 2009, by Donald G. McNeil Jr — In a discovery that could radically change how the world fights flu, researchers have engineered antibodies that protect against many strains of influenza, including even the 1918 Spanish flu and the H5N1 bird flu.

The discovery, experts said, could lead to the development of a flu vaccine that would not have to be changed yearly. And the antibodies already developed can be injected as a treatment, targeting the virus in ways that drugs like Tamiflu do not. Clinical trials to prove they are safe in humans could begin within three years, a researcher estimated.

“This is a really good study,” said Dr. Anthony S. Fauci, the head of the National Institute of Allergy and Infectious Diseases, who was not part of the study. “It’s not yet at the point of practicality, but the concept is really quite interesting.”

The work is so promising that his institute will offer the researchers grants and access to its ferrets, which can catch human flu.

The study, done by researchers from Harvard Medical School, the Centers for Disease Control and Prevention and the Burnham Institute for Medical Research, was published Sunday in the journal Nature Structural & Molecular Biology.

In an accompanying editorial, Dr. Peter Palese, a leading flu researcher from Mount Sinai Medical School, said the researchers apparently found “a viral Achilles heel.”

Dr. Anne Moscona, a flu specialist at Cornell University’s medical school, called it “a big advance in itself, and one that shows what’s possible for other rapidly evolving pathogens.”

But Henry L. Niman, a biochemist who tracks flu mutations, was skeptical, arguing that human immune systems would have long ago eliminated flu if the virus were as vulnerable in one spot as this discovery suggests. Also, he noted, protecting the mice in the study took huge doses of antibodies, which today are expensive and cumbersome to infuse.

One team leader, Dr. Wayne A. Marasco of Harvard, said the team began by screening a library of 27 billion antibodies he had created, looking for ones that target the hemagglutinin “spikes” on the shells of flu viruses.

Antibodies are proteins normally produced by white blood cells that attach to invaders, either neutralizing them by clumping on, or tagging them so white cells can find and engulf them. Today, they can be built in the laboratory and then “farmed” in plants, driving prices down, Dr. Marasco said.

The flu virus uses the lollipop-shaped hemagglutinin spike to invade nose and lung cells. There are 16 known types of spikes, H1 through H16.

The spike’s tip mutates constantly, which is why flu shots have to be reformulated each year. But the team found a way to expose the spike’s neck, which apparently does not mutate, and picked antibodies that clamp onto it. Once its neck is clamped, a spike can still penetrate a human cell, but it cannot unfold to inject the genetic instructions that hijack the cell’s machinery to make more virus.

The team then turned the antibodies into full-length immunoglobulins and tested them in mice.

Immunoglobulin — antibodies derived from the blood of survivors of an infection — has a long history in medicine. As early as the 1890s, doctors injected blood from sheep that had survived diphtheria to save a girl dying of it. But there can be dangerous side effects, including severe immune reactions or accidental infection with other viruses.

The mice in the antibody experiments were injected both before and after getting doses of H5N1. In 80 percent of cases, they were protected. The team then showed that their new antibodies could protect against both H1 and H5 viruses. Most of this season’s flu is H1, and experts still fear that the lethal H5N1 bird flu might start a human pandemic.

However, each year’s other seasonal flu outbreaks are usually caused by H3 or B strains, so flu shots must also contain those. But there is always at least a partial mismatch because vaccine makers must pick from among strains circulating in February since it takes months to make supplies. By the time the flu returns in November, its “lollipop heads” have often mutated.

Therefore, other antibodies that clamp to and disable H3 and B will have to be found before doctors even think of designing a once-a-lifetime flu shot. It is also unclear how long an antibody-producing vaccine will offer protection; new antibodies themselves fade out of the blood after about three weeks.

Dr. Marasco said his team had already found a stable neck in the H3 “and we’re going after that one too.” They have not tried with B strains yet.

To make a vaccine work, researchers also need a way to teach the immune system to expose the spike’s neck for attack. It is hidden by the fat lollipop head, whose rapid mutations may act as a decoy, attracting the immune system.

As a treatment for people already infected with flu, Dr. Marasco said, the antibodies are “ready to go, no additional engineering needed.”

They will, of course, need the safety testing required by the Food and Drug Administration.

Anti-flu drugs like Tamiflu, Relenza and rimantadine do not target the hemagglutinin spike at all.

Tamiflu and Relenza inhibit neuraminidase (the “N” in flu names like H5N1), which has been described as a helicopter blade on the outside of the virus that chops up the receptors on the outside of the infected cell so the new virus being made inside can escape. Rimantadine is believed to attack a layer of the virus’s shell.


New Berkeley Lab Report Shows Significant Historical Reductions in the Installed Costs of Solar Photovoltaic Systems in the U.S.

Berkeley, CA — A new study on the installed costs of solar photovoltaic (PV) power systems in the U.S. shows that the average cost of these systems declined significantly from 1998 to 2007, but remained relatively flat during the last two years of this period.

Researchers at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) who conducted the study say that the overall decline in the installed cost of solar PV systems is mostly the result of decreases in nonmodule costs, such as the cost of labor, marketing, overhead, inverters, and the balance of systems.

“This suggests that state and local PV deployment programs — which likely have a greater impact on nonmodule costs than on module prices — have been at least somewhat successful in spurring cost reductions,” states the report, which was written by Ryan Wiser, Galen Barbose, and Carla Peterman of Berkeley Lab’s Environmental Energy Technologies Division.

Installations of solar PV systems have grown at a rapid rate in the U.S., and governments have offered various incentives to expand the solar market.

“A goal of government incentive programs is to help drive the cost of PV systems lower. One purpose of this study is to provide reliable information about the costs of installed systems over time,” says Wiser.

The study examined 37,000 grid-connected PV systems installed between 1998 and 2007 in 12 states. It found that average installed costs, in terms of real 2007 dollars per installed watt, declined from $10.50 per watt in 1998 to $7.60 per watt in 2007, equivalent to an average annual reduction of 30 cents per watt or 3.5 percent per year in real dollars.
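
For those who want to check the arithmetic, a minimal sketch reproduces the reported averages from the start and end values quoted in the study:

```python
# Average installed costs (real 2007 $/W) at the start and end of the study period
start_cost, end_cost = 10.50, 7.60
years = 2007 - 1998  # nine year-over-year steps

linear_drop_per_year = (start_cost - end_cost) / years          # ~0.32 $/W/yr (reported as roughly 30 cents)
compound_decline = 1 - (end_cost / start_cost) ** (1 / years)   # ~3.5% per year

print(f"average drop: ${linear_drop_per_year:.2f}/W per year")
print(f"compound annual decline: {compound_decline:.1%}")
```

The small gap between the computed per-watt drop and the quoted 30 cents reflects rounding of the endpoint values.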

The researchers found that the reduction in nonmodule costs was responsible for most of the overall decline in costs. According to the report, this trend, along with a reduction in the number of higher-cost “outlier” installations, suggests that state and local PV-deployment policies have achieved some success in fostering competition within the industry and in spurring improvements in the cost structure and efficiency of the delivery infrastructure for solar power.

Costs differ by region and type of system

Other information about differences in costs by region and by installation type emerged from the study. The cost reduction over time was largest for smaller PV systems, such as those used to power individual households. Also, installed costs show significant economies of scale. Systems completed in 2006 or 2007 that were less than two kilowatts in size averaged $9.00 per watt, while systems larger than 750 kilowatts averaged $6.80 per watt.

Installed costs were also found to vary widely across states. Among systems completed in 2006 or 2007 and less than 10 kilowatts, average costs range from a low of $7.60 per watt in Arizona, followed by California and New Jersey, which had average installed costs of $8.10 per watt and $8.40 per watt respectively, to a high of $10.60 per watt in Maryland. Based on these data, and on installed-cost data from the sizable Japanese and German PV markets, the authors suggest that PV costs can be driven lower through sizable deployment programs.

The study also found that the new construction market offers cost advantages for residential PV systems. Among small residential PV systems in California completed in 2006 or 2007, those systems installed in residential new construction cost 60 cents per watt less than comparably-sized systems installed as retrofit applications.

Cash incentives declined

The study also found that direct cash incentives provided by state and local PV incentive programs declined over the 1998-2007 study period. Other sources of incentives, however, have become more significant, including federal investment tax credits (ITCs). As a result of the increase in the federal ITC for commercial systems in 2006, total after-tax incentives for commercial PV were $3.90 per watt in 2007, an all-time high based on the data analyzed in the report. Total after-tax incentives for residential systems, on the other hand, averaged $3.10 per watt in 2007, their lowest level since 2001.

Because incentives for residential PV systems declined over this period, the net installed cost of residential PV has remained relatively flat since 2001. At the same time, the net installed cost of commercial PV has dropped — it was $3.90 per watt in 2007, compared to $5.90 per watt in 2001, a drop of 32 percent, thanks in large part to the federal ITC.
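
The "net installed cost" discussed here is simply the gross installed cost minus the after-tax incentives. A one-line sketch, using a round illustrative gross cost (the report's exact inputs are not quoted in this summary):

```python
def net_installed_cost(gross_cost_per_watt, after_tax_incentives_per_watt):
    """Net cost to the system owner in $/W: gross installed cost minus incentives."""
    return gross_cost_per_watt - after_tax_incentives_per_watt

# Illustrative: a commercial system with an assumed $7.80/W gross cost and the
# $3.90/W after-tax incentive cited above nets out at $3.90/W.
print(net_installed_cost(7.80, 3.90))
```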

“Tracking the Sun: The Installed Cost of Photovoltaics in the U.S. from 1998–2007,” by Ryan Wiser, Galen Barbose, and Carla Peterman, may be downloaded from http://eetd.lbl.gov/ea/emp/reports/lbnl-1516e.pdf. The research was supported by funding from the U.S. Department of Energy’s Office of Energy Efficiency and Renewable Energy (Solar Energy Technologies Program) and Office of Electricity Delivery and Energy Reliability (Permitting, Siting and Analysis Division), and by the Clean Energy States Alliance.

Berkeley Lab is a U.S. Department of Energy national laboratory located in Berkeley, California. It conducts unclassified scientific research and is managed by the University of California.

……………..
Solar Energy

The sun has produced energy for billions of years. Solar energy is the sun’s rays (solar radiation) that reach the earth.

Solar energy can be converted into other forms of energy, such as heat and electricity. In the 1830s, the British astronomer John Herschel used a solar thermal collector box (a device that absorbs sunlight to collect heat) to cook food during an expedition to Africa.

Solar energy can be converted to thermal (or heat) energy and used to:

· Heat water – for use in homes, buildings, or swimming pools.

· Heat spaces – inside greenhouses, homes, and other buildings.

Solar energy can be converted to electricity in two ways:

· Photovoltaic (PV) devices or “solar cells” – change sunlight directly into electricity. PV systems are often used in remote locations that are not connected to the electric grid. They are also used to power watches, calculators, and lighted road signs.

· Solar Power Plants – indirectly generate electricity when the heat from solar thermal collectors is used to heat a fluid which produces steam that is used to power a generator. Of the 15 known solar electric generating units operating in the United States at the end of 2006, 10 are in California and 5 in Arizona. No statistics are being collected on solar plants that produce less than 1 megawatt of electricity, so there may be smaller solar plants in a number of other states.

The major disadvantages of solar energy are:

· The amount of sunlight that arrives at the earth’s surface is not constant. It depends on location, time of day, time of year, and weather conditions.

· Because the sun doesn’t deliver that much energy to any one place at any one time, a large surface area is required to collect the energy at a useful rate.

Photovoltaic Energy

Photovoltaic energy is the conversion of sunlight into electricity. A photovoltaic cell, commonly called a solar cell or PV, is the technology used to convert solar energy directly into electrical power. A photovoltaic cell is a nonmechanical device usually made from silicon alloys.

Sunlight is composed of photons, or particles of solar energy. These photons contain various amounts of energy corresponding to the different wavelengths of the solar spectrum. When photons strike a photovoltaic cell, they may be reflected, pass right through, or be absorbed. Only the absorbed photons provide energy to generate electricity. When enough sunlight (energy) is absorbed by the material (a semiconductor), electrons are dislodged from the material’s atoms. Special treatment of the material surface during manufacturing makes the front surface of the cell more receptive to free electrons, so the electrons naturally migrate to the surface.

When the electrons leave their position, holes are formed. When many electrons, each carrying a negative charge, travel toward the front surface of the cell, the resulting imbalance of charge between the cell’s front and back surfaces creates a voltage potential like the negative and positive terminals of a battery. When the two surfaces are connected through an external load, electricity flows.

The photovoltaic cell is the basic building block of a photovoltaic system. Individual cells can vary in size from about 1 centimeter (1/2 inch) to about 10 centimeters (4 inches) across. However, one cell only produces 1 or 2 watts, which isn’t enough power for most applications. To increase power output, cells are electrically connected into a packaged weather-tight module. Modules can be further connected to form an array. The term array refers to the entire generating plant, whether it is made up of one or several thousand modules. The number of modules connected together in an array depends on the amount of power output needed.
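
As a rough sizing sketch of the cell-module-array hierarchy described above (the 36-cell module size is a common but assumed figure, and 1.5 W per cell is simply the midpoint of the 1-2 W range given):

```python
import math

def modules_needed(target_watts, watts_per_cell=1.5, cells_per_module=36):
    """Estimate how many modules are needed to reach a target power output."""
    watts_per_module = watts_per_cell * cells_per_module  # ~54 W per module here
    return math.ceil(target_watts / watts_per_module)

# A small 2 kW array under these assumptions
print(modules_needed(2000))
```

Under these assumptions, a 2 kW array takes about 38 modules, which is well over a thousand individual cells.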

The performance of a photovoltaic array is dependent upon sunlight. Climate conditions (e.g., clouds, fog) have a significant effect on the amount of solar energy received by a photovoltaic array and, in turn, its performance. Most current technology photovoltaic modules are about 10 percent efficient at converting sunlight into electricity. Further research is being conducted to raise this efficiency to 20 percent.
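
A quick sketch of what the roughly 10 percent conversion efficiency means in practice. The 1,000 W/m² figure is the standard peak-sun test assumption, not a number from the text.

```python
def pv_output_watts(area_m2, efficiency=0.10, irradiance_w_per_m2=1000.0):
    """Electrical output of a PV array for a given area, efficiency and irradiance."""
    return area_m2 * efficiency * irradiance_w_per_m2

print(pv_output_watts(10.0))        # 10 m^2 at 10% efficiency in full sun: ~1,000 W
print(pv_output_watts(10.0, 0.20))  # the same area at the 20% research target: ~2,000 W
```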

The photovoltaic cell was discovered in 1954 by Bell Telephone researchers examining the sensitivity of a properly prepared silicon wafer to sunlight. Beginning in the late 1950s, photovoltaic cells were used to power U.S. space satellites. The success of PV in space generated commercial applications for this technology. The simplest photovoltaic systems power many of the small calculators and wrist watches used every day. More complicated systems provide electricity to pump water, power communications equipment, and even provide electricity to our homes.

Some advantages of photovoltaic systems are:

1. Conversion from sunlight to electricity is direct, so that bulky mechanical generator systems are unnecessary.

2. PV arrays can be installed quickly and in any size required or allowed.

3. The environmental impact is minimal, requiring no water for system cooling and generating no by-products.

Photovoltaic cells, like batteries, generate direct current (DC) which is generally used for small loads (electronic equipment). When DC from photovoltaic cells is used for commercial applications or sold to electric utilities using the electric grid, it must be converted to alternating current (AC) using inverters, solid state devices that convert DC power to AC.

Historically, PV has been used at remote sites to provide electricity. In the future, PV arrays may be located at sites that are also connected to the electric grid, enhancing the reliability of the distribution system.

Solar Thermal Heat

Solar thermal (heat) energy is often used for heating swimming pools, heating water used in homes, and space heating of buildings. Solar space heating systems can be classified as passive or active.

Passive space heating is what happens to your car on a hot summer day. In buildings, the air is circulated past a solar heat surface (or surfaces) and through the building by convection (i.e., less dense warm air tends to rise while denser, cooler air moves downward). No mechanical equipment is needed for passive solar heating.

Active heating systems require a collector to absorb and collect solar radiation. Fans or pumps are used to circulate the heated air or heat absorbing fluid. Active systems often include some type of energy storage system.

Solar collectors can be either nonconcentrating or concentrating.

1) Nonconcentrating collectors – have a collector area (i.e., the area that intercepts the solar radiation) that is the same as the absorber area (i.e., the area absorbing the radiation). Flat-plate collectors are the most common and are used when temperatures below about 200°F are sufficient, such as for space heating.

2) Concentrating collectors – where the area intercepting the solar radiation is greater, sometimes hundreds of times greater, than the absorber area (see the short sketch just below).
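
A short sketch of the distinction: the concentration ratio is simply the collector (aperture) area divided by the absorber area, so a flat-plate collector sits near 1 while a concentrating collector can reach into the hundreds. The example areas are illustrative.

```python
def concentration_ratio(collector_area_m2, absorber_area_m2):
    """Ratio of the area intercepting sunlight to the area absorbing it."""
    return collector_area_m2 / absorber_area_m2

print(concentration_ratio(2.0, 2.0))    # flat-plate (nonconcentrating): 1.0
print(concentration_ratio(50.0, 0.25))  # concentrating collector: 200.0
```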

Solar Thermal Power Plants

Solar thermal power plants use the sun’s rays to heat a fluid, from which heat transfer systems may be used to produce steam. The steam, in turn, is converted into mechanical energy in a turbine and into electricity from a conventional generator coupled to the turbine. Solar thermal power generation works essentially the same as generation from fossil fuels except that instead of using steam produced from the combustion of fossil fuels, the steam is produced by the heat collected from sunlight. Solar thermal technologies use concentrator systems due to the high temperatures needed to heat the fluid. The three main types of solar thermal power systems are parabolic troughs, solar dish/engine systems, and power towers.

Solar Energy And The Environment

Solar energy is free, and its supplies are unlimited. Using solar energy produces no air or water pollution but does have some indirect impacts on the environment. For example, manufacturing the photovoltaic cells used to convert sunlight into electricity consumes silicon and produces some waste products. In addition, large solar thermal farms can also harm desert ecosystems if not properly managed.

………………………………………………………………………………………………………….

Easy, Clean and Cheap

Spanish Trade Group Visits German PV Installation. German manufacturers produce the most PV systems, followed by the Japanese, with increasing competition from cheaper Chinese PV systems.

IBM is running an ad that shows two workers sinking into quicksand. The article below is a perfect example of what that really means.

Gov. Arnold Schwarzenegger announces that he will sign the newly approved state …

Another High-Profile SAP Failure: State Of California (SAP)

BusinessInsider.com, February 21, 2009, by Eric Krangel — Yet another black eye for German software giant SAP (SAP). Last month, jeweler Shane Co. blamed difficulties trying to get SAP software running as partly responsible for the company’s bankruptcy. Now the State of California — already in dire financial straits — is giving up on its SAP implementation after sinking $25 million into the project and seeing nothing out of it.

CIOinsight: Schwarzenegger has long eyed the payroll as a way to stave off financial shortfalls until a budget reaches his desk for signing. Last July, the governor attempted to cut back payroll by temporarily dropping workers down to minimum wage until a budget deal was hammered out by lawmakers.

This plan was obstructed not by political wrangling or labor lobbyists — it was held up by absolutely ancient IT infrastructure and a beleaguered project to upgrade to SAP.

The California State Controller’s Office (SCO) is currently running on an old COBOL-based payroll system that dates back to the 1970s. The SCO began an initiative in 2006 to update this system, with initial estimates targeting full implementation by 2009. State Controller John Chiang said that the systems needed to carry out Schwarzenegger’s minimum wage plan would not be available for six months. That was last summer.

Just this January, the SCO announced that it was canceling its contract with the consulting company in charge of the project and had not estimated when it would hire another firm to carry on. That was $25 million into an estimated $69 million project.

Earlier this month, SAP co-CEO Leo Apotheker angrily denied there were problems with SAP’s software, and blamed consulting firms like IBM (IBM) and Accenture (ACN) for sending people who knew nothing about the software to clients as experts on SAP. Leo also has said SAP’s new cloud-like package, SAP Business Suite 7, should be easier to implement.

Plenty of blame to go around, we think. At least in the California bomb, the consulting firm involved was BearingPoint, which yesterday filed for bankruptcy. Accenture has already moved to acquire part of BearingPoint’s operations.

Forbes.com, February 20, 2009, by George Gilder — EZchip CEO, Eli Fruchter is a kindly, tough, humble, inspiring man, with sandy hair above a broad, blunt weather-beaten face. You would not recognize him as a miracle worker. He does not make grand claims. He is not an agile debater on a panel. He is not full of artful analogies and elegant prose or riveting details or luminous power points. He does not have avian or angular features or dark hair or other prototypical Israeli characteristics. He tells his story lucidly but without embellishment in careful Hebrew accented English. Take it or leave it.

If you believe his glib rivals with their claims of chips that will excel and eclipse his own, he does not seem to care. He knows what the customers say, what he has accomplished, and he seems indifferent to the hyperbole of others. Wall Street, the journals and magazines, the tech blogs, they will learn in time. His competitors will learn in time. In an unimpressive gray glass-clad multistory building by a pitted road on a hill in Yokneam, far from the centers of Israeli enterprise, with no architectural distinction or flourish, Fruchter has performed a miracle. But Eli does not preen as a miracle worker. He is embarrassed by prophetic language. When I informed him of my plans for this book, describing Israeli entrepreneurship and technology as the consummation of the Jewish science of the Twentieth Century, he balked, waving me aside.

“I am not important,” he said.

Then he asked me about Einstein.

“You are going to put me in a book with Einstein?” the entrepreneur of EZchip asked incredulously.

“Yes,” I said, “Einstein, and Bohr, and Pauli, and Von Neumann, and Feynman. All those guys were just preparing the way for you, Eli, providing the theoretical foundations for network processors that can compute at the speed of fiber optic communications, at the speed of light.”

Eli peered back at me full of skepticism.

I tried to explain.

Science finds its test in engineering. If scientific theories cannot be incorporated in machines that work, they are a form of theology. Throughout most of the history of science, the pioneers actually built the devices that proved their theories. Faraday, Hertz, Michelson, all those guys described by George Johnson in his book on the great experiments, they all proved their mastery of their ideas by creating the apparatus that tested them and embodied them. If you cannot build something that incorporates your idea, you cannot fully understand it and you probably cannot build on it.

The regnant physicists today are mostly mythopoeic metaphysicians reifying math: string theorists exploring dozens of mythical dimensions; exponents of mythical infinite parallel universes, with anthropic principles to explain us and our ideas as mere random happenings; nanotech evangelists who imagine a mythical reduction of all engineering to pure physics and its replication; cosmologists with their black holes and myriad particle types and unfathomable dark matter and dark energy dominating the universe. These guys cannot begin to construct anything that proves their increasingly fantastic theories.

You, Eli, take the best work of twentieth century science—quantum chemistry and solid state physics and optical engineering and computer science and information theory—and make it into an entirely new device: a network processor that can apply programmable computer intelligence to millions of frames of data and packets of information traveling at rates of a hundred billion bits a second. That’s one hundred gigabits a second. Equivalent to 100,000 400-page books, with each page or so scanned and addressed and sorted, and all sent in one second to the right destination.

When conditions on the network change, the network processor can be reprogrammed. With as many as eight “touches” of the data per packet, classifying the packet, looking up addresses, finding the best route—parsing, searching, resolving, modifying, resetting the packet headers. That means trillions of programmable operations per second. You make the most efficient computers on the planet.
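
A back-of-the-envelope sketch of where a figure like "trillions of operations per second" can come from. The minimum packet size and the per-touch instruction count below are assumptions chosen only for illustration; they are not figures from Gilder or from EZchip.

```python
def packets_per_second(line_rate_bps, packet_bits):
    """Worst-case packet arrival rate for a given line rate and packet size."""
    return line_rate_bps / packet_bits

line_rate = 100e9                              # 100 gigabits per second
pps = packets_per_second(line_rate, 64 * 8)    # 64-byte minimum-size packets (assumed)
touches_per_s = pps * 8                        # the "eight touches" per packet above
ops_per_s = touches_per_s * 1000               # ~1,000 instructions per touch (assumed)

print(f"{pps:.2e} packets/s, {touches_per_s:.2e} touches/s, {ops_per_s:.2e} ops/s")
```

With those assumptions the arithmetic lands around 1.6 trillion instructions per second, which is the order of magnitude the passage gestures at.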

You build things that the world has never seen before. In fact, even so, you and your team—Gil Koren, Amir Ayal, Ran Giladi and the rest—may well not fully understand everything that is going on in your machines. No one has fully fathomed the quantum mysteries underlying modern electronics. But I believe that von Neumann was the paramount figure of Twentieth Century science because he was the link between the pioneers of quantum theory and the machines that won World War II, that prevailed in the cold war, and that enabled the emergence of a global economy tied together and fructified by the Internet. The entire saga is one fabric. And you are the current embodiment of this great tradition, mainly a Jewish tradition.

Von Neumann was the man who outlined the path between the new quantum science of materials and the new computer science of information. “You, Eli, are a leading figure in the next generation of computer technology: the creation of parallel processors made of sand that can link at fiberspeed with the new optical communications technology.”

“But there are thousands of entrepreneurs in Israel more important than me,” Fruchter insisted. “Thousands. You should speak to Zohar Zisapel. He and his brother created RAD in 1981, put the first modem on a single chip and then started five companies that emerged from RAD. Today they have 2,500 employees in Israel and hundreds more around the world.” I looked it up. They do signaling for high speed trains, electronic messaging to motorists seeking free parking spaces, communications for remote surgery across the globe.

“Zohar’s an Israeli entrepreneur,” says Eli. “He laid the foundations of Israeli technology. EZchip is still just a small company…”

I first heard Eli describe his plans for a network processor at a forum in Atlanta called InterOp 2000. At the time, EZchip was one of at least fifty companies pursuing the technology. Linking the network to computers around the world, it was the most challenging target for the next generation of microchips. A network processor has to function at the speed of a network increasingly made of fiber optic lines. For most of the decade of the 1990s, fiber optics—light transmitted down glass threads—grew in bandwidth and capability at a pace at least three times as fast as the pace of advance of electronics. Called Moore’s Law after Gordon Moore of Intel, named and researched by Carver Mead of Caltech, this law of the pace of advance of computing capabilities ordains that computer technology doubles in cost effectiveness every 18 months to two years. During the first decade of the 21st century, fiber optic technology has been advancing nearly twice as fast as Moore’s law.
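
To see what the different doubling rates imply, a small sketch compares growth over a decade. The 18-month and 2-year periods are the Moore's-law range named above; the shorter period is a stand-in assumption for fiber optics advancing roughly twice as fast, not a measured figure.

```python
def capability_multiplier(years, doubling_period_years):
    """Growth factor implied by a fixed doubling period."""
    return 2 ** (years / doubling_period_years)

print(round(capability_multiplier(10, 2.0)))   # doubling every 2 years:   ~32x
print(round(capability_multiplier(10, 1.5)))   # doubling every 18 months: ~101x
print(round(capability_multiplier(10, 0.75)))  # assumed fiber pace, twice as fast: ~10,321x
```

The widening gap between those curves is exactly what a network processor has to bridge, as the next paragraph explains.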

The network processor has to bridge this gap. Just as the Pentium is the microprocessor that makes the PC work, the network processor has become the device that makes the next generation Internet work—that does the crucial routing and switching at network nodes on the net.

I first encountered Eli Fruchter not in person but in a series of tapes. Intrigued by the promise of network processors, I ordered them from a major technology conference called InterOp that was holding a network processor forum. At InterOp, engineers have to prove that their technologies can interoperate with other networking technologies and standards. In communications, systems must interoperate or they are useless. Interoperation between systems that are rapidly changing requires devices that are programmable. In the 1990s the fastest changing technology in the world was the network. In those days, nearly anyone who was anybody in networking showed up at InterOp and made his interoperability pitch.

With the market tumbling and my own company in chaos, I had missed InterOp 1999. But I was interested in network processors and InterOp hosted a two-day forum on the subject. I ordered the tapes and drove around the Berkshires where I live, listening to all the vendors of new network processor designs.

At the time, the leaders were Motorola, Intel, IBM, Trimedia (now part of Alcatel), Cisco, Lucent, Texas Instruments, AMCC, Broadcom, and Agere. You name your technology champion, they were investing billions of dollars apiece in network processor projects. The largest electronics and computer companies in the world put more than $20 billion into network processor design and development over the last decade.

I listened to seven or eight hours of tapes, and I decided that the most plausible, scalable design for a network processor was presented by Eli Fruchter of EZchip. Alone among the presenters, Fruchter seemed to grasp that network processors would have to scale faster than computer technology. Ordinary arrays of parallel RISC (reduced instruction set computing) microprocessors might perform the role for a couple years. But within five years they would be obsolete. Fruchter saw that a new architecture would be needed.

This meant moving beyond the von Neumann computer architecture that had dominated computing since the beginning. The von Neumann model was based on the successive step-by-step movement of data and instructions from memory to a processor. With scores of homogeneous RISC machines requiring data and instructions at once, the performance of the system depended on the bandwidth to memory. It seemed to me that none of the existing network processors had addressed this challenge in a way that would scale with the constant acceleration of dataflows across the Internet.

Most router and switch companies, such as Cisco, Lucent, Juniper, Alcatel and others, had contrived specialized machines. These application specific devices could perform network processing at tremendous speeds for particular protocols and datatypes. But these processors could not change with the changes in the network. They could not adapt. They could not scale. Every time the network changed, the network processing function would have to change. That, it seemed to me, would not be a successful solution.

Nonetheless, at InterOp, Motorola, Intel, AMCC, Agere, Bay Microsystems, and IBM among others were presenting programmable processors. Their devices were available in the market and were being produced in volume in workable programmable silicon devices.

As I said at the time, Eli Fruchter had developed a leading edge device, alright, and it met the challenge of changeability and scalability, because it was inscribed upon the easily adaptable and programmable substrate of PowerPoint slides.

Now I reminded Eli: “You had at least 50 competitors and no customers, and no product, and you invested maybe a hundredth of the money that they did.

“Now, just eight years later, you have more than 50 customers, six industry leading products, and virtually no serious competitors. All the large players—Intel, Motorola, IBM—have essentially left the field. That is stunning. How did you do it, Eli?”

Target Health Inc. is pleased to announce the release of Target Document Version 1.3. This 21 CFR Part 11 compliant version has a bulletin board for discussions about documents and allows the administrator to recover deleted documents. Other features include eSignatures, full user and group management, document check-in and check-out, folder templates, full audit trail of changes, version controls etc. Several CROs have already licensed Target Document and are experiencing significant cost savings. Target Health is managing a 90-center clinical trial with a fully paperless Trial Master File (TMF). We have no physical TMF binders and no paper copies of documents. For an analysis of cost savings, please go to our website and read Target Document – Cost Analysis and ROI.

For more information about Target Health or any of its software tools for paperless clinical trials, please contact Dr. Jules T. Mitchel (212-681-2100 ext 0) or Ms. Joyce Hays. Target Health’s software tools are designed to partner with both CROs and Sponsors.

A Humanitarian Use Device (HUD) is a medical device intended to treat or diagnose a disease or condition affecting fewer than 4,000 people per year in the US. To receive approval, a company must demonstrate the safety and probable benefit of the device. The HUD provision of the regulation provides an incentive for the development of devices for use in the treatment or diagnosis of diseases affecting small populations. To obtain approval for a HUD, a humanitarian device exemption (HDE) application is submitted to FDA. An HDE is similar in both form and content to a premarket approval (PMA) application, but is exempt from the effectiveness requirements of a PMA. An HDE application is not required to contain the results of scientifically valid clinical investigations demonstrating that the device is effective for its intended purpose. The application, however, must contain sufficient information for FDA to determine that the device does not pose an unreasonable or significant risk of illness or injury, and that the probable benefit to health outweighs the risk of injury or illness from its use, taking into account the probable risks and benefits of currently available devices or alternative forms of treatment. Additionally, the applicant must demonstrate that no comparable devices are available to treat or diagnose the disease or condition, and that they could not otherwise bring the device to market.

The FDA has approved a humanitarian device exemption for the first implantable device that delivers intermittent electrical therapy deep within the brain to suppress the symptoms associated with severe OCD. Obsessive-Compulsive Disorder (OCD) is an anxiety disorder and is characterized by recurrent, unwanted thoughts (obsessions) and/or repetitive behaviors (compulsions). Repetitive behaviors such as handwashing, counting, checking, or cleaning are often performed with the hope of preventing obsessive thoughts or making them go away. Performing these actions provides only temporary relief, but not performing them markedly increases anxiety. The Reclaim system uses a small electrical generator known as a pulse generator to create electrical stimulation that blocks abnormal nerve signals in the brain. This small battery-powered device is implanted near the abdomen or the collar bone and connected to four electrodes implanted in the brain through an insulated electric wire known as the lead. Two device systems may be implanted to stimulate both sides of the brain or one device may be implanted with two lead outputs. The approval of the humanitarian device exemption was based on a review of data from 26 patients with severe treatment-resistant OCD who were treated with the device at four sites. On average, patients had a 40% reduction in their symptoms after 12 months of therapy. While all patients reported adverse events, the majority of these events ended after an adjustment was made in the amount of electrical stimulation. Patients who require electroconvulsive shock therapy should not be implanted with the Reclaim device. Other patients who should not use the device include persons who will undergo magnetic resonance imaging (MRI) or deep tissue heat treatment known as diathermy.

For more information about our expertise in Regulatory Affairs, please contact Dr. Jules T. Mitchel or Dr. Glen Park.

A new theory has been proposed of how Alzheimer’s disease kills 1) ___ cells. It is hypothesized that a chemical mechanism that naturally prunes away unwanted brain cells during early brain development somehow gets hijacked in Alzheimer’s disease. Amyloid precursor protein (APP), which is a key building block in the brain 2) ___ found in Alzheimer’s disease, is the driving force behind this process. It is known that APP is a negative factor in Alzheimer’s, but it has been unclear how it participates. One theory is that somehow this self-destruction mechanism gets switched on in Alzheimer’s disease and starts killing healthy brain cells. The finding provides new clues about potential treatments for Alzheimer’s, a disease that gets worse over time and is marked by 3) ___ loss, confusion and eventually the inability to care for oneself. The researchers made the Alzheimer’s connection by accident while studying a process of nerve cell self-destruction that occurs as a part of normal embryonic development. When the brain and spinal cord are being formed, excess nerve cells are generated that have to be removed to refine the pattern of nerve cell 4) ___. They discovered a biochemical mechanism that activates when nerve cells are pruned back. A key component of this self-destruction program was none other than APP, this bad actor in Alzheimer’s disease. In Alzheimer’s disease, 5) ___ snip APP into beta amyloid pieces, which form the basis of beta amyloid plaques that are thought to be toxic. Many companies are working on drugs to remove beta amyloid from the brain, but so far have had little success in altering the course of the disease. The current theory suggests targeting APP and other components of this mechanism may help. In tests on human 6) ___ cells, the team showed it was able to interfere with the mechanism and block the degeneration of nerve cells. The researchers now plan to see if they can disrupt this mechanism in adult brain cells. The key question is, if we interfere with it, can we halt the progression of the disease? There is no cure for Alzheimer’s, and current drugs merely delay symptoms. Alzheimer’s disease affects 5.2 million people in the US and 26 million globally.

ANSWERS

1) brain; 2) plaques; 3) memory; 4) connections; 5) enzymes; 6) embryonic
