Nano power: An electron-microscope image of 40-nanometer-wide rod-shaped particles that make up a promising battery material.
Credit: Arumugam Manthiram, University of Texas at Austin

Researchers show a low-cost route to making materials for advanced batteries in electric cars and hybrids

By Kevin Bullis, July 30, 2008, MIT Technology Review – A new way to make advanced lithium-ion battery materials addresses one of their chief remaining problems: cost. Arumugam Manthiram, a professor of materials engineering at the University of Texas at Austin, has demonstrated that a microwave-based method for making lithium iron phosphate takes less time and uses lower temperatures than conventional methods, which could translate into lower costs.

Lithium iron phosphate is an alternative to the lithium cobalt oxide used in most lithium-ion batteries in laptop computers. It promises to be much cheaper because it uses iron rather than the much more expensive metal cobalt. Although it stores less energy than some other lithium-ion materials, lithium iron phosphate is safer and can be made in ways that allow the material to deliver large bursts of power, properties that make it particularly useful in hybrid vehicles.

Indeed, lithium iron phosphate has become one of the hottest new battery materials. For example, A123 Systems, a startup based in Watertown, MA, that has developed one form of the material, has raised more than $148 million and commercialized batteries for rechargeable power tools that can outperform conventional plug-in tools. The material is also one of the types being tested for a new electric car from General Motors.

But it has proved difficult and expensive to manufacture lithium iron phosphate batteries, which cuts into potential cost savings over more conventional lithium-ion batteries. Typically, the materials are made in a process that takes hours and requires temperatures as high as 700 °C.

Manthiram’s method involves mixing commercially available chemicals–lithium hydroxide, iron acetate, and phosphoric acid–in a solvent, and then subjecting this mixture to microwaves for five minutes, which heats the chemicals to about 300 °C. The process forms rod-shaped particles of lithium iron phosphate. The highest-performing particles are about 100 nanometers long and 25 nanometers wide. The small size is needed to allow lithium ions to move quickly in and out of the particles during charging and discharging of the battery.
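A rough back-of-the-envelope sketch (not from the article) shows why particle size matters so much: the characteristic time for lithium to diffuse into or out of a particle scales roughly as t ≈ L²/D, where L is the diffusion length and D the solid-state lithium diffusivity. The Python snippet below uses an assumed, order-of-magnitude diffusivity purely for illustration.

```python
# Illustrative estimate of lithium diffusion time vs. particle size (t ~ L^2 / D).
# The diffusivity is an assumed order-of-magnitude placeholder, not a measured value.

D_CM2_PER_S = 1e-14  # assumed Li diffusivity in lithium iron phosphate, cm^2/s

def diffusion_time_s(length_nm, diffusivity=D_CM2_PER_S):
    """Characteristic diffusion time t ~ L^2 / D for a diffusion length given in nm."""
    length_cm = length_nm * 1e-7  # 1 nm = 1e-7 cm
    return length_cm ** 2 / diffusivity

# Compare a ~25-nanometer-wide rod (diffusion length ~12.5 nm) with coarser particles.
for size_nm in (12.5, 50.0, 500.0):
    t = diffusion_time_s(size_nm)
    print(f"L = {size_nm:6.1f} nm  ->  t ~ {t:9.0f} s  ({t / 60:7.1f} min)")
```

Because the time grows with the square of the size, shrinking particles from hundreds of nanometers to tens of nanometers shortens charge and discharge times by orders of magnitude.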

To improve the performance of these materials, Manthiram coated the particles with an electrically conductive polymer, which was itself treated with small amounts of a type of sulfonic acid. The coated nanoparticles were then incorporated into a small battery cell for testing. At slow rates of discharge, the materials showed an impressive capacity: at 166 milliamp hours per gram, the materials came close to the theoretical capacity of lithium iron phosphate, which is 170 milliamp hours per gram. This capacity dropped off quickly at higher discharge rates in initial tests. But Manthiram says that the new versions of the material have shown better performance.
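A quick arithmetic check on the figures quoted above (illustrative only, not an additional result):

```python
# Capacity utilization implied by the reported numbers.
measured = 166.0     # mAh/g at slow discharge, as reported
theoretical = 170.0  # mAh/g, theoretical capacity of lithium iron phosphate

print(f"Utilization: {measured / theoretical:.1%} of theoretical capacity")  # ~97.6%
```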

It’s still too early to say how much the new approach will reduce costs in the manufacturing of lithium iron phosphate batteries. The method’s low temperatures can reduce energy demands, and the fact that it is fast can lead to higher production from the same amount of equipment–both of which can make manufacturing more economical. But the cost of the conductive polymer and manufacturing equipment also needs to be figured in, and the process must be demonstrated at large scales. The process will also need to compete with other promising experimental manufacturing methods, says Stanley Whittingham, a professor of chemistry, materials science, and engineering at the State University of New York at Binghamton.

Manthiram has recently published advances for two other types of lithium-ion battery materials and is working with ActaCell, a startup based in Austin, TX, to commercialize the technology developed in his lab. The company, which last week announced that it has raised $5.58 million in venture funding, has already licensed some of Manthiram’s technology, but it will not say which technology until next year.

TARGET HEALTH excels in Regulatory Affairs and works closely with many of its clients performing all FDA submissions. TARGET HEALTH receives daily updates of new developments at FDA. Each week, highlights of what is going on at FDA are shared to assure that new information is expeditiously made available.

The FDA has issued a final regulation that it says makes early Phase 1 clinical drug development safe and efficient by enabling a phased approach to complying with current good manufacturing practice (CGMP) statutes and FDA investigational requirements. To facilitate this approach, the regulation exempts most Phase 1 investigational drugs from the requirements in 21 CFR Part 211. FDA will continue to exercise oversight of the manufacture of these drugs under its general statutory CGMP authority and through review of investigational new drug (IND) applications. A companion guidance recommends an approach for complying with statutory CGMP requirements, including standards for the manufacturing facility and equipment, the control of components, and testing, stability, packaging, labeling, distribution, and record keeping.

When FDA originally issued CGMP regulations for drug and biological products (21 CFR Parts 210 and 211), the agency stated that the regulations applied to all types of pharmaceutical production, but explained in the preamble that it was considering proposing regulations more appropriate for the manufacture of drugs used in investigational clinical trials. The reason is that certain requirements in Part 211 are directed at the commercial manufacture of products, such as repackaging and relabeling of drug products, rotation of stock, and maintaining separate facilities for manufacturing and packaging. These requirements may be inappropriate for the manufacture of investigational drugs used in Phase 1 clinical trials, many of which are carried out in small-scale, academic environments and typically involve fewer than 80 subjects.

The guidance, CGMP for Phase 1 Investigational Drugs, describes an approach manufacturers can use to implement manufacturing controls appropriate for the Phase 1 stage of development. It reflects the fact that the manufacturing controls needed to achieve appropriate product quality, and the extent of those controls, differ among the various phases of clinical trials. Manufacturers will continue to submit detailed information about relevant aspects of the manufacturing process as part of the IND application. FDA may inspect the manufacturing operation, suspend a clinical trial by placing it on “clinical hold,” or terminate the IND if there is evidence of inadequate quality control procedures that would compromise the safety of an investigational product.

To find the Guidance for Industry, CGMP for Phase 1 Investigational Drugs, visit: http://www.fda.gov/cder/guidance/GMP%20Phase1IND61608.pdf

To find Current Good Manufacturing Practice and Investigational New Drugs Intended for Use in Clinical Trials/Final rule: http://www.fda.gov/OHRMS/DOCKETS/98fr/oc07114.pdf.

For more information about our expertise in Regulatory Affairs, please contact Dr. Jules T. Mitchel or Dr. Glen Park.

In terms of electronic submissions to FDA, over the past 2 years Target Health has prepared and submitted 2 eINDs, 1 PMA eCopy with numerous PMA eCopy supplements, and 1 eCTD NDA with numerous supplements. All submissions were accepted by FDA. We are currently preparing 2 INDs which will be submitted electronically. There are 3 main advantages of eSubmissions: 1) they are paperless and save trees; 2) they dramatically reduce the time from the end of Phase 3 to regulatory submission; and 3) they earn money, since time to market is accelerated.

For more information about Target Health or any of our software tools for clinical research, please contact Dr. Jules T. Mitchel or Ms. Joyce Hays. Our software tools are designed to partner with both CROs and Sponsors.


Streamlining desalination: Researcher Ho Bum Park holds two samples of the chlorine-tolerant desalination membrane. The one on the left is one-tenth of a micrometer thick and is made of a porous support with a thin coating of the membrane. The blue membrane is about 50 micrometers thick.
Credit: Beverly Barrett/University of Texas at Austin

A new chlorine-tolerant material may streamline desalination processes

By Jennifer Chu, July 31, 2008, MIT Technology Review – Getting access to drinking water is a daily challenge for more than one billion people in the world. Desalination may help relieve such water-stressed populations by filtering salt from abundant seawater, and there are more than 7,000 desalination plants worldwide, 250 operating in the United States alone. However, the membranes that these plants use to filter out salt tend to break down when exposed to an essential ingredient in the process: chlorine.

Now researchers at the University of Texas at Austin (UT Austin) and Virginia Polytechnic Institute have engineered a chlorine-tolerant membrane that filters out salt just as well as many commercial membranes. The researchers say that such a membrane would eliminate expensive steps in the desalination process and eventually be used to filter salt out of seawater. The results of their study appear in the most recent issue of the journal Angewandte Chemie.

The majority of desalination plants today use polyamide membranes to effectively separate salt from seawater. Since seawater harbors a variety of organisms that can form a thick film over membranes and clog the filter, plants use chlorine to disinfect incoming water before it is sent through membranes. The problem is, these membranes degrade after continuous chlorine exposure. So the desalination industry added another step, quickly dechlorinating water after it’s been treated with chlorine and before it’s run through the membrane. Once the water has been desalinated, chlorine is added again, before the water enters the drinking-water supply.

Benny Freeman, a professor of chemical engineering at UT Austin, says that a chlorine-tolerant membrane may help significantly streamline the desalination process. Freeman and James McGrath, a professor of chemistry at Virginia Polytechnic Institute, engineered a water-filtering membrane that stands up to repeated exposure to chlorine.

The new membrane is made from polysulfone, a sulfur-containing thermoplastic that is highly resistant to chlorine. Previous researchers have attempted to design chlorine-tolerant membranes using polysulfone but have been hampered because the material is extremely hydrophobic, and doesn’t easily let water through. Scientists have tried to chemically alter the polymer’s composition by adding hydrophilic, or water-attracting, compounds. However, timing is everything, and Freeman says that when researchers add such compounds after they synthesize the polymer, “eventually, you break the backbone of the polymer chain . . . to the point where it’s not useful.”

Instead, Freeman and McGrath added two hydrophilic, charged sulfonic acid groups during the polymerization process and found that they were able to synthesize a durable and reproducible polymer. They then performed a variety of experiments to gauge the material’s ability to tolerate chlorine and filter out salt, compared with commercial membranes.

First, the team carried out salt permeability tests, measuring the amount of salt passing through a membrane in a given amount of time. The less salt found in the filtered water, the better. Freeman and McGrath found that the new membrane performed just as well as many commercial membranes in filtering salt from water with low to medium salt content. For saltier samples comparable to seawater, the team’s membrane was slightly less permeable.
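Membrane researchers usually summarize such tests with the observed salt rejection, R = 1 − C_permeate / C_feed. The sketch below is illustrative only; the feed and permeate concentrations are hypothetical placeholders rather than values from the study.

```python
# Illustrative salt-rejection calculation, R = 1 - C_permeate / C_feed.
# Concentrations are hypothetical placeholders, in mg/L of dissolved salt.

def salt_rejection(c_feed, c_permeate):
    """Fraction of salt removed by the membrane (1.0 = perfect rejection)."""
    return 1.0 - c_permeate / c_feed

samples = [
    ("brackish water", 2_000.0, 60.0),      # low-to-medium salt content
    ("seawater-like", 35_000.0, 1_400.0),   # roughly seawater salinity
]
for label, feed, permeate in samples:
    print(f"{label:15s} rejection = {salt_rejection(feed, permeate):.1%}")
```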

“We have materials that are competitive today with existing nano filtration and some of the brackish water membranes,” says Freeman. “We are now pushing the chemistry to get further into the seawater area, which is a significant market we’d like to access.”

The researchers also tested the polymer’s chlorine sensitivity. They found that, after exposure to concentrated solutions of chlorine for more than 35 hours, the new membrane suffered little change in composition, compared with commercial polyamide membranes, which were “eaten away by the chlorine.”

Currently, Freeman and his colleagues are further manipulating the polymer composition to try to tune various properties, in hopes of designing a more selective and chlorine-resistant membrane. They are also in talks with a leading manufacturer of desalination membranes, with the goal of bringing the new membrane to market.

“These membranes may represent a reasonable route to commercialization,” says Freeman. “If we’re successful, we’ll have the possibility of basically making these membranes on the same equipment that people use today.”

Eric Hoek, an assistant professor of civil and environmental engineering at the University of California, Los Angeles, works on engineering new desalination membranes at the California Nanosystems Institute. He says that the chlorine-tolerant membrane developed by Freeman’s team may be a promising alternative to today’s industrial counterparts.

“This work is among the most innovative and interesting research on membrane materials in the past decade,” says Hoek. “While the chlorine tolerance exhibited by these membranes is impressive, the basic separation performance is not yet where it needs to be for these materials to be touted as immediate replacements of commercial seawater membrane technology.”

How the New Quality Movement Is Transforming Medicine

By Abigail Zuger MD, July 29, 2008, The New York Times – There are more than 800,000 doctors in this country, more than two million nurses and several million other health care workers. Until recently no one really knew what any of them were up to. Hospital walls bulged with frenetic activity, but all the public saw were the happy successes and the occasional tragic complications.

Those days are pretty much over. From what has been called a perfect storm of disgruntled patients, legislators and medical professionals, the quality movement in health care has been born.

Thanks to its efforts, those hospital walls are slowly becoming transparent. Revealed is a world of tangled routines, many obsolescent, many downright stupid, that no one had carefully examined. The reformers are out to streamline the routines, retrain the workers and keep them permanently on display — an ant farm behind clear glass — to make sure things never get out of control again.

Their early work was invisible to the public, but even that is changing. Take, for example, the latest benchmark in transparency: on a Wednesday late last May, newspaper readers across the country could compare how local hospitals performed on two measurements of the quality of care, not by slogging through a news article but by scanning a large government-sponsored advertisement complete with graphs and a Web address (www.hospitalcompare.hhs.gov) for more details.

That is just the first installment of such data on display. Soon both hospitals and individual practitioners will be publicizing their own report cards. Insurers will be paying them for good grades, penalizing them for bad. Incentives to minimize errors, complications and inefficiency will mount. Health care will become perfectly safe, perfectly smooth, perfectly perfect.

How did this messianic movement arise and take root, and who are its prophets? These are the questions Charles Kenney valiantly tries to answer in what is the first large-scale history of the quality movement.

Mr. Kenney, a former Boston Globe reporter and editor who is a consultant for a Massachusetts health insurance company, has set himself a giant assignment. While the book is not a success — an uncritical paean to his subjects, it reads like a corporate annual report — he provides a reasonably complete and up-to-date picture of the ambition and complexity of the enterprise.

Part of the problem is that he is trying to describe a target in motion, with roots almost as tangled as the chaos it seeks to eradicate. Poor-quality health care takes a variety of forms, each attracting a different set of crusaders.

Some have taken on the big blunders — errors of misdosed medication and operations on the wrong leg. Some have tackled “complications,” like catheter infections, that were once thought to be inevitable risks of hospitalization and now seem entirely preventable.

Some have focused on the smaller inefficiencies — little details with costly consequences, like medical records that disappear just when they are most needed and laboratory results that vanish into giant black holes.

Some aim to rearrange the physician-heavy hospital hierarchy so that all health care workers, and even family members, have the opportunity to call the shots in a patient’s care.

Still others focus on getting sick people correctly cared for: tight blood-sugar control for diabetics, regular Pap smears for women, flu shots for all.

Government and industry have been sources of inspiration for these goals. Experts from NASA to Toyota have tutored health quality gurus in the basics, like needing to prevent errors rather than punish them and respecting the right of any worker to stop the assembly line when a mistake threatens.

Mr. Kenney is scornfully dismissive of unnamed physician naysayers who point out that “human beings are not cars” and shy away from health quality control. It is these doctors’ “crust of hubris,” he argues, that prevents them from seeing the merits of new algorithms. Indeed, it is hard to imagine how any sane person could fail to leap on the quality bandwagon as presented here; it is all so self-evidently fabulous.

But readers should be aware that Mr. Kenney’s story ignores a wide array of questions that have some thoughtful members of the health care world a little troubled by the quality evangelism.

What does quality care mean, for instance, in cases of hopeless illness? When the outcome of care will not be good, how should good care be redefined? Suppose patients sabotage their own care, as so many unwittingly do. Who takes the blame?

And most important, what does it mean when science impudently undercuts accepted quality benchmarks? Only this past spring, for instance, two giant trials suggested that for some diabetics, tight blood-sugar control did nothing to safeguard them against some feared complications of diabetes and might actually endanger them.

Quality is a clear goal in product development, but in health it is still a shimmering intangible. All credit to the quality mavens; they are certainly fighting the good fight, and most of them deserve every laudatory adjective in Mr. Kenney’s thesaurus.

But fortunately for us all, most of them are smart enough to realize that human beings are not cars.

The Best Practice

How the New Quality Movement Is Transforming Medicine.
By Charles Kenney. Public Affairs Books. 315 Pages. $26.95

By Joe Vanden Plas, July 28, 2008 – Electronic medical records have been designed to assist physicians, radiologists, and labs, but a partnership between two Milwaukee institutions and a medical software developer is shifting some of that focus to decision support for nurses.

The partnership of Aurora Health Care, the University of Wisconsin-Milwaukee College of Nursing, and Cerner Corp. has reached the go-live phase of an evidence-based nursing initiative. The objective is not only to improve health outcomes by reducing variation in nursing care, but also to make the nursing profession more attractive at a time of personnel shortages and, possibly, to help Aurora respond to federal action to eliminate payments for avoidable health events.

“Across the country, very few companies and very few places were able to really focus on nurses and nursing care, and yet nurses are the ones that are most involved with data and data management,” said Norma Lang, a professor and former dean of the University of Wisconsin-Milwaukee College of Nursing, and a professor in the University of Pennsylvania School of Nursing. “When you think about the amount of data that nurses have to handle today, it’s pretty awesome.”

Evidence-based nursing

According to project leaders, stakeholder alignment was not difficult to achieve because each entity stands to benefit. The UWM College of Nursing conducted most of the research into actionable evidence-based practices, which have been built into the workflows of Aurora nurses via Cerner software and could, according to Lang, serve as the basis for curriculum development.

As the technology partner, Cerner will be able to share the evidence-based findings with clients, and feed the data into a clinical data repository from which business intelligence can be extracted.

Aurora, which has a longstanding relationship with Cerner, serves as the laboratory for the project and will use the evidence-based information and business intelligence to drive continuous improvement and give its nurses more time to care for patients.

When the project began several years ago, there was already a considerable amount of evidence-based research, but it wasn’t in actionable form or in a form suitable for building software. “The thinking was that we could advance the work faster together than if we were trying to do it alone,” Lang said.

The project reached the deployment phase with a July 21 launch of evidence-based protocols at two Aurora St. Luke’s Medical Center nursing units. The protocols cover fall-risk prevention and management, medication adherence, and activity intolerance.

Upon admission, nurses conduct a bedside assessment of patients, and the assessment drives care interventions that show up on a computer screen as task lists. For example, a patient who uses a cane or has an unsteady gait is at higher risk for falls. A patient with brittle bones or on blood thinners runs the risk of serious injury or excessive bleeding from a fall. The nurses can refer to the software for evidence-based practices that help prevent falls.
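As a rough illustration of how an assessment-driven task list can work, the sketch below encodes a few fall-risk rules of the kind described above. The specific factors, weights, and interventions are hypothetical assumptions, not Aurora’s or Cerner’s actual evidence-based protocol.

```python
# Hypothetical sketch of rule-based fall-risk decision support.
# Risk factors, weights, and interventions are illustrative assumptions only.

FALL_RISK_RULES = [
    ("uses_cane_or_walker", 2, "Keep bed low and call light within reach"),
    ("unsteady_gait",       2, "Assist with ambulation; consider physical therapy consult"),
    ("on_anticoagulants",   1, "Flag bleeding risk; reinforce injury precautions"),
    ("history_of_falls",    2, "Apply fall-risk wristband; schedule hourly rounding"),
]

def assess_fall_risk(patient):
    """Return a simple additive risk score and the interventions the assessment triggers."""
    score, tasks = 0, []
    for factor, weight, intervention in FALL_RISK_RULES:
        if patient.get(factor):
            score += weight
            tasks.append(intervention)
    return score, tasks

score, tasks = assess_fall_risk({"uses_cane_or_walker": True, "on_anticoagulants": True})
print("Fall-risk score:", score)
for task in tasks:
    print(" -", task)
```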

“The whole point of documenting electronically was not just to replace paper, it was to provide information to the frontline person,” said Karlene Kerfoot, vice president and chief clinical officer for Aurora Health Care.

Business considerations

Starting this year, the Centers for Medicare and Medicaid Services will not reimburse hospitals for certain preventable medical errors, including certain types of falls, pressure ulcers, and catheter-associated urinary tract infections. The list is likely to grow each year. Several more avoidable events are under consideration for 2009.

These events represent considerable cost. In a study of people 72 and older, the average healthcare cost of a fall injury was $19,440, according to the Centers for Disease Control and Prevention.

Lang believes pay-for-performance considerations are a motivating factor. Kerfoot, however, said CMS reimbursement decisions are not necessarily a business driver for Aurora, but they could be. “We didn’t necessarily align it with pay-for-performance, but in the future we certainly can,” she said, noting that eventually CMS might not pay for any hospital-acquired complication.

In the past, Kerfoot said, nurses have entered data into EMRs but have gotten nothing out of them. Electronic records placed the additional burden of data entry on nurses and took time away from patient care.

While the jury is still out on whether the system gives nurses more time for care, nurses already see benefits. Jan Mills, a registered nurse for 25 years, said Aurora nurses have been developing care plans on computers for a while, but with this system, care plans can be mapped to the best health outcome for individual patients, and if the patient assessment changes, new alerts are fired off to provide additional decision support.

The alerts come into play throughout hospital care. “Without the technology and the alerting piece embedded into the workflow, you can’t retain a high-reliability organization,” said Ellen Harper, an RN and healthcare executive director for Cerner. “It’s not intended to remove the critical thinking skills of the clinicians; it’s to augment them.”

“Patients that are informed of their risk factors become involved in their care,” Mills said, “so you’re really partnering with that patient.”

Laura Burke, an RN and director of system nursing research and scientific support for Aurora, said the system already is helping nurses more quickly identify potential problems. “A lot of nursing is about prevention, not actually treating things,” she noted.

Once a nurse chooses an intervention, this information goes into the data repository, which already is being populated by Cerner. With the help of business intelligence software from Business Objects, that data repository eventually will produce operational information that drives continuous improvement in clinical processes.

Eventually, participants hope to learn enough to remove unnecessary, time-consuming steps in nursing workflows. “Everybody knows in the quality world that if you do it right the first time, it’s the most cost effective way to do it,” Lang noted. “So we’re interested in putting in the right steps, the right processes of what we call nursing action.”


By Tara Parker-Pope, July 29, 2008, The New York Times – A growing chorus of discontent suggests that the once-revered doctor-patient relationship is on the rocks.

The relationship is the cornerstone of the medical system — nobody can be helped if doctors and patients aren’t getting along. But increasingly, research and anecdotal reports suggest that many patients don’t trust doctors.

About one in four patients feels that their physicians sometimes expose them to unnecessary risk, according to data from a Johns Hopkins study published this year in the journal Medicine. And two recent studies show that whether patients trust a doctor strongly influences whether they take their medication.

The distrust and animosity between doctors and patients have shown up in a variety of places. In bookstores, there is now a genre of “what your doctor won’t tell you” books promising previously withheld information on everything from weight loss to heart disease.

The Internet is bristling with frustrated comments from patients. On The New York Times’s Well blog recently, a reader named Tom echoed the concerns of many about doctors. “I, as patient, say stop acting like you know everything,” he wrote. “Admit it, and we patients may stop distrusting your quick off-the-line, glib diagnosis.”

Doctors say they are not surprised. “It’s been striking to me since I went into practice how unhappy patients are and, frankly, how mistreated patients are,” said Dr. Sandeep Jauhar, director of the heart failure program at Long Island Jewish Medical Center and an occasional contributor to Science Times.

He recounted a conversation he had last week with a patient who had been transferred to his hospital. “I said, ‘So why are you here?’ He said: ‘I have no idea. They just transferred me.’

“Nobody is talking to the patients,” Dr. Jauhar went on. “Everyone is so rushed. I don’t think the doctors are bad people — they are just working in a broken system.”

The reasons for all this frustration are complex. Doctors, facing declining reimbursements and higher costs, have only minutes to spend with each patient. News reports about medical errors and drug industry influence have increased patients’ distrust. And the rise of direct-to-consumer drug advertising and medical Web sites has taught patients to research their own medical issues, making them more skeptical and inquisitive.

“Doctors used to be the only source for information on medical problems and what to do, but now our knowledge is demystified,” said Dr. Robert Lamberts, an internal medicine physician and medical blogger in Augusta, Ga. “When patients come in with preconceived ideas about what we should do, they do get perturbed at us for not listening. I do my best to explain why I do what I do, but some people are not satisfied until we do what they want.”

Others say the problem also stems from a grueling training system that removes doctors from the world patients live in.

“By the time you’re done with your training, you feel, in many ways, that you are as far as you could possibly be from the very people you’ve set out to help,” said Dr. Pauline Chen, most recently a liver transplant surgeon at the University of California, Los Angeles, and the author of “Final Exam: A Surgeon’s Reflections on Mortality” (Knopf, 2007). “We don’t even talk the same language anymore.”

Dr. David H. Newman, an emergency room physician at St. Luke’s-Roosevelt Hospital Center in Manhattan, says there is a disconnect between the way doctors and patients view medicine. Doctors are trained to diagnose disease and treat it, he said, while “patients are interested in being tended to and being listened to and being well.”

Dr. Newman, author of the new book “Hippocrates’ Shadow: Secrets from the House of Medicine” (Scribner), says studies of the placebo effect suggest that Hippocrates was right when he claimed that faith in physicians can help healing. “It adds misery and suffering to any condition to not have a source of care that you trust,” Dr. Newman said.

But these doctors say the situation is not hopeless. Patients who don’t trust their doctor should look for a new one, but they may be able to improve existing relationships by being more open and communicative.

Go to a doctor’s visit with written questions so you don’t forget to ask what’s important to you. If a doctor starts to rush out of the room, stop him or her by saying, “Doctor, I still have some questions.” Patients who are open with their doctors about their feelings and fears will often get the same level of openness in return.

“All of us, the patients and the doctors, ultimately want the same thing,” Dr. Chen said. “But we see ourselves on opposite sides of a divide. There is this sense that we’re facing off with each other and we’re not working together. It’s a tragedy.”


Disabled by a painful skin condition, Robert Clark says, “I was a basket case who couldn’t put two and two together.”
Photo Credit: By Paul Connors

By Sandra G. Boodman, July 29, 2008, The Washington Post – Even before he entered the examining room to meet his new patient, dermatologist Howard Luber was confident he knew what was wrong with the man.

The diagnosis was so obvious, Luber recalled, that his nurse suggested it after taking Robert Clark’s history and looking at the angry, encrusted rash that blanketed nearly every inch of the 64-year-old’s body except his face.

Luber’s certainty was all the more surprising because of who the patient was and what he’d endured: A physician who specialized in infectious diseases, Clark had seen numerous doctors, including three dermatologists, immunologists, internists and infectious disease experts, all of whom had been stumped by the cause of his ferocious, uncontrollable itching. He had undergone two skin biopsies and taken countless drugs, but he would still awaken with fingernails bloody from scratching his skin raw. Doctors who had treated him for more than a year couldn’t decide whether his problem was severe eczema, a rare cancer, an unusual fungal infection, an autoimmune disorder or an unspecified allergy.

“It’s pretty hard to believe,” said Luber, who called Clark’s malady “bread and butter dermatology. I don’t have a good explanation” for why his problem went undiagnosed for so long. Maybe, he suggested, doctors were focused on more severe disorders and the skin’s worsening appearance camouflaged the underlying problem. “If you’re not thinking of it, you could miss it.”

Clark, a former researcher at the National Institutes of Health who lives in the Phoenix area, has a different perspective: He didn’t attempt to second-guess his doctors. “I just acted like a patient, and that’s what got me in trouble,” he said. “I never at any point until the end of this illness suspected they didn’t know what they were doing. Dr. Luber saved my life.”

Clark’s problem, which he called “devastating” and “life-changing,” began in 2004, when he developed an itchy rash on his left side. His internist wasn’t sure what was wrong but prescribed the usual treatment for such maladies: an antihistamine, cortisone cream, various ointments for dry skin and oatmeal baths. When the rash got worse, he sent Clark to dermatologist number one, who performed a skin biopsy and then prescribed Elidel, a topical medicine used to treat eczema.

That didn’t work, nor did the other antihistamines the dermatologist prescribed. By the time Clark got to dermatologist number two he was having trouble concentrating. Some nights he donned thick ski gloves or thin white cotton ones to try to prevent his furious scratching; he often awoke with lacerated skin or to find drops of blood on his sheets.

Several months later a new symptom arose: a painful fuzzy rash on Clark’s feet that was diagnosed as a rare fungal infection. Doctors also noticed that his eosinophil count, a measurement of a type of white blood cell, was extremely high, suggesting either a rare skin cancer or an allergy. But to what? No one could say.

The rash now covered much of his body, and dermatologist number three, along with two infectious disease specialists, an immunologist and an endocrinologist, wasn’t sure what was wrong. One doctor suggested chemotherapy. Another thought the problem might be a drug reaction. A third prescribed a high dose of prednisone, a steroid Clark took for six months. It blunted the itching but led to severe pain in his hips, which was diagnosed as avascular necrosis, permanent bone damage linked to long-term use of corticosteroids.

Clark said he was so disabled by the pain and itching that he had stopped practicing; he is now retired. In an effort to give him some relief, the immunologist prescribed narcotic pain medication and insisted that Clark see dermatologist number four: Luber. Clark balked, but the immunologist insisted, so he went, after canceling an initial appointment.

Clark recalled that Luber “was in the room less than a minute when he said, ‘You will be feeling better in a few days.’ ” The dermatologist gently scraped Clark’s inflamed, leathery skin and then had him look at the slide under the microscope.

The problem was immediately obvious: The skin sample was teeming with a common parasite called scabies, a tiny mite spread by direct contact with an infected person. The eight-legged mite thrives in overcrowded conditions or among people with substandard hygiene, but it can affect anyone, according to the American Academy of Dermatology.

Outbreaks have plagued humans for more than 2,500 years and can occur in institutions such as homeless shelters, nursing homes and sometimes hospitals. Diagnosis may be delayed because scabies mimics other skin conditions and mites are difficult to see with the naked eye.

Its most characteristic symptom is itching at night so ferocious it can keep sufferers from getting any sleep. The mite burrows into the skin, laying eggs and producing toxins, causing an allergy that triggers the itching. Mites are attracted to warmth and human scent, and can live up to 24 hours on bedding.

Clark had the most severe form of scabies, called Norwegian or crusted scabies. In these cases, thousands of mites hide under skin, which becomes thickened, retarding penetration of topical medicines.

Treatment with topical medicines and, in severe cases, an anti-parasitic drug called ivermectin — Clark took both — is standard, and the residence of an infected person must be thoroughly cleaned and clothing washed in the hottest water possible. All members of a household must be treated, because the incubation period can be as long as eight weeks.

Luber, who diagnoses about six cases annually, recalled that Clark was “very surprised. I remember him saying that no one had mentioned scabies,” which would not show up on a biopsy.

Clark said that his wife turned out to have a milder case, as did the couple’s housekeeper. And as Luber predicted, Clark started to feel better within a day, although it took weeks before the itching subsided. He said he doesn’t know where he contracted the disease but suspects it might have been from a patient.

When Clark told some of the physicians who examined him what had happened, he said they were not sympathetic.

“Several told me I was an infectious disease specialist and I should have figured it out,” he recalled. “That was very unfair and made me angry. I was a basket case who couldn’t put two and two together.”


ENIGMA Molten glass being worked into an ornament. Understanding glass could lead to better products and offer headway in other scientific problems.
Credit: Mark Interrante

By Kenneth Chang, July 29, 2008, The New York Times – It is well known that panes of stained glass in old European churches are thicker at the bottom because glass is a slow-moving liquid that flows downward over centuries.

Well known, but wrong. Medieval stained glass makers were simply unable to make perfectly flat panes, and the windows were just as unevenly thick when new.

The tale contains a grain of truth about glass resembling a liquid, however. The arrangement of atoms and molecules in glass is indistinguishable from that of a liquid. But how can a liquid be as strikingly hard as glass?

“They’re the thickest and gooiest of liquids and the most disordered and structureless of rigid solids,” said Peter Harrowell, a professor of chemistry at the University of Sydney in Australia, speaking of glasses, which can be formed from different raw materials. “They sit right at this really profound sort of puzzle.”

Philip W. Anderson, a Nobel Prize-winning physicist at Princeton, wrote in 1995: “The deepest and most interesting unsolved problem in solid state theory is probably the theory of the nature of glass and the glass transition.”

He added, “This could be the next breakthrough in the coming decade.”

Thirteen years later, scientists still disagree, with some vehemence, about the nature of glass.

COMPLEX Glass in sheet and molten forms. Glass transition differs from usual phase transition.
Credit: Bloomberg News, top, and Keystone/Corbis

Peter G. Wolynes, a professor of chemistry at the University of California, San Diego, thinks he essentially solved the glass problem two decades ago based on ideas of what glass would look like if cooled infinitely slowly. “I think we have a very good constructive theory of that these days,” Dr. Wolynes said. “Many people tell me this is very contentious. I disagree violently with them.”

Others, like Juan P. Garrahan, professor of physics at the University of Nottingham in England, and David Chandler, professor of chemistry at the University of California, Berkeley, have taken a different approach and are as certain that they are on the right track.

“It surprises most people that we still don’t understand this,” said David R. Reichman, a professor of chemistry at Columbia, who takes yet another approach to the glass problem. “We don’t understand why glass should be a solid and how it forms.”

Dr. Reichman said of Dr. Wolynes’s theory, “I think a lot of the elements in it are correct,” but he said it was not a complete picture. Theorists are drawn to the problem, Dr. Reichman said, “because we think it’s not solved yet — except for Peter maybe.”

Scientists are slowly accumulating more clues. A few years ago, experiments and computer simulations revealed something unexpected: as molten glass cools, the molecules do not slow down uniformly. Some areas jam rigid first while in other regions the molecules continue to skitter around in a liquid-like fashion. More strangely, the fast-moving regions look no different from the slow-moving ones.

Meanwhile, computer simulations have become sophisticated and large enough to provide additional insights, and yet more theories have been proffered to explain glasses.

David A. Weitz, a physics professor at Harvard, joked, “There are more theories of the glass transition than there are theorists who propose them.” Dr. Weitz performs experiments using tiny particles suspended in liquids to mimic the behavior of glass, and he ducks out of the theoretical battles. “It just can get so controversial and so many loud arguments, and I don’t want to get involved with that myself.”

For scientists, glass is not just the glass of windows and jars, made of silica, sodium carbonate and calcium oxide. Rather, a glass is any solid in which the molecules are jumbled randomly. Many plastics like polycarbonate are glasses, as are many ceramics.

Understanding glass would not just solve a longstanding fundamental (and arguably Nobel-worthy) problem and perhaps lead to better glasses. That knowledge might benefit drug makers, for instance. Certain drugs, if they could be made in a stable glass structure instead of a crystalline form, would dissolve more quickly, allowing them to be taken orally instead of being injected. The tools and techniques applied to glass might also provide headway on other problems, in material science, biology and other fields, that look at general properties that arise out of many disordered interactions.

“A glass is an example, probably the simplest example, of the truly complex,” Dr. Harrowell, the University of Sydney professor, said. In liquids, molecules jiggle around along random, jumbled paths. When cooled, a liquid either freezes, as water does into ice, or it does not freeze and forms a glass instead.

In freezing to a conventional solid, a liquid undergoes a so-called phase transition; the molecules line up next to and on top of one another in a simple, neat crystal pattern. When a liquid solidifies into a glass, this organized stacking is nowhere to be found. Instead, the molecules just move slower and slower and slower, until they are effectively not moving at all, trapped in a strange state between liquid and solid.

The glass transition differs from a usual phase transition in several other key ways. It takes energy, what is called latent heat, to line up the water molecules into ice. There is no latent heat in the formation of glass.

The glass transition does not occur at a single, well-defined temperature; the slower the cooling, the lower the transition temperature. Even the definition of glass is arbitrary — basically a rate of flow so slow that it is too boring and time-consuming to watch. The final structure of the glass also depends on how slowly it has been cooled.

By contrast, water, cooled quickly or cooled slowly, consistently crystallizes to the same ice structure at 32 degrees Fahrenheit.

To develop his theory, Dr. Wolynes zeroed in on an observation made decades ago, that the viscosity of a glass was related to the amount of entropy, a measure of disorder, in the glass. Further, if a glass could be formed by cooling at an infinitely slow rate, the entropy would vanish at a temperature well above absolute zero, violating the third law of thermodynamics, which states that entropy vanishes at absolute zero.
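The entropy-viscosity connection referred to here is commonly written as the Adam-Gibbs relation, shown below in its standard textbook form rather than in any notation from the article; η is the viscosity, S_c(T) the configurational entropy, and A and B material-dependent constants.

```latex
% Adam-Gibbs relation (standard form, not taken from the article):
% the viscosity grows explosively as the configurational entropy S_c(T) shrinks on cooling.
\begin{equation}
  \eta(T) = A \, \exp\!\left( \frac{B}{T \, S_c(T)} \right)
\end{equation}
% If S_c(T) extrapolated to zero at some finite temperature, the predicted viscosity
% would diverge there, which is the puzzle described in the paragraph above.
```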

Dr. Wolynes and his collaborators came up with a mathematical model to describe this hypothetical, impossible glass, calling it an “ideal glass.” Based on this ideal glass, they said the properties of real glasses could be deduced, although exact calculations were too hard to perform. That was in the 1980s. “I thought in 1990 the problem was solved,” Dr. Wolynes said, and he moved on to other work.

Not everyone found the theory satisfying. Dr. Wolynes and his collaborators so insisted they were right that “you had the impression they were trying to sell you an old car,” said Jean-Philippe Bouchaud of the Atomic Energy Commission in France. “I think Peter is not the best advocate of his own ideas. He tends to oversell his own theory.”

Around that time, the first hints of the dichotomy of fast-moving and slow-moving regions in a solidifying glass were seen in experiments, and computer simulations predicted that this pattern, called dynamical heterogeneity, should exist.

Dr. Weitz of Harvard had been working for a couple of decades with colloids, or suspensions of plastic spheres in liquids, and he thought he could use them to study the glass transition. As the liquid is squeezed out, the colloid particles undergo the same change as a cooling glass. With the colloids, Dr. Weitz could photograph the movements of each particle in a colloidal glass and show that some chunks of particles moved quickly while most hardly moved.

“You can see them,” Dr. Weitz said. “You can see them so clearly.”

The new findings did not faze Dr. Wolynes. Around 2000, he returned to the glass problem, convinced that with techniques he had used in solving protein folding problems, he could fill in some of the computational gaps in his glass theory. Among the calculations, he found that dynamical heterogeneity was a natural consequence of the theory.

Dr. Bouchaud and a colleague, Giulio Biroli, revisited Dr. Wolynes’s theory, translating it into terms they could more easily understand and coming up with predictions that could be compared with experiments. “For a long time, I didn’t really believe in the whole story, but with time I became more and more convinced there is something very deep in the theory,” Dr. Bouchaud said. “I think these people had fantastic intuition about how the whole problem should be attacked.”

For Dr. Garrahan, the University of Nottingham scientist, and Dr. Chandler, the Berkeley scientist, the contrast between fast- and slow-moving regions was so striking compared with the other changes near the transition that they focused on these dynamics. They said that the fundamental process in the glass transition was a phase transition in the trajectories, from flowing to jammed, rather than the change in structure seen in most phase transitions. “You don’t see anything interesting in the structure of these glass formers, unless you look at space and time,” Dr. Garrahan said.

They ignore the more subtle effects related to the impossible-to-reach ideal glass state. “If I can never get there, these are metaphysical temperatures,” Dr. Chandler said.

Dr. Chandler and Dr. Garrahan have devised and solved mathematical models, but, like Dr. Wolynes, they have not yet convinced everyone of how the model is related to real glasses. The theory does not try to explain the presumed connection between entropy and viscosity, and some scientists said they found it hard to believe that the connection was just coincidence and unrelated to the glass transition.

Dr. Harrowell said that in the proposed theories so far, the theorists have had to guess about elementary atomic properties of glass not yet observed, and he wondered whether one theory could cover all glasses, since glasses are defined not by a common characteristic they possess, but rather a common characteristic they lack: order. And there could be many reasons that order is thwarted. “If I showed you a room without an elephant in the room, the question ‘why is there not an elephant in the room?’ is not a well-posed question,” Dr. Harrowell said.

New experiments and computer simulations may offer better explanations about glass. Simulations by Dr. Harrowell and his co-workers have been able to predict, based on the pattern of vibration frequencies, which areas were likely to be jammed and which were likely to continue moving. The softer places, which vibrate at lower frequencies, moved more freely.

Mark D. Ediger, a professor of chemistry at the University of Wisconsin, Madison, has found a way to make thin films of glass with the more stable structure of a glass that has been “aged” for at least 10,000 years. He hopes the films will help test Dr. Wolynes’s theory and point to what really happens as glass approaches its ideal state, since no one expects the third law of thermodynamics to fall away.

Dr. Weitz of Harvard continues to squeeze colloids, except now the particles are made of compressible gels, enabling the colloidal glasses to exhibit a wider range of glassy behavior.

“When we can say what structure is present in glasses, that will be a real bit of progress,” Dr. Harrowell said. “And hopefully something that will have broader implications than just the glass field.”



By Alison Abbott, July 25, 2008, Nature – Gene therapists used to talk about permanently fixing ‘broken’ genes. But the emphasis has now shifted to treating conditions such as coronary disease and cancer using transient gene expression. Alison Abbott reports.

The language used by gene therapists has become noticeably more sober over the years. In the early 1990s, the field’s pioneers raised hopes of curing single-gene disorders such as cystic fibrosis and severe combined immunodeficiency. Today, talk of cures has been replaced by guarded optimism about “encouraging” results. And the focus of attention has shifted away from genetic diseases towards using gene therapy as part of the therapeutic arsenal needed to confront major killers such as heart disease and cancer.

Two classical issues brought gene therapists down to Earth: efficacy and toxicity. The original idea of replacing missing or defective genes was compelling. But it soon became obvious that the vectors, such as weakened viruses, used to ferry the new genes into patients were highly inefficient. And in some instances, the vectors can also cause an adverse inflammatory reaction — as in the tragic case of the US teenager Jesse Gelsinger, whose death in September 1999 during a gene-therapy trial cast a pall over the entire field. Even when genes are successfully and safely transferred, their activity typically tails off after only a short period.

Faced with these problems, many gene therapists have gone back to the laboratory bench, trying to solve these technical issues in animal models before returning to human trials. But others have kept one foot in the clinic, turning to diseases that could be approached with the imperfect tools available. “We started to think positive,” says Ronald Crystal of the Cornell Medical College in New York. “What applications could we have for transient gene expression?”

Growth potential

The list of contenders is growing, with cancer and coronary-artery disease at the top. Attempts to treat single-gene diseases now account for only about an eighth of the worldwide roster of 500 or so approved clinical protocols. Trials using transient gene expression — in which a short period of activity by transferred genes is sufficient to treat a range of conditions — account for most of the rest.

Excitement over the potential of gene therapy to help treat coronary heart disease stems in part from work in Crystal’s lab. In 1998, his team showed that introducing a gene encoding the protein vascular endothelial growth factor (VEGF) into the heart muscles of pigs caused new blood vessels to grow around blocked coronary arteries (ref. 1).

Gene ferries: adenoviruses (left), seen as red dots in the nucleus of this cultured human cell (above), can be used to introduce therapeutic genes into patients.
GOPAL MURTI/SPL/P. HAWTIN

Crystal introduces his therapeutic genes using adenoviruses — a major cause of common colds — modified so that they cannot cause disease. But because adenoviruses are such common infectious agents, our immune systems are primed to recognize them, so the vectors are rapidly destroyed. This undermines attempts at the efficient and permanent gene transfer needed to treat conditions such as cystic fibrosis — the focus of Crystal’s earlier trials.

But the vectors’ transience was no obstacle to Crystal’s work with VEGF. “This is exactly the property we wanted for vascular disease,” he says. “It’s like flicking a master switch — once the blood vessels have been triggered to grow, we don’t need the switch to be flicked again.” Indeed, excessive growth of blood vessels is something that Crystal wants to avoid.

Crystal has since moved into the clinic, testing the technique in 31 patients undergoing coronary-bypass operations in a phase I clinical trial, designed to assess safety (ref. 2). “Nothing about efficacy can be concluded with phase I trials,” he stresses. Despite this, the patients reported some relief of symptoms, such as stress following exercise, and techniques for monitoring the growth of blood vessels indicated improvement in areas of heart muscle into which the vector had been injected.

Replumbing the heart: Ronald Crystal has used transient gene therapy to stimulate the growth of new blood vessels around blocked arteries in pigs (top right, control and experimental). Now he is working with coronary-bypass patients.
PETER ARNOLD/SPL/LUNAGRAFIX/RON CRYSTAL

Jeffrey Isner of Tufts University in Boston is also working with VEGF. His phase I trial involved 85 patients with coronary heart disease and 110 with blockages in their peripheral circulation — a complication of conditions such as diabetes. Rather than using an adenoviral vector, Isner injects naked DNA encoding VEGF into muscle tissue in the vicinity of the blocked arteries. The efficiency of gene transfer with this method is very low, but Isner believes it will be sufficient to produce enough VEGF to trigger the growth of blood vessels.

Preliminary results from Isner’s trial, for 13 patients with coronary-artery disease, were published last August (ref. 3). Again, scanning techniques suggested that the treatment had promoted the growth of new blood vessels.

In the wake of Gelsinger’s death — in an unrelated gene-therapy trial at the University of Pennsylvania — progress has slowed. Both Crystal and Isner are now conducting further safety trials to be sure that the small number of deaths among very sick patients in their phase I studies occurred because of the patients’ underlying health problems, rather than as a result of the therapy.

Stimulating work

Meanwhile, scientists working in the commercial sector are gaining ground. In March, at a meeting in Orlando, Florida, of the American College of Cardiology, Berlex Biosciences of Richmond, California, reported positive results with fibroblast growth factor (FGF), which also promotes blood vessel growth. In this trial, adenovirus vectors containing the gene for FGF were injected into the coronary arteries of 60 patients with angina. A further 19 patients served as placebo controls, receiving injections that did not deliver FGF. The treated patients enjoyed greater relief from their symptoms. At the same meeting, Berlex reported similarly encouraging results from a placebo-controlled trial in peripheral vascular disease. The company is now gearing up to start phase III studies, which will use large enough numbers of patients to determine efficacy.

Given these encouraging developments, gene therapists are now thinking about other conditions that might be treated using growth-factor genes. Gene therapy involving VEGF could, for instance, help wounds to heal by promoting the regrowth of skin and blood vessels. Several groups are looking at the potential of platelet-derived growth factor for the same application. Crystal is also conducting experiments on rats to test whether the genes for VEGF and BMP7, one of a family of proteins that promote bone growth, could be used to fuse vertebrae after spinal surgery. Currently, orthopaedic surgeons use metal rods and bone grafts.

The gene for a signalling protein called Sonic hedgehog, which is involved in a wide range of developmental processes, may also have potential for transient gene therapy. One of the processes it helps to control is the development of hair follicles, and Crystal wonders if a short burst of Sonic hedgehog might be used to treat the hair loss caused by cancer chemotherapy, by activating dormant follicles. His team is investigating this by studying mouse pups, whose hair grows in synchronized bursts during their first weeks after birth. When Crystal and his colleagues injected adenoviral vectors containing the gene for Sonic hedgehog into the pups’ skin, it shortened the ‘resting’ phases of the hair follicles (ref. 4).

Tumour targets

While Crystal looks at treating a side-effect of cancer chemotherapy, other gene therapists are targeting the tumours themselves. Two-thirds of current gene-therapy trials are for cancer, and around half of these involve immunotherapy, or ‘cancer vaccines’ — strategies that try to provoke the immune system’s ‘killer’ T cells to attack tumour cells. Again, transient gene expression should, in theory, be sufficient to start the process. “The great thing about the cancer vaccine strategy,” says Drew Pardoll of Johns Hopkins University in Baltimore, “is that expression of transferred genes need only be short-term, just enough to kick-start the immune response. Then the immune system should sustain it.”

The immune system destroys many cells that go awry and start to turn cancerous. But tumours grow when it fails to notice that something has gone wrong. The main challenge facing the developers of cancer vaccines is breaking this immunological ‘tolerance’, says Pardoll. This requires the activation of specific T cells targeted to molecular signatures, or antigens, carried by the tumour cells.

T cells are activated by ‘antigen-presenting cells’, of which dendritic cells are the most important. Dendritic cells engulf and digest cellular debris, processing the resulting antigens so that they are carried on their surfaces. If the dendritic cells sense that these antigens represent a danger to the body, they also secrete ‘co-stimulatory’ molecules, and these activate the T cells that recognize the antigen involved. But triggering this sequence of events against antigens specific to tumour cells is a tough challenge.

One approach is to transfer a gene for a cytokine into tumour cells, and then to inject the resulting transgenic cells back into the patient. Cytokines are signalling proteins that help marshal immune responses, and previous research has shown that infusions of cytokines such as interleukin-2 (IL-2) can, by themselves, help shrink tumours. For example, in one trial of 255 patients with kidney cancer that had spread to secondary sites5, high-dose IL-2 caused remission in 12 patients, and a partial response in a further 24. But most patients experienced significant side-effects, particularly high fever.

Gene therapists are betting that using tumour cells to produce a transient burst of an appropriate cytokine will trigger dendritic cells more effectively, kick-starting the chain of events that leads the immune system to associate tumour antigens with danger — without causing generalized toxicity.

Early attempts at this strategy used tumour cells taken from the patients themselves. In 1998, for example, a team based at the Humboldt University in Berlin reported increased antitumour immune responses in 16 terminal melanoma patients vaccinated with their own cancerous cells transfected with genes for either interleukin-7 (ref. 6) or interleukin-12 (ref. 7).

But most experts believe that tailoring therapy to individuals by genetically manipulating the patient’s own cells is too time-consuming to be a realistic clinical option. Instead, attention is now focusing on applying the same approach to cell lines maintained in culture that come from the same type of tumour as the patient’s — which should share common antigens.

Under attack: can prostate cancer cells be destroyed using ‘cancer vaccines’?
NINA LAMPEN/NANCY KEDERSHA/SPL

Early phase clinical trials are now being conducted by several companies, using the genes for cytokines including IL-2 and granulocyte–macrophage colony-stimulating factor. Prominent among these companies is Cell Genesys of Foster City, California, which is targeting melanoma plus kidney, prostate, pancreatic and lung cancers. Variations on the theme include transferring a second gene to the introduced cells to promote the desired immune response even further. Candidates include the gene for a protein that binds to a molecule called CD40, an interaction that promotes the maturation of dendritic cells, and genes for co-stimulatory molecules.

In for the kill

The disadvantage of ‘off-the-shelf’ vaccines using cancer cell lines is that some patients’ tumours might carry a different set of antigens. In an attempt to avoid this problem, Vical, a company based in San Diego, is conducting phase I and II trials in which naked DNA encoding genes for cytokines or co-stimulatory proteins is injected into the patients’ tumours. The idea is that the tumour cells will take up the DNA and then transiently secrete proteins that will activate an immune response against the cells’ antigens. The drawback, comments Pardoll, is that it is difficult to control the cells’ uptake of DNA. Nonetheless, over the past two years Vical has reported encouraging results at clinical research meetings from patients with melanoma and with prostate and kidney cancers.

Positive approach: Steven Rosenberg is using transient gene therapy to tackle melanoma.
WILLIAM BRANSON/NIH

An alternative approach to tackling tumours involves introducing the genes for cancer antigens inside viral vectors, in the hope that this will provide an appropriate context to associate the antigens with danger, and activate the immune system accordingly. Various early phase clinical trials are under way. Steven Rosenberg’s team at the National Cancer Institute (NCI) in Bethesda, Maryland, for instance, has attempted to treat melanoma using adenoviruses carrying genes for specific antigens associated with the cancer8. Initial results were disappointing, however, with patients showing only weak immune responses.

In similar work, a team led by Jeffrey Schlom of the NCI and John Marshall of Georgetown University Medical Center in Washington DC is tackling a range of cancers using a gene for a more generic antigen, called human carcinoembryonic antigen, spliced into the avipox virus9,10. Avipox is a member of the vaccinia virus family that infects birds but cannot replicate in mammalian cells. The immune response against the tumour antigen appears to increase with each monthly injection, particularly if the injections also contain cytokines, and if the patients are first primed by vaccinating them with vaccinia.

At the 2001 meeting of the American Society of Clinical Oncology, held earlier this month in San Francisco, Schlom and Marshall’s team announced the results of a small randomized trial in 18 patients with late-stage colorectal cancer, whose survival time was longer than that of the controls.

At this stage, researchers in the field are unsure which cancer vaccine strategies will emerge as the most promising. “It’s far too early to know which way is best,” says Rosenberg, who reviewed progress in tumour immunotherapy in last week’s Nature11. “We need to try lots of different approaches just to see which works, and we need to work creatively,” he says. Researchers in the field also stress that cancer vaccines are not likely to be used in isolation, but alongside surgery, radiotherapy and chemotherapy.

Indeed, although gene therapy approaches to both vascular diseases and tumour immunotherapy are showing promise, they are still in their infancy. The true efficacy of these treatments will only be revealed by definitive phase III trials, involving thousands of patients. But with companies such as Berlex Biosciences now planning such trials, we shouldn’t have to wait too long to find out whether transient gene therapy will deliver the goods.

Information on gene-therapy trials

http://www.wiley.co.uk/wileychi/genmed
