brain

Scientists are making progress toward a quick test to gauge head-injury severity.

MIT Technology Review, April 16, 2009, by Emily Singer — Go to the emergency room with chest pains, and physicians can determine fairly routinely–with blood tests and an electrocardiogram–whether or not you’ve had a heart attack. A bump to the head is another matter. Currently, no blood tests are approved as a way to diagnose brain injury in the United States. In the case of mild head injuries or more serious ones that take time to develop, it’s difficult to tell early on how severely a patient has been hurt and whether she will suffer long-term consequences.

The high-profile case of actress Natasha Richardson, who died last month after a seemingly minor fall on the ski slopes, demonstrates this uncertainty in a dramatic fashion. According to news reports, she was walking and talking after the fall and refused medical attention, but later developed a headache and was rushed to the hospital. Richardson died two days later of an epidural hematoma, an injury in which blood builds up between the brain’s outer membrane and the skull.

One of the most challenging situations for physicians is deciding how to deal with patients who come into the emergency room with mild traumatic brain injury or concussion. Those with telltale symptoms such as dizziness and nausea will be given a computed tomography (CT) scan to look for signs of bleeding in the brain; patients who do show bleeding will need further monitoring and sometimes surgery. But because it’s difficult to determine who needs the scan, many patients get it unnecessarily, and others who do need it may be sent home.

Scientists hope that a blood test to detect proteins and other molecules released into the blood after brain injury could help. But developing such tests has been a challenge. “It’s very hard, because not every head injury is the same,” says David Hovda, director of the Brain Injury Research Center at the University of California, Los Angeles. “Getting hit in the forehead or rotating the neck damage different parts of the brain. And men and women, young and old, people who come in drunk, can all show brain injury differently.”

One blood test already used in Europe to screen head-trauma patients before CT scans detects a protein called S100B, which is released by astrocyte cells in the brain after injury. “The thinking is, if you don’t have [this marker] in the blood, then you don’t have the kind of brain injury you could see on CAT scan,” says Jeffrey Bazarian, an emergency-room physician and scientist at the University of Rochester Medical Center, in New York. The test is not approved for use in the United States, however. In a set of clinical guidelines for evaluating head trauma published recently, Bazarian and others estimated that the S100B test could significantly reduce unnecessary CT scanning. “We predict it could eliminate unnecessary radiation in a lot of people–about 30 percent [of those who come into the ER with brain injury],” he says.
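
A rough way to picture how such a screening test could gate CT use is a simple threshold rule. The sketch below is only an illustration, assuming Python as the notation; the cutoff value, the function name, and the "red flag" shortcut are my assumptions, not figures from the S100B guidelines.

```python
# Hedged sketch of a biomarker-gated triage rule of the kind described above.
# The 0.10 ug/L cutoff and the function below are illustrative assumptions,
# not values taken from the published guidelines.

S100B_CUTOFF_UG_PER_L = 0.10  # assumed screening threshold for this sketch

def ct_scan_indicated(s100b_ug_per_l: float, has_red_flag_symptoms: bool) -> bool:
    """Return True if a CT scan is still indicated.

    has_red_flag_symptoms stands in for clinical findings (e.g., worsening
    headache, repeated vomiting) that would warrant imaging regardless of
    the blood test.
    """
    if has_red_flag_symptoms:
        return True
    return s100b_ug_per_l >= S100B_CUTOFF_UG_PER_L

# A mild-concussion patient with a low S100B level and no red flags
# would be spared the scan under this rule.
print(ct_scan_indicated(0.05, has_red_flag_symptoms=False))  # False
```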

The utility of the S100B test is limited, however. It cannot predict how well a patient will do in the long term. For example, those who have low levels of the protein after trauma may have cellular damage not visible on a CT scan. And some patients who do have brain bleeds will recover with no long-term consequences. “We and others are looking for markers that are more sophisticated, markers that correlate with cellular damage and with problems down the road,” says Bazarian.

The S100B test might actually aid in this quest. New research by Bazarian and his collaborators shows that it can accurately predict whether the blood-brain barrier–a molecular gate between the bloodstream and the nervous system that prevents the exchange of proteins and other compounds–is open or closed. (Previously, the only way to measure the status of the blood-brain barrier was an invasive test that involves threading a catheter through the skull into the brain.)

While the status of the blood-brain barrier itself is not a specific marker of traumatic brain injury–the barrier can open for other reasons, including heavy exercise, seizures, and meningitis–it could aid in the interpretation of other biomarkers in the blood. If the blood-brain barrier is closed, proteins that accompany brain injury might not reach the blood, making it difficult to evaluate the results of other tests. “If you don’t find any markers of brain injury in the blood, it could be because there is no brain injury, or because there is brain injury but the gate is closed,” says Bazarian.
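
Bazarian's point about interpretation can be summarised as a small decision table. The wording of the outcomes below is a paraphrase for illustration, not a clinical rule.

```python
def interpret_result(marker_in_blood: bool, barrier_open: bool) -> str:
    """Illustrative summary of how blood-brain-barrier status changes the
    meaning of a biomarker reading (not a clinical decision rule)."""
    if marker_in_blood:
        return "marker present: consistent with brain injury"
    if barrier_open:
        return "no marker, barrier open: injury less likely"
    return "no marker, barrier closed: uninformative; injury cannot be ruled out"

for marker in (True, False):
    for barrier in (True, False):
        print(f"marker={marker}, barrier_open={barrier} -> "
              f"{interpret_result(marker, barrier)}")
```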

The test may also aid in clinical trials of new drugs for treating brain injury. A number of trials for drugs designed to stop inflammation and other harmful biological processes that flare up soon after brain injury have failed, possibly because the drugs did not make it into the brain. If physicians knew whether a patient’s blood-brain barrier was open, they could reassess these drugs and test new ones only in these patients.

In the long term, scientists would like to develop a blood test that can predict the severity of a patient’s injury, as well as his or her prognosis. Banyan Biomarkers, a startup based in Alachua, FL, may be the farthest along in this endeavor. Researchers there are testing ways to detect a panel of biomarkers linked to mild, moderate, and severe traumatic brain injury in humans. Scientists at the company are now looking for these biomarkers in several hundred patients shortly after they suffer brain trauma, to determine when the biomarkers appear in the blood, how long they last, and how reliably they can predict the magnitude of an injury. Ronald Hayes, one of the company’s founders, says that the scientists expect to complete those studies late this year and early next year, and to start the larger-scale trials required for FDA approval in early 2010.

genes

The New York Times, April 16, 2009, by Nicholas Wade — The era of personal genomic medicine may have to wait. The genetic analysis of common disease is turning out to be a lot more complex than expected.

Since the human genome was decoded in 2003, researchers have been developing a powerful method for comparing the genomes of patients and healthy people, with the hope of pinpointing the DNA changes responsible for common diseases.

This method, called a genomewide association study, has proved technically successful despite many skeptics’ initial doubts. But it has been disappointing in that the kind of genetic variation it detects has turned out to explain surprisingly little of the genetic links to most diseases.

A set of commentaries in this week’s issue of The New England Journal of Medicine appears to be the first public attempt by scientists to make sense of this puzzling result.

One issue of debate among researchers is whether, despite the prospect of diminishing returns, to continue with the genomewide studies, which cost many millions of dollars apiece, or switch to a new approach like decoding the entire genomes of individual patients.

The unexpected impasse also affects companies that offer personal genomic information and that had assumed they could inform customers of their genetic risk for common diseases, based on researchers’ discoveries.

These companies are probably not performing any useful service at present, said David B. Goldstein, a Duke University geneticist who wrote one of the commentaries appearing in the journal.

“With only a few exceptions, what the genomics companies are doing right now is recreational genomics,” Dr. Goldstein said in an interview. “The information has little or in many cases no clinical relevance.”

Unlike the rare diseases caused by a change affecting only one gene, common diseases like cancer and diabetes are caused by a set of several genetic variations in each person. Since these common diseases generally strike later in life, after people have had children, the theory has been that natural selection is powerless to weed them out.

The problem addressed in the commentaries is that these diseases were expected to be promoted by genetic variations that are common in the population. More than 100 genomewide association studies, often involving thousands of patients in several countries, have now been completed for many diseases, and some common variants have been found. But in almost all cases they carry only a modest risk for the disease. Most of the genetic link to disease remains unexplained.
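
A back-of-the-envelope calculation shows why many modest-effect common variants can still leave most inherited risk unexplained: under a simple additive model, a variant with allele frequency p and per-allele effect beta accounts for roughly 2p(1-p)·beta² of trait variance. The frequencies, effect sizes, and heritability below are invented solely for illustration.

```python
# Illustrative numbers only: 20 hypothetical common variants, each with
# allele frequency 0.3 and a small per-allele effect (in SD units of liability).
variants = [(0.3, 0.05)] * 20

variance_explained = sum(2 * p * (1 - p) * beta ** 2 for p, beta in variants)
assumed_heritability = 0.5  # assumed total heritability of the trait

print(f"Variance explained by all 20 variants: {variance_explained:.3f}")
print(f"Share of heritability accounted for: {variance_explained / assumed_heritability:.1%}")
# ~0.021 of total variance, i.e. only ~4% of an assumed 50% heritability
```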

Dr. Goldstein argues that the genetic burden of common diseases must be mostly carried by large numbers of rare variants. In this theory, schizophrenia, say, would be caused by combinations of 1,000 rare genetic variants, not of 10 common genetic variants.

This would be bleak news for those who argue that the common variants detected so far, even if they explain only a small percentage of the risk, will nonetheless identify the biological pathways through which a disease emerges, and hence point to drugs that may correct the errant pathways. If hundreds of rare variants are involved in a disease, they may implicate too much of the body’s biochemistry to be useful.

“In pointing at everything,” Dr. Goldstein writes in the journal, “genetics would point at nothing.”

Two other geneticists, Peter Kraft and David J. Hunter of the Harvard School of Public Health, also writing in the journal, largely agree with Dr. Goldstein in concluding that probably many genetic variants, rather than few, “are responsible for the majority of the inherited risk of each common disease.”

But they disagree with his belief that there will be diminishing returns from more genomewide association studies.

“There will be more common variants to find,” Dr. Hunter said. “It would be unfortunate if we gave up now.”

Dr. Goldstein, however, said it was “beyond the grasp of the genomewide association studies” to find rare variants with small effects, even by recruiting enormous numbers of patients. He said resources should be switched away from these highly expensive studies, which in his view have now done their job.

“If you ask what is the fastest way for us to make progress in genetics that is clinically helpful,” he said, “I am absolutely certain it is to marshal our resources to interrogate full genomes, not in fine-tuning our analyses of common variations.”

He advocates decoding the full DNA of carefully selected patients.

Dr. Kraft and Dr. Hunter say that a person’s genetic risk of common diseases can be estimated only roughly at present but that estimates will improve as more variants are found. But that means any risk estimate offered by personal genomics companies today is unstable, Dr. Kraft said, and subject to upward or downward revision in the future.

Further, people who obtain a genomic risk profile are likely to focus with horror on the disease for which they are told they are at highest risk. Yet this is almost certain to be an overestimate, Dr. Kraft said.

The reason is that the many risk estimates derived from a person’s genomic data will include some that are too high and some that are too low. So any estimate of high risk is likely to be too high. The phenomenon is called the “winner’s curse,” by analogy to auctions in which the true value of an item is probably the average of all bids; the winner by definition has bid higher than that, and so has overpaid.
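
The effect is easy to reproduce with a small simulation: if every true risk is exactly average but each estimate carries noise, the largest of the estimates is still biased upward. The numbers below are arbitrary assumptions chosen for the demonstration.

```python
import random

random.seed(0)

N_DISEASES = 20           # assumed number of conditions on a hypothetical report
TRUE_RELATIVE_RISK = 1.0  # assume every true risk equals the population average
NOISE_SD = 0.3            # assumed estimation noise
TRIALS = 10_000

total_overshoot = 0.0
for _ in range(TRIALS):
    estimates = [random.gauss(TRUE_RELATIVE_RISK, NOISE_SD) for _ in range(N_DISEASES)]
    total_overshoot += max(estimates) - TRUE_RELATIVE_RISK

print(f"Average overestimate of the 'highest-risk' condition: {total_overshoot / TRIALS:.2f}")
# With these assumptions the top-ranked estimate overshoots its true value by
# roughly 0.5, even though no condition is actually riskier than average.
```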

Dr. Kari Stefansson, chief executive of deCODE Genetics, an Icelandic gene-hunting company that also offers a personal genome testing service, said deCODE alerted clients to pay attention to diseases for which testing shows their risk is three times as great as average, not to trivial increases in risk.

Dr. Stefansson said his company had discovered 60 percent of the disease variants known so far.

“We have beaten them in every aspect of the game,” he said of rival gene hunters at American and British universities.

The undiscovered share of genetic risk for common diseases, he said, probably lies not with rare variants, as suggested by Dr. Goldstein, but in unexpected biological mechanisms. DeCODE has found, for instance, that the same genetic variant carries risks that differ depending on whether it is inherited from the mother or the father.

monitoringmuscle

A handheld device could give doctors more precise data about muscle health–painlessly.

MIT Technology Review, April 16, 2009, by Courtney Humphries — Neuromuscular diseases like amyotrophic lateral sclerosis (ALS) and muscular dystrophy often involve a progressive loss of muscle function, but tracking the health of muscles over time is not always easy or precise. The best way to diagnose and evaluate muscle degeneration involves an uncomfortable needle test; both this test and other approaches like questionnaires are subjective and not easy to reproduce over multiple sessions.

A new device, under development by Seward Rutkove, a neurologist and scientist at Harvard Medical School, and his colleagues at MIT, could provide a painless, noninvasive, and quantitative alternative. The prototype handheld probe, similar to an ultrasound probe, measures electrical impedance in the muscle, which changes depending on the health of the tissue.

The approach, also known as electrical impedance myography (EIM), is a modification of the basic technology used in body composition devices to measure the percentage of fat or muscle in the body. A high-frequency electric current is applied to the skin through a set of noninvasive electrodes, while another set of skin electrodes records the resulting voltages from the tissue. How the current’s properties change depends on the composition and microscopic structure of the underlying tissue.

Muscles are made of long bundled fibers oriented in the same direction. An electrical current passes more easily when it travels parallel to the fibers; when it passes across the fibers, it encounters more cell membranes, which cause a greater delay or phase shift in the current. Rutkove’s group at Beth Israel Deaconess Medical Center has found that this phase shift varies depending on the health of the muscle, since diseased muscle has fewer cell membranes. In addition, energy is lost as the current flows through muscle, and more so when flowing across the fibers. Rutkove’s group has found that looking at both phase shift and energy loss can provide unique information on the health of the muscle, since diseased muscles have fewer muscle fibers, smaller cell membranes, and abnormal amounts of fat and water in the muscle, all of which impact these measurements.
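
In signal terms, the two quantities mentioned above correspond to the magnitude and phase of the complex impedance Z = V/I at the drive frequency. The sketch below recovers both from a synthetic current and voltage; every amplitude and the 8-degree lag are invented for illustration and are not real tissue values.

```python
import numpy as np

# Synthetic example: a 50 kHz drive current and a measured voltage that lags it.
# The 1 mA amplitude, 50 mV response, and 8-degree lag are invented numbers.
freq_hz = 50_000
t = np.arange(0, 1e-3, 1e-7)                       # 1 ms sampled at 10 MHz
current = 1e-3 * np.sin(2 * np.pi * freq_hz * t)   # applied current
voltage = 0.05 * np.sin(2 * np.pi * freq_hz * t - np.deg2rad(8))  # measured voltage

# Project both signals onto the drive frequency to get complex amplitudes.
ref = np.exp(-2j * np.pi * freq_hz * t)
V = 2 * np.mean(voltage * ref)
I = 2 * np.mean(current * ref)

Z = V / I  # complex impedance
print(f"Impedance magnitude: {abs(Z):.1f} ohm")               # ~50 ohm
print(f"Phase shift: {np.degrees(np.angle(Z)):.1f} degrees")  # ~ -8 (voltage lags current)
```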

Rutkove’s group initially made muscle measurements using off-the-shelf body composition devices modified to perform EIM. But the process required stick-on electrodes placed at several positions along a muscle, and a single body part might require multiple rounds of placing the electrodes at various angles. The handheld probe, developed in collaboration with Joel Dawson’s electrical-engineering lab at MIT, makes it possible to take the measurements quickly without stick-on electrodes.

Dawson says that the main technical challenge in developing the device was to find a way to deliver electric currents at varying angles without requiring complex machinery. “We came up with the idea of having a lot of little pixel probes and connecting them together,” he says. The head of the device contains two rings of small electrodes: one to send current, and one to measure voltage. These individual electrodes can be electrically connected in different combinations to act as single larger electrodes, or can be isolated individually to give a finer resolution. This allows the researchers to program the specific angles that they want to measure. The device is connected to a computer that calculates impedance measurements and displays the results graphically.
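
One way to picture the programmable head is as a ring of contacts from which adjacent subsets are tied together to form a "virtual" electrode aimed at a chosen angle. The ring size, grouping scheme, and function names below are my assumptions for illustration, not the device's actual firmware.

```python
N_CONTACTS = 16  # assumed number of small electrodes per ring

def contact_angles(n=N_CONTACTS):
    """Angular position (degrees) of each contact, evenly spaced around the ring."""
    return [i * 360.0 / n for i in range(n)]

def virtual_electrode(target_angle_deg, group_size=3, n=N_CONTACTS):
    """Indices of `group_size` adjacent contacts centred nearest the requested
    measurement angle, so they can be switched together as one larger electrode."""
    angles = contact_angles(n)

    def angular_distance(a):  # smallest unsigned angular difference to the target
        return abs((a - target_angle_deg + 180.0) % 360.0 - 180.0)

    centre = min(range(n), key=lambda i: angular_distance(angles[i]))
    half = group_size // 2
    return [(centre + off) % n for off in range(-half, half + 1)]

# Aim one virtual electrode along the muscle fibres (0 deg) and one across them (90 deg).
print(virtual_electrode(0))   # [15, 0, 1]
print(virtual_electrode(90))  # [3, 4, 5]
```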

Rutkove is currently testing EIM in patients with ALS and in children with spinal muscular atrophy. He says that the biggest challenge for making EIM useful is knowing how to interpret the data. His work has shown that neuromuscular diseases can have unique EIM “signatures” that can be used to diagnose and treat the disease, but it’s an ongoing research effort “to find the right signature or impedance profile that tells you it’s one type of disease versus another.” The technique must also be tested in enough patients to understand the normal range of individual variability.

“The idea of having a tool that is noninvasive and painless to assess muscle function is very attractive,” says Michael Benatar, a neurologist at Emory University, who is testing the device in patients. Currently, the best test for muscle function is electromyography (EMG), which involves placing a needle into the muscle and having the patient contract the muscle. Benatar has been testing the EIM method in patients with ALS to see if the technique could be used for early detection of disease. “We’re hoping we might be able to detect abnormalities with EIM that aren’t apparent clinically or with conventional techniques,” he says. But he adds that EIM is not ready to be used more widely in the clinic until it’s clear how to interpret the results.

Rutkove hopes that in the meantime, EIM will prove useful as a research tool. His group is also conducting studies on animals with neuromuscular diseases to understand in more detail how EIM readings relate to the underlying tissue changes with disease.

bridges

Stretch-grown nerve tissue creates a scaffold for nerve regeneration.

MIT Technology Review, March/April 2009, by Kristina Grifantini — Researchers have shown that artificially stretched nerve “bridges” can guide the natural regrowth of damaged nerve tissue in rats. This technique may eventually provide an effective treatment for people who suffer nerve damage as a result of injury or surgery.

Nerve fibers, called axons, extend from neurons and carry electrical signals around the body. When a nerve is severed, both the axon and its supportive myelin sheath are damaged. Although axons grow back after being severed, they do not do so fast enough, or over sufficient distance, to repair major damage.

At present, surgeons lack effective treatment for these injuries. Small amounts of nerve tissue can be harvested from elsewhere in a patient’s body and longer stretches of nerve fibers can sometimes be supplied by tissue donors, but in the latter case, a patient must take immunosuppressant drugs so that the donor tissue is not rejected.

A team led by Douglas Smith, professor and director of the University of Pennsylvania’s Center for Brain Injury and Repair, has been able to grow artificially stretched nerve tissues and place them inside guiding tubes. They then used these tissue tubes to bridge the gap between severed nerve tissues in rats and found that the scaffolds promoted the regrowth of axon tissue at either end.

“What we’ve done is created a 3-D neural network, a mini nervous system that is kind of like jumper cables,” says Smith. The research is reported in the latest issue of the journal Tissue Engineering.

To begin, the researchers placed rat neurons in two dishes and chemically coaxed them to sprout axons. Using a computer-controlled system, they gradually pulled the two dishes apart, stretching the axons to about a centimeter over seven days. Finally, the axons were embedded in a supportive collagen scaffold and inserted into tubes made of polyglycolic acid.

The team used these tubes to connect severed sciatic nerves, which run from the lower back into the leg. As the axons from the rats’ severed nerves grew into the tubing, the new and transplanted tissue intertwined. The outer synthetic tube disintegrated over four months, leaving a normal-functioning nerve in its place. By measuring electrical signals passing through the damaged nerves and performing behavioral tests, the researchers found that the nerves had regrown successfully.

In more than 20 animals, the team had “almost 100 percent success of transplant,” says Smith. “They survived and promoted growth from the host in a stunning way.” Additionally, although the nerve tissue was not their own, the rats’ bodies accepted the transplants without the use of an immunosuppressant. The team now plans to test the procedure in larger animals.

The axons must grow quickly, before the part of the severed nerve detached from the neuron dies. “We actually grow [axons] faster than what is thought possible,” says Smith, noting that his team can grow axons at a rate of up to a centimeter per day, whereas axons grown in dishes previously reached only about 1 millimeter per day.
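
The practical implication of that rate difference is easy to quantify; the 5-centimeter graft length in the short calculation below is an arbitrary example, not a figure from the study.

```python
# Days needed to grow a nerve graft of a given length at each rate.
# The 50 mm target length is an arbitrary example for comparison.
target_length_mm = 50

stretch_growth_mm_per_day = 10  # up to ~1 cm/day with mechanical stretching
dish_growth_mm_per_day = 1      # ~1 mm/day for unstretched axons in culture

print(f"Stretch-grown graft: {target_length_mm / stretch_growth_mm_per_day:.0f} days")  # 5
print(f"Conventional culture: {target_length_mm / dish_growth_mm_per_day:.0f} days")    # 50
```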

“What I like about this is that it takes a different approach than the standard biological approaches,” says Jennifer Elisseeff, an assistant professor of biomedical engineering at Johns Hopkins University. She adds that a major obstacle to nerve regeneration is getting the nerves to grow fast enough. “This would be a more efficient way of inducing regeneration,” she says. “This really accelerates it.”

“This is a very interesting approach that demonstrates how bioengineering and cell therapy approaches can be combined to solve an important medical problem,” says Ali Khademhosseini, an assistant professor at Harvard University who works on tissue engineering. “The bridging of the severed spine by using the process that has been described is highly promising.” He adds that researchers still need to achieve results in primates and humans, as well as to demonstrate the technique’s effectiveness in treating different nerve injuries.

globalhealth2

Medical tests for poor countries need to be properly field-tested.

MIT Technology Review, March/April 2009, by Jose Miguel Trevejo — Applying modern diagnostic technologies to disease management around the globe could dramatically improve patient care. Unfortunately, many such technologies are not available or not usable in resource-poor settings.

In sub-Saharan Africa, for example, health-care practitioners treating children with severe fevers have to either make an educated guess or use a “shotgun” approach, treating for all potential illnesses from malarial infection to bacterial pneumonia. The inability to arrive at a specific diagnosis not only delays proper therapy but can increase death rates. A recent study has shown that mortality is almost twice as high in patients mistakenly diagnosed as having malaria. Those deaths could be dramatically reduced with a reliable test that can be given at the bedside.

With current advances in low-cost, portable technologies (see “Paper Diagnostic Tests”) and increasing knowledge of the biology of disease, the time is ripe to bring novel diagnostic tools to those who desperately need them. Accurate diagnoses will not only help individual patients but also yield more-precise measurements for public-health and treatment programs.

It is imperative, however, that new diagnostics be developed and tested as early as possible in the settings where they will be used–preferably in collaboration with local researchers and clinicians. Hospitals in the developing world are littered with broken x-ray machines and other donated medical equipment that does not work in humid conditions and has no replacement parts. Most businesses wouldn’t think of releasing a new product without sufficient market research or testing, and the same should be true for field-testing medical devices. Such testing will help avoid two problems that have hobbled earlier efforts: pushing newer technologies when existing or simpler technologies would be as effective, and attempting to apply technologies that work well in the controlled, sterile environment of the laboratory but not in the real world.

Given that different diseases are endemic to specific parts of the world, and that regions differ in genetics, environment, culture, sociopolitical infrastructure, and the biology of particular pathogens, a “one size fits all” approach to diagnostics is rarely appropriate. The steps leading from a promising diagnostic technology to a robust working product must be thoughtfully executed to ensure that the enormous potential of these tests is fulfilled in the places where they can do the most good.

José Miguel Trevejo is a principal scientist at Charles Stark Draper Laboratory.

hybridcar1

In November, Fisker Automotive will begin sales of a car with 50 miles of battery-powered range.

MIT Technology Review, April 17, 2009, by Kevin Bullis — The first plug-in hybrid to be sold in the United States will likely be the Fisker Karma, which is due out in November. Fisker Automotive, which unveiled the concept version of the Karma in January, recently raised $87 million to help put it into production. A number of other plug-in hybrids, including models from GM, Chrysler, and Toyota, are scheduled to come out in the next few years.

The Karma, a luxury four-passenger sedan, can be recharged by plugging it in; it can then be driven on power from a battery alone for 50 miles. After that, an onboard gasoline generator kicks on to recharge the battery, extending the range by 250 miles between fill-ups. Optional solar cells on the roof will be used primarily to cool the car when it’s parked, but they could also partially recharge the battery. The car will run on a lithium manganese oxide battery made by Advanced Lithium Power, based in Vancouver, BC. The battery is similar to the one selected for the Chevrolet Volt, a plug-in hybrid due out in November of 2010.

Henrik Fisker, a car designer and cofounder of the company, said at the New York Auto Show last week that the car is part of his effort to show that environmentally friendly cars need not be small and underpowered. To go with its performance, the car carries a hefty price tag of $87,000.

The car will indeed be fast, but it won’t be quite as green as some of the other plug-ins that will come out soon, in large part because of its size. Two 150-kilowatt electric motors together deliver 403 horsepower–enough to accelerate to 60 miles per hour in 5.8 seconds. (It takes the Volt about 9 seconds.) But that kind of acceleration is available only in something called “sport” mode, which uses power from both the battery pack and the gas-powered generator. Drivers will need to select the “stealth” mode to rely exclusively on electricity stored in the battery.

The stealth mode is a holdover from the origins of the vehicle’s propulsion system. The Q-drive system was developed by Quantum Technologies for military vehicles designed to have a quiet electric mode for “clandestine operations.” When the gas generator and battery are used together, the vehicle gets between 35 and 40 miles per gallon. That’s still better than conventional performance vehicles, but not as good as the Chevrolet Volt; when its gas generator kicks in after a 40-mile all-electric range, the Volt will get 50 miles per gallon.
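
As a quick arithmetic check on the power figure quoted above, two 150-kilowatt motors work out to roughly 400 mechanical horsepower (using 745.7 W per horsepower), consistent with the 403 hp claim. The short conversion below is only unit arithmetic, not a performance model.

```python
# Convert the Karma's combined motor power from kilowatts to horsepower.
motor_power_kw = 150
motor_count = 2
watts_per_hp = 745.7  # mechanical horsepower

total_hp = motor_count * motor_power_kw * 1000 / watts_per_hp
print(f"{total_hp:.0f} hp")  # ~402 hp, in line with the quoted 403 hp
```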

Fisker Automotive is one of several small companies attempting to challenge established automakers by producing plug-in hybrids or electric vehicles. The most promising of these, according to Mike Omotoso, senior manager of power-train forecasting at JD Power and Associates, are Fisker Automotive and Tesla Motors. Tesla is already producing its first car, the electric Roadster, a small, very high-performance car that can accelerate to 60 miles per hour in less than four seconds. The Roadster runs exclusively on power stored in its battery: it does not have an onboard generator to extend its range. In spite of Tesla’s lead in getting a car to market, the company is expected to sell fewer cars than Fisker, Omotoso says. That’s because the plug-in hybrid design will make the Karma more appealing to consumers who want to travel long distances. (The Tesla Roadster can go 244 miles on a charge, but recharging it takes hours. The Karma can be refueled quickly.) JD Power estimates that Tesla will sell 500 to 800 cars next year, while Fisker Automotive is expected to sell more than 10,000.

Fisker could have a narrow window of opportunity in which to establish itself, Omotoso says. GM, Chrysler, Toyota, and other major automakers have plans to produce plug-in hybrids and electric vehicles in the next few years that will likely be far less expensive than the Karma. Yet Fisker will have a few limited advantages in competing with the established automakers, Omotoso says. Unlike GM and Chrysler, it won’t be loaded down by legacy costs–overhead from large factories and bills for retiree pensions and health care, for example. It will also be able to draw on the same suppliers, so as demand for GM’s plug-ins increases production volumes and drives down the cost of parts, Fisker’s costs will fall as well. Eventually, the company intends to sell less expensive cars to a wider market. Last week, Henrik Fisker said that it may be possible to use very simple engines to recharge the battery and extend range. These could cost as little as $500, he said–far less than the $3,000 that a conventional engine can cost.

Fisker won’t be the first car maker to manufacture plug-in hybrids: a Chinese company called BYD, which is backed by Warren Buffett, is already producing plug-in hybrids in China. But Fisker will be the first to sell them in the United States. “With the first car that hits the market, people will judge the level of enthusiasm based on sales, and try to project from that what the future of these cars is,” says Felix Kramer, the founder of CalCars and a plug-in hybrid advocate. “If they stumble, if they have quality problems or people are disappointed in the product, then it sets everyone back.” Omotoso expects that consumers will pay particularly close attention to whether the cars have the advertised range and to whether the lithium ion battery packs prove safe and reliable. “If the consumer gets spooked by safety issues, consumers might say, ‘Let’s just stick with [conventional] hybrids,'” such as the Toyota Prius, Omotoso says, especially since they’re cheaper than plug-in hybrids. “The plug-in market could be strangled at birth if there are significant problems with the Fisker [Karma],” he says.