Two studies show that complete-genome sequencing can identify disease-causing genes
MIT Technology Review, by Emily Singer, March 11, 2010 – James Lupski, a physician-scientist who suffers from a neurological disorder called Charcot-Marie-Tooth, has been searching for the genetic cause of his disease for more than 25 years. Late last year, he finally found it–by sequencing his entire genome. While a number of human genome sequences have been published to date, Lupski’s research is the first to show how whole-genome sequencing can be used to identify the genetic cause of an individual’s disease.
The project, published today in the New England Journal of Medicine, reflects a new approach to the hunt for disease-causing genes–an approach made possible by the plunging cost of DNA sequencing. Part of a growing trend in the field, the study incorporates both new technology and a more traditional method of gene-hunting that involves analyzing families with rare genetic diseases. A second study, the first to describe the genomes of an entire family of four, confirmed the genetic root of a rare disease, called Miller syndrome, afflicting both children. That study was published online yesterday in Science.
While the approach is currently limited to rare genetic diseases, researchers hope it will ultimately enable the discovery of rare genetic variants increasingly thought to lie at the root of even common diseases, such as diabetes and heart disease.
Lupski was diagnosed as a teenager with Charcot-Marie-Tooth, a disorder that strikes about one in 2,500 people, affecting sensory and motor nerves and leading to weakness of the foot and leg muscles. Three of his seven siblings also have the disease. While the disorder has a number of different forms and can be caused by a number of genetic mutations, it appears to be recessive in Lupski’s family, meaning that an individual must carry two copies of the defective gene to have it.
Decades later, in 1991, after Lupski had trained as both a molecular biologist and a medical geneticist, his lab at Baylor College of Medicine, in Houston, identified the first genetic mutation linked to Charcot-Marie-Tooth: a duplication of a gene on chromosome 17 that is involved in producing the fatty insulation that covers nerve fibers. The lab went on to identify several other genes tied to the disease. “Every time we discovered a new gene, we put my DNA in the group of samples to be sequenced,” says Lupski. But those studies failed to identify the mutation responsible for his case. To date, 29 genes and nine genetic regions have been linked to the disease.
In 2007, Lupski and colleague Richard Gibbs, director of the Human Genome Sequencing Center at Baylor, helped sequence James Watson’s genome, the first personal genome sequence to be published (aside from that of Craig Venter, who used his own DNA in the private arm of the Human Genome Project). The problem with that project was that, thanks to Watson’s good health, there was little clinical relevance to his genome–he had no diseases to try to match to a gene. So Gibbs offered to turn the genome center’s sequencing power on Lupski.
Using technology from Applied Biosystems, a sequencing company based in Foster City, CA, the researchers generated about 90 gigabytes of raw sequence data, covering Lupski’s genome approximately 30 times. (Because of unavoidable errors in sequencing, a human genome must be analyzed a number of times to generate an accurate read.) They then identified spots where his genome differed from the reference sequence from the Human Genome Project, and narrowed that pool down to novel variations found in genes previously linked to Charcot-Marie-Tooth or other nerve disorders. Researchers found that Lupski’s genome carried two different mutations in a gene called SH3TC2, which had previously been tied to the disorder. The team then sequenced the gene in DNA from his siblings, parents, and deceased grandparents. (Anticipating such a discovery, the scientist had collected his family’s DNA 25 years earlier.) All of his affected siblings also carried both of the mutations, while the unaffected family members carried either one or neither, exactly the pattern expected for a recessive disease.
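The gene-hunting logic described here, filtering novel variants down to known disease genes and then checking inheritance in the family, can be sketched in a few lines. The gene names, variant records, and genotypes below are invented for illustration, not data from the study.

```python
# Sketch of the two-stage filter: keep only novel variants in genes already
# linked to the disease, then check that candidates segregate with affected
# status in the family. All records here are hypothetical examples.

KNOWN_NEUROPATHY_GENES = {"SH3TC2", "PMP22", "MFN2"}  # illustrative subset

variants = [
    {"gene": "SH3TC2", "novel": True},
    {"gene": "BRCA1", "novel": True},   # novel, but unrelated to neuropathy
    {"gene": "PMP22", "novel": False},  # known variant, filtered out
]

# Stage 1: restrict to novel variants in candidate genes
candidates = [v for v in variants
              if v["novel"] and v["gene"] in KNOWN_NEUROPATHY_GENES]

def segregates_recessively(genotypes):
    """Check the recessive pattern seen in the family: every affected
    member carries two mutant alleles, and no unaffected member does."""
    return all((count == 2) == affected for affected, count in genotypes)

# Stage 2: (affected?, mutant allele count) for each family member
family = [(True, 2), (True, 2), (False, 1), (False, 0)]

print([v["gene"] for v in candidates])  # only SH3TC2 survives the filter
print(segregates_recessively(family))
```

The second stage is what turns a list of candidates into a finding: a variant that fails to track with disease status in the family is discarded no matter how plausible the gene.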
“I think this is the wave of the future,” says Thomas Bird, director of the Neurogenetics Clinic at the University of Washington, in Seattle, who was not involved in the research. “Genetic testing is going to become more and more important in medicine as the technology becomes more extensive and less expensive.”
Understanding the genetic mutation that causes Lupski’s disorder can help scientists search for treatments. For example, animals genetically engineered to mimic the gene duplication responsible for about 70 percent of the human cases of Charcot-Marie-Tooth can be helped by an estrogen-blocking drug, which is now in clinical trials. (Other genetic variations, including some yet to be discovered, are responsible for the remaining 30 percent.) The same week that Lupski identified his disease mutation, he received a research paper to review that described the creation of a mouse lacking the same gene, SH3TC2. “Suddenly we’re starting to get insight into the disease process for the first time in 25 years,” says Lupski, who hopes to repeat his success by sequencing patients with other unexplained nerve disorders.
In the Science study, Leroy Hood and collaborators at the Institute for Systems Biology, in Seattle, sequenced the complete genomes of a nuclear family of four, the first published example of familial whole-genome sequencing. Both children in the family have Miller syndrome, a rare craniofacial disorder. By comparing the sequence of parents and offspring, researchers could calculate the rate of spontaneous mutations arising in the human genome from one generation to the next. The rate equates to about 30 mutations per child, lower than previous estimates.
One of the major problems with analyzing whole-genome data is isolating important genetic signals from noise–both sequencing errors and thousands of harmless genetic variations that have little or no impact on a person’s health. Comparing intergenerational genomes allowed scientists to filter out some of this noise. They homed in on the genetic changes that appeared from one generation to the next and then resequenced those regions to identify true changes. Hood estimates that errors are about 1,000 times more prevalent than true mutations. “In the future, when all of us have our genomes done, we’ll almost certainly have them done in families, because it increases the accuracy of the data,” says Hood.
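The intergenerational filter works because a true de novo mutation is, by definition, an allele the child carries that neither parent does; most of what that step flags is sequencing error (roughly 1,000 errors per true mutation, by Hood's estimate), which is why candidates are resequenced for confirmation. A minimal sketch, with invented alleles:

```python
# Sketch of the family-based filter described above. Genotypes at a single
# hypothetical position are represented as sets of alleles; positions and
# alleles are invented for illustration, not data from the Science study.

def candidate_de_novo(child_alleles, mother_alleles, father_alleles):
    """Alleles the child carries that appear in neither parent. Each
    candidate is either a true spontaneous mutation or a sequencing
    error, so it must be resequenced before being counted."""
    return child_alleles - (mother_alleles | father_alleles)

mother = {"A"}
father = {"A", "G"}
child = {"A", "T"}  # "T" is seen in neither parent: a candidate de novo

print(candidate_de_novo(child, mother, father))  # {'T'}
```

Scaled across the genome, the same comparison is what let the team estimate the per-generation mutation rate once the candidates were verified.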
By comparing the genomes of the unaffected parents to their affected children, researchers identified four candidate genes for Miller syndrome. One candidate overlapped a gene linked to the disease in a study published in January. That study sequenced just the gene-coding regions of these children and two others with Miller syndrome. (Lupski’s study, in contrast, focused on genes known to be related to Charcot-Marie-Tooth or other nerve disorders. But that approach would be ineffective in identifying unexpected genes or genes for diseases that are not well-studied.)
Thus far, whole genome sequencing has been limited to identifying genes linked to so-called Mendelian disorders, in which mutations in a single gene cause the disease. Eventually, Lupski, Hood, and others aim to move on to more complex diseases, such as Alzheimer’s. “There are various ways to turn a common disease into a rare one–you start with families that have more severe forms of common disease, or earlier onsets,” says George Church, who leads the Personal Genome Project at Harvard. “I think almost every disease has rare [variants].” It was this type of approach that led to the identification in 1993 of the Alzheimer’s risk gene APOE4, still the strongest genetic risk factor known to date. Now, thanks to cheap sequencing, the ability to scan the genome in its entirety will allow a much broader and more thorough search.
Lupski’s family gives a potential example of how genes tied to rare disorders may shed light on more common ones. Two of the scientist’s siblings who carried one of the genetic mutations linked to Charcot-Marie-Tooth had signs of carpal tunnel syndrome, a common disorder often caused by repetitive movements. “That’s a very common disease and now we have insight into it,” says Church. “I think there will be lots of cases where you identify a gene in one person in a family, and then start asking questions about the phenotype of family members with only one copy.”
Church and others say the two studies signal a new trend in human genetics research. Over the last few years, microarrays designed to cheaply screen human genomes for common genetic variations linked to common, complex diseases have mostly picked up variants with only a mild effect on disease risk. The bulk of the genetic causes of these ailments remains a mystery, and a growing number of scientists believe this disease risk lies in rare variants only detectable with whole-genome sequencing.
Lupski, who as a medical geneticist still sees patients once a week, won’t be able to offer them whole-genome sequencing in the next year or two. “I don’t want people to think everyone can have this diagnosis, or that having a diagnosis means there is a cure,” says Lupski. “But we can start using the technology more and more for gene discovery.”
But cost-wise, personal genomes may not be far off. For example, Bird at the University of Washington says that a comprehensive genetic screen for inherited nerve diseases costs about $15,000. Researchers estimate that Lupski’s genome cost about $50,000. And Complete Genomics, a startup in California that sequenced the family in Hood’s study, will soon offer bulk sequencing services for about $20,000 a genome, with a $5,000 price tag not far behind.
Vice Chair, Department of Molecular and Human Genetics
Cullen Professor, Departments of Molecular and Human Genetics and Pediatrics
Ph.D., New York University
M.D., New York University School of Medicine
Postdoctoral, New York University
To what extent are de novo DNA rearrangements in the human genome responsible for sporadic human traits including birth defects? How many human Mendelian and complex traits are due to structural changes and/or gene copy number variation (CNV)? What are the molecular mechanisms for human genomic rearrangements? The answers to these questions will impact both prenatal and postnatal genetic diagnostics, as well as patient management and therapeutics.
For five decades, the molecular basis of disease has been addressed in the context of how mutations affect the structure, function, or regulation of a gene or its protein product. However, we have been living in a genocentric world. During the last decade it has become apparent that many disease traits are best explained on the basis of genomic alterations. Furthermore, it has become abundantly clear that architectural features of the human genome can result in genomic instability and susceptibility to DNA rearrangements that cause disease traits – I have referred to such conditions as genomic disorders.
Fifteen years ago, it became evident that genomic rearrangements and gene dosage effects, rather than the classical model of coding region DNA sequence alterations, could be responsible for a common, autosomal dominant, adult-onset neurodegenerative trait—Charcot-Marie-Tooth neuropathy type 1A (CMT1A). Even with the identification of the CMT1A duplication and its reciprocal deletion causing hereditary neuropathy with liability to pressure palsies (HNPP), the demonstration that PMP22 copy-number variation (CNV) could cause inherited disease in the absence of coding-sequence alterations was initially hard to fathom. How could such subtle changes—three copies of the normal “wild-type” PMP22 gene rather than the usual two—underlie neurologic disease?
Nevertheless, it has become apparent during this last decade and a half that neurodegeneration can represent the outcome of subtle mutations acting over prolonged time periods in tissues that do not generally regenerate, regardless of the exact molecular mechanism. This concept has revealed itself through 1) conformational changes causing prion disease, 2) the inability to degrade accumulated toxic proteins in amyloidopathies, α-synucleinopathies, and polyglutamine expansion disorders, and 3) alteration in gene copy number and/or expression levels through mechanisms such as uniparental disomy (UPD), chromosomal aberrations (e.g., translocations), and submicroscopic genomic rearrangements including duplications, deletions, and inversions.
Currently, structural variation of the human genome is commanding a great deal of attention. In the postgenomic era, the availability of human genome sequence for genome-wide analysis has revealed higher-order architectural features (i.e., beyond primary sequence information) that may cause genomic instability and susceptibility to genomic rearrangements. Nevertheless, it is perhaps less generally appreciated that any two humans differ at more base pairs because of structural variation of the genome than because of single-nucleotide polymorphisms (SNPs). De novo genomic rearrangements have been shown to cause both chromosomal and Mendelian disease, as well as sporadic traits, but our understanding of the extent to which genomic rearrangements, gene CNV, and/or gene dosage alterations are responsible for common and complex neurological traits including sporadic traits remains rudimentary.
It is not clear to what extent genomic changes are responsible for disease traits, common traits (including behavioral traits), or perhaps sometimes represent benign polymorphic variation. Only recently has the ubiquitous nature of structural variation of the human genome been revealed. Central to our understanding of human biology, evolution, and disease is an answer to the following questions: What is the frequency of de novo structural genomic changes in the human genome? and What are the molecular mechanisms for genomic rearrangements?
MedPageToday.com, by Emily P. Walker, March 11, 2010, WASHINGTON — President Barack Obama has announced new measures aimed at cracking down on fraud and waste in government programs, including Medicare and Medicaid.
The program would offer financial rewards to private auditors who root out improper payments made by the government. The so-called recapture audits could recoup $2 billion in taxpayer money over the next three years, the White House said in a release.
In 2009, improper payments totaled $98 billion, with $54 billion coming from Medicare and Medicaid, the White House said.
The White House said that using reclaimed money to pay for audits is currently allowed only for Medicare fee-for-service programs and in 20 of the 24 major government agencies.
This leaves out federal payments made by other government agencies, and payments made to state and local governments, universities, banks, and nonprofit organizations.
Obama issued a presidential memorandum Wednesday that allows all federal departments and agencies to use these so-called “recapture audits,” which are performed by accounting specialists and fraud examiners to identify improper payments to contractors.
The auditors will examine records for issues such as duplicate payments and fictitious vendors, according to a White House memo.
Even so, agencies can use such audits only in specific circumstances. Bipartisan bills pending in both the House and the Senate would give government agencies broader authority to pay for audits with money that was recaptured from previous audits.
The administration said pilot programs using the recapture audit tactic have been “highly effective” in California, New York and Texas, recapturing $900 million for taxpayers.
GoogleNews.com, March 11, 2010 — Currently, at least 540 hospitals in the United States are using social media tools such as YouTube, Facebook, Twitter, and blogs to promote their message, mission, and brand, as well as to collaborate with fellow healthcare professionals.
Tweeting from the operating room, for example, may be growing in popularity and power, but such a proliferation of public–often viral–sharing makes healthcare lawyers nervous. “Risky Business: Treating Tweeting the Symptoms of Social Media,” published in the March 2010 issue of AHLA Connections, points out several of the pitfalls.
For one, a physician who shares his or her feelings about a patient online could easily run afoul of HIPAA. Unlike a passing comment at a cocktail party, the article points out, a Tweet or a Facebook status leaves a permanent record of the privacy violation.
The existence of an electronic record also marks the difference between a physician casually chatting with a neighbor about a medical concern and having that same interaction online with a Facebook “friend.” Liabilities could include patient abandonment, medical malpractice, privacy violations, and more.
Although addressing employees’ use of social media is a must, doing so can be difficult and potentially create animosity with hospital workers. Paul Levy, CEO at Beth Israel Deaconess Medical Center in Boston, stated on his blog, Running a Hospital, “banning use of social media in the workplace will inhibit the growth of community and discourage useful information sharing.” Rather, he and other experts recommend a more positive approach, emphasizing what employees “can” do with social media, such as being transparent and authentic at all times and avoiding betrayal of a patient’s or colleague’s trust.
FierceHealthCare.com, March 11, 2010, BOSTON, MA – March 10, 2010 – The Lucian Leape Institute at the National Patient Safety Foundation today released a report finding that U.S. “medical schools are not doing an adequate job of facilitating student understanding of basic knowledge and the development of skills required for the provision of safe patient care.” The report comes approximately 10 years after the Institute of Medicine’s landmark 1999 report “To Err Is Human,” which estimated that up to 98,000 Americans die each year from preventable medical errors. “Despite concerted efforts by many conscientious health care organizations and health professionals to improve and implement safer practices, health care remains fundamentally unsafe,” said Lucian L. Leape, MD, Chair of the Institute and a widely renowned leader in patient safety. “The result is that patient safety still remains one of the nation’s most solvable public health challenges.”
A major reason why progress has been so slow is that medical schools and teaching hospitals have not trained physicians to follow safe practices, analyze bad outcomes, and work collaboratively in teams to redesign care processes to make them safer. These education and training activities, the report states, need to begin on Day 1 of medical school and continue throughout the four years of medical education and subsequent residency training.
“The medical education system is producing square pegs for the delivery system’s round holes,” said Dennis S. O’Leary, MD, President Emeritus of The Joint Commission, a member of the Institute, and leader of the initiative. “Educational strategies need to be redesigned to emphasize development of the skills, attitudes, and behaviors that are foundational to the provision of safe care.” The new report – titled “Unmet Needs: Teaching Physicians to Provide Safe Patient Care” – is based on a Roundtable of leading experts in medical education, patient safety, healthcare, and healthcare improvement convened by the Institute. Participants ranged from some of the most eminent figures in these fields to patients and current medical students who are experiencing medical education first-hand.
The 40-member Roundtable quickly surfaced several major themes. Most medical schools neither teach safety science nor equip new doctors with the interpersonal skills they need to practice safely. The singular focus of medical schools for the past 100 years has been on teaching basic sciences and clinical knowledge; this is no longer adequate. To practice safely, and to improve care, students need to learn safety science, human factors engineering concepts, systems thinking, and the science of improvement. And they need to develop the interpersonal skills to communicate effectively with co-workers and patients and work well in teams.
Teaching hospitals, where clinical education of students and residents takes place, are also falling short in their safety education and training roles. Like medical schools, most teaching hospitals have hierarchical cultures that are inimical both to safety education and safety improvement. The unquestioning deference to physician authority inhibits adherence to safe practices and team-building across disciplines. In addition, too many students suffer humiliating and dehumanizing experiences at the hands of the faculty, their role models, which creates a culture of fear and intimidation, impairs learning, and creates the likelihood that students and residents so treated will pass these behaviors on to the next generation of learners.
The report concludes that “substantive improvement in patient safety will be difficult to achieve without major education reform at the medical school and residency training program levels.”
The report’s 12 recommendations center on three main themes:
- Medical schools and teaching hospitals need to create learning cultures that emphasize patient safety, model professionalism, encourage transparency, and enhance collaborative behavior. They should have zero-tolerance policies for egregiously disrespectful or abusive behavior.
- Medical schools should teach patient safety as a basic science and ensure that students develop interpersonal and communication skills through experiences working in teams with nursing, pharmacy, and other professional students.
- Medical schools and teaching hospitals need to launch intensive faculty development programs to enable all faculty to acquire sufficient patient safety knowledge and to develop the interpersonal skills in teamwork and collaboration that permit them to function effectively as teachers and role models for students.
“Because they are powerful role models, all clinical faculty need to be the kinds of physician we want our students to become,” said Dr. Leape.
In addition, the report calls on the accrediting body for medical schools (the Liaison Committee on Medical Education) and the accrediting body for residency programs (the Accreditation Council for Graduate Medical Education) to modify their accreditation standards accordingly.
“Patient safety is a top priority for our nation’s medical schools and teaching hospitals,” said John E. Prescott, MD, Chief Academic Officer of the Association of American Medical Colleges. “Improvements in instruction and training in this area are on the rise in all phases of medical education.”
This report is the first of a planned series of such reports on issues that the Lucian Leape Institute has identified as top priorities in ongoing efforts to improve patient safety. “We are very excited about this initial report of the Lucian Leape Institute,” said Diane C. Pinakiewicz, MBA, President of the Lucian Leape Institute and the National Patient Safety Foundation, “but we recognize that this is just the beginning of a major collaborative effort to see the report’s recommendations through to their full implementation.”
Subsequent Institute initiatives will address integration of care across health care organizations and delivery systems; restoration of pride, meaning and joy in professional work; active consumer engagement in patient care; and provision of fully transparent care.
The full report is available online at www.npsf.org/LLI-Unmet-Needs-Report .
About the Lucian Leape Institute
The Lucian Leape Institute at the National Patient Safety Foundation, established in 2007, is charged with defining strategic paths and calls to action for the field of patient safety, offering vision and context for the many efforts underway within health care, and providing the leverage necessary for system-level change. Its members comprise national thought leaders with a common interest in patient safety whose expertise and influence are brought to bear as the Institute calls for the innovation necessary to expedite the work and create significant, sustainable improvements in culture, process, and outcomes critical to safer health care. The Institute challenges the system to address the structural impediments to more expeditious and comprehensive adoption of patient safety solutions.
About National Patient Safety Foundation
The National Patient Safety Foundation® has been diligently pursuing one mission since its founding in 1997 – to improve the safety of the healthcare system for the patients and families it serves. As the widely recognized voice of patient safety, NPSF is unwavering in its determined and committed focus on uniting disciplines and organizations across the continuum of care, championing a collaborative, inclusive, multi-stakeholder approach. NPSF is an independent, not-for-profit, 501(c)(3) organization.
The Lucian Leape Institute at the National Patient Safety Foundation gratefully acknowledges The Doctors Company Foundation for its generous support of the LLI Expert Roundtable on Reforming Medical Education and of the publication and dissemination of the resultant report.
If current trends in the use of online education continue, 50% of continuing medical education (CME) used by physicians will be delivered via the Internet in 2016. This would represent a dramatic increase over the 9% of CME delivered via the Internet in 2008. According to a new study published in the Winter 2010 issue of the Journal of Continuing Education in the Health Professions, these changes in how practicing physicians obtain ongoing training could disrupt the multi-billion-dollar CME industry in much the same way technological innovations have disrupted other established industries.
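As a rough check on the projection, a constant compound growth rate taking online CME from a 9% share in 2008 to a 50% share in 2016 works out to roughly 24% per year. This back-of-the-envelope model is ours, not the study's:

```python
# Implied annual growth rate if online CME's share of all physician CME
# grows by a constant factor each year from 9% (2008) to 50% (2016).
# This is a simple compound-growth sketch, not the study's methodology.

share_2008, share_2016, years = 0.09, 0.50, 8
annual_growth = (share_2016 / share_2008) ** (1 / years) - 1

print(f"implied annual growth: {annual_growth:.0%}")  # roughly 24% per year
```

Sustained growth at that rate over a decade is exactly the kind of trajectory the "disruptive innovation" framing predicts for a low-cost entrant.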
Most physician CME today is delivered via live meetings and conferences and is prepared by academic centers and professional societies. The new study finds that, in contrast, most online CME is prepared by commercial education companies. It is also distributed to physicians for free or at a very low cost. The authors observed that the pattern of a new technology being developed outside of mainstream organizations is consistent with what Harvard Professor Clayton Christensen has described as a pattern of “disruptive innovation” that often damages existing organizations while leading to lower prices and higher quality.
“These findings are very provocative,” stated Dr. John Harris Jr., President of Medical Directions, Inc., Assistant Professor of Clinical Medicine at the University of Arizona and the study’s lead author. “We know that lots of health professionals are using the Internet for ongoing education. We also know that the tried and true approaches, such as live meetings, are still quite popular. But these analyses, which are based on 11 years of data, show that the growth rate for online CME is well-established and exponential. We can expect far more changes in how CME is developed, distributed, and probably paid for in the next 10 years than we have seen in the past 30.”
The key study findings are consistent with broader trends in the use of online technologies in education. A 2009 report from Babson College observed, “Online enrollments have continued to grow at rates far in excess of the total higher education student population…” A 2009 review of 46 published studies by the US Department of Education found, “…on average, students in online learning conditions performed better than those receiving face-to-face instruction.”
SOURCE Medical Directions, Inc.
Medscape.com, by Nancy Fowler Larson, March 11, 2010 — Community-wide protection against seasonal influenza can be achieved by immunizing only those children between 3 and 15 years of age, according to a study published March 10 in the Journal of the American Medical Association.
“Influenza is a major cause of morbidity and mortality, resulting in an estimated 200,000 hospitalizations and 36,000 deaths annually in the United States alone,” write Mark Loeb, MD, MSc, from McMaster University, Hamilton, Ontario, Canada, and colleagues. “Children and adolescents appear to play an important role in the transmission of influenza. Selectively vaccinating youngsters against influenza may interrupt virus transmission and protect those not immunized.”
To overcome the difficulty of randomizing whole communities in most settings, researchers chose to work in the tightly knit, rural western Canadian colonies of the Hutterites, members of the Anabaptist faith. The December 2008 through June 2009 trial enrolled 947 children aged between 36 months and 15 years, the ages during which they attend school. Children in control colonies received a hepatitis A vaccine, selected as the control, instead of the influenza vaccine; other residents of the 49 colonies received neither vaccine.
In colonies designated for the influenza vaccine, an average of 83% (range, 53% – 100%) of healthy children (502 total) received the vaccine. Similarly, 79% (range, 50% – 100%) of well children (445 total) received the hepatitis A vaccine in colonies chosen to serve as control groups. Contraction of influenza was confirmed by the reverse transcriptase polymerase chain reaction assay.
The results showed, among other findings, a higher rate of influenza cases in control group communities than in colonies with children vaccinated against influenza:
- 3.1% (39/1271) of nonrecipients contracted influenza in colonies whose children received the influenza vaccine.
- 7.6% (80/1055) of nonrecipients contracted the disease in colonies in which children were vaccinated for hepatitis A.
- The level of protective effectiveness for nonrecipients was 61% (95% confidence interval [CI], 8% – 83%; P = .03).
Among all subjects, including those who received no vaccine and those who did, 4.5% (80/1773) of those in influenza vaccine colonies contracted influenza. In hepatitis A colonies, 10.6% (159/1500) had confirmed influenza. Therefore, the overall protectiveness rate was 59% (95% CI, 5% – 82%; P = .04). Researchers observed no adverse events in those who were vaccinated.
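The protectiveness figures follow from the attack rates: effectiveness is one minus the ratio of the attack rate in influenza-vaccine colonies to the attack rate in control colonies. A crude, unadjusted recalculation from the raw counts lands close to, though not exactly on, the published estimates, which are adjusted:

```python
# Unadjusted protective effectiveness from the raw counts reported above.
# The published 61% (indirect) and 59% (overall) figures come from the
# study's adjusted analysis, so these crude values differ slightly.

def effectiveness(cases_vax, n_vax, cases_ctrl, n_ctrl):
    """1 - (attack rate, influenza-vaccine colonies) /
           (attack rate, hepatitis A control colonies)."""
    return 1 - (cases_vax / n_vax) / (cases_ctrl / n_ctrl)

indirect = effectiveness(39, 1271, 80, 1055)   # nonrecipients only
overall = effectiveness(80, 1773, 159, 1500)   # all colony members

print(f"indirect protection: {indirect:.0%}")  # about 60%, unadjusted
print(f"overall protection:  {overall:.0%}")   # about 57%, unadjusted
```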
“The reduction in cases of influenza appeared to be primarily due to the prevention of outbreaks, with about half as many observed in colonies receiving the influenza vaccine as in those receiving the hepatitis A vaccine,” the researchers write.
Limitations of the study included a lack of immunization for those between ages 2 and 3 years, which may have led to underestimating the protective effect. That 3 of the 49 randomized colonies dropped out of the study before completion is another stated limitation.
The investigators concluded that their findings provide experimental evidence supporting selective immunization of school-age children to curb the spread of influenza.
“Particularly, if there are constraints in quantity and delivery of vaccine, it may be advantageous to selectively immunize children in order to reduce community transmission of influenza,” the study authors write.
The fact that the study was conducted in enclosed colonies raises questions about its potential implications in the wider community, according to Greg Evans, PhD, director of the Institute for Biosecurity, Saint Louis University School of Public Health, Missouri. Still, the results merit further investigation.
“It certainly would indicate to researchers that it is worth investigating further in communities that are less closed,” Dr. Evans said in an interview with Medscape Infectious Diseases.
Dr. Evans explained that herd immunity (the indirect protection of unvaccinated individuals that results when enough of a population is immunized) typically requires vaccinating 80% of a population; he applauded this study for demonstrating that a lower vaccination rate can also produce herd immunity in certain circumstances. He noted that immunizing just children and teenagers is a much more efficient method of protecting a community.
“It is easier to get to the children and the adolescents because they are in school,” Dr. Evans said. “Trying to get to adults that are working, or older people, or people who do not believe in vaccination, is much more difficult than to introduce it into the schools.”
The Canadian Institutes of Health Research and the National Institute of Allergy and Infectious Diseases supported the study. Sanofi Pasteur donated vaccines used for the study but provided no funding. The study authors have disclosed no relevant financial relationships.
Authors and Disclosures
Nancy Fowler Larson
Nancy Fowler Larson is a freelance writer for Medscape.
Medscape.com, by Pauline Anderson, March 11, 2010 — More patients with a migraine with aura were pain free after being treated with a handheld single-pulse transcranial magnetic stimulation (sTMS) device than with a sham device, and more of them remained free of pain for up to 2 days, a new randomized, double-blind study finds.
The study also showed that using preventive medication resulted in the highest pain-free response.
The study adds to animal research suggesting that the electrical current generated by TMS turns off the cortical spreading depression thought to be involved in migraine with aura, said lead author Richard B. Lipton, MD, professor and vice chair of the Department of Neurology at Albert Einstein College of Medicine, New York City.
The spots of light and zigzag lines characteristic of a migraine aura are believed to be associated with a wave of excitation that spreads over the cortical mantle, with the ensuing graying out of vision associated with a wave of inhibition of nerve cell activity, he explained.
“The way these pieces fit together is that cortical spreading depression causes migraine aura; migraine aura causes migraine pain; TMS turns off migraine in humans and turns off cortical spreading depression in experimental animals,” Dr. Lipton said. “So we assume, but have not proven, that TMS works in humans by turning off cortical spreading depression.”
The trial was published online March 4 and will appear in the April issue of Lancet Neurology. It was funded by Neuralieve, manufacturer of the sTMS device. Dr. Lipton reported that he has received a clinical research grant from and holds stock options in Neuralieve and has consulted for or undertaken research funded by other manufacturers of drugs and devices for migraine.
Real vs Sham Device
The study enrolled patients from 16 centers in the United States. Eligible subjects were aged 18 to 70 years and had 1 to 8 migraines a month with aura preceding migraine for at least 30% of episodes, followed by moderate to severe headache in 90% of attacks.
Both the treatment and sham devices weigh 1.54 kg and are 32.5 cm long — about the size of a hair dryer, said Dr. Lipton. Both make a clicking noise and vibrate, although only the TMS device delivers a magnetic field.
“The click and vibration was intended to mask the sensation associated with real TMS,” said Dr. Lipton.
During the initial 1-month lead-in phase, patients learned to use an electronic diary. For the second treatment phase, researchers randomly allocated 201 patients to receive the active sTMS or the sham stimulation.
Patients were taught to apply the device to the occiput, just below the occipital bone, and to administer 2 pulses about 30 seconds apart. They were to begin treatment as soon as possible and within 1 hour of aura onset, and they could treat up to 3 attacks during 3 months.
A primary outcome was being pain free at 2 hours after treatment for the first attack. Another primary outcome was the presence of photophobia, nausea, or phonophobia at 2 hours after treatment. For this outcome, the researchers used a noninferiority comparison, partly because the rates of these symptoms might be too low to generate the power to indicate a significant difference.
Secondary outcomes were mild or no pain at 2 hours, sustained pain-free response at 24 hours and 48 hours, and need for rescue drugs during an attack.
Patients could continue to use their usual medical treatments. Rescue drugs were permitted 2 hours after treatment.
Of the original population, 164 patients treated at least 1 aura episode — 82 in the TMS group and 82 in the sham group.
In the TMS group, 39% achieved a pain-free response 2 hours after treatment for the first attack compared with 22% in the sham stimulation group. According to the study authors, this represents a “therapeutic gain” of 17% (95% confidence interval, 3% – 31%; P = .0179).
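The reported "therapeutic gain" is simply the difference between the two response rates. A simple Wald approximation for the confidence interval (an assumption here; the article does not state the paper's exact method) lands close to the published 3%–31%:

```python
from math import sqrt

n_tms = n_sham = 82          # patients treating at least one attack per arm
p_tms, p_sham = 0.39, 0.22   # pain-free at 2 hours, as reported

# "Therapeutic gain" = absolute difference in response rates
gain = p_tms - p_sham        # ~0.17 -> 17%

# Wald approximation for the 95% CI of a difference of proportions
# (a sketch; the trial's own method may differ slightly).
se = sqrt(p_tms * (1 - p_tms) / n_tms + p_sham * (1 - p_sham) / n_sham)
lo, hi = gain - 1.96 * se, gain + 1.96 * se
print(f"gain {gain:.0%}, 95% CI {lo:.0%} to {hi:.0%}")
```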
As well, more patients in the treatment group sustained their pain-free status to 24 hours (29% vs 16% for the sham treatment) and to 48 hours (27% vs 13%).
The study also showed that the active treatment was not inferior 2 hours after treatment for associated symptoms of photophobia (69% for treatment vs 75% for sham), phonophobia (51% vs 62%), and nausea (37% vs 39%).
When baseline pain was moderate or severe, treatment with TMS was associated with a significant reduction of every migraine-associated symptom. When baseline pain was absent or mild, there was no difference between sham and TMS treatment in relief of migraine-associated symptoms.
“That was a somewhat unexpected result,” said Dr. Lipton. “Triptans, the most common migraine medication, work best early, while pain is still mild, but for TMS, it appears that it works best if you treat a migraine when pain is moderate and not mild.”
Some secondary outcomes, including use of rescue drugs and consistency of pain relief response, did not differ between the groups.
For the subgroup of patients who were using preventive drugs such as β-blockers, the incidence of pain at 2 hours was much lower in patients using the TMS device compared with those using the sham device — 65% vs 97% (P = .0014). The absolute risk reduction was 32% in those using preventive treatment compared with 8% for those not using preventive treatments.
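The 32% figure is the absolute difference between the two subgroup rates; the number needed to treat derived from it below is an illustrative calculation, not a figure reported in the study:

```python
# Incidence of pain at 2 hours in the preventive-medication subgroup,
# as reported: 65% with the TMS device vs 97% with the sham device.
pain_tms, pain_sham = 0.65, 0.97

# Absolute risk reduction = difference in event rates
# (the "event" here is still having pain at 2 hours).
arr = pain_sham - pain_tms   # ~0.32 -> the reported 32%

# Number needed to treat: patients treated per additional pain-free
# response (derived for illustration; not reported in the paper).
nnt = 1 / arr                # ~3.1
print(f"ARR {arr:.0%}, NNT {nnt:.1f}")
```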
“We found that the people who were already on preventive medication got better results than those not on preventive medications,” said Dr. Lipton. “It makes sense to me because preventive medications reduce brain excitability and might make it easier for the fluctuating magnetic field to turn the migraine off.” He stressed, though, that this was only seen in a small subgroup of patients.
The TMS device was well tolerated. The number of adverse events was low and comparable to that of the sham group. As well, patients rated the device an average 8 of 10 for user-friendliness.
Patients in both groups were about equally likely to guess that they had received the active treatment. “Ensuring blinding was maintained was very important,” said Dr. Lipton. “In the study itself, 80% of people in both groups thought they were getting the real treatment. I was very happy with that result because it means we successfully blinded the study.”
Optimal dosing and timing need to be further studied, he said.
There is preliminary evidence that the TMS might also reduce pain in migraine without aura, said Dr. Lipton. Such a migraine has a prodromal phase during which there are changes in mood or behavior and difficulty concentrating.
“The thought is that the mechanism of the prodrome may be cortical spreading depression in parts of the brain that have to do with thinking rather than in parts of the brain that have to do with seeing and feeling,” he noted.
Major Step Forward
In a comment accompanying the article, Hans-Christoph Diener, MD, from the Department of Neurology and Headache Center at the University Hospital Essen, University Duisburg-Essen, Germany, said the use of TMS could be a “major step forward” in the treatment of migraine with aura.
This is particularly so for patients in whom treatments such as triptans are ineffective, poorly tolerated, or, in the case of patients with vascular diseases or pregnant women, contraindicated, he said.
The study results are important not only because they show that TMS is effective for pain relief in patients with migraine with aura but also because they support the theory that TMS modulates cortical spreading depression, said Dr. Diener.
The study was well performed in that it followed the International Headache Society recommendation that the 2-hour pain-free rate be used as the primary end point in migraine trials. The study authors were also “well advised,” he writes, to use a noninferiority approach to test the device’s efficacy on accompanying symptoms of nausea, photophobia, and phonophobia because some patients may not have these symptoms during the aura phase.
Dr. Diener agreed that the TMS approach might play a role in patients with migraine without aura.
However, he wrote, many research questions remain to be answered, for example, whether multipulse TMS may be even more effective than 2 pulses.
More research is also needed on the TMS device, especially because this trial failed to show efficacy in improvement from moderate or severe headache to mild or no headache, Dr. Diener noted.
The study was funded by Neuralieve, manufacturer of the sTMS device. Dr. Lipton reports that he has received a clinical research grant from and holds stock options in Neuralieve and has consulted for or undertaken research funded by other manufacturers of drugs and devices for migraine. Conflict of interest information for the coauthors appears in the original paper. Dr. Diener reports he has received no funding from Neuralieve. He also reports having no ownership interest and does not own stocks in any pharmaceutical company. He has received honoraria for participation in clinical trials, contribution to advisory boards, or oral presentations from pharmaceutical companies. Full disclosure information appears in the comment.
Lancet Neurol. Published online March 4, 2010.
Authors and Disclosures
Pauline Anderson is a freelance writer for Medscape.
Personalized medicine: Leroy Hood, founder of the Institute for Systems Biology, in Seattle, has a vision of the future of medicine that he calls the “P4” approach.
Credit: Institute for Systems Biology
MIT Technology Review, March 11, 2010, by Emily Singer — Genomics pioneer Leroy Hood says a coming revolution in medicine will bring enormous new opportunities. Leroy Hood has been at the center of a number of paradigm shifts in biology. He helped to invent the first automated DNA sequencing machine in the 1980s, along with several other technologies that have changed the face of molecular biology. And in 2000, he founded the Institute for Systems Biology, a multidisciplinary institute in Seattle dedicated to examining the interactions between biological information at many different levels, and to advancing a new perspective for studying biology. The next revolution he plans to help shape is in medicine, using new technologies and new knowledge in biology and informatics to make its practice more predictive, preventive, and personalized.
Hood says that with each of the major transitions he’s been a part of, he has faced skepticism. The human genome project, for example, had many naysayers. But he says the best way to overcome doubts is with results. To that end, Hood has founded a startup called Integrated Diagnostics, which is developing cheap diagnostics that could be used to detect diseases at earlier, more treatable stages. He has also developed a partnership between the Institute for Systems Biology and Ohio State Medical School, where he hopes to show how combining existing medical and genomics technologies can affect the practice of health care today.
Hood contends that digitizing medical records–the health-care industry’s major push at the moment–is just one small part of the informatics overhaul the field needs to undergo. And pharmacogenomics–the practice of using an individual’s genetic makeup to choose drugs –provides only a limited example of the potential power of personalized medicine.
TR: How do you see the future of personalized medicine?
LH: I think personalized medicine is too narrow a view of what’s coming. I think we’ll see a shift from reactive medicine to proactive medicine. I define it as “P4” medicine–powerfully predictive, personalized, preventative–meaning we’ll shift the focus to wellness–and participatory. That means persuading the various constituencies that this medicine is real and it’s here. Physicians will have to learn a medicine they didn’t learn in medical school.
TR: What new technologies will drive the revolution in medicine?
LH: Individual genomes will become a standard of medical records in 10 years or so, and we will have the power to make inferences [about an individual’s health] when combined with phenotypic information. Then we can begin to plan strategies for individual health care in ways we have never done before.
Nanotechnology approaches to protein measurement–such as measuring 2,500 proteins from a drop of blood–will also be important. We want to develop tests to assess 50 organ-specific proteins from 50 organs as a way of interrogating health rather than disease.
The third technology that is going to be transformational is the ability to get detailed analysis from a single cell. We can analyze transcriptomes and RNAomes, proteomes and metabolomes [the collection of transcribed genes or messenger RNA, total RNA, proteins and metabolites, respectively, in the cell]. That information will reveal quantized cellular states that will say lots about normal mechanisms and disease mechanisms. For example, we are doing an experiment now where we take 1,000 cells from glioblastomas [a type of brain tumor] and select transcripts from each of those cells. We’re discovering interesting new things about what constitutes a tumor.
The final driver is going to be what I generally call computational and mathematical tools, the ability to deal with data dimensionality that is utterly staggering. If we have patients in 10 years with billions of data points, being able to compare that with individual genotype-phenotype correlations will give us deep and fundamental new insights into predictive medicine. But the challenge is, where will we get the cycles to make those computations and where will we get storage for all this data?
TR: So IT has a major role to play in personalized medicine?
LH: Medicine is going to become an information science. The whole health-care system requires a level of IT that goes beyond mere digitization of medical records, which is what most people are talking about now. In 10 years or so, we may have billions of data points on each individual, and the real challenge will be to develop information technology that can reduce that to real hypotheses about that individual.
TR: Will there be consequences beyond medicine?
LH: I think the P4 medicine revolution has two enormous societal consequences. It will absolutely transform the business plans of every sector of health care. Which will adapt and which will become dinosaurs? That’s an interesting question, but it will mean enormous opportunities for companies.
I also think it will lead to digitization of medicine, the ability to get relevant data on a patient from a single molecule, a single cell. I think this digitization in the long run will have exactly the same consequences that digitization has had for information technology. In time, the costs of health care will drop to the point where we can export it to the developing world. That concept, which was utterly inconceivable a few years ago, is an exciting one.
TR: What will be the challenges in implementing this vision of medicine?
LH: I think the biggest challenges will be societal acceptance of the revolution. We are putting together something we call the P4 Medical Institute. The idea is to bring in industrial partners as part of this consortium to help us transfer P4 medicine to the patient population at Ohio State University, which is both the payer and provider for its employees. We plan to announce further details of this project in two or three months.
How companies develop new products and how the companies themselves develop in the process
MIT Technology Review, March/April 2010, by Matt Mahoney — The companies that the editors of Technology Review selected for the TR50 all have strong records of innovation. But how does the innovation process at a startup like Twitter compare with that at IBM? In a series of articles in the 1970s, including a 1978 contribution to TR, Harvard business professor William J. Abernathy and MIT professor of management and engineering James M. Utterback posed this basic question:
How does a company’s innovation–and its response to innovative ideas–change as the company grows and matures?
Abernathy and Utterback created a model, still in use, that described the life cycle of industrial innovation. They began with two extreme cases to define the limits of their “spectrum of innovators”:
Past studies of innovation imply that any innovating unit sees most of its innovations as new products. But that observation masks an essential difference: what is a product innovation by a small, technology-based unit is often the process equipment adopted by a large unit to improve its high-volume production of a standard product.
The authors found that small companies or groups are most often the source of radical product innovations.
New products which require reorientation of corporate goals or production facilities tend to originate outside organizations devoted to a “specific” production system; or, if originated within, to be rejected by them.
A more fluid pattern of product change is associated with the identification of an emerging need or a new way to meet an existing need; it is an entrepreneurial act. … It is reasonable that the diversity and uncertainty of performance requirements for new products give an advantage in their innovation to small, adaptable organizations with flexible technical approaches and good external communications, and historical evidence supports that hypothesis.
To be sure, radical innovations generate excitement and attract attention, but these are merely the beginning of the story for products that succeed in the marketplace.
One distinctive pattern of technological innovation is evident in the case of established, high-volume products such as incandescent light bulbs, paper, steel, standard chemicals, and internal-combustion engines. … In all these examples, major systems innovations have been followed by countless minor product and systems improvements, and the latter account for more than half of the total ultimate economic gain due to their much greater number.
Of course, the two extreme cases are just that, and companies like the ones profiled in this issue fall at all places on the spectrum. In fact, the authors argue that successful companies are likely to move from one end to the other in their lifetime. The histories of two very different industries illustrate the common trajectory.
Two types of enterprise can be identified in this early period of the new [semiconductor] industry–established units that came into semiconductors from vested positions in vacuum tube markets, and new entries such as Fairchild Semiconductor, I.B.M., and Texas Instruments, Inc. The established units responded to competition from the newcomers by emphasizing process innovations. Meanwhile, the latter sought entry and strength through product innovation. … Since 1968, however, the basis of competition in the industry has changed; as costs and productivity have become more important, the rate of major product innovation has decreased, and effective process innovation has become an important factor … .
Like the transistor in the electronics industry, [Douglas Aircraft’s] DC-3 stands out as a major change in the aircraft and airlines industries. … Just as the transistor put the electronics industry on a new plateau, so the DC-3 changed the character of innovation in the aircraft industry for the next 15 years. No major innovations were introduced into commercial aircraft design from 1936 until new jet-powered aircraft appeared in the 1950s. Instead, there were simply many refinements to the DC-3 concept–stretching the design and adding appointments; and during the period of these incremental changes airline operating cost per passenger-mile dropped an additional 50 percent.*
The way companies manage this transition from the initial “fluid” phase to the later “specific” stage is vitally important.
*For a review of Boeing’s new 787 Dreamliner, see “Reinventing the Commercial Jet” in TR.