The New York Times, August 26, 2008 – Stanford University, concerned about the influence drug companies may have on medical education, is expected to announce Tuesday that it will severely restrict industry financing of doctors’ continuing education at its medical school.

Nearly all doctors in the country must take annual refresher courses that drug makers have long paid for. While the industry says its money is intended solely to keep doctors up to date, critics charge that companies agree to support only classes that promote their products.

On Tuesday, Stanford plans to announce that it will no longer let drug and device companies specify which courses they wish to finance. Instead, companies will be asked to contribute only to a schoolwide pool of money that can be used for any class, even ones that never mention a company’s products.

With its approach, Stanford becomes the sixth major medical school — including those at the universities of Massachusetts, Pittsburgh, Colorado, Kansas and California, Davis — to form a schoolwide pool for industry contributions to medical education, according to the Prescription Project, a nonprofit organization that largely opposes industry financing of medical education. The Memorial Sloan-Kettering Cancer Center, meanwhile, has banned all industry support for its doctor classes.

Dr. David Korn, chief scientific officer of the Association of American Medical Colleges, said Stanford’s new policy was “an extremely important step forward.” The association recommended in June that medical schools pool contributions from companies as a means of shielding teachers from commercial influences.

Ken Johnson, a spokesman for the Pharmaceutical Research and Manufacturers of America, said Monday that “America’s pharmaceutical research companies have taken positive steps to help ensure they provide nothing but accurate and balanced information to health care providers.”

Dr. Philip A. Pizzo, dean of Stanford’s School of Medicine, said in an interview that the school wanted to take a firm stand on the issue, even if it meant that drug and device companies might no longer contribute to the educational effort if they could not specify which classes they wanted to support.

“I want to make sure we’re not marketing for industry or being influenced by their marketing,” Dr. Pizzo said.

The policy comes in the wake of growing scrutiny of industry financing of doctor education. In April 2007, Senator Charles E. Grassley, Republican of Iowa, issued a report that documented how drug makers used the classes to increase sales of their latest products.

In an e-mail statement on Monday, Senator Grassley said, “Reforms based on transparency can foster accountability and build confidence in medical education and, in turn, the practice of medicine.”

Since Senator Grassley began his investigation, a growing number of drug makers have begun to make public their lists of educational grant recipients, and Pfizer recently announced that it would no longer directly support commercial medical education companies, which deliver many of the classes that doctors attend and may be more susceptible to industry influence than ones based at medical schools.

Doctors have grown accustomed to taking educational classes free — often with a lunch included. Separating commercial influences from doctor education might require doctors to pay their own way, which some doctors have said they would resist.

Dr. Murray Kopelow, chief executive of the Accreditation Council for Continuing Medical Education, said that Stanford’s new policy was part of a growing push in medical education to further separate crucial medical information from marketing messages.

“It’s a good plan, and it’s a big deal that a place like Stanford has adopted it,” Dr. Kopelow said. “When this is all over, medical education will not be the same as what it’s been.”

By Judith K. Jones, MD, PhD, August 18, 2008, Medscape Pharmacists – When I was in charge of the postmarketing drug safety program at the US Food and Drug Administration (FDA) in the early 1980s, 1 particular drug report became etched in my memory.

“I think we are killing the babies,” a physician told me over the phone. The caller worked in a neonatal intensive care unit that housed very early neonates, babies who previously might not have survived but were now being saved with intensive care treatment.

Nevertheless, this physician worried that some of the interventions might be doing more harm than good. He wondered if the neonates’ tiny size might increase their risk of toxicity from benzyl alcohol, an antibacterial preservative found in the flushing solution used for arterial lines. When he calculated the daily dose of this additive relative to neonatal weight, he found that it exceeded toxic levels. He even cited several deaths that might have been related to benzyl alcohol toxicity.

I asked the physician to send the case details to the FDA. He also presented his findings at a scientific meeting the following week to see if others had noted the same potential problem. Neonatologists from at least 1 other center returned to their unit and concluded that this adverse event might be occurring there as well. In the meantime, the FDA examined the extent and distribution of those multidose vials of heparinized bacteriostatic sodium chloride and water and, within a relatively short time, promulgated warnings about the risk and finally withdrew them from the market. As the topic was later examined in more detail, it became apparent that a number of neonates may have experienced the toxic effects of this preservative in this unique overdose situation.[1,2]

That case study clearly shows the importance of reporting adverse drug events (ADEs), and it demonstrates how 1 alert healthcare provider can and did make a difference.

In fact, individual ADE reports to the FDA and to the manufacturer can make a significant difference in the safety of a drug after it has been approved for use by a diverse population. These reports represent the most expedient method of identifying possible new and serious ADEs. They can be readily evaluated by officers at the FDA and may be acted on to update the product’s safety status through various actions described below.

In contrast to some other countries, most notably Sweden,[3] clinicians in the United States are not required to report ADEs. As a result, the reporting rate is relatively low, partly due to a lack of information on what and how to report. Previous articles in this Medscape series have outlined methods for recognizing ADEs and tools for reporting ADEs, in an effort to improve reporting. This article addresses another reason that may contribute to low reporting: lack of understanding about the importance of individual reports.

ADE reports can signal important safety issues. Assessing these reports can lead to changes in how a drug is used or advertised, and it can even lead to removal from the market. Another example involves the acne medication isotretinoin (Accutane). After it was approved in the early 1980s, this drug was found to be associated with birth defects of the central nervous system (microcephaly or hydrocephalus) and cardiovascular system (anomalies of the great vessels). Microtia, or absence of the external ears, was also noted in a majority of cases.[4,5]

The question of drug-related birth defects is a particularly difficult one, and clinicians must be especially observant and report suspected ADEs to help determine possible signals of new adverse effects. Of all near- or full-term births, 3% to 5% are associated with a major birth defect; however, extremely low frequencies of any particular defect make epidemiologic studies in large populations a challenge. For example, less than 1 in 1000 live births is associated with gastroschisis, tetralogy of Fallot, or transposition of the great vessels. Thus, reports by alert clinicians often are the most efficient method of identifying potential defects at an early stage. In the case of isotretinoin, spontaneous reports early in its marketing served as the basis for rapid introduction of specific exposure registries. In addition, risk management programs were implemented to prevent the use of the drug in pregnancy.[6]

How is a clinical observation translated into useful new information on a drug? The Figure below shows the flow of information that may ultimately result in changes to a drug’s label information. This, in turn, may affect how the drug is advertised, packaged, and in some cases, formulated or marketed. It is important to emphasize that manufacturers are required by law to report all events associated with their product to the FDA, regardless of whether they consider them causally related. If the events are new (ie, not in the product label) and serious (defined as resulting in death, hospitalization, prolongation of hospitalization or illness, or birth defects), they must be reported within 15 days from the time the company becomes aware of the event. This also applies to literature reports — when a manufacturer identifies ADEs through routine literature searches, it must report them in the same manner. Reports that are neither new nor serious are provided to the FDA in periodic reports (every 6 months in the first 3 years after marketing, then yearly).

Figure 1.
The flow of information and actions on a suspected adverse drug reaction (SADR) report

The FDA’s regulatory requirements were set up with the specific understanding that a drug is approved on the basis of at least 2 randomized controlled clinical trials that show it to be effective. The safety of an approved drug is examined extensively in preclinical studies (in vitro and in animal studies, particularly to determine the potential risks of cancer and birth defects) and in all the human clinical trials up to the time of approval. However, the FDA recognizes that even when large clinical trials (ie, > 10,000 patients) are conducted for approval, not every safety issue will be identified. This occurs for several reasons: (1) the subjects evaluated in the trials are generally healthier and take fewer other drugs than the patients who will ultimately use the drug; (2) the drug was not tested in special populations who may need it, such as pregnant women, children, or the frail elderly; and (3) many adverse reactions of serious public health concern occur very infrequently. For example, hepatic failure occurs in roughly 1 in 5000 to 1 in 20,000 people. However, when a product is used by millions of persons, even this rare occurrence can translate to a sizable number of cases. There is simply insufficient statistical power in the clinical trial data to expect that these rare effects will be detected.

Researchers use the “rule of 3” to derive a rough estimate of the power to detect events.[7,8] If an ADE occurs at a rate of 1 in 5000, then approximately 15,000 patients (3 times 5000) must be evaluated to have a 95% chance of observing at least 1 case. Therefore, many thousands more patients would need to be studied to identify sufficient cases for analysis.
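To make that arithmetic concrete, here is a minimal Python sketch (my own illustration, not part of the original article; it assumes independent patients and a constant true event rate, and the function name is hypothetical):

    # Rule of 3: to have ~95% probability of observing at least 1 case of an
    # adverse event with true rate p, roughly 3/p patients must be observed.

    def prob_at_least_one_case(rate, n_patients):
        """Probability of observing >= 1 event among n independent patients."""
        return 1 - (1 - rate) ** n_patients

    rate = 1 / 5000      # the example above: an ADE occurring in 1 in 5000 patients
    n = 3 * 5000         # rule-of-3 sample size: 15,000 patients

    print(f"P(at least 1 case in {n} patients) = {prob_at_least_one_case(rate, n):.3f}")
    # prints 0.950 -- matching the ~95% chance quoted above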

Federal food and drug laws and the resulting regulations are based on the knowledge that new ADEs will almost surely appear after approval, either in special populations that were not exposed in the clinical trials or because the reactions are so rare that they can be detected only after enough individuals have been exposed. This fact underscores the importance of the individual clinician, who stands at the frontier of understanding a new product’s spectrum of effects.

If the clinician includes “possible ADE” in the differential diagnosis for any new event, he or she increases the likelihood of early discovery of these new and rare events once they are reported. In some cases, only a few events can serve to signal a problem and may result in action to update the information on the drug — actions that potentially can save lives.

This activity is supported by an independent educational grant from PhRMA.

References

1. Gershanik J, Boecler B, Ensley H, McCloskey S, George W. The gasping syndrome and benzyl alcohol poisoning. N Engl J Med. 1982;307:1384-1388. Abstract
2. Hiller JL, Benda GI, Rahatzad M, et al. Benzyl alcohol toxicity: impact on mortality and intraventricular hemorrhage among very low birth weight infants. Pediatrics. 1986;77:500-506. Abstract
3. Wiholm BE, Westerholm B. Drug utilization and morbidity statistics for the evaluation of drug safety in Sweden. Acta Med Scand Suppl. 1984;683:107-117. Abstract
4. Stern RS, Rosa F, Baum C. Isotretinoin and pregnancy. J Am Acad Dermatol. 1984;10:851-854. Abstract
5. Lammer EJ, Chen DT, Hoar RM, et al. Retinoic acid embryopathy. N Engl J Med. 1985;313:837-841. Abstract
6. Goldberg JD, Golbus MS. The value of case reports in human teratology. Am J Obstet Gynecol. 1986;154:479-482. Abstract
7. Hanley JA, Lippman-Hand A. If nothing goes wrong, is everything all right? Interpreting zero numerators. JAMA. 1983;249:1743-1745. Abstract
8. Eypasch E, Lefering R, Kum CK, Troidl H. Probability of adverse events that have not yet occurred: a statistical reminder. BMJ. 1995;311:619-620. Abstract

Judith K. Jones, MD, PhD, Adjunct Professor of Public Health, University of Michigan Summer Health Program in Public Health, Ann Arbor, Michigan

Disclosure: Judith K. Jones, MD, PhD, has disclosed that she has received research grants for epidemiology research from Abbott Laboratories; Allergan; C.B. Fleet Co. Inc.; Cephalon; Genentech, Inc.; Hoffmann-La Roche; and Oxford Pharmaceuticals. Dr. Jones has also disclosed that she has received a research grant for risk management from Bayer Healthcare, and received consulting fees from Bristol-Myers Squibb, Hoffmann-La Roche, Otsuka Pharmaceutical, Quintiles, and sanofi-aventis.

By Andrew J. Vickers, PhD, August 18, 2008, Medscape, From WebMD – Tommy John, the renowned pitcher, once made 3 errors on a single play: He fumbled a grounder, threw wildly past first base, then bobbled the relay throw from right field and threw past the catcher. I was reminded of that story when peer-reviewing a paper describing a randomized trial. Near the start of the results section, the authors wrote something like, “Although there was no difference in baseline age between groups (P = .458), controls were significantly more likely to be male (P = .000).”

This goes one better than Tommy John, because there are actually 4 errors in this single sentence (or perhaps even 4.5).* The first error has been discussed in a previous article (please see Related Links): You cannot conclude “no difference” between groups on the basis of a high P value because failing to prove a difference is not the same as proving no difference.

Here are the other 3 errors:

  1. P values for baseline differences between randomized groups. P values are used to test a hypothesis — in this case, a null hypothesis that can be informally stated as: “There is no real difference between groups; any differences we see are due to chance alone.” But this is a randomized trial, so any differences between groups must be due to chance alone. In short, we are testing a null hypothesis that we know to be true. Nonetheless, reporting P values for baseline differences in randomized trials remains routine: When I recently refused a clinician’s request to calculate these P values for baseline differences, he sent me references to several recent papers published in high-profile journals to show that what I thought was wrong was actually quite common. Given that copying others is not necessarily the best path to statistical truth, I politely declined a second time.
  2. Inappropriate levels of precision. The first P value in our multierror sentence is reported to 3 significant figures (P = .458). What do the 5 and 8 tell us here? We are already way above statistical significance; a little bit more or less isn’t going to change our conclusions, so reporting the P value to a single significant figure (ie, P = .5) is fine. Inappropriate levels of precision are pretty ubiquitous in the scientific literature, perhaps because a very precise number sounds more "scientific." One of my favorite examples is a paper that reported a mean length of pregnancy of 32.833 weeks, suggesting that we want to know the time of conception to the nearest 10 minutes (0.001 of a week is about 10 minutes). This would require some rather close questioning of the pregnant couple.
  3. Reporting a P value of zero. No experimental result has a zero probability; even if I throw a billion unbiased coins I have a small, but definitely non-zero, chance of getting all heads. I once pointed this out in a peer review, only to have the authors reply that the statistical software had given them P = .000, so the value must be right. (A short sketch of how that rounding artifact arises follows this list.)
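As a concrete illustration of that last point, here is a minimal Python sketch (my own, with numbers chosen purely for illustration and not taken from the paper under review) showing how a tiny but non-zero probability ends up displayed as ".000" when software rounds to 3 decimal places:

    from fractions import Fraction

    # Exact probability that 50 fair coin tosses all land heads: tiny, but not zero.
    p_all_heads = Fraction(1, 2) ** 50
    print(p_all_heads > 0)            # True -- small, but definitely non-zero

    p = float(p_all_heads)            # about 8.9e-16
    print(f"P = {p:.3f}")             # prints "P = 0.000": a rounding artifact, not a real zero
    print(f"Better reported as P < .001 (actual value {p:.1e})")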

This gets to the heart of why I care about these errors even though they don’t make much difference to anything (why don’t I just ignore those unnecessary decimal places?). Many people seem to think that we statisticians spend most of our time doing calculations, but that is perhaps the least interesting thing that we do. Far more important is that we spend time looking at numbers and thinking through what they mean. If I see any number in a scientific report that is meaningless — a P value for baseline differences in a randomized trial, say, or a sixth significant figure — I know that the authors are not being careful about what they are doing; they are just pulling numbers from a computer print-out. And that doesn’t sound like science to me.

*Note: About that “half an error”: the authors tell us that “baseline” age was no different between groups. This was a trial on pain in which all patients were on study for the same period of time, so unless patients in different treatment groups grew old at different rates, there is no reason to tell us that it is “baseline” age that is being compared.

Andrew J. Vickers, PhD, Assistant Attending Research Methodologist, Memorial Sloan-Kettering Cancer Center, New York, NY

Disclosure: Andrew J. Vickers, PhD, has disclosed no relevant financial relationships.