Google would have had to fix a balkanized U.S. health-care system to make the service catch on.
MIT Technology Review, June 29, 2011, by David Talbot — At the end of this year, Google Health will flatline. The service never persuaded many people to import or analyze their health data, and experts say its untimely death is, in many ways, an extension of U.S. health-care providers’ failure to share data across institutions, or to make it easy for patients to obtain their own records.
Google’s free online service lets people upload, store, analyze, and share their health information. But there are hundreds of different health-care institutions in the U.S. that use different systems to record and store data, and many doctors don’t use electronic records at all, making the task of retrieving and updating data extremely difficult for the average person, says Isaac Kohane, who directs the informatics program at Children’s Hospital in Boston, and codirects Harvard Medical School’s Center for Biomedical Informatics.
For Google to make its service attractive, it would have had to solve this health IT mess, which is in the early stages of being addressed through recent national policy moves. These include 2009 federal stimulus incentives for doctors and hospitals to adopt electronic medical records, and for hospitals to share data with one another.
Kohane says it will be at least five years before data flows smoothly enough to make something like Google Health worthwhile. “Google is unwilling, for perfectly good business reasons, to engage in block-by-block market solutions to health-care institutions one by one,” Kohane says, “and expecting patients to actually do data entry is not a scalable and workable solution.”
Google did forge some partnerships—including one with the insurer Blue Cross Blue Shield—that let patients upload insurance billing and medical information into its service more easily. Even so, the user experience was uneven, as Technology Review described in 2009. Some patients, for example, would need to obtain copies of their records and then manually enter all the information.
Google announced on its blog late last week that Google Health would be canceled as of January 1, 2012. Users can retrieve their data for an additional year, but after that, the data will be deleted.
Other, similar services survive. Prominent among them is Microsoft’s HealthVault. Microsoft was quick to announce ways that Google Health users could move their data over to HealthVault. In a blog post spelling out this process, Sean Nolan, chief architect and general manager of Microsoft’s health solutions group, also said “the only way to fix health care is to consumerize it,” and added that “we’ll get there, and Google Health moved the ball forward.”
Some limited health data exchanges among hospitals and other health-care providers are cropping up around the United States, but right now, there’s no requirement for full sharing. “There is still no flow,” Kohane says. “There still has to be additional cultural shifts to actually make the information flow.”
Through 2019, up to $36 billion could be spent under the federal stimulus for adoption and meaningful use of electronic medical records. (The final federal outlay—taking into account expected savings and income from penalties for failure to adopt the records—is projected to be $19.6 billion.) “The investment has been committed, but a lot of the money has not been spent yet, and we will see in the next couple of years whether it will happen or not,” Kohane adds.
But Google isn’t waiting to find out.
“In order to make the much-needed environmental changes that the article below is concerned with, it might help to include additional statistics showing how citizens deprived at birth of an equal chance to fulfill their potential end up dragging society down, which over time affects all of us, including the corporate executives who avoid responsibility for their contribution to a toxic environment.
In an increasingly transparent planet, it is becoming easier to see that in the great scheme of things, we are one: if you hurt one, you hurt all.”
Joyce Hays, Target Health Inc.
The New York Times, June 28, 2011, by Nancy Folbre — In an imaginary world of equal opportunity we would all be free to choose our own economic future. In reality, many children in the United States are born to lose, suffering health disadvantages at birth that reduce their likelihood of economic success.
Epidemiologists and economists have long agreed that low birth weight is an important, albeit approximate, predictor of future health problems. A wealth of new economic research tracing individuals over time shows that it is also an approximate predictor of future earnings problems, with statistical effects almost as strong as those of children’s test scores.
Among other things, low birth weight increases the probability of suffering from attention deficit hyperactivity disorder and lowers the probability of graduating from high school.
In the current American Economic Review, Janet Currie of Princeton, a pioneer in this new area of research, summarizes recent findings and points out that children of black mothers who dropped out of high school are three times as likely as children of white college-educated mothers to suffer low birth weight.
Many of the mechanisms that underlie this inequality are linked to characteristics of the physical environment, such as exposure to environmental toxins.
For instance, carbon monoxide related to automobile emissions harms fetal health. Detailed statistical analysis of families in New Jersey shows that moving from an area with high levels of carbon monoxide to one with lower levels has an effect on birth weight larger than persuading a woman who was smoking 10 cigarettes a day during pregnancy to quit.
Another memorable illustration of carbon monoxide effects comes from a study of the impact of E-ZPass electronic technologies, which improve infant health by reducing auto emissions in neighborhoods close to highway toll booths.
Professor Currie’s research shows that black and Latino children are significantly more likely than white children to be born to mothers living in proximity to such hazards, supporting arguments long made by environmental justice advocates.
Less-educated mothers are less aware of such health risks and less able to mobilize the economic resources necessary to move to better neighborhoods. This helps explain results showing that improved educational opportunities for mothers improve infant health.
Another important policy implication is that stricter environmental regulation would benefit low-income children in particular. Professor Currie has taken part in research showing that reductions in the release of three toxicants (cadmium, toluene, and epichlorohydrin) from 1988 to 1999 account for a 3.9 percent reduction in infant mortality over that time.
Previous research by Kenneth Y. Chay of the University of California, Berkeley, and Michael Greenstone of the Massachusetts Institute of Technology has shown that the 1970 Clean Air Act reduced infant mortality.
Yet many children in the United States live, play or go to school in areas with dangerously poor air quality – a particularly serious problem in summer months when smog heats up.
Greater publicity for economic research on the impact of regulation might quiet critics of the Environmental Protection Agency, who often focus on its short-term costs rather than its long-term benefits. Would changing its name — perhaps to the Environmental Child-Protection Agency — help win over the family-values crowd?
Professor Currie herself tends to emphasize the pricing problem. As she put it: “Factories dump toxic releases into the atmosphere but don’t pay the cost of pollution. There would be less harm to the children who ingest the toxins if the factories had to bear the cost.”
Changes would happen even more quickly if the chief executives of these companies — and their children — had to bear the cost. But these adults are free to choose where to live and what to breathe. And their children are, for the most part, born to win.
Dr. Brien Smith discusses next steps with Douglas Davis, a patient at Spectrum Health Butterworth Hospital in Grand Rapids, Mich. Photo: Johnny Quirin/Spectrum Health
The New York Times, June 28, 2011, by Pauline W. Chen MD — One day during medical school, my classmates and I learned that one of the most well-liked doctors-in-training in the hospital had had a seizure while leading morning work rounds.
The sight of him writhing had caused the other doctors and nurses on the ward to panic. Some stood mute, frozen with fear. An intern, believing that the seizure arose from low blood sugar levels, took his half-eaten jelly doughnut and held it against the mouth of his seizing colleague. Others yelled to the ward secretary to “call a code,” and continued to do so even after another dozen doctors and nurses had already arrived on the floor.
The young doctor eventually recovered. But for many of the medical students and doctors who heard about the episode or were on the wards that day, the dread of that morning would linger long beyond our years of training. Epilepsy was, and remains, a frightening and mysterious malady.
For the last 20 years, Dr. Brien J. Smith has tried to change how doctors and patients view epilepsy. Earlier this year, Dr. Smith, chief of neurology at Spectrum Health in Michigan, became chairman of the Epilepsy Foundation. Being elected head of a national organization does not seem unusual for a doctor who is a well-recognized authority and advocate in his or her field. What is extraordinary is that Dr. Smith knows firsthand about the disease and what his patients experience: He learned he had epilepsy when he was in high school.
“Every day I see how off-base health care workers are with seizures and epilepsy,” Dr. Smith said recently. “There’s a lot of stigma attached, a lot of stereotypes regarding cognitive abilities and how seizures should look.”
I spoke to Dr. Smith and asked him about his advocacy work, his diagnosis and how being a patient has affected his interactions with patients and colleagues.
When were you diagnosed with epilepsy?
My epilepsy probably started at a young age. I remember waking up as a young child with weird dreams — a kaleidoscope view of the world. In my midteens I had these feelings of kind of a brain warp that would pass. Finally as a junior in high school, I had a seizure getting out of the car in the high school parking lot.
How did you decide to become an epilepsy specialist in medical school?
I liked surgery and I liked emergency medicine. But I realized I needed to find something that didn’t require procedural work or spur-of-the-moment cataclysmic decisions where the pressure is on and even if you have a little short-circuit, that could mean life or death to someone.
I also was interested in neurology and figured it was my calling in life.
How did your epilepsy affect your interaction with colleagues?
I never hid the diagnosis. But a few years after my training, I was asked to write a foreword for a book that was a collection of personal accounts of seizures. That was not an easy decision. I worried that by being so transparent, people might see me as handicapped, that they might view me differently.
The decision to write that foreword made me realize that I should be more of an advocate. I had noticed that among my colleagues, there were a lot of stereotypes of those with the disease, like “Oh, wow, look at that crazy person.” None of them were learning a lot about seizures in medical school and training, and they didn’t really understand how seizures could affect people.
What has happened with your epilepsy?
I was seizure-free for many years but had a major seizure on my way to my first major epilepsy meeting in 1992. On an M.R.I. afterward, the neuroradiologists found a slow-growing brain tumor that had probably caused the seizures all along but had never been seen, or had been missed, on early CAT scans.
I ended up getting the left temporal lobe of my brain removed, which leaves people with difficulty naming things for the first six to 12 months until the brain figures out how to get around the problem.
So there I was for a while, a neurologist who couldn’t name things.
How have your own experiences affected your interactions with colleagues?
It’s really helped me see that doctors need to be taught to understand seizures. It’s a disorder with over 40 different types or syndromes that can affect anyone. There are Supreme Court justices who have had a couple of seizures and function normally. There are adults who have had significant head trauma, strokes or brain tumors. And there are individuals for whom epilepsy is catastrophic — children who all of a sudden find themselves going down a path where there is a strong likelihood they will never have a normal life.
Did the experience change how you interact with patients?
After my surgery, I was a very different person, with very little motivation and energy. We had just rearranged the furniture upstairs and I needed time to re-equilibrate. But one rarely hears doctors talking about the transition after surgery and how it can go in many different directions because most physicians have no clue of the implications of the diseases or their treatments.
One problem I face is whether I should share my own story with patients. I don’t want patients to assume that just because Dr. Smith is doing great, everyone who has epilepsy surgery will as well. My job is to be as realistic as possible about outcomes and risks. With brain surgery, the possibility of a major complication can’t be excluded. This is the brain we’re talking about. Once you take a part of it out, you can’t bring it back.
Epileptic Seizures and Syndromes
Seizures happen when the electrical system of the brain malfunctions. Instead of discharging electrical energy in a controlled manner, the brain cells keep firing. The result may be a surge of energy through the brain, causing unconsciousness and contractions of the muscles.
If only part of the brain is affected, it may cloud awareness, block normal communication, and produce a variety of undirected, uncontrolled, unorganized movements.
Most seizures last only a minute or two, although confusion afterwards may last longer. An epilepsy syndrome is defined by a collection of similar factors, such as type of seizure, when they developed in life, and response to treatment.
The human brain is the source of human epilepsy. Although the symptoms of a seizure may affect any part of the body, the electrical events that produce the symptoms occur in the brain. The location of that event, the extent of its reach within the tissue of the brain, and how long it lasts all have profound effects.
There are many different types of seizures. People may experience just one type or more than one. The kind of seizure a person has depends on which part and how much of the brain is affected by the electrical disturbance that produces seizures. Experts divide seizures into generalized seizures (absence, atonic, tonic-clonic, myoclonic), partial (simple and complex) seizures, nonepileptic seizures and status epilepticus.
Classifying epilepsy by seizure type alone leaves out other important information about the patient and the episodes themselves. Classifying into syndromes takes a number of characteristics into account, including the type of seizure; typical EEG recordings; clinical features such as behavior during the seizure; the expected course of the disorder; precipitating features; expected response to treatment, and genetic factors.
Seizures are symptoms of abnormal brain function. With the exception of very young children and the elderly, the cause of the abnormal brain function is usually not identifiable. In about seven out of ten people with epilepsy, no cause can be found. Among the rest, the cause may be any one of a number of things that can make a difference in the way the brain works. Head injuries or lack of oxygen during birth may damage the delicate electrical system in the brain. Other causes include brain tumors, genetic conditions (such as tuberous sclerosis), lead poisoning, problems in development of the brain before birth, and infections like meningitis or encephalitis.
Some people who have epilepsy have no special seizure triggers, while others are able to recognize things in their lives that do affect their seizures. Keep in mind, however, that just because two events happen around the same time doesn’t mean that one is the cause of the other. Generally, the most frequent cause of an unexpected seizure is failure to take the medication as prescribed. That’s the most common trigger of all. Other factors include ingesting substances, hormone fluctuations, stress, sleep patterns and photosensitivity.
Photosensitivity and Seizures
Epilepsy affects more than three million Americans. For about 3 percent of them, exposure to flashing lights at certain intensities or to certain visual patterns can trigger seizures. This condition is known as photosensitive epilepsy.
Photosensitive epilepsy is more common in children and adolescents, especially those with generalized epilepsy, in particular juvenile myoclonic epilepsy. It becomes less frequent with age, with relatively few cases in the mid-twenties.
Many people are unaware that they are sensitive to flickering lights or to certain kinds of patterns until they have a seizure. They may never go on to develop epilepsy, which is characterized by recurrent spontaneous seizures, though a seizure may be triggered by certain photic conditions. Many individuals who are disturbed by light exposure do not develop seizures but experience other symptoms such as headache, nausea, dizziness and more. They do not have epilepsy.
Examples of Triggers
Seizures in photosensitive people may be triggered by exposure to television screens due to the flicker or rolling images, to computer monitors, to certain video games or TV broadcasts containing rapid flashes or alternating patterns of different colors, and to intense strobe lights like visual fire alarms.
Also, seizures may be triggered by natural light, such as sunlight, especially when shimmering off water, flickering through trees or through the slats of Venetian blinds.
Certain visual patterns, especially stripes of contrasting colors, may also cause seizures. People have wondered whether flashing lights on the outside top of buses or emergency vehicles may trigger seizures in people with photosensitive epilepsy.
Not all televisions, video games, computer monitors, and strobe lights trigger seizures, however. Even in predisposed individuals, many factors must combine to trigger the photosensitive reaction, including:
- frequency of the flash (that is, how quickly the light is flashing)
- contrast with background lighting
- distance between the viewer and the light source
- wavelength of the light
- whether a person’s eyes are open or closed
The frequency or speed of flashing light most likely to cause seizures varies from person to person. Generally, flashing lights in the range of 5 to 30 flashes per second (hertz) are the most likely to trigger seizures.
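For illustration only, the two numeric thresholds cited in this article, the 5-to-30 Hz band most likely to provoke seizures and the Epilepsy Foundation's recommendation that alarm strobes flash at under 2 Hz, can be encoded in a small helper. The function and constant names here are hypothetical, not from any standard or library:

```python
RISKY_BAND_HZ = (5.0, 30.0)   # flash rates most likely to trigger seizures
ALARM_LIMIT_HZ = 2.0          # recommended ceiling for visual fire alarms

def flash_risk(hz: float) -> str:
    """Classify a flash rate against the ranges quoted in the article."""
    if RISKY_BAND_HZ[0] <= hz <= RISKY_BAND_HZ[1]:
        return "high-risk band"
    if hz < ALARM_LIMIT_HZ:
        return "within alarm guideline"
    return "outside the quoted high-risk band"
```

So a 10 Hz strobe falls squarely in the high-risk band, while a 1 Hz alarm flash stays within the guideline; rates like 3 Hz sit between the two quoted thresholds.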
The likelihood of such conditions combining to trigger a seizure is small. However, to be safe, photosensitive individuals are advised to keep at a distance from TV screens and to place other lights in the surrounding area to lower the contrast between the brightness on the screen and the background. These conditions protect the viewer and are easy to obtain during TV viewing but not while playing video games or when randomly exposed to strong environmental lights. Therefore, other protective devices or strategies may be needed.
Check with your doctor if you are concerned about flashing lights triggering seizures. Chances are that your medical records will indicate how you responded to flashing lights during the electroencephalogram (EEG), a test done routinely in most people with epilepsy. During this test, sensors are attached to the patient’s scalp to monitor the electrical activity of the brain in various conditions, including light stimulation generated by a strobe positioned in front of the eyes. An abnormal response when the patient is exposed to various frequencies of flashing lights indicates the presence of photosensitivity. If you have not been diagnosed with epilepsy or have not had this type of test, ask your doctor about ordering one for you, or consult a local neurologist.
The same concerns may apply to relatives of individuals who are known to be photosensitive, such as siblings. Because the condition is genetic, it may affect other members of the same family. Finding out whether a family member is photosensitive is relevant, especially when those relatives are children or adolescents who intend to engage in activities that present risks, such as intensive video-game playing.
If you are diagnosed with photosensitive epilepsy, your doctor may prescribe medication and suggest that you:
- avoid exposure to certain kinds of flashing lights; and
- cover one eye and turn away from the direct light source when in the presence of flashing lights.
You may also wish to discuss with your doctor whether the following tips suggested by photosensitivity and epilepsy experts would be helpful to you.
Visual Fire Alarm Strobe Lights
Under the Americans with Disabilities Act, most workplaces and places serving the public, including theaters, restaurants, and recreation areas, are required to have fire alarms, which flash as well as ring so that people who cannot hear or cannot hear well will know that there is an emergency.
To reduce the likelihood of the strobe light triggering a seizure, the Epilepsy Foundation’s professional advisory board recommends that:
- the flash rate be kept under 2 hertz, with occasional breaks between flashes; and
- multiple flashing lights be placed at a distance from one another and synchronized to flash at the same time, to avoid increasing the total number of individual flashes.
While Watching Television:
- Watch television in a well-lit room to reduce the contrast between light from the set and light in the room.
- Reduce the brightness of the screen.
- Keep as far back from the screen as possible.
- Use the remote control to change channels on the TV so you won’t have to get too close to the set.
- Avoid watching for long periods of time.
- Wear polarized sunglasses while viewing television to reduce glare.
While Playing Video Games:
- Sit at least 2 feet from the screen in a well-lit room.
- Reduce the brightness of the screen.
- Do not let children play videogames if they are tired.
- Take frequent breaks from the games and look away from the screen every once in a while. Do not close and open eyes while looking at the screen – blinking may facilitate seizures in sensitive individuals.
- Cover one eye while playing, alternating which eye is covered at regular intervals.
- Turn the game off if strange or unusual feelings or body jerks develop.
While Using a Computer:
- Use a flicker-free monitor (LCD display or flat screen).
- Use a monitor glare guard.
- Wear non-glare glasses to reduce glare from the screen.
- Take frequent breaks from tasks involving the computer.
Exposure to Strong Environmental Lights
- Cover one eye (either one) with one hand until the stimulus is over. Closing both eyes or turning your eyes in another direction will not help.
University of California Riverside Neuroscientists’ Discovery Could Bring Relief to People with Epilepsy
Maxim Bazhenov and Giri Krishnan used a computational model to study epileptic seizures at the molecular level; the research could lead to novel therapeutics for seizure disorders.
RIVERSIDE, Calif., June 28, 2011 – Researchers at the University of California, Riverside have made a discovery in the lab that could help drug manufacturers develop new antiepileptic drugs and explore novel strategies for treating seizures associated with epilepsy – a disease affecting about 3 million Americans.
Neurons, the basic building blocks of the nervous system, are cells that transmit information by electrical and chemical signaling. During epileptic seizures, which generally last from a few seconds to minutes and terminate spontaneously, the concentrations of ions both inside the neuron and in the space outside it change due to abnormal ion flow to and from neurons through ion “channels” – tiny gateways embedded in the surface of the neuron.
Ordinarily, intracellular (inside the cell) sodium concentration is low compared to extracellular sodium (the reverse is true of potassium). During seizure, however, there is a buildup of intracellular sodium, with sodium ions moving into neurons from the extracellular space, and potassium ions doing the opposite.
To understand exactly how neurons behave during epileptic seizures, Maxim Bazhenov, an associate professor of cell biology and neuroscience, and Giri P. Krishnan, a postdoctoral researcher in his lab, developed and used realistic computer simulations in their analyses. They found that although the increase in intracellular sodium during a seizure is slow and progressive, it is this accumulation that ultimately terminates the seizure.
“According to our model, sodium concentration reaches a maximum just before the seizure terminates,” Bazhenov said. “After seizure initiation, this intracellular sodium buildup is required to terminate the seizure.”
The researchers’ computational model simulates the cortical network. (The cortex is the outer layer of the cerebrum of the mammalian brain. A sheet of neural tissue, it is often referred to as gray matter.) The model simulates neurons, connections between neurons, variable extracellular and intracellular concentrations for sodium and potassium ions and variable intracellular concentrations for chloride and calcium ions.
Bazhenov explained that conventional antiepileptic drugs are commonly designed to target various sodium channels in order to reduce their activity.
“These drugs essentially slow down the intracellular build-up of sodium, but this only prolongs seizure duration,” he said. “This is because seizure duration is affected by the rate of intracellular sodium accumulation – the slower this rate, the longer the seizure duration.”
According to Bazhenov, targeting the sodium channels is not the best approach for drugs to take. He explained that even drugs that increase the activity of the sodium channels (in order to reduce seizure duration) have an undesirable side effect: they make seizures more likely.
“The drugs ought to be targeting other ion channels, such as those responsible for the buildup of intracellular chloride,” he advises. “According to our model, restricting the chloride increase would lead to a faster termination of seizure and can even make seizures impossible.”
Bazhenov and Krishnan’s model also shows that the occurrence of seizures depends critically on the activity of ionic “pumps” – structures that are also embedded in the surface of neurons. These pumps help remove sodium and chloride ions from inside the neurons and critically influence ion concentrations in the brain.
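The dynamic Bazhenov describes, in which each cycle of seizure firing loads sodium into the cell until the sodium-dependent pump current outweighs the excitatory drive, can be caricatured in a few lines of Python. This is a toy sketch, not the authors' published model: the drive, pump gain, and clearance parameters are all invented for illustration.

```python
def simulate_seizure(na_per_spike=0.5, steps=1000):
    """Return the time step at which the toy 'seizure' terminates."""
    na = 10.0                     # intracellular [Na+], baseline (arbitrary units)
    drive = 5.0                   # excitatory "seizure" drive (invented)
    for t in range(steps):
        pump = 0.3 * (na - 10.0)  # Na+/K+ pump current grows with the Na+ load
        if drive - pump <= 0:     # pump outweighs the drive: seizure terminates
            return t
        na += na_per_spike        # each firing cycle moves Na+ into the cell
        na -= 0.01 * (na - 10.0)  # slow clearance back toward baseline
    return None                   # seizure never terminated within the window

# Slowing Na+ accumulation (as a sodium-channel blocker would) prolongs the seizure.
fast_buildup = simulate_seizure(na_per_spike=0.5)
slow_buildup = simulate_seizure(na_per_spike=0.25)
```

In this sketch, halving the per-cycle sodium load makes the seizure last more than twice as long, mirroring the article's point that drugs which merely slow sodium buildup prolong seizure duration rather than preventing seizures.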
Study results appear in the June 15 issue of The Journal of Neuroscience.
The research was supported by a grant to Bazhenov from the National Institutes of Health.
Reviewed by Joseph Sirven, M.D., Epilepsy Foundation Professional Advisory Board Chair-Elect
Epilepsy Awareness in the News!
Successful epilepsy surgery took place in New York City last week
Diagnosed at age 6, Daniel Jakubowitz has been under the care of a multidisciplinary team of leading neurologists and staff at the Comprehensive Epilepsy Center at Montefiore Hospital in New York, in partnership with Albert Einstein College of Medicine, for the past 20 years. Daniel also served as a volunteer for the Epilepsy Foundation of Long Island.
After a lifetime of seizures, Daniel underwent surgery 5 months ago to remove the part of the brain that was causing his seizures. The operation was a success and Daniel has been seizure-free ever since. Daniel’s care team includes: Solomon Moshe, M.D., Vice Chair, Department of Neurology, Director of Clinical Neurophysiology and Director of Child Neurology; Sheryl Haut, M.D., a neurologist specializing in adult care; Alexis Boro, M.D., Assistant Professor of Neurology; Alan Legatt, M.D., Ph.D., Director of the EEG Laboratory, Director of the Evoked Potential Laboratory, and Director of Intraoperative Neurophysiology at Montefiore Medical Center; and Patrick Lasala, M.D., a leader in the development of stereotactic neurosurgery.
Connecticut Governor Signs Legislation with Protections for People with Epilepsy
June 28, 2011 – The Epilepsy Foundation of Connecticut announced two weeks ago that, after a four-year legislative effort, the Patient Prescription Protection Act had finally passed the legislature and been signed into law by Connecticut Governor Dannel Malloy.
The bill is designed to protect people with epilepsy from having their medication switched without their consent. It requires pharmacists to notify and receive the consent of the patient and their physician before filling a prescription using a new drug manufacturer or distributor of the prescribed drug.
Epilepsy Foundation Executive Vice President Sandy Finucane said, “Informed consent is the critical issue here. People who are going from one epilepsy drug to another need to know in advance and be monitored by a physician during the change. Most do fine, but for some it’s a matter of life and death.”
The Connecticut affiliate expressed gratitude to the many advocates who have worked to support the bill over the past 4 years–writing letters, attending hearings and making phone calls. This session, Representatives Boukus, Ritter and Walker, as well as Senators Harp and Gerratana were strong supporters. In addition, former Senator Handley joined supporters of this legislation at the Public Health Committee hearing.
For the more than 60,000 people in Connecticut with epilepsy who rely on medication for controlling and/or reducing their seizures, this legislation is long overdue. Now people with epilepsy can be assured of a consistent supply of their medication and can live their lives without the fear of unexpected seizures due to a switch in their medication.
HEADING HOME: A herd in India in 2006. Rinderpest ended in India in the 1990s.
The New York Times, June 27, 2011, by Donald G. McNeil Jr — On Tuesday in a ceremony in Rome, the United Nations is officially declaring that for only the second time in history, a disease has been wiped off the face of the earth.
The disease is rinderpest.
Everyone has heard of smallpox. Very few have heard of the runner-up.
That’s because rinderpest is an epizootic, an animal disease. The name means “cattle plague” in German, and it is a relative of the measles virus that infects cloven-hoofed beasts, including cattle, buffaloes, large antelopes and deer, pigs and warthogs, even giraffes and wildebeests. The most virulent strains killed 95 percent of the herds they attacked.
But rinderpest is hardly irrelevant to humans. It has been blamed for speeding the fall of the Roman Empire, aiding the conquests of Genghis Khan and hindering those of Charlemagne, opening the way for the French and Russian Revolutions, and subjugating East Africa to colonization.
Any society dependent on cattle — or relatives like African zebu, Asian water buffaloes or Himalayan yaks — was vulnerable.
As meat and milk, cattle were and are both food and income to peasant farmers, as well as the source of calves to sell and manure for fields. Until recently, they were the tractors that dragged plows and the trucks that hauled crops to market. When herds die, their owners starve.
The long but little-known campaign to conquer rinderpest is a tribute to the skill and bravery of “big animal” veterinarians, who fought the disease in remote and sometimes war-torn areas — across arid stretches of Africa bigger than Europe, in the Arabian desert and on the Mongolian steppes.
“The role of veterinarians in protecting society is underappreciated,” said Dr. Juan Lubroth, chief veterinary officer of the Food and Agriculture Organization of the United Nations, at whose headquarters Tuesday’s ceremony is being held. “We do more than just take care of fleas, bathe mascots and vaccinate Pooch.”
The victory is also proof that the conquest of smallpox was not just an unrepeatable fluke, a golden medical moment that will never be seen again. Since it was declared eradicated in 1980, several other diseases — like polio, Guinea worm, river blindness, elephantiasis, measles and iodine deficiency — have frustrated intensive, costly efforts to do the same to them. The eradication of rinderpest shows what can be done when field commanders combine scientific advances and new tactics.
In 1998, a longtime leader of the effort, Sir Gordon R. Scott of the Center for Tropical Veterinary Medicine at the University of Edinburgh, wrote an article saying he had reluctantly concluded that it would fail.
“The major obstacle,” he wrote, “is man’s inhumanity to man. Rinderpest thrives in a milieu of armed conflict and fleeing refugee masses. Until world peace is secured, the nays win the argument.”
He cited Somalia, Sudan, Sri Lanka, Yemen and Kurdish parts of Iraq and Turkey as areas where war drove animals and their owners over borders and life was risky for vaccinators.
Dr. Scott will not be in Rome for the ceremony; he died in 2004. Yet perhaps without realizing it, he did outlive rinderpest. The last known case was in a wild buffalo tested in Mount Meru National Park in Kenya in 2001.
An Ancient Battle
The modern eradication campaign began in 1945, when the Food and Agriculture Organization was founded. But it became feasible only as vaccines improved. An 1893 version made from the bile of convalescent animals was replaced by vaccines grown in goats and rabbits and finally in laboratory cell lines; a heat-stable version was developed in the 1980s.
How long the ancient battle went on is uncertain. Although cattle die-offs did affect all the historical events mentioned above, there is uncertainty about which were from rinderpest and which were something else, like anthrax.
Death from rinderpest is rapid and nasty. Animals get feverish; their eyes and noses run. Their digestive tracts are inflamed from mouth to anus, and they die of diarrhea and protein loss.
But other diseases have overlapping symptoms, and a rapid diagnostic test that could be used next to a dying animal was not developed until the 1990s.
Until recently, it was assumed the disease existed as long ago as 10,000 B.C., when cattle were domesticated in the Indus Valley in what is now Pakistan. It was blamed for an epidemic in Egypt in 3,000 B.C. (the fifth plague of Moses fell on the pharaoh’s herds) and for the widespread die-offs that starved the Roman Empire in the face of fourth-century invaders. In the ninth century, it was the chief suspect in the “mortality upon the horned animals” in the British Isles.
Last year, however, Japanese geneticists studying rinderpest’s mutation patterns estimated that until about A.D. 1000, it was virtually identical to measles — making it likely that pandemics that killed only animals before that time had other causes, like anthrax or possibly an ancestor virus from which both measles and rinderpest evolved.
Some experts now believe the disease arose in the gray oxen of the Central Asian steppes and was swept forward in the trains of baggage and beasts that followed the Mongol armies in the 1200s as they conquered Eurasia from China to Poland. (The Mongols are also suspected of importing bubonic plague from South Asia in flea-bitten rats hiding in grain sacks.)
Like smallpox, rinderpest settled into a pattern of irregularly recurring pandemics, sometimes touched off by imports of Russian steppe cattle, in which the disease smoldered but rarely killed. The longer between waves, the more victims died.
With the exception of a brief, contained outbreak in Brazil in 1920, it did not reach the Americas. It touched Australia in 1923, but the authorities there stamped it out by slaughtering 3,000 animals.
Despite its proximity to Eurasia, Africa was spared until 1887, when the Italian Army, struggling to conquer Abyssinia, imported Indian cattle for food and draft power.
From the port of Massawa in present-day Eritrea, the virus exploded so fast that it reached South Africa within a decade (and is considered one of the factors that impoverished Boer farmers as war with the English approached). It doomed East Africa’s wandering herders, subsisting on milk mixed with cow blood. Historians believe a third of them or more starved to death.
The disease was still leaping water barriers as late as the 1980s, when Indian peacekeepers in Sri Lanka imported sick goats. Until 1999, war-torn Sri Lanka was one of the world’s last pockets of rinderpest.
Finding a Vaccine
As rinderpest advanced and receded over the centuries, it led to some important scientific advances.
In 1713, when it threatened the papal herds, Pope Clement XI asked his personal physician, Dr. Giovanni Maria Lancisi, to stop it. Dr. Lancisi was familiar with the work of Dr. Bernardino Ramazzini, a scholar at the University of Padua who accurately deduced that rinderpest spread by the “virulently poisoned breath of an ox” and its excretions and hide — not by fogs, astrology or other popular theories.
According to Dr. Scott, Dr. Lancisi prescribed quarantine measures that were nearly as brutal to humans as to cattle.
Charlatan “cures” were banned; priests were ordered to stop relying on prayer alone and to preach from the pulpit that all herds with any sick members were to be slaughtered and buried in lime, while healthy herds were to be kept isolated. Any layman who resisted or cheated was to be hanged, drawn and quartered. Any disobeying priest was to be sent to the galleys for life.
Within nine months, the outbreak in the Papal States was snuffed. In the rest of Europe — where Protestants disdained papal orders — it persisted for a century and killed 200 million cattle.
By the 1750s, dairymen in England and the Netherlands were experimenting with a crude early form of inoculation: soaking a cloth in a diseased cow’s mucus, then sewing it into a cut in a healthy cow. It did not always protect, and sometimes killed.
(This was 50 years before Dr. Edward Jenner became famous for preventing smallpox by vaccinating a boy with pus from a milkmaid’s cowpox blister. But Dr. Jenner was not the first; he got the credit because he successfully repeated the vaccination 23 times and published his results.)
In 1761, the first school of veterinary medicine was founded in Lyon, France, specifically to fight rinderpest.
In 1924, a new and devastating European outbreak was the impetus for creating the World Organization for Animal Health, the veterinary equivalent of the World Health Organization.
In that decade, the new Soviet Union finally realized the old czarist goal of eradicating rinderpest among steppe cattle.
Under Mao, China followed in the 1950s, relying on quarantine and slaughter measures like Dr. Lancisi’s (except that uncooperative farmers were only imprisoned).
India, however, struggled until 1995.
“You can’t slaughter cows in India,” said Dr. William P. Taylor, a rinderpest expert and technical adviser to that nation. But India did so well at vaccination that near the end it became a problem for global surveillance because health officials were reluctant to stop long enough to prove the disease was gone. (Vaccinated animals test positive despite their immunity.)
FIELD OF DEATH Cattle carcasses littered a pasture in South Africa in 1900 during a rinderpest epidemic — Photo: G. R. Thomson
The Last Frontier
The intractable problem was Africa. The disease was in 32 countries there, and many had pastoralist tribes like the Fulani, Masai, Dinka and Afar, who lived on the borderless fringes and drove cattle up to 50 miles a day, having virtually no contact with governments and getting no veterinary bulletins.
“In the ’60s and ’70s, the biggest problem we had was to convince farmers to bring in their animals,” said Dr. Protus Atang, a former director of the African Union’s veterinary institute. “They believed vaccination brought disease.”
Others had a traditional prevention method — smearing feces from infected animals in the mouths of healthy ones.
Just reaching them was hard. Land Rovers broke down, gasoline and cash ran short. Vaccine was packaged with salt so it could be dissolved in saline, but in remote areas salt was so valuable that it would be stolen.
Announcing vaccination days “was advertising to rustlers where the herds would be that day,” said John Anderson, former chief of laboratory testing for the eradication drive. African veterinary officers were paid so poorly that they survived only through second jobs like breeding chickens or mending watches.
Despite all the drawbacks, by 1979 the effort looked successful, and was ended. By the mid-’80s, rinderpest returned.
“I think they just stopped too early to celebrate,” Dr. Anderson said. “No one’s exactly sure where it came back from.”
Smallpox eradication boosted morale, Dr. Atang said, and a second effort was mounted in 1986, followed by a third in 1998.
A crucial advance was a new vaccine that survived a month without refrigeration. That let herders who could be recruited do their own vaccinating. An education campaign using comic books, flip charts and lecturers who spoke local languages was begun.
“The way we previously did it was really mindless,” said Dr. Peter L. Roeder, who directed the final eradication drive after working on the two earlier ones. “We’d get up before dawn to drive long distances. We’d be wrestling the animals to the ground, it’d get stinking hot, and pretty soon the locals would get fed up and walk away.”
The cattle were nervous and hard to handle, and no wonder, he said: They lived day and night with their owners and now were being roped and tackled by white men wearing khaki and reeking of unfamiliar soaps and deodorants.
“But someone local, dressed as a local, with mutton fat rubbed in his hair, could walk among them and stick in a needle and barely be noticed,” Dr. Roeder said. “We’d be lucky to get 20 percent immunity in a herd; our local guys could get 90, 95 percent.”
His “Paul on the road to Damascus moment,” he said, took place in 1991, as Ethiopia’s civil war ended and he could finally drive north.
“We were driving up the edge of the Rift Valley, dropping down into the bottom to meet the Afar people,” Dr. Roeder said, “and almost everywhere we found rinderpest and people crying out for vaccination.
“Later, sitting in a bar drinking smuggled Peroni beer, it came to us: It wasn’t necessary to constantly be doing mass vaccinations. We were trying to get 30 million cattle and never getting more than nine million. We needed to concentrate on these lowland areas where the virus was persistent. We could vaccinate two million and do better.”
While the upland had large, visible outbreaks, he explained, between them the virus lurked in the lowland herds as it had centuries before in steppe oxen. Since the older animals were all survivors and the 1-year-olds were protected by maternal antibodies, he reasoned, only the 2- to 3-year-olds were vulnerable, and their age could be estimated by looking at their teeth. If all members of that group were vaccinated, the virus would slowly disappear.
A later crucial development was the rapid diagnostic test.
In the same way presidents denied that their citizens had AIDS, they denied that their citizens’ cattle had rinderpest. Dr. Roeder said he once loaded a dead cow onto his pickup and drove it to the capital to insist it be tested. (He declined to name the country.)
The new tests, similar to pregnancy kits, but using an eye swab instead of urine, empowered local veterinary officials, said Dr. Anderson, their inventor. Officials in the capital could no longer just dismiss reports as misdiagnoses.
Even though the last known case was in 2001, officials waited 10 years to declare success, since surveillance is harder with animal diseases. Even in Somalia, where the last smallpox case was found, a dying child would be rushed to a hospital. A dying cow would just be left behind.
The whole campaign, from 1945 to the present, cost about $5 billion, the United Nations has estimated.
“At first I thought, that’s quite a lot,” Dr. Roeder said. “Then I thought, that last royal wedding cost $8 billion. This was cheap.”
HealingBackPain.co.uk, June 27, 2011 — Back pain, back ache, lower back pain, lumbago, back problems, whatever you choose to call it, is a serious concern for many people.
It is so common it is practically the norm rather than the exception for many adults.
Official figures from the UK state that four out of every five adults will have a back pain problem at some point in their lives. Serious back problems can strike at any age, even in children, but they are most common in adults from their mid-30s to mid-50s.
This may be because, although still active, many adults in this age group are not as fit as they were in their 20s. In fact, back problems are easily the biggest cause of lost work days, not only in the UK but in all Western countries.
Fortunately, back injuries are usually not as serious as people initially believe, and on most occasions the back will recover with a little medical assistance in a matter of weeks.
It is not surprising that so many people suffer back injuries when you consider that your entire upper-body weight is supported by just 24 small vertebrae.
In between each of these bones is a shock-absorbing disc that prevents friction and jarring between them.
A group of string-like bands known as ligaments ties the discs and the vertebrae together.
The muscles that support your back and body are connected to the vertebrae by similar string-like connections known as tendons.
It is the muscles of the back that actually support the weight of your upper body, rather than the spine itself, which is merely the part of your skeleton that the back-supporting muscles attach to.
Lumbago, or lower back pain as it is usually called, will affect 70% of all adults at some point in their lives. The affected area stretches from the top of your legs to the bottom of your ribs.
Lower back pain can be a gradual, growing ache, or it can strike suddenly, usually as the result of an injury. This part of your back is both complex and delicate, meaning that even a relatively minor injury can cause a substantial amount of pain.
Contrary to popular belief, lower back pain does not usually come from the spine itself but rather from the discs, muscles, ligaments and so forth. When we say we have hurt our back, the automatic mental image is that we have hurt our spine; in fact, most of the time it is another part of the mechanism that has suffered damage.
Again contrary to popular belief, most back injuries are not serious, and even without treatment they will often recover within a few weeks. However, this does not mean that you should not visit a doctor or hospital if you injure your back, because there is a slight possibility of a more serious and long-lasting problem.
Along the Spine, Women Buckle at Breaking Points
The New York Times, June 27, 2011, by Jane E. Brody — An 80-year-old friend was lifting a corner of the mattress while making her bed when, as she put it, “I broke my back.”
In fact, she suffered a vertebral fracture — a compression, or crushing, of the front of a vertebra, one of the 33 bones that form the spinal column. This injury is very common, affecting a quarter of postmenopausal women and accounting for half of the 1.5 million fractures due to bone loss that occur each year in the United States.
By age 80, two in every five women have had one or more vertebral compression fractures. They often result in chronic back pain and impair the ability to function and enjoy life. They are one reason so many people shrink in height as they age.
Multiple vertebral fractures, found in 20 percent to 30 percent of cases, often result in a hunched posture, a condition called kyphosis that impairs breathing and compresses the abdomen, leading to a protruding stomach with limited capacity.
But while vertebral fractures are a telltale sign of bone loss among women over age 50 and men over age 60, most who suffer them are unaware of the problem and receive no treatment to prevent future fractures in vertebrae, hips or wrists, the bones most likely to break under minor stress when weakened.
Yet, if a vertebral fracture is diagnosed and properly treated, the risk of future fractures, including hip fractures, is reduced by half or more, studies have shown.
“Most vertebral fractures do not come to medical attention at the time of their occurrence,” Dr. Kristine E. Ensrud and Dr. John T. Schousboe wrote recently in The New England Journal of Medicine. One reason is that the pain may be minimal at first or, if more severe, attributed to a strain that subsides over a few weeks.
Indeed, patients or their physicians are made aware of these fractures in just one-fourth to one-third of the instances in which they are discovered on X-rays, according to the doctors.
“The patient may have had a chest or back X-ray for some other reason, perhaps to rule out pneumonia, but the focus is on why the test was ordered, and an incidental finding of a vertebral fracture is ignored,” Dr. Ensrud said in an interview. “Doctors need to be more aware of this problem, and maybe patients should ask to see the report.”
Dr. Ensrud, an internist and epidemiologist who researches osteoporosis at the University of Minnesota and the Veterans Affairs Medical Center in Minneapolis, noted that in a person with severe osteoporosis, a vertebral fracture can be caused by something as mundane as coughing, sneezing, turning over in bed or stepping out of a bathtub.
In patients whose bone loss is less advanced, a fracture may occur when lifting something heavy, tripping or falling out of a chair.
“A lot of the time, people don’t recall the incident,” Dr. Ensrud said. “They just report that their back has been bothering them.” Patients also may mistakenly assume that their chronic discomfort is a result of arthritis or a normal consequence of age, and never mention it to their doctors.
About one-third of the postmenopausal women found to have vertebral fractures do not have osteoporosis as defined by bone mineral density testing, according to Dr. Ensrud and Dr. Schousboe. Rather, test scores indicate that these women are suffering from a lesser form of bone loss called osteopenia.
Yet the occurrence of vertebral fractures means that the situation is worse than bone density testing would suggest. “The identification of a vertebral fracture indicates a diagnosis of osteoporosis,” Dr. Ensrud and Dr. Schousboe concluded in their article.
Asked if such women should receive bone-preserving medication, Dr. Ensrud said emphatically, “Yes!” One major study found that a vertebral fracture raises the risk of further vertebral fractures by five times in just one year.
A vertebral fracture can be seen on an ordinary X-ray of the spine. But there is a more practical approach involving much less radiation: a scan of the spine called a lateral DEXA, an acronym for dual-energy X-ray absorptiometry, as part of a routine bone density exam.
The scan requires special computer software. Patients must ask whether a particular clinic or hospital is able to perform a lateral DEXA.
If a postmenopausal woman whose bone density measures in the osteopenic range (suggesting bone loss, but not yet full-blown osteoporosis) is found to have a vertebral fracture, her doctor may decide to prescribe medication that increases bone strength. Often the drug will be a bisphosphonate like alendronate (brand name Fosamax), which is now available in an inexpensive generic form.
Future fractures can often be prevented if a bisphosphonate is taken by someone found to have one or more vertebral fractures, even if these fractures cause no discomfort. There are many other bone-building options, too, including a once-a-year injection.
In addition, patients should consume adequate amounts of calcium and vitamin D, the critical nutrients for strong bones: a total of 1,200 milligrams of calcium daily from food and supplements, and 1,000 international units daily of vitamin D.
Initially, a painful vertebral fracture may be treated with a short period of bed rest and pain medication like a nonsteroidal anti-inflammatory drug, narcotic, pain patch or an injection or nasal spray of calcitonin. But if too much time is spent in bed, the resulting weakness can increase the risk of further fractures.
Whatever is done, or not done, to treat the injury, the pain of a vertebral fracture usually subsides over the course of several weeks.
Dr. Ensrud and Dr. Schousboe cautioned in their article against rushing into two invasive procedures that have become increasingly common in this country: vertebroplasty and kyphoplasty. During these procedures, a kind of cement is injected into the compressed vertebra to stabilize it.
The operations are performed by interventional radiologists who, naturally, endorse them enthusiastically. However, two scientifically conducted studies of vertebroplasty using sham procedures as a control found no benefit with respect to pain, disability or quality of life.
Nor are these procedures completely free of risk. Rarely, they can injure nerves or cause pulmonary embolisms. They may also result in fractures of adjacent vertebrae by increasing the mechanical stress on them.
Exercises to improve posture, strengthen back muscles and enhance mobility are less costly and likely to be more effective in the long run, the doctors wrote.
The Power of Rescuing Others: Marsha Linehan, a therapist and researcher at the University of Washington who suffered from borderline personality disorder, recalls the religious experience that transformed her as a young woman. Photo Credit: Damon Winter/The New York Times
The New York Times, June 28, 2011, by Benedict Carey — The patient wanted to know, and her therapist — Marsha M. Linehan of the University of Washington, creator of a treatment used worldwide for severely suicidal people — had a ready answer. It was the one she always used to cut the question short, whether a patient asked it hopefully, accusingly or knowingly, having glimpsed the macramé of faded burns, cuts and welts on Dr. Linehan’s arms:
“You mean, have I suffered?”
“No, Marsha,” the patient replied, in an encounter last spring. “I mean one of us. Like us. Because if you were, it would give all of us so much hope.”
“That did it,” said Dr. Linehan, 68, who told her story in public for the first time last week before an audience of friends, family and doctors at the Institute of Living, the Hartford clinic where she was first treated for extreme social withdrawal at age 17. “So many people have begged me to come forward, and I just thought — well, I have to do this. I owe it to them. I cannot die a coward.”
No one knows how many people with severe mental illness live what appear to be normal, successful lives, because such people are not in the habit of announcing themselves. They are too busy juggling responsibilities, paying the bills, studying, raising families — all while weathering gusts of dark emotions or delusions that would quickly overwhelm almost anyone else.
Now, an increasing number of them are risking exposure of their secret, saying that the time is right. The nation’s mental health system is a shambles, they say, criminalizing many patients and warehousing some of the most severe in nursing and group homes where they receive care from workers with minimal qualifications.
Moreover, the enduring stigma of mental illness teaches people with such a diagnosis to think of themselves as victims, snuffing out the one thing that can motivate them to find treatment: hope.
“There’s a tremendous need to implode the myths of mental illness, to put a face on it, to show people that a diagnosis does not have to lead to a painful and oblique life,” said Elyn R. Saks, a professor at the University of Southern California School of Law who chronicles her own struggles with schizophrenia in “The Center Cannot Hold: My Journey Through Madness.” “We who struggle with these disorders can lead full, happy, productive lives, if we have the right resources.”
These include medication (usually), therapy (often), a measure of good luck (always) — and, most of all, the inner strength to manage one’s demons, if not banish them. That strength can come from any number of places, these former patients say: love, forgiveness, faith in God, a lifelong friendship.
But Dr. Linehan’s case shows there is no recipe. She was driven by a mission to rescue people who are chronically suicidal, often as a result of borderline personality disorder, an enigmatic condition characterized in part by self-destructive urges.
“I honestly didn’t realize at the time that I was dealing with myself,” she said. “But I suppose it’s true that I developed a therapy that provides the things I needed for so many years and never got.”
‘I Was in Hell’
She learned the central tragedy of severe mental illness the hard way, banging her head against the wall of a locked room.
Marsha Linehan arrived at the Institute of Living on March 9, 1961, at age 17, and quickly became the sole occupant of the seclusion room on the unit known as Thompson Two, for the most severely ill patients. The staff saw no alternative: The girl attacked herself habitually, burning her wrists with cigarettes, slashing her arms, her legs, her midsection, using any sharp object she could get her hands on.
The seclusion room, a small cell with a bed, a chair and a tiny, barred window, had no such weapon. Yet her urge to die only deepened. So she did the only thing that made any sense to her at the time: banged her head against the wall and, later, the floor. Hard.
“My whole experience of these episodes was that someone else was doing it; it was like ‘I know this is coming, I’m out of control, somebody help me; where are you, God?’ ” she said. “I felt totally empty, like the Tin Man; I had no way to communicate what was going on, no way to understand it.”
Her childhood, in Tulsa, Okla., provided few clues. An excellent student from early on, a natural on the piano, she was the third of six children of an oilman and his wife, an outgoing woman who juggled child care with the Junior League and Tulsa social events.
People who knew the Linehans at that time remember that their precocious third child was often in trouble at home, and Dr. Linehan recalls feeling deeply inadequate compared with her attractive and accomplished siblings. But whatever currents of distress ran under the surface, no one took much notice until she was bedridden with headaches in her senior year of high school.
Her younger sister, Aline Haynes, said: “This was Tulsa in the 1960s, and I don’t think my parents had any idea what to do with Marsha. No one really knew what mental illness was.”
Soon, a local psychiatrist recommended a stay at the Institute of Living, to get to the bottom of the problem. There, doctors gave her a diagnosis of schizophrenia; dosed her with Thorazine, Librium and other powerful drugs, as well as hours of Freudian analysis; and strapped her down for electroshock treatments, 14 shocks the first time through and 16 the second, according to her medical records. Nothing changed, and soon enough the patient was back in seclusion on the locked ward.
“Everyone was terrified of ending up in there,” said Sebern Fisher, a fellow patient who became a close friend. But whatever her surroundings, Ms. Fisher added, “Marsha was capable of caring a great deal about another person; her passion was as deep as her loneliness.”
A discharge summary, dated May 31, 1963, noted that “during 26 months of hospitalization, Miss Linehan was, for a considerable part of this time, one of the most disturbed patients in the hospital.”
A verse the troubled girl wrote at the time reads:
They put me in a four-walled room
But left me really out
My soul was tossed somewhere askew
My limbs were tossed here about
Bang her head where she would, the tragedy remained: no one knew what was happening to her, and as a result medical care only made it worse. Any real treatment would have to be based not on some theory, she later concluded, but on facts: which precise emotion led to which thought led to the latest gruesome act. It would have to break that chain — and teach a new behavior.
“I was in hell,” she said. “And I made a vow: when I get out, I’m going to come back and get others out of here.”
She sensed the power of another principle while praying in a small chapel in Chicago.
It was 1967, several years after she left the institute as a desperate 20-year-old whom doctors gave little chance of surviving outside the hospital. Survive she did, barely: there was at least one suicide attempt in Tulsa, when she first arrived home; and another episode after she moved to a Y.M.C.A. in Chicago to start over.
She was hospitalized again and emerged confused, lonely and more committed than ever to her Catholic faith. She moved into another Y, found a job as a clerk in an insurance company, started taking night classes at Loyola University — and prayed, often, at a chapel in the Cenacle Retreat Center.
“One night I was kneeling in there, looking up at the cross, and the whole place became gold — and suddenly I felt something coming toward me,” she said. “It was this shimmering experience, and I just ran back to my room and said, ‘I love myself.’ It was the first time I remember talking to myself in the first person. I felt transformed.”
The high lasted about a year, before the feelings of devastation returned in the wake of a romance that ended. But something was different. She could now weather her emotional storms without cutting or harming herself.
What had changed?
It took years of study in psychology — she earned a Ph.D. at Loyola in 1971 — before she found an answer. On the surface, it seemed obvious: She had accepted herself as she was. She had tried to kill herself so many times because the gulf between the person she wanted to be and the person she was left her desperate, hopeless, deeply homesick for a life she would never know. That gulf was real, and unbridgeable.
That basic idea — radical acceptance, she now calls it — became increasingly important as she began working with patients, first at a suicide clinic in Buffalo and later as a researcher. Yes, real change was possible. The emerging discipline of behaviorism taught that people could learn new behaviors — and that acting differently can in time alter underlying emotions from the top down.
But deeply suicidal people have tried to change a million times and failed. The only way to get through to them was to acknowledge that their behavior made sense: Thoughts of death were sweet release given what they were suffering.
“She was very creative with people. I saw that right away,” said Gerald C. Davison, who in 1972 admitted Dr. Linehan into a postdoctoral program in behavioral therapy at Stony Brook University. (He is now a psychologist at the University of Southern California.) “She could get people off center, challenge them with things they didn’t want to hear without making them feel put down.”
No therapist could promise a quick transformation or even sudden “insight,” much less a shimmering religious vision. But now Dr. Linehan was closing in on two seemingly opposed principles that could form the basis of a treatment: acceptance of life as it is, not as it is supposed to be; and the need to change, despite that reality and because of it. The only way to know for sure whether she had something more than a theory was to test it scientifically in the real world — and there was never any doubt where to start.
Getting Through the Day
“I decided to get supersuicidal people, the very worst cases, because I figured these are the most miserable people in the world — they think they’re evil, that they’re bad, bad, bad — and I understood that they weren’t,” she said. “I understood their suffering because I’d been there, in hell, with no idea how to get out.”
In particular she chose to treat people with a diagnosis that she would have given her young self: borderline personality disorder, a poorly understood condition characterized by neediness, outbursts and self-destructive urges, often leading to cutting or burning. In therapy, borderline patients can be terrors — manipulative, hostile, sometimes ominously mute, and notorious for storming out, threatening suicide.
Dr. Linehan found that the tension of acceptance could at least keep people in the room: patients accept who they are, that they feel the mental squalls of rage, emptiness and anxiety far more intensely than most people do. In turn, the therapist accepts that given all this, cutting, burning and suicide attempts make some sense.
Finally, the therapist elicits a commitment from the patient to change his or her behavior, a verbal pledge in exchange for a chance to live: “Therapy does not work for people who are dead” is one way she puts it.
Yet even as she climbed the academic ladder, moving from the Catholic University of America to the University of Washington in 1977, she understood from her own experience that acceptance and change were hardly enough. During those first years in Seattle she sometimes felt suicidal while driving to work; even today, she can feel rushes of panic, most recently while driving through tunnels. She relied on therapists herself, off and on over the years, for support and guidance (she does not remember taking medication after leaving the institute).
Dr. Linehan’s own emerging approach to treatment — now called dialectical behavior therapy, or D.B.T. — would also have to include day-to-day skills. A commitment means very little, after all, if people do not have the tools to carry it out. She borrowed some of these from other behavioral therapies and added elements, like opposite action, in which patients act opposite to the way they feel when an emotion is inappropriate; and mindfulness meditation, a Zen technique in which people focus on their breath and observe their emotions come and go without acting on them. (Mindfulness is now a staple of many kinds of psychotherapy.)
In studies in the 1980s and ’90s, researchers at the University of Washington and elsewhere tracked the progress of hundreds of borderline patients at high risk of suicide who attended weekly dialectical therapy sessions. Compared with similar patients who got other experts’ treatments, those who learned Dr. Linehan’s approach made far fewer suicide attempts, landed in the hospital less often and were much more likely to stay in treatment. D.B.T. is now widely used for a variety of stubborn clients, including juvenile offenders, people with eating disorders and those with drug addictions.
“I think the reason D.B.T. has made such a splash is that it addresses something that couldn’t be treated before; people were just at a loss when it came to borderline,” said Lisa Onken, chief of the behavioral and integrative treatment branch of the National Institutes of Health. “But I think the reason it has resonated so much with community therapists has a lot to do with Marsha Linehan’s charisma, her ability to connect with clinical people as well as a scientific audience.”
Most remarkably, perhaps, Dr. Linehan has reached a place where she can stand up and tell her story, come what will. “I’m a very happy person now,” she said in an interview at her house near campus, where she lives with her adopted daughter, Geraldine, and Geraldine’s husband, Nate. “I still have ups and downs, of course, but I think no more than anyone else.”
After her coming-out speech last week, she visited the seclusion room, which has since been converted to a small office. “Well, look at that, they changed the windows,” she said, holding her palms up. “There’s so much more light.”
Conference on Disruptive Innovations in Clinical Trials
Together with our colleagues from Pfizer and Novartis, Target Health is delighted to present at the Disruptive Innovations in Clinical Trials conference which will take place September 15-16, 2011 in Philadelphia.
Dr. Jules T. Mitchel, President of Target Health, will present Target Health’s approach to the paperless clinical trial, including the results of the first Phase 2 study performed under a US IND that used direct data entry and Target e*CTR (eClinical Trial Record).
This groundbreaking conference will present case studies that demonstrate either a disruptive or an innovative approach to advancing clinical trials. To register, visit www.theconferenceforum.org and use code THDP for a 25% discount.
For more information about Target Health, contact Warren Pearlson (212-681-2100 ext. 104). For additional information about software tools for paperless clinical trials, please also feel free to contact Dr. Jules T. Mitchel or Ms. Joyce Hays. The Target Health software tools are designed to partner with both CROs and Sponsors. Please visit the Target Health Website at www.targethealth.com.
On Rare Occasions DNA Dances Itself into a Different Shape
DNA, that marvelous, twisty molecule of life, has an alter ego. On rare occasions, its building blocks “rock and roll,” deforming the familiar double helix into a different shape. (Credit: Image courtesy of University of Michigan)
DNA, that marvelous, twisty molecule of life, possibly has an alter ego. On rare occasions, its building blocks “rock and roll,” deforming the familiar double 1) ___ into a different shape. “We show that the simple DNA double helix exists in an alternative form — for one percent of the time — and that this alternative form is functional,” said Hashim M. Al-Hashimi, who is the Robert L. Kuczkowski Professor of Chemistry and Professor of Biophysics at the University of Michigan. “Together, these data suggest that there are multiple layers of information stored in the genetic 2) ___.” The findings were published online Jan. 26 in the journal Nature.
It’s been known for some time that the DNA molecule can bend and flex, something like a rope ladder, but throughout these gyrations its building 3) ___ – called bases – remain paired up just the way they were originally described by James Watson and Francis Crick, who proposed the spiral-staircase structure in 1953. By adapting 4) ___ ___ ___ NMR technology, Al-Hashimi’s group was able to observe transient, alternative forms in which some steps on the stairway come apart and reassemble into stable structures other than the typical Watson-Crick base pairs. The question was, what were these alternative stable structures?
“Using NMR, we were able to access the chemical shifts of this alternative form,” said graduate student Evgenia Nikolova. “These chemical shifts are like fingerprints that tell us something about the structure.” Through careful analysis, Nikolova realized the “fingerprints” were typical of an orientation in which certain bases are flipped 180 degrees. “It’s like taking half of the stairway step and flipping it upside down so that the other face now points up,” said Al-Hashimi. “If you do this, you can still put the two halves of the step back together, but now what you have is no longer a Watson-Crick base pair; it’s something called a Hoogsteen base pair.”
“Using computational modeling, we further validated that individual bases can roll over inside the 5) ___ helix to achieve these Hoogsteen base pairs,” said Loan Andricioaei, an associate professor of chemistry at the University of California, Irvine. Hoogsteen base pairs have previously been observed in double-stranded DNA, but only when the 6) ___ is bound to proteins or drugs or when the DNA is damaged. The new study shows that even under normal circumstances, with no outside influence, certain sections of DNA tend to briefly morph into the alternative structure, called an “excited state.” Previous studies of DNA structure have relied mainly on techniques such as X-ray and conventional NMR, which can’t detect such fleeting or rare structural changes. “These methods do not capture alternative DNA structural forms that may exist for only a millisecond or in very little abundance, such as one percent of the time,” said Al-Hashimi. “We took new solution NMR methods that previously have been used to study rare deformations in proteins and adapted them so that they could be used to study rare states in nucleic acids. Now that we have the right tools to look at these so-called excited states, we may find other short-lived states in DNA and RNA.”
Because critical interactions between DNA and proteins are thought to be directed by both the sequence of bases and the flexing of the molecule, these excited states represent a whole new level of information contained in the 7) ___ code, Al-Hashimi said.
ANSWERS: 1) helix; 2) code; 3) blocks; 4) nuclear magnetic resonance; 5) double; 6) molecule; 7) genetic
Discovery of the Neuron – Part 2
Purkinje cerebellar cells
Purkinje is, however, most famous for discovering the cerebellar cells which bear his name. Because these cells are among the largest in the vertebrate brain, they were the first neurons to be identified. The low magnification and poor resolution of the microscope used by Purkinje are evident in the crude (yet beautiful) drawing, above, that he presented to the Congress of Physicians and Scientists in Prague in 1837. He also gave a very accurate description of the morphology of his cells:
Corpuscles surrounding the yellow substance [the junction between gray and white matter] in large numbers, are seen everywhere in rows in the laminae of the cerebellum. Each of these corpuscles faces the inside [of the organ], with the blunt, roundish endings towards the yellow substance, and it displays distinctly in its body the central nucleus together with its corona; the tail-like ending faces the outside, and, by means of two processes, mostly disappears into the gray matter which extends close to the outer surface which is surrounded by the pia mater.
Purkinje’s speculations on the functions of the entities he had discovered suggest that he contributed more to the Neuron Doctrine than he is generally given credit for:
With reference to the importance of the corpuscles…they are probably central structures…because of their whole organization in three concentric circles [i.e. cytoplasm, nuclear membrane and nucleolus] which may be related to the elementary brain and nerve fibers…as centers of force are related to the conduction pathways of force, or like the ganglia to the nerves of the ganglion, or like the brain substance to the spinal cord and cranial nerves. This means they would be collectors, generators and distributors of the neural organ.
Several decades later, further improvements in microscopy enabled Otto Friedrich Carl Dieters (1834-1863) to produce the most accurate description yet of a nerve cell, complete with axon and dendrites (above). Dieters referred to the axon and dendrites as the ‘axis cylinder’ and ‘protoplasmic processes,’ respectively.
Dieters died from typhoid fever, aged just 29. His description of nerve cells from the spinal cord, left unfinished at his death in 1863, was completed by Max Schultze and published posthumously in 1865:
The central ganglion cell is an irregular shaped mass of granular protoplasm… the body of the cell is continuous uninterruptedly with a more or less large number of processes which branch frequently [and] have long stretches in between…these ultimately become immeasurably thin and lose themselves in the spongy ground substance…these processes [the dendrites]…will hereafter be called protoplasmic processes. A single process which originates either in the body of the cell or in one of the largest protoplasmic processes, immediately at its origin from the cell, is distinguishable from these at a glance.
From this description, it is clear that Dieters easily differentiated between the dendrites and the axon, but that he did not know if the axon arose from the cell body or from the dendritic tree. Dieters could not see the “immeasurably thin” terminal branches of the dendrites and, like many others, inferred from his observations that they must fuse to form a continuous network. Dieters believed that dendrites, but not axons, could fuse by anastomosis.
Association Between Disease-Modifying Antirheumatic Drugs and Diabetes Risk in Patients With Rheumatoid Arthritis and Psoriasis
Rheumatoid arthritis (RA) and psoriasis have been linked with insulin resistance and diabetes mellitus (DM), and prior investigations have suggested that systemic immunosuppressive drugs may improve insulin resistance and reduce the risk of DM. As a result, a study published in the Journal of the American Medical Association (2011;305:2525-2531) was performed to compare the risk of newly recorded DM among participants diagnosed with RA or psoriasis based on use of a variety of disease-modifying antirheumatic drugs (DMARDs).
The investigation was a retrospective cohort study among 121,280 patients with a diagnosis of either RA or psoriasis on at least 2 visits. The analyses were conducted in the context of 2 large health insurance programs, 1 in Canada and 1 in the United States, using administrative data. The mean follow-up was 5.8 months and began with the first prescription for a DMARD after study eligibility was met. Drug regimens were categorized into 4 mutually exclusive groups: (1) tumor necrosis factor (TNF) inhibitors with or without other DMARDs; (2) methotrexate without TNF inhibitors or hydroxychloroquine; (3) hydroxychloroquine without TNF inhibitors or methotrexate; or (4) other non-biologic DMARDs without TNF inhibitors, methotrexate, or hydroxychloroquine (reference exposure).
The main outcome measure was newly recorded DM as evidenced by a new diagnosis of DM with use of a DM-specific medication.
The study cohort consisted of 13,905 participants with 22,493 treatment episodes starting 1 of the categories of DMARD regimens between January 1996 and June 2008. New diabetes cases and respective incidence rates per 1000 person-years were:
1. Other non-biologic DMARDs: 55 cases among 3,993 treatment episodes (rate = 50.2)
2. TNF inhibitors: 80 cases among 4,623 treatment episodes (rate = 19.7)
3. Methotrexate: 82 cases among 8,195 treatment episodes (rate = 23.8)
4. Hydroxychloroquine: 50 cases among 5,682 treatment episodes (rate = 22.2)
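Incidence rates like these are simply cases divided by person-years of follow-up, scaled to 1,000. As a minimal sketch of the arithmetic (note the study does not report person-year totals directly, so the figure below is back-calculated from the published cases and rate for the reference group):

```python
def incidence_rate_per_1000(cases, person_years):
    """Incidence rate per 1,000 person-years of follow-up."""
    return cases / person_years * 1000

# Person-years are not reported in the summary above; they can be
# back-calculated from the published figures: person_years = cases / rate * 1000.
# For other non-biologic DMARDs (55 cases, rate 50.2):
implied_person_years = 55 / 50.2 * 1000  # roughly 1,096 person-years

# Recovering the published rate from cases and person-years:
rate = incidence_rate_per_1000(55, implied_person_years)
print(round(rate, 1))  # 50.2
```

The same division explains why the reference group's rate is highest despite having fewer cases than the TNF-inhibitor or methotrexate groups: its denominator of accumulated follow-up time is much smaller.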
The multivariate adjusted hazard ratios for DM, compared with other nonbiologic DMARDs, were:
1. 0.62 for TNF inhibitors
2. 0.77 for methotrexate
3. 0.54 for hydroxychloroquine.
According to the authors, among patients with RA or psoriasis, the adjusted risk of DM was lower for individuals starting a TNF inhibitor or hydroxychloroquine compared with initiation of other nonbiologic DMARDs.