Evening types have 10 percent higher risk of dying than morning counterparts

Date:
April 12, 2018

Source:
Northwestern University

Summary:
Night owls — people who prefer to stay up late and sleep late — have a 10 percent higher risk of dying sooner than larks, people who go to bed early and rise early, reports a new study. This is the first study to show ‘owls’ have a higher risk of mortality. Owls also suffer from more diseases and disorders than morning larks. Employers should allow greater flexibility in working hours for owls, scientists said.

 

‘Night owls’ have higher risk of early death than ‘morning larks.’
Credit: © megaflopp / Fotolia

 

 

“Night owls” — people who like to stay up late and have trouble dragging themselves out of bed in the morning — have a higher risk of dying sooner than “larks,” people who have a natural preference for going to bed early and rising with the sun, according to a new study from Northwestern Medicine and the University of Surrey in the United Kingdom (UK).

The study, of nearly half a million participants in the UK Biobank Study, found owls have a 10 percent higher risk of dying than larks. In the study sample, 50,000 people were more likely to die in the 6½-year period sampled.

“Night owls trying to live in a morning lark world may have health consequences for their bodies,” said co-lead author Kristen Knutson, associate professor of neurology at Northwestern University Feinberg School of Medicine.

Previous studies in this field have focused on the higher rates of metabolic dysfunction and cardiovascular disease, but this is the first to look at mortality risk.

The study will be published April 12 in the journal Chronobiology International.

The scientists adjusted for the expected health problems in owls and still found the 10 percent higher risk of death.

“This is a public health issue that can no longer be ignored,” said Malcolm von Schantz, a professor of chronobiology at the University of Surrey. “We should discuss allowing evening types to start and finish work later, where practical. And we need more research about how we can help evening types cope with the higher effort of keeping their body clock in synchrony with sun time.”

“It could be that people who are up late have an internal biological clock that doesn’t match their external environment,” Knutson said. “It could be psychological stress, eating at the wrong time for their body, not exercising enough, not sleeping enough, being awake at night by yourself, maybe drug or alcohol use. There are a whole variety of unhealthy behaviors related to being up late in the dark by yourself.”

In the new study, scientists found owls had higher rates of diabetes, psychological disorders and neurological disorders.

Can owls become larks?

Genetics and environment play approximately equal roles in whether we are a morning or a night type, or somewhere in between, the authors have previously reported.

“You’re not doomed,” Knutson said. “Part of it you don’t have any control over and part of it you might.”

One way to shift your behavior is to make sure you are exposed to light early in the morning but not at night, Knutson said. Try to keep a regular bedtime and not let yourself drift to later bedtimes. Be regimented about adopting healthy lifestyle behaviors and recognize the timing of when you sleep matters. Do things earlier and be less of an evening person as much as you can.

Society can help, too

“If we can recognize these chronotypes are, in part, genetically determined and not just a character flaw, jobs and work hours could have more flexibility for owls,” Knutson said. “They shouldn’t be forced to get up for an 8 a.m. shift. Make work shifts match people’s chronotypes. Some people may be better suited to night shifts.”

In future research, Knutson and colleagues want to test an intervention with owls to get them to shift their body clocks to adapt to an earlier schedule. “Then we’ll see if we get improvements in blood pressure and overall health,” she said.

The switch to daylight saving time (summer time) is already known to be much more difficult for evening types than for morning types.

“There are already reports of higher incidence of heart attacks following the switch to summer time,” says von Schantz. “And we have to remember that even a small additional risk is multiplied by more than 1.3 billion people who experience this shift every year. I think we need to seriously consider whether the suggested benefits outweigh these risks.”

How the study worked

For the study, researchers from the University of Surrey and Northwestern University examined the link between an individual’s natural inclination toward mornings or evenings and their risk of mortality. They asked 433,268 participants, aged 38 to 73 years, whether they were a “definite morning type,” a “moderate morning type,” a “moderate evening type” or a “definite evening type.” Deaths in the sample were tracked up to six and a half years later.
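For readers curious how an adjusted estimate like the 10 percent figure is typically obtained from cohort data of this kind, the short Python sketch below fits a Cox proportional-hazards model (using the lifelines library) to invented data. It is purely illustrative: the column names, covariates and values are assumptions, not the authors’ actual dataset or code.

import pandas as pd
from lifelines import CoxPHFitter

# Toy cohort: follow-up time in years, death indicator, chronotype flag
# (1 = evening type) and an age covariate to adjust for. All values invented.
df = pd.DataFrame({
    "followup_years": [6.5, 6.5, 4.2, 6.5, 3.1, 6.5, 5.8, 6.5],
    "died":           [0,   0,   1,   0,   1,   0,   1,   0],
    "evening_type":   [0,   1,   1,   0,   1,   0,   0,   1],
    "age":            [45,  62,  70,  51,  68,  39,  73,  75],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="died")

# exp(coef) for evening_type is the adjusted hazard ratio; a value of roughly
# 1.10 would correspond to a 10 percent higher risk of death for evening types.
cph.print_summary()

On real data, adjusting for the health problems mentioned above (diabetes, psychological and neurological disorders) amounts to adding those variables as further columns.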

The study was supported by the University of Surrey Institute of Advanced Studies Santander fellowship and the National Institute of Diabetes and Digestive and Kidney Diseases grant R01DK095207 from the National Institutes of Health.

Story Source:

Materials provided by Northwestern University. Note: Content may be edited for style and length.


Journal Reference:

  1. Kristen L. Knutson, Malcolm von Schantz. Associations between chronotype, morbidity and mortality in the UK Biobank cohort. Chronobiology International, 2018; 1. DOI: 10.1080/07420528.2018.1454458

 

Source: Northwestern University. “Night owls have higher risk of dying sooner: Evening types have 10 percent higher risk of dying than morning counterparts.” ScienceDaily. ScienceDaily, 12 April 2018. <www.sciencedaily.com/releases/2018/04/180412085736.htm>.

Filed Under News

Date:
April 11, 2018

Source:
Woods Hole Oceanographic Institution

Summary:
New research provides evidence that a key cog in the global ocean circulation system hasn’t been running at peak strength since the mid-1800s and is currently at its weakest point in the past 1,600 years. If the system continues to weaken, it could disrupt weather patterns from the United States and Europe to the African Sahel, and cause a more rapid increase in sea level on the U.S. East Coast.

 

When it comes to regulating global climate, the circulation of the Atlantic Ocean plays a key role. The constantly moving system of deep-water circulation, sometimes referred to as the Global Ocean Conveyor Belt, sends warm, salty Gulf Stream water to the North Atlantic where it releases heat to the atmosphere and warms Western Europe. The cooler water then sinks to great depths and travels all the way to Antarctica and eventually circulates back up to the Gulf Stream.
Credit: Intergovernmental Panel on Climate Change

 

 

New research led by University College London (UCL) and Woods Hole Oceanographic Institution (WHOI) provides evidence that a key cog in the global ocean circulation system hasn’t been running at peak strength since the mid-1800s and is currently at its weakest point in the past 1,600 years. If the system continues to weaken, it could disrupt weather patterns from the United States and Europe to the African Sahel, and cause a more rapid increase in sea level on the U.S. East Coast.

When it comes to regulating global climate, the circulation of the Atlantic Ocean plays a key role. The constantly moving system of deep-water circulation, sometimes referred to as the Global Ocean Conveyor Belt, sends warm, salty Gulf Stream water to the North Atlantic where it releases heat to the atmosphere and warms Western Europe. The cooler water then sinks to great depths and travels all the way to Antarctica and eventually circulates back up to the Gulf Stream.

“Our study provides the first comprehensive analysis of ocean-based sediment records, demonstrating that this weakening of the Atlantic’s overturning began near the end of the Little Ice Age, a centuries-long cold period that lasted until about 1850,” said Dr. Delia Oppo, a senior scientist with WHOI and co-author of the study, which was published in the April 12 issue of Nature.

Lead author Dr. David Thornalley, a senior lecturer at University College London and WHOI adjunct, believes that as the North Atlantic began to warm near the end of the Little Ice Age, freshwater disrupted the system, called the Atlantic Meridional Overturning Circulation (AMOC). Arctic sea ice and the ice sheets and glaciers surrounding the Arctic began to melt, forming a huge natural tap of fresh water that gushed into the North Atlantic. This huge influx of freshwater diluted the surface seawater, making it lighter and less able to sink deep, slowing down the AMOC system.

To investigate the Atlantic circulation in the past, the scientists first examined the size of sediment grains deposited by the deep-sea currents; the larger the grains, the stronger the current. Then, they used a variety of methods to reconstruct near-surface ocean temperatures in regions where temperature is influenced by AMOC strength.

“Combined, these approaches suggest that the AMOC has weakened over the past 150 years by approximately 15 to 20 percent,” says Thornalley.

According to study co-author Dr. Jon Robson, a senior research scientist from the University of Reading, the new findings hint at a gap in current global climate models. “North Atlantic circulation is much more variable than previously thought,” he said, “and it’s important to figure out why the models underestimate the AMOC decreases we’ve observed.” It could be because the models don’t have active ice sheets, or maybe there was more Arctic melting, and thus more freshwater entering the system, than currently estimated.

Another study in the same issue of Nature, led by Levke Caesar and Stefan Rahmstorf from the Potsdam Institute for Climate Impact Research, looked at climate model data and past sea-surface temperatures to reveal that AMOC has been weakening more rapidly since 1950 in response to recent global warming. Together, the two new studies provide complementary evidence that the present-day AMOC is exceptionally weak, offering both a longer-term perspective as well as detailed insight into recent decadal changes.

“What is common to the two periods of AMOC weakening — the end of the Little Ice Age and recent decades — is that they were both times of warming and melting,” said Thornalley. “Warming and melting are predicted to continue in the future due to continued carbon dioxide emissions.”

Oppo agrees, noting, however, that just as past changes in the AMOC have surprised them, there may be unexpected surprises in store in the future. For example, until recently it was thought that the AMOC was weaker during the Little Ice Age, but these new results show the opposite, highlighting the need to improve our understanding of this important system.

Story Source:

Materials provided by Woods Hole Oceanographic Institution. Note: Content may be edited for style and length.


Journal Reference:

  1. David J. R. Thornalley, Delia W. Oppo, Pablo Ortega, Jon I. Robson, Chris M. Brierley, Renee Davis, Ian R. Hall, Paola Moffa-Sanchez, Neil L. Rose, Peter T. Spooner, Igor Yashayaev, Lloyd D. Keigwin. Anomalously weak Labrador Sea convection and Atlantic overturning during the past 150 years. Nature, 2018; 556 (7700): 227. DOI: 10.1038/s41586-018-0007-4

 

Source: Woods Hole Oceanographic Institution. “Atlantic Ocean circulation at weakest point in more than 1,500 years.” ScienceDaily. ScienceDaily, 11 April 2018. <www.sciencedaily.com/releases/2018/04/180411131642.htm>.

Filed Under News

Date:
April 10, 2018

Source:
FECYT – Spanish Foundation for Science and Technology

Summary:
A well-known experiment in which young people passed a ball showed that when an observer focuses on counting the passes, he often fails to notice someone crossing the scene disguised as a gorilla. Something similar could be happening to us when we try to detect intelligent non-terrestrial signals, which may manifest themselves in dimensions that escape our perception, such as the still-mysterious dark matter and dark energy.

 

Inside the Occator crater of the dwarf planet Ceres appears a strange structure, looking like a square inside a triangle.
Credit: NASA / JPL-Caltech

 

 

A well-known experiment in which young people passed a ball showed that when an observer focuses on counting the passes, he often fails to notice someone crossing the scene disguised as a gorilla. According to researchers at the University of Cádiz (Spain), something similar could be happening to us when we try to detect intelligent non-terrestrial signals, which may manifest themselves in dimensions that escape our perception, such as the still-mysterious dark matter and dark energy.

One of the problems that have long intrigued experts in cosmology is how to detect possible extraterrestrial signals. Are we really looking in the right direction? Maybe not, according to the study that the neuropsychologists Gabriel de la Torre and Manuel García, from the University of Cádiz, published in the journal Acta Astronautica.

“When we think of other intelligent beings, we tend to see them through the sieve of our own perception and consciousness; however, we are limited by our sui generis vision of the world, and it’s hard for us to admit it,” says De la Torre, who prefers to avoid the terms ‘extraterrestrial’ or ‘alien’ because of their Hollywood connotations, using the more generic ‘non-terrestrial’ instead.

“What we are trying to do with this differentiation is to contemplate other possibilities,” he says. “For example, beings of dimensions that our mind cannot grasp, or intelligences based on dark matter or dark energy, which make up almost 95% of the universe and which we are only beginning to glimpse. There is even the possibility that other universes exist, as the texts of Stephen Hawking and other scientists indicate.”

The authors state that our own neurophysiology, psychology and consciousness can play an important role in the search for non-terrestrial civilizations, an aspect that they believe has been neglected until now.

In relation to this, they conducted an experiment with 137 people, who had to distinguish aerial photographs containing artificial structures (buildings, roads, etc.) from others showing only natural elements (mountains, rivers, etc.). In one of the images, a tiny character disguised as a gorilla was inserted to see whether the participants noticed it.

This test was inspired by the one carried out by the researchers Christopher Chabris and Daniel Simons in the 1990s to demonstrate the inattentional blindness of human beings: a person in a gorilla costume walked across the scene, gesticulating, while the observers were busy with something else (counting the ball passes of the players in white shirts), and more than half did not notice.

“It is very striking, but at the same time very significant and representative of how our brain works,” says De la Torre, who explains that the results were similar in his experiment with the images. “In addition, our surprise was greater,” he adds, “since before doing the inattentional-blindness test we assessed the participants with a series of questions to determine their cognitive style (whether they were more intuitive or more rational), and it turned out that the intuitive individuals identified the gorilla in our photo more often than the more rational and methodical ones.”

“If we transfer this to the problem of searching for other non-terrestrial intelligences, the question arises of whether our current strategy may result in us not perceiving the gorilla,” stresses the researcher, who insists: “Our traditional conception of space is limited by our brain, and we may have the signals right above us and be unable to see them. Maybe we’re not looking in the right direction.”

Another example presented in the article is an apparently geometric structure that can be seen in images of Occator, a crater of the dwarf planet Ceres famous for its bright spots. “Our structured mind tells us that this structure looks like a triangle with a square inside, something that theoretically is not possible on Ceres,” says De la Torre, “but maybe we are seeing things where there are none, which in psychology is called pareidolia.”

However, the neuropsychologist points out another possibility: “The opposite could also be true. We can have the signal in front of us and not perceive it or be unable to identify it. If this happened, it would be an example of the cosmic gorilla effect. In fact, it could have happened in the past or it could be happening right now.”

Three types of intelligent civilizations

In their study, the authors also consider how different classes of intelligent civilizations might be. They present a classification with three types based on five factors: biology, longevity, psychosocial aspects, technological progress and distribution in space.

An example of a Type 1 civilization is ours, which could be ephemeral if it mishandles technology or planetary resources, or if it does not survive a cataclysm. But it could also evolve into a Type 2 civilization, characterized by the great longevity of its members, who control quantum and gravitational energy, manage space-time and are able to explore galaxies.

“We were well aware that the existing classifications are too simplistic and are generally only based on the energy aspect. The fact that we use radio signals does not necessarily mean that other civilizations also use them, or that the use of energy resources and their dependence are the same as we have,” the researchers point out, recalling the theoretical nature of their proposals.

The third type of intelligent civilization, the most advanced, would be constituted by exotic beings, with an eternal life, capable of creating in multidimensional and multiverse spaces, and with an absolute dominion of dark energy and matter.

Story Source:

Materials provided by FECYT – Spanish Foundation for Science and Technology. Note: Content may be edited for style and length.


Journal Reference:

  1. Gabriel G. De la Torre, Manuel A. Garcia. The cosmic gorilla effect or the problem of undetected non terrestrial intelligent signals. Acta Astronautica, 2018; 146: 83. DOI: 10.1016/j.actaastro.2018.02.036

 

Source: FECYT – Spanish Foundation for Science and Technology. “A cosmic gorilla effect could blind the detection of aliens.” ScienceDaily. ScienceDaily, 10 April 2018. <www.sciencedaily.com/releases/2018/04/180410132835.htm>.

Filed Under News

Large concentrations of sulfites and bisulfites in shallow lakes may have set the stage for Earth’s first biological molecules

Date:
April 9, 2018

Source:
Massachusetts Institute of Technology

Summary:
Planetary scientists have found that large concentrations of sulfites and bisulfites in shallow lakes may have set the stage for synthesizing Earth’s first life forms.

 

White Island, New Zealand. Researchers have found that a class of molecules called sulfidic anions may have been abundant in Earth’s lakes and rivers.
Credit: © Alba / Fotolia

 

 

Around 4 billion years ago, Earth was an inhospitable place, devoid of oxygen, bursting with volcanic eruptions, and bombarded by asteroids, with no signs of life in even the simplest forms. But somewhere amid this chaotic period, the chemistry of the Earth turned in life’s favor, giving rise, however improbably, to the planet’s very first organisms.

What prompted this critical turning point? How did living organisms rally in such a volatile world? And what were the chemical reactions that brewed up the first amino acids, proteins, and other building blocks of life? These are some of the questions researchers have puzzled over for decades in trying to piece together the origins of life on Earth.

Now planetary scientists from MIT and the Harvard-Smithsonian Center for Astrophysics have identified key ingredients that were present in large concentrations right around the time when the first organisms appeared on Earth.

The researchers found that a class of molecules called sulfidic anions may have been abundant in Earth’s lakes and rivers. They calculate that, around 3.9 billion years ago, erupting volcanoes emitted huge quantities of sulfur dioxide into the atmosphere, which eventually settled and dissolved in water as sulfidic anions — specifically, sulfites and bisulfites. These molecules likely had a chance to accumulate in shallow waters such as lakes and rivers.

“In shallow lakes, we found these molecules would have been an inevitable part of the environment,” says Sukrit Ranjan, a postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences. “Whether they were integral to the origin of life is something we’re trying to work out.”

Preliminary work by Ranjan and his collaborators suggests that sulfidic anions would have sped up the chemical reactions required to convert very simple prebiotic molecules into RNA, a genetic building block of life.

“Prior to this work, people had no idea what levels of sulfidic anions were present in natural waters on early Earth; now we know what they were,” Ranjan says. “This fundamentally changes our knowledge of early Earth and has had direct impact on laboratory studies of the origin of life.”

Ranjan and his colleagues published their results today in the journal Astrobiology.

Setting early Earth’s stage

In 2015, chemists from Cambridge University, led by John Sutherland, who is a co-author on the current study, discovered a way to synthesize the precursors to RNA using just hydrogen cyanide, hydrogen sulfide, and ultraviolet light — all ingredients that are thought to have been available on early Earth, before the appearance of the first life forms.

From a chemistry point of view, the researchers’ case was convincing: The chemical reactions they carried out in the laboratory overcame longstanding chemical challenges, to successfully yield the genetic building blocks to life. But from a planetary science standpoint, it was unclear whether such ingredients would have been sufficiently abundant to jumpstart the first living organisms.

For instance, comets may have had to rain down continuously to bring enough hydrogen cyanide to Earth’s surface. Meanwhile, hydrogen sulfide, which would have been released in huge amounts by volcanic eruptions, would have mostly stayed in the atmosphere, as the molecule is relatively insoluble in water, and therefore would not have had regular opportunities to interact with hydrogen cyanide.

Instead of approaching the origins-of-life puzzle from a chemistry perspective, Ranjan looked at it from a planetary perspective, attempting to identify the actual conditions that might have existed on early Earth, around the time the first organisms appeared.

“The origins-of-life field has traditionally been led by chemists, who try to figure out chemical pathways and see how nature might have operated to give us the origins of life,” Ranjan says. “They do a really great job of that. What they don’t do in as much detail is, they don’t ask what were conditions on early Earth like before life? Could the scenarios they invoke have actually happened? They don’t know as much what the stage setting was.”

Cranking up the ingredients for life

In August 2016, Ranjan gave a talk at Cambridge University about volcanism on Mars and the types of gases that would have been emitted by such eruptions in the red planet’s oxygenless atmosphere. Chemists at the talk realized that the same general conditions would have occurred on Earth prior to the start of life.

“They took away from that [talk] that, on early Earth, you don’t have much oxygen, but you do have sulfur dioxide from volcanism,” Ranjan recalls. “As a consequence, you should have sulfites. And they said, ‘Can you tell us how much of this molecule there would have been?’ And that’s what we set out to constrain.”

To do so, he started with a volcanism model developed previously by Sara Seager, MIT’s Class of 1941 Professor of Planetary Sciences, and her former graduate student Renyu Hu.

“They did a study where they asked, ‘Suppose you take the Earth and just crank up the amount of volcanism on it. What concentrations of gases do you get in the atmosphere?'” Ranjan says.

He consulted the geological record to determine the amount of volcanism that likely took place around 3.9 billion years ago, around the time the first life forms are thought to have appeared, then looked up the types and concentrations of gases that this amount of volcanism would have produced according to Seager and Hu’s calculations.

Next, he wrote a simple aqueous geochemistry model to calculate how much of these gases would have been dissolved in shallow lakes and reservoirs — environments that would have been more conducive to concentrating life-forming reactions, versus vast oceans, where molecules could easily dissipate.

Interestingly, while conducting these calculations he consulted the literature of a rather unexpected field: winemaking — a science that involves, in part, dissolving sulfur dioxide in water to produce sulfites and bisulfites under oxygenless conditions similar to those on early Earth.

“When we were working on this paper, a lot of the constants and data we pulled out were from the wine chemistry journals, because it’s where we have anoxic environments here on modern Earth,” Ranjan says. “So we took aspects of wine chemistry and asked: ‘Suppose we have x amount of sulfur dioxide. How much of that dissolves in water, and then what does it become?'”
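As a rough illustration of the kind of calculation described here, and not Ranjan’s actual model, the Python sketch below combines Henry’s law with the two acid-dissociation steps of dissolved sulfur dioxide to estimate dissolved SO2, bisulfite and sulfite concentrations for an assumed SO2 partial pressure over a lake with a buffered pH. The constants are approximate room-temperature values, and every input is an assumption chosen only for illustration.

# Approximate 25 °C constants (assumed for illustration only)
K_H = 1.3     # Henry's law constant for SO2, mol / (L * atm)
pKa1 = 1.86   # SO2·H2O <-> H+ + HSO3-  (bisulfite)
pKa2 = 7.2    # HSO3-   <-> H+ + SO3^2- (sulfite)

def sulfur_speciation(p_so2_atm, pH):
    """Equilibrium concentrations (mol/L) of dissolved SO2, bisulfite and
    sulfite for a fixed SO2 partial pressure and a buffered pH, ignoring
    activity corrections and any feedback of the dissolved acid on pH."""
    h = 10.0 ** (-pH)
    so2_aq = K_H * p_so2_atm              # Henry's law
    hso3 = (10.0 ** -pKa1) * so2_aq / h   # first dissociation
    so3 = (10.0 ** -pKa2) * hso3 / h      # second dissociation
    return so2_aq, hso3, so3

# Hypothetical post-eruption atmosphere: 1e-7 atm of SO2 over a lake buffered at pH 6
print(sulfur_speciation(1e-7, 6.0))

With these invented inputs the bisulfite concentration comes out near the millimolar level; the point of the sketch is the structure of the calculation, not the specific numbers.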

Community cross-talk

Ultimately, he found that, while volcanic eruptions would have spewed huge quantities of both sulfur dioxide and hydrogen sulfide into the atmosphere, it was the former that dissolved more easily in shallow waters, producing large concentrations of sulfidic anions, in the form of sulfites and bisulfites.

“During major volcanic eruptions, you might have had up to millimolar levels of these compounds, which is about laboratory-level concentrations of these molecules, in the lakes,” Ranjan says. “That is a titanic amount.”

The new results point to sulfites and bisulfites as a new class of molecules — ones that were actually available on early Earth — that chemists can now test in the lab, to see whether they can synthesize from these molecules the precursors for life.

Early experiments led by Ranjan’s colleagues suggest that sulfites and bisulfites may have indeed encouraged biomolecules to form. The team carried out chemical reactions to synthesize ribonucleotides with sulfites and bisulfites, versus with hydrosulfide, and found the former were able to produce ribonucleotides and related molecules 10 times faster than the latter, and at higher yields. More work is needed to confirm whether sulfidic anions were indeed early ingredients in brewing up the first life forms, but there is now little doubt that these molecules were part of the prebiotic milieu.

For now, Ranjan says the results open up new opportunities for collaboration.

“This demonstrates a need for people in the planetary science community and origins-of-life community to talk to each other,” Ranjan says. “It’s an example of how cross-pollination between disciplines can really yield simple but robust and important insights.”

This work was funded, in part, by the Simons Foundation, via the Simons Collaboration on the Origin of Life.

Story Source:

Materials provided by Massachusetts Institute of Technology. Original written by Jennifer Chu. Note: Content may be edited for style and length.


Journal Reference:

  1. Sukrit Ranjan, Zoe R. Todd, John D. Sutherland, Dimitar D. Sasselov. Sulfidic Anion Concentrations on Early Earth for Surficial Origins-of-Life Chemistry. Astrobiology, 2018; DOI: 10.1089/ast.2017.1770

 

Source: Massachusetts Institute of Technology. “Brewing up Earth’s earliest life: Large concentrations of sulfites and bisulfites in shallow lakes may have set the stage for Earth’s first biological molecules.” ScienceDaily. ScienceDaily, 9 April 2018. <www.sciencedaily.com/releases/2018/04/180409103833.htm>.

Filed Under News

Winter in April – NYC 2018

 

Last week, another snow storm hit New York City. Barely visible in the view is the Empire State Building.

NYC: Winter in April – View from the 24th Floor ©Target Health Inc.

 

For more information about Target Health contact Warren Pearlson (212-681-2100 ext. 165). For additional information about software tools for paperless clinical trials, please also feel free to contact Dr. Jules T. Mitchel. The Target Health software tools are designed to partner with both CROs and Sponsors. Please visit the Target Health Website.

 

Joyce Hays, Founder and Editor in Chief of On Target

Jules Mitchel, Editor

 

Filed Under News, What's New

The Porphyrias are Generally Considered Genetic in Nature

A skin rash in a person with porphyria: From Wikimedia Commons, the free media repository

 

 

Porphyria is the name for certain medical conditions or diseases, which have been known since the days of Hippocrates. Those who suffer from the disease cannot make certain substances in the blood. They have disorders of certain 1) _____, which normally work in the production of porphyrins and heme. The condition is usually caused by a genetic deficiency, but chemicals which affect metabolism may also cause it. Arsenic is an example. It can be triggered by various drugs or by some environmental conditions. The disease causes skin problems, or some diseases of the nervous system, or both. Severe pain is often present. When someone is affected by porphyria they will start to lose their 2) ____ about two weeks after having an attack. There is no cure for the hair loss.

 

In humans, porphyrins are the main precursors of heme, an essential constituent of hemoglobin, myoglobin, catalase, peroxidase, and P450 liver cytochromes. The body requires porphyrins to produce heme, which is used to carry oxygen in the blood among other things, but in the porphyrias there is a deficiency (inherited or acquired) of the enzymes that transform the various porphyrins into others, leading to abnormally high levels of one or more of these substances. Porphyrias are classified in two ways, by symptoms and by pathophysiology. Physiologically, porphyrias are classified as liver or erythropoietic based on the sites of accumulation of heme precursors, either in the liver or in the bone marrow and red blood cells.

 

There are eight enzymes in the heme biosynthetic pathway, four of which – the first one and the last three – are in the mitochondria, while the other four are in the cytosol. Defects in any of these can lead to some form of porphyria. The hepatic porphyrias are characterized by acute neurological attacks (seizures, psychosis, extreme back and abdominal pain, and an acute polyneuropathy), while the erythropoietic forms present with skin problems, usually a light-sensitive blistering rash and increased hair growth.

 

The first reports of clinical porphyria appeared in the 1870s, describing patients who excreted porphyrin-like chemicals and had symptoms ranging from abdominal pain to photosensitivity. Some had disfiguring photocutaneous damage with loss of tissue from the ears, eyelids and fingers, in association with very high urine porphyrins. One patient was hired as a laboratory assistant for Hans Fischer, a German physician and chemist, and provided urine from which the chemical structure of uroporphyrin was deduced. Fischer then synthesized the compound de novo. In the process, he established the structures of bilirubin and heme, work for which he received the Nobel Prize in Chemistry in 1930. Studies of heme synthesis in living organisms followed, starting in the 1940s, facilitated by the advent of isotopic tracer techniques. By the 1960s, the pathway was well delineated, allowing predictions of the specific enzyme deficiency associated with each of the porphyrias. The predictions were confirmed initially with assay of the individual enzymes, then by deoxyribonucleic acid (DNA) analysis of the relevant genes.

 

Three of the acute hepatic porphyrias (acute intermittent porphyria (AIP), hereditary coproporphyria (HCP), and variegate porphyria (VP)) are autosomal dominant disorders, affecting males and females equally. A fourth type, delta-aminolevulinic aciduria (ALAD), is autosomal recessive and very rare. For the three dominant types, family studies with DNA analysis have indicated multiple mutations. While enzyme activity varies with the nature of the mutation, on average it is roughly 50% of normal. Efforts to associate specific mutations with clinical manifestations have been largely negative. Regardless of genotype, the vast majority (perhaps 90%) of confirmed genetic carriers never experience an attack. In families with multiple documented genetic carriers, symptoms often are limited to one or two individuals.

 

People with symptoms represent only a small fraction of those who carry a relevant mutation and are at risk of an attack. In one study of mutation prevalence from France, 3,350 healthy blood donors were screened for HMBS (PBG deaminase) deficiency. The test was positive in four, and a known AIP mutation was documented in two. Thus, the prevalence of mutations in this group was at least 1:1,675 (60:100,000) – far larger than is generally assumed. Another study from northwestern Russia (St. Petersburg) and Finland screened patients who were admitted to a neurology ward with acute polyneuropathy or encephalopathy and abdominal pain. Out of 108 patients, 11% proved to have previously undiagnosed acute porphyria. While these studies are small, they suggest that people coming to an Emergency Department (ED) with recent onset abdominal pain may have a mutation for AIP more often than is generally assumed.

 

The most common acute symptom is abdominal 3) ____, which is usually diffuse. In some patients, it is localized to the back or an extremity. By the time the patient is seen, pain has increased relentlessly for several days, not hours, often accompanied by nausea, vomiting, and constipation. Fever and leukocytosis are not present in the absence of an accompanying infectious process. Mild elevation of the transaminases is common. Hyponatremia may be seen and occasionally is severe. Because patients tend to present dehydrated after several days of nausea and inadequate oral intake, hyponatremia may be masked initially by hemoconcentration but can develop rapidly after rehydration with dextrose in water.

 

In an attack that has not been recognized and treated, acute visceral symptoms may progress to motor neuropathy, which is manifested initially as weakness of the proximal limb muscles but in advanced cases the respiratory muscles as well. Seizures occur in 10-20% of cases. Patients presenting with seizure represent a difficult challenge for the consulting neurologist. If acute porphyria is not considered, the treatment may include medications such as phenytoin or valproic acid, which intensify the attack with potentially disastrous results. For this reason, early diagnosis is critical, followed by definitive treatment with intravenous hemin. Seizures in acute porphyria can be controlled with a short-acting benzodiazepine, gabapentin, or magnesium.

 

For people who are known carriers of a porphyria 4) _____, determining whether symptoms represent a porphyria exacerbation or are due to a more common problem can be a diagnostic challenge. In those with recurrent attacks, the pattern of symptoms is largely reproduced with each episode, but urine PBG still should be evaluated for biochemical confirmation. A random urine sample is adequate, provided a urine creatinine is obtained, so that results can be expressed per gram creatinine. The sample should be collected prior to intravenous (IV) fluid resuscitation. If it is very dilute, it could yield a false-negative result.
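As a minimal worked example of that normalization (all values hypothetical), a spot-urine PBG concentration is simply divided by the urine creatinine concentration:

pbg_mg_per_L = 8.0           # hypothetical measured urine PBG
creatinine_g_per_L = 2.0     # hypothetical measured urine creatinine
pbg_per_g_creatinine = pbg_mg_per_L / creatinine_g_per_L
print(pbg_per_g_creatinine)  # 4.0 mg PBG per gram of creatinine

Expressing the result per gram of creatinine is what allows a random, possibly dilute sample to be interpreted.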

 

Typically, acute 5) ____ porphyria is considered only after a patient has made several visits to the ED. What is needed is a standard quantitative test that is available on an urgent or expedited basis. The concern of the laboratories is the cost of providing a service that may be requested only occasionally. However, the actual cost per positive test of implementing rapid PBG screening is unknown, because very little information exists on the prevalence of acute porphyria in the urgent care setting. An analysis from the Mayo 6) ____ concluded that a rapid PBG test would be cost-effective. While additional studies are needed, it should be noted that the cost of a delayed diagnosis is also high: multiple unnecessary procedures, prolonged hospitalization, and treatment for an erroneous diagnosis, including intensive care unit (ICU) care for those in neurological crisis.

 

Misdiagnosis also occurs, but in the outpatient setting rather than the ED. It arises when the gamut of tests and procedures has failed to identify a reason for the patient’s chronic abdominal pain. Rare diseases then come under consideration, and a porphyrin screen is ordered. The latter consists of fractionated urine porphyrins only; it does not include ALA and PBG, which must be ordered separately. In some cases, the report shows elevation of several porphyrin fractions, predominantly coproporphyrins (COPROs). Although the changes are quantitatively minor (2- to 4-fold the upper limit of normal) and well below the threshold for symptoms of cutaneous porphyria, the patient receives a diagnosis of porphyria and feels relief at having, at last, a name for the recurring symptoms.

 

Initial management is focused on eliminating factors that may be contributing to an attack, including inducer medications, caloric deprivation, and dehydration. Medications considered risky for genetic 7) ____ of acute porphyria and all nonessential medications are discontinued. The American Porphyria Foundation (APF) and the European Porphyria Network (EPNET) maintain lists of drugs that are considered safe or hazardous. If possible, calories and rehydration are administered orally to reverse the fasting state; otherwise, 10% dextrose in 0.45% normal saline is administered IV. Although this therapy has not been tested in controlled trials, it does stop the attack for some patients. Hyponatremia may be severe, requiring urgent 8) _____ administration. A protocol that minimizes the risk of brainstem damage should be followed. Pain relief generally requires opiates, often in large doses, while waiting for porphyria-specific therapy to take effect.

 

Fifty years ago, the outlook for acute porphyria with neurological complications was poor, with a reported mortality of 35%. Although the prognosis remains guarded, the number of cases progressing to advanced disease has declined, as a result of heightened awareness, early identification of genetic carriers, and specific therapy in the form of hemin infusion. In patients with neuropathy who respond to treatment, motor deficits resolve slowly but usually completely, over an average of 10-11 months. Because the manifestations of an attack can include an altered 9) ____ state, there has been speculation that people with undiagnosed acute porphyria may be institutionalized for psychiatric reasons. One study indeed found an unexpectedly high prevalence of elevated urine 10) ____ in residents of a mental health facility, without symptoms suggestive of acute porphyria. When the question was reexamined, abnormal tests were no more frequent than expected.  In patients with known porphyria who have been monitored long-term, to date there has been no evidence for excess chronic mental illness.

 

ANSWERS:  1) enzymes; 2) hair; 3) pain; 4) mutation; 5) porphyria; 6) clinic; 7) carriers; 8) saline; 9) mental; 10) PBG

 

Filed Under News

King George III of Great Britain

Full-length portrait in oils of a clean-shaven young George in eighteenth century dress: gold jacket and breeches, ermine cloak, powdered wig, white stockings, and buckled shoes.

 

Graphic credit: Allan Ramsay, via Google Cultural Institute, Public Domain, https://commons.wikimedia.org/w/index.php?curid=23604082

 

 

Editor’s note: We are including extra information about King George III not only because all of Europe was seething during his reign, but also because the King’s illness remains curious, with no definitive diagnosis to this day, and because American history of the period is inextricably bound up with him. Finally, we thought readers should know, more than the average educated American does, that King George III was not the tyrant most Americans believe him to have been, but an astute politician, a curious intellectual, a highly cultured person, and a moral man, devoted to his family and to his beloved country. We hope you come away understanding King George III better after you read this short piece. Readers may not realize that this king was exceedingly popular with the people of Great Britain.

George III (George William Frederick; 4 June 1738 – 29 January 1820) was King of Great Britain and King of Ireland from 25 October 1760 until the union of the two countries on 1 January 1801. After the union, he was King of the United Kingdom of Great Britain and Ireland until his death. He was concurrently Duke and prince-elector of Brunswick-Luneburg (‘Hanover’) in the Holy Roman Empire before becoming King of Hanover on 12 October 1814. He was the third British monarch of the House of Hanover, but unlike his two predecessors, he was born in England, spoke English as his first language, and never visited Hanover. His life and reign, which were longer than those of any of his predecessors, were marked by a series of military conflicts involving his kingdoms, much of the rest of Europe, and places farther afield in Africa, the Americas and Asia. Early in his reign, Great Britain defeated France in the Seven Years’ War, becoming the dominant European power in North America and India. However, many of Britain’s American colonies were soon lost in the American War of Independence. Further wars against revolutionary and Napoleonic France from 1793 concluded in the defeat of Napoleon at the Battle of Waterloo in 1815.

 

In the later part of his life, George III had recurrent, and eventually permanent, mental illness. Although it has since been suggested that he had the blood disease porphyria, the cause of his illness remains unknown. After a final relapse in 1810, a regency was established, and George III’s eldest son, George, the conniving Prince of Wales, ruled as Prince Regent. On George III’s death, the Prince Regent succeeded his father as George IV. Historical analysis of George III’s life has gone through a ‘kaleidoscope of changing views’ that have depended heavily on the prejudices of his biographers and the sources available to them. Until it was reassessed in the second half of the 20th century, his reputation in the United States was one of a tyrant; and in Britain he became, for a minority, ‘the scapegoat for the failure of imperialism’.

 

George was born in London at Norfolk House in St James’s Square. He was the grandson of King George II, and the eldest son of Frederick, Prince of Wales, and Augusta of Saxe-Gotha. Because Prince George was born two months prematurely and was thought unlikely to survive, he was baptized the same day by Thomas Secker, who was both Rector of St James’s and Bishop of Oxford. One month later, he was publicly baptized at Norfolk House, again by Secker. His godparents were the King of Sweden (for whom Lord Baltimore stood proxy), his uncle the Duke of Saxe-Gotha (for whom Lord Carnarvon stood proxy) and his great-aunt the Queen of Prussia (for whom Lady Charlotte Edwin stood proxy). George grew into a healthy but reserved and shy child. The family moved to Leicester Square, where George and his younger brother Prince Edward, Duke of York and Albany, were educated together by private tutors. Family letters show that he could read and write in both English and German, as well as comment on political events of the time, by the age of eight. He was the first British monarch to study science systematically. Apart from chemistry and physics, his lessons included astronomy, mathematics, French, Latin, history, music, geography, commerce, agriculture and constitutional law, along with sporting and social accomplishments such as dancing, fencing, and riding. His religious education was wholly Anglican. At age 10 George took part in a family production of Joseph Addison’s play Cato and said in the new prologue: ‘What, tho’ a boy! It may with truth be said, A boy in England born, in England bred.’ Historian Romney Sedgwick argued that these lines appear ‘to be the source of the only historical phrase with which he is associated’. Clearly, this historian was one of the minority who downplayed King George’s many talents.

 

George’s grandfather, King George II, disliked the Prince of Wales, and took little interest in his grandchildren. However, in 1751 the Prince of Wales died unexpectedly from a lung injury, and George became heir apparent to the throne. He inherited his father’s title of Duke of Edinburgh. Now more interested in his grandson, three weeks later the King created George Prince of Wales (the title is not automatically acquired). In the spring of 1756, as George approached his 18th birthday, the King offered him a grand establishment at St James’s Palace, but George refused the offer, guided by his mother and her confidant, Lord Bute, who would later serve as Prime Minister. George’s mother, now the Dowager Princess of Wales, preferred to keep George at home where she could imbue him with her strict moral values.

 

In 1759, George was smitten with Lady Sarah Lennox, sister of the Duke of Richmond, but Lord Bute advised against the match and George abandoned his thoughts of marriage. ‘I am born for the happiness or misery of a great nation,’ he wrote, ‘and consequently must often act contrary to my passions.’ Nevertheless, attempts by the King to marry George to Princess Sophie Caroline of Brunswick-Wolfenbuttel were resisted by him and his mother; Sophie married the Margrave of Bayreuth instead. The following year, at the age of 22, George succeeded to the throne when his grandfather, George II, died suddenly on 25 October 1760, two weeks before his 77th birthday. The search for a suitable wife intensified. On 8 September 1761 in the Chapel Royal, St James’s Palace, the King married Princess Charlotte of Mecklenburg-Strelitz, whom he met on their wedding day. A fortnight later on 22 September both were crowned at Westminster Abbey. George remarkably never took a mistress (in contrast with his grandfather and his sons), and the couple enjoyed a genuinely happy marriage until his mental illness struck. They had 15 children – nine sons and six daughters. In 1762, George purchased Buckingham House (on the site now occupied by Buckingham Palace) for use as a family retreat. His other residences were Kew and Windsor Castle. St James’s Palace was retained for official use. He did not travel extensively, and spent his entire life in southern England. In the 1790s, the King and his family took holidays at Weymouth, Dorset, which he thus popularized as one of the first seaside resorts in England.

 

George, in his accession speech to Parliament, proclaimed: ‘Born and educated in this country, I glory in the name of Britain.’ He inserted this phrase into the speech, written by Lord Hardwicke, to demonstrate his desire to distance himself from his German forebears, who were perceived as caring more for Hanover than for Britain. Although his accession was at first welcomed by politicians of all parties, the first years of his reign were marked by political instability, largely generated as a result of disagreements over the Seven Years’ War. George was also perceived as favoring Tory ministers, which led to his denunciation by the Whigs as an autocrat. On his accession, the Crown lands produced relatively little income; most revenue was generated through taxes and excise duties. George surrendered the Crown Estate to Parliamentary control in return for a civil list annuity for the support of his household and the expenses of civil government. Claims that he used the income to reward supporters with bribes and gifts are disputed by historians who say such claims ‘rest on nothing but falsehoods put out by disgruntled opposition’.

 

Debts amounting to over 3 million pounds over the course of George’s reign were paid by Parliament, and the civil list annuity was increased from time to time. He aided the Royal Academy of Arts with large grants from his private funds and may have donated more than half of his personal income to charity. Of his art collection, the two most notable purchases are Johannes Vermeer’s Lady at the Virginals and a set of Canalettos, but it is as a collector of books that he is best remembered. The King’s Library was open and available to scholars and was the foundation of a new national library. In May 1762, the incumbent Whig government of the Duke of Newcastle was replaced with one led by the Scottish Tory Lord Bute. Bute’s opponents worked against him by spreading the calumny that he was having an affair with the King’s mother, and by exploiting anti-Scottish prejudices amongst the English. John Wilkes, a member of parliament, published The North Briton, which was both inflammatory and defamatory in its condemnation of Bute and the government. Wilkes was eventually arrested for seditious libel but he fled to France to escape punishment; he was expelled from the House of Commons, and found guilty in absentia of blasphemy and libel. In 1763, after concluding the Peace of Paris which ended the war, Lord Bute resigned, allowing the Whigs under George Grenville to return to power. Later that year, the Royal Proclamation of 1763 placed a limit upon the westward expansion of the American colonies. The Proclamation aimed to divert colonial expansion to the north (to Nova Scotia) and to the south (Florida). The Proclamation Line did not bother the majority of settled farmers, but it was unpopular with a vocal minority and ultimately contributed to conflict between the colonists and the British government.

 

With the American colonists generally unburdened by British taxes, the government thought it appropriate for them to pay towards the defense of the colonies against native uprisings and the possibility of French incursions. The central issue for the colonists was not the amount of taxes but whether Parliament could levy a tax without American approval, for there were no American seats in Parliament. The Americans protested that like all Englishmen they had rights to ‘no taxation without representation’. In 1765, Grenville introduced the Stamp Act, which levied a stamp duty on every document in the British colonies in North America. Since newspapers were printed on stamped paper, those most affected by the introduction of the duty were the most effective at producing propaganda opposing the tax. Meanwhile, the King had become exasperated at Grenville’s attempts to reduce the King’s prerogatives, and tried, unsuccessfully, to persuade William Pitt the Elder to accept the office of Prime Minister. After a brief illness, which may have presaged his illnesses to come, George settled on Lord Rockingham to form a ministry, and dismissed Grenville. Lord Rockingham, with the support of Pitt and the King, repealed Grenville’s unpopular Stamp Act, but his government was weak and he was replaced in 1766 by Pitt, on whom George bestowed the title, Earl of Chatham. The actions of Lord Chatham and George III in repealing the Act were so popular in America that statues of them both were erected in New York City. Lord Chatham fell ill in 1767, and the Duke of Grafton took over the government, although he did not formally become Prime Minister until 1768. That year, John Wilkes returned to England, stood as a candidate in the general election, and came at the top of the poll in the Middlesex constituency. Wilkes was again expelled from Parliament. Wilkes was re-elected and expelled twice more, before the House of Commons resolved that his candidature was invalid and declared the runner-up as the victor. Grafton’s government disintegrated in 1770, allowing the Tories led by Lord North to return to power.

 

George was deeply devout and spent hours in prayer, but his piety was not shared by his brothers. George was appalled by what he saw as their loose morals. In 1770, his brother Prince Henry, Duke of Cumberland and Strathearn, was exposed as an adulterer, and the following year Cumberland married a young widow, Anne Horton. The King considered her inappropriate as a royal bride: she was from a lower social class and German law barred any children of the couple from the Hanoverian succession. George insisted on a new law that essentially forbade members of the Royal Family from legally marrying without the consent of the Sovereign. The subsequent bill was unpopular in Parliament, including among George’s own ministers, but passed as the Royal Marriages Act 1772. Shortly afterward, another of George’s brothers, Prince William Henry, Duke of Gloucester and Edinburgh, revealed he had been secretly married to Maria, Countess Waldegrave, the illegitimate daughter of Sir Edward Walpole. The news confirmed George’s opinion that he had been right to introduce the law: Maria was related to his political opponents. Neither lady was ever received at court. Lord North’s government was chiefly concerned with discontent in America. To assuage American opinion most of the custom duties were withdrawn, except for the tea duty, which in George’s words was ‘one tax to keep up the right [to levy taxes]’. In 1773, the tea ships moored in Boston Harbor were boarded by colonists and the tea thrown overboard, an event that became known as the Boston Tea Party. In Britain, opinion hardened against the colonists, with Chatham now agreeing with North that the destruction of the tea was ‘certainly criminal’. With the clear support of Parliament, Lord North introduced measures, which were called the Intolerable Acts by the colonists: the Port of Boston was shut down and the charter of Massachusetts was altered so that the upper house of the legislature was appointed by the Crown instead of elected by the lower house. Up to this point, in the words of Professor Peter Thomas, George’s ‘hopes were centered on a political solution, and he always bowed to his cabinet’s opinions even when skeptical of their success. The detailed evidence of the years from 1763 to 1775 tends to exonerate George III from any real responsibility for the American Revolution.’ Though the Americans characterized George as a tyrant, in these years he acted as a constitutional monarch supporting the initiatives of his ministers.

 

George III is often accused of obstinately trying to keep Great Britain at war with the revolutionaries in America, despite the opinions of his own ministers. In the words of the Victorian author George Trevelyan, the King was determined ‘never to acknowledge the independence of the Americans, and to punish their contumacy by the indefinite prolongation of a war which promised to be eternal.’ The King wanted to ‘keep the rebels harassed, anxious, and poor, until the day when, by a natural and inevitable process, discontent and disappointment were converted into penitence and remorse’. However, more recent historians defend George by saying that, in the context of the times, no king would willingly surrender such a large territory, and his conduct was far less ruthless than that of contemporary monarchs in Europe. In early 1778, France (Britain’s chief rival) signed a treaty of alliance with the United States and the conflict escalated. The United States and France were soon joined by Spain and the Dutch Republic, while Britain had no major allies of its own. As late as the Siege of Charleston in 1780, Loyalists could still believe in their eventual victory, as British troops inflicted heavy defeats on the Continental forces at the Battle of Camden and the Battle of Guilford Court House. In late 1781, the news of Lord Cornwallis’s surrender at the Siege of Yorktown reached London; Lord North’s parliamentary support ebbed away and he resigned the following year. The King drafted an abdication notice, which was never delivered, finally accepted the defeat in North America, and authorized peace negotiations. The Treaties of Paris, by which Britain recognized the independence of the American states and returned Florida to Spain, were signed in 1782 and 1783. When John Adams was appointed American Minister to London in 1785, George had become resigned to the new relationship between his country and the former colonies. He told Adams, ‘I was the last to consent to the separation; but the separation having been made and having become inevitable, I have always said, as I say now, that I would be the first to meet the friendship of the United States as an independent power.’

George III was extremely popular in Britain. The British people admired him for his piety, and for remaining faithful to his wife. He was fond of his children and was devastated at the death of two of his sons in infancy in 1782 and 1783 respectively. By this time, George’s health was deteriorating. He had a mental illness, characterized by acute mania, which was possibly a symptom of the genetic disease porphyria, although this has been questioned. A study of samples of the King’s hair published in 2005 revealed high levels of arsenic, a possible trigger for the disease. The source of the arsenic is not known, but it could have been a component of medicines or cosmetics.

 

The King may have had a brief episode of disease in 1765, but a longer episode began in the summer of 1788. At the end of the parliamentary session, he went to Cheltenham Spa to recuperate. It was the furthest he had ever been from London – just short of 100 miles (160 km) – but his condition worsened. In November he became seriously deranged, sometimes speaking for many hours without pause, causing him to foam at the mouth and making his voice hoarse. His doctors were largely at a loss to explain his illness, and spurious stories about his condition spread, such as the claim that he shook hands with a tree in the mistaken belief that it was the King of Prussia.

 

Editor’s note: The King’s German wife, Charlotte, was intelligent, educated and cultured, and she brought German musicians to the Court of George III. The young Mozart and his family spent more than a year in London during the reign, and the boy performed for the King and Queen. Handel, who had become a British subject and composed some of his greatest works in England, remained a particular favorite of the English Court. As many readers may know, the first aria in Handel’s great opera, Xerxes, depicts a man singing to a tree. For your enjoyment, a recording of this aria is linked below.

 

Treatment for mental illness was primitive by modern standards, and the King’s doctors, who included Francis Willis, treated the King by forcibly restraining him until he was calm, or by applying caustic poultices to draw out “evil humors”. In February 1789, the Regency Bill, authorizing his eldest son, the ever-scheming Prince of Wales, to act as regent, was introduced and passed in the House of Commons, but before the House of Lords could pass the bill, George III recovered. After George’s recovery, his popularity, and that of Pitt, continued to increase at the expense of Fox and the Prince of Wales. His humane and understanding treatment of two insane assailants, Margaret Nicholson in 1786 and John Frith in 1790, contributed to his popularity. James Hadfield’s failed attempt to shoot the King in the Drury Lane Theatre on 15 May 1800 was not political in origin but was motivated by the apocalyptic delusions of Hadfield and Bannister Truelock. George seemed unperturbed by the incident, so much so that he fell asleep during the intermission.

 

The French Revolution of 1789, in which the French monarchy had been overthrown, worried many British landowners. France declared war on Great Britain in 1793; to support the war effort, George allowed Pitt to increase taxes, raise armies, and suspend the right of habeas corpus. The First Coalition to oppose revolutionary France, which included Austria, Prussia, and Spain, broke up in 1795 when Prussia and Spain made separate peace with France. The Second Coalition, which included Austria, Russia, and the Ottoman Empire, was defeated in 1800. Only Great Britain was left fighting Napoleon Bonaparte, the First Consul of the French Republic. Pitt’s plan to grant political rights to Roman Catholics was firmly opposed by the King, who regarded it as contrary to his coronation oath, and Pitt resigned. At about the same time, the King had a relapse of his previous illness, which he blamed on worry over the Catholic question. On 14 March 1801, Pitt was formally replaced by the Speaker of the House of Commons, Henry Addington. Addington opposed emancipation, instituted annual accounts, abolished income tax and began a program of disarmament. In October 1801, he made peace with the French, and in 1802 signed the Treaty of Amiens. George did not consider the peace with France to be real; in his view it was an “experiment”. In 1803 the war resumed, but public opinion distrusted Addington to lead the nation in war and instead favored Pitt. An invasion of England by Napoleon seemed imminent, and a massive volunteer movement arose to defend England against the French. George’s review of 27,000 volunteers in Hyde Park, London, on 26 and 28 October 1803, at the height of the invasion scare, attracted an estimated 500,000 spectators on each day. The Times said, “The enthusiasm of the multitude was beyond all expression.” A courtier wrote on 13 November that “The King is really prepared to take the field in case of attack; his beds are ready and he can move at half an hour’s warning.” George wrote to his friend Bishop Hurd, “We are here in daily expectation that Bonaparte will attempt his threatened invasion … Should his troops effect a landing, I shall certainly put myself at the head of mine, and my other armed subjects, to repel them.” After Admiral Lord Nelson’s famous naval victory at the Battle of Trafalgar, the possibility of invasion was extinguished.

 

In 1804, George’s recurrent illness returned. In late 1810, at the height of his popularity, already virtually blind with cataracts and in pain from rheumatism, George became dangerously ill. In his view the malady had been triggered by stress over the death of his youngest and favorite daughter, Princess Amelia. The Princess’s nurse reported that “the scenes of distress and crying every day were melancholy beyond description.” He accepted the need for the Regency Act of 1811, and the Prince of Wales acted as Regent for the remainder of George III’s life. Despite signs of a recovery in May 1811, by the end of the year George had become permanently insane and lived in seclusion at Windsor Castle until his death. Prime Minister Spencer Perceval was assassinated in 1812 and was replaced by Lord Liverpool. Liverpool oversaw British victory in the Napoleonic Wars. The subsequent Congress of Vienna led to significant territorial gains for Hanover, which was upgraded from an electorate to a kingdom.

 

Meanwhile, George’s health deteriorated. He developed dementia, and became completely blind and increasingly deaf. He was incapable of knowing or understanding either that he was declared King of Hanover in 1814, or that his wife died in 1818. At Christmas 1819, he spoke nonsense for 58 hours, and for the last few weeks of his life was unable to walk. He died at Windsor Castle at 8:38 pm on 29 January 1820, six days after the death of his fourth son, the Duke of Kent. His favorite son, Frederick, Duke of York, was with him. George III was buried on 16 February in St George’s Chapel, Windsor Castle. George was succeeded by two of his sons, George IV and William IV, who both died without surviving legitimate children, leaving the throne to the only legitimate child of the Duke of Kent, Victoria, the last monarch of the House of Hanover. George III lived for 81 years and 239 days and reigned for 59 years and 96 days: both his life and his reign were longer than those of any of his predecessors. Only Victoria and Elizabeth II have since lived and reigned longer.

 

George III was dubbed “Farmer George” by satirists, at first to mock his interest in mundane matters rather than politics, but later to contrast his homely thrift with his son’s grandiosity and to portray him as a man of the people. Under George III, the British Agricultural Revolution reached its peak and great advances were made in fields such as science and industry. There was unprecedented growth in the rural population, which in turn provided much of the workforce for the concurrent Industrial Revolution. George’s collection of mathematical and scientific instruments is now owned by King’s College London but housed in the Science Museum, London, to which it has been on long-term loan since 1927. He had the King’s Observatory built in Richmond-upon-Thames for his own observations of the 1769 transit of Venus. When William Herschel discovered Uranus in 1781, he at first named it Georgium Sidus (George’s Star) after the King, who later funded the construction and maintenance of Herschel’s 1785 40-foot telescope, which was the biggest ever built at the time. In the mid-twentieth century the work of the historian Lewis Namier, who thought George was “much maligned”, started a re-evaluation of the man and his reign.

 

The very cultured court of King George III welcomed composers and performers. Handel had died in 1759, the year before George’s accession, but the King revered his music, and the young Mozart performed for King George and his wife Charlotte during his family’s stay in London. Farinelli, the most celebrated opera singer of the time, had left the London stage for the Spanish court a generation earlier, yet his fame still defined what great singing meant in the eighteenth century.

 

George Frederic Handel (1685-1759); Painting is by Balthasar Denner – National Portrait Gallery: NPG 1976; Public Domain, https://commons.wikimedia.org/w/index.php?curid=6364709

 

Carlo Broschi Farinelli (1705-1782), wearing the Order of Calatrava; painting by Jacopo Amigoni, c. 1750-52 – photograph by Manuel Parada Lopez de Corselas (User: Manuel de Corselas, ARS SUMMUM, Centro para el Estudio y Difusion Libres de la Historia del Arte), Summer 2007; Public Domain, https://commons.wikimedia.org/w/index.php?curid=2568895

 

Farinelli was the most celebrated Italian castrato singer of the 18th century and one of the greatest singers in the history of opera. He has been described as having a soprano vocal range and as singing the highest notes customary at the time.

 

 

For your enjoyment

 

Countertenor David Daniels sings from Xerxes by George Frederic Handel

(The first aria in Xerxes is the well-known “Ombra mai fu”, in which a man sings to a tree of his love and admiration for its existence. Could Handel have heard the malicious gossip that King George III, during one of his episodes, was seen talking to a tree? He could not have: Xerxes premiered in 1738 and Handel died in 1759, three decades before the King’s 1788 illness, so the resemblance is a striking coincidence rather than a connection.)

 

Music from Handel’s opera Rinaldo, as sung in the film Farinelli

 

Filed Under History of Medicine, News

NIH Completes In-Depth Genomic Analysis of 33 Cancer Types

 

The NIH has completed a detailed genomic analysis, known as the PanCancer Atlas, on a data set of molecular and clinical information from over 10,000 tumors representing 33 types of cancer. The PanCancer Atlas, published as a collection of 27 papers across a suite of Cell journals, sums up the work accomplished by The Cancer Genome Atlas (TCGA). The PanCancer Atlas effort complements the over 30 tumor-specific papers that have been published by TCGA in the last decade and expands upon earlier pan-cancer work that was published in 2013. The project focused not only on cancer genome sequencing, but also on different types of data analyses, such as investigating gene and protein expression profiles, and associating them with clinical and imaging data.

 

The PanCancer Atlas is divided into three main categories, each anchored by a summary paper that recaps the core findings for the topic. The main topics include cell of origin, oncogenic processes and oncogenic pathways. Multiple companion papers report in-depth explorations of individual topics within these categories.

 

In the first summary paper, the authors summarize the findings from a set of analyses that used a technique called molecular clustering, which groups tumors by parameters such as genes being expressed, abnormality of chromosome numbers in tumor cells and DNA modifications. The paper’s findings suggest that tumor types cluster by their possible cells of origin, a result that adds to our understanding of how tumor tissue of origin influences a cancer’s features and could lead to more specific treatments for various cancer types.
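
To make the idea of molecular clustering concrete, here is a minimal, illustrative sketch in Python. The random expression matrix, the choice of five clusters and the use of plain k-means are assumptions for illustration only; the actual PanCancer analyses integrated several molecular data types with more specialized clustering methods.

```python
# Illustrative sketch only: group tumors by molecular profile.
# The expression matrix and the cluster count are hypothetical, and the
# real PanCancer analyses used more sophisticated, multi-platform methods.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Stand-in for a tumors-by-genes expression matrix (rows = tumors).
expression = rng.normal(size=(200, 500))

# Standardize each gene so highly variable genes do not dominate.
scaled = StandardScaler().fit_transform(expression)

# Partition the tumors into k molecular clusters.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0)
labels = kmeans.fit_predict(scaled)

print("Tumors per cluster:", np.bincount(labels))
```

In the TCGA work, clusters defined this way were then compared with each tumor’s tissue of origin, which is how the cell-of-origin patterns described above were identified.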

 

The second summary paper presents a broad view of the TCGA findings on the processes that lead to cancer development and progression. Specifically, the authors noted that the findings identified three critical oncogenic processes: mutations, both germline (inherited) and somatic (acquired); the influence of the tumor’s underlying genome and epigenome on gene and protein expression; and the interplay of tumor and immune cells. These findings will help prioritize the development of new treatments and immunotherapies for a wide range of cancers.

 

The final summary paper details TCGA investigations of the genomic alterations in the signaling pathways that control cell-cycle progression, cell death and cell growth, revealing the similarities and differences in these processes across a range of cancers. These findings point to new patterns of potential vulnerability in cancers that will aid in the development of combination therapies and personalized medicine.

 

The entire collection of papers comprising the PanCancer Atlas is available through a portal on cell.com. Additionally, as the decade-long TCGA effort wraps up, a three-day symposium, TCGA Legacy: Multi-Omic Studies in Cancer, will be held in Washington, D.C., September 27-29, 2018, to discuss the future of large-scale cancer studies, with a session focusing on the PanCancer Atlas. The meeting will feature the latest advances on the genomic architecture of cancer and showcase recent progress toward therapeutic targeting.

 

Filed Under News

Elevated Blood Pressure Before Pregnancy May Increase Chance of Pregnancy Loss

 

According to a study published in the journal Hypertension (2 April 2018), elevated blood pressure before conception may increase the chances of pregnancy loss. The authors note that lifestyle changes to keep blood pressure under control could potentially reduce that risk. The analysis found that for every 10 mmHg increase in diastolic blood pressure (the pressure when the heart is resting between beats), there was an 18% higher risk of pregnancy loss among the study population. (Millimeters of mercury, or mmHg, is the unit of measure used for blood pressure.) The authors also found a 17% increase in pregnancy loss for every 10 mmHg increase in mean arterial pressure, a measure of the average pressure in the arteries over full heartbeat cycles.
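
As a rough illustration of these measures, the sketch below (in Python, with hypothetical readings) computes a mean arterial pressure using the standard clinical approximation of diastolic pressure plus one third of the pulse pressure, and scales the study’s per-10-mmHg risk ratio to other pressure differences. Extrapolating the risk ratio multiplicatively in this way is an assumption made here for illustration, not a calculation reported in the paper.

```python
# Illustrative sketch only. The per-10-mmHg risk ratio (1.18) comes from
# the study summary above; applying it multiplicatively to arbitrary
# pressure differences is an assumption, not a result from the paper.

def mean_arterial_pressure(systolic: float, diastolic: float) -> float:
    """Standard clinical approximation: diastolic plus one third of pulse pressure."""
    return diastolic + (systolic - diastolic) / 3.0

def relative_risk(delta_mmhg: float, rr_per_10: float = 1.18) -> float:
    """Scale a per-10-mmHg risk ratio to an arbitrary pressure difference."""
    return rr_per_10 ** (delta_mmhg / 10.0)

if __name__ == "__main__":
    # Hypothetical reading of 120/80 mmHg.
    print(f"MAP at 120/80: {mean_arterial_pressure(120, 80):.1f} mmHg")
    # A diastolic reading 10 mmHg above the study average of 72.5 mmHg.
    print(f"Approximate risk ratio for +10 mmHg diastolic: {relative_risk(10):.2f}")
```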

 

The authors analyzed data collected as part of the Effects of Aspirin in Gestation and Reproduction (EAGeR) trial, which sought to determine whether daily low-dose aspirin (81 milligrams) could prevent miscarriage in women who had a history of pregnancy loss. The trial enrolled more than 1,200 women ages 18 to 40 years and took blood pressure readings before the women were pregnant and again in the fourth week of pregnancy. Average diastolic blood pressure for the women in the study was 72.5 mmHg; normal blood pressure in adults corresponds to a diastolic reading below 80 mmHg. The authors began to see an increase in pregnancy loss among women who had a diastolic reading above 80 mmHg (approximately 25% of the participants). None of the women in the study had stage II high blood pressure (diastolic pressure above 90 mmHg or systolic pressure above 140 mmHg).

 

The authors cautioned that the study does not prove that elevated blood pressure causes pregnancy loss; it is possible that another, yet-to-be-identified factor could account for the findings. They added, however, that the relationship between preconception blood pressure and pregnancy loss remained the same when they statistically accounted for other factors that could increase pregnancy loss, such as greater maternal age, higher body mass index or smoking.

 

Filed Under News

The Voice of the Patient

 

The following is based on a press release from Dr. Scott Gottlieb, FDA Commissioner.

 

Benefit-risk assessment is at the heart of what FDA does to ensure that Americans have access to medical products that are safe, effective and meet their needs. But FDA is also deeply aware that serious chronic illnesses aren’t monolithic. Patient perception of the benefits and risks of different treatment options can vary based on the stage of the disease, the age of onset, alternative therapies available to treat the disease (if any) and whether a novel therapy improves a patient’s ability to function normally, slows the rate of disease progression or impacts other aspects of a patient’s quality of life.

 

A 45-year-old father of two who is diagnosed with aggressive prostate cancer may have very different goals than an 80-year-old man diagnosed with the same disease. To address these realities, FDA will continue to work in close partnership with patients to incorporate their experience into FDA’s benefit-risk assessments. First-hand knowledge of living with a serious illness – communicated in science-based terms that patients value and understand – is integral to facilitating the successful development of safe and effective products that can deliver meaningful benefits in each disease, or disease state.

 

Today there are many more tools to measure these patient benefits – including wearable devices, medical apps and even machine-learning programs. These tools can bring a better understanding of how patients experience their illness, including how it affects their day-to-day feeling or functioning and how a given treatment may impact the course of that illness. Tools for capturing the patient experience may be quantitative or qualitative, but they are transforming nearly every aspect of medical product development. Patients can teach all of us about the benefits that matter most to them and the risks that they are most concerned about. Patients are, rightly so, becoming the driving force of the medical research enterprise.

 

Structured and Transparent Benefit-Risk Assessments

 

FDA’s ongoing work to enhance its benefit-risk assessment and communication in the human drug review process began in 2013 as part of the Prescription Drug User Fee Act (PDUFA) V. The priority to enhance benefit-risk assessment has continued with new efforts begun in 2017 as part of PDUFA VI, further expanded under the 21st Century Cures Act. Implementing these key pieces of legislation is improving clarity and consistency in communicating the reasoning behind the FDA’s drug regulatory decisions. It is also helping integrate the patient’s perspective into drug development and regulatory decision-making.

FDA has issued an update to its implementation plan, titled “Benefit-Risk Assessment in Drug Regulatory Decision-Making.” This document provides an overview of the steps the FDA has taken since 2013 to enhance benefit-risk assessment in human drug review, including implementation of the FDA’s Benefit-Risk Framework in its drug regulatory review processes and documentation. The document also provides a roadmap for enhancing the Benefit-Risk Framework, working toward a goal of providing guidance by June 2020 that articulates the FDA’s decision-making context and framework for benefit-risk assessment. This forthcoming guidance will also outline how patient experience data and related information can be used to inform benefit-risk assessment.

 

In order for a drug or biologic product to be approved, the FDA conducts a comprehensive analysis of all available data to determine whether the drug is effective and whether its expected benefits outweigh its potential risks. This assessment is fundamental to the FDA’s regulatory process. The goal of the FDA’s Benefit-Risk Framework is to improve the clarity and consistency in communicating the reasoning behind drug regulatory decisions, and to ensure that FDA reviewers’ detailed assessments can be readily understood in the broader context of patient care and public health. The structured framework also helps drug sponsors and other external stakeholders better understand the factors that contribute to the FDA’s decision-making process when evaluating new drugs, including drugs under development. A standard Benefit-Risk Framework will also better ensure that the patient community can continue to engage effectively with the agency, and help the FDA improve how it evaluates benefits and risks from the patient’s perspective. The Benefit-Risk Framework has been applied in FDA reviews of novel drugs and biologics over the past few years, and FDA is now using it more broadly.

 

Incorporating Patient Voice into Benefit-Risk Assessments

 

The Benefit-Risk Framework recognizes that when FDA reviewers conduct a benefit-risk assessment, they consider not only the submitted evidence on the benefits, risks and effects reported in clinical studies, but also, importantly, the “clinical context” of the disease. This context encompasses two major considerations: 1) an analysis of the disease condition, including its severity; and 2) the degree of unmet medical need. As part of this work, the FDA recognizes a need to learn about the clinical context more comprehensively and directly from the perspective of the patients who live with the disease and their caregivers. After conducting patient-focused drug development meetings in over 20 disease areas, the FDA has concluded that patient input can: 1) inform the clinical context and provide insights to frame the assessment of benefits and risks; and 2) provide a direct source of evidence regarding benefits and risks, if methodologically sound data-collection tools can be developed and used within clinical studies of an investigational therapy. The FDA is now developing guidance to enable more widespread development of such patient experience data to inform regulatory decision-making, as part of its implementation of PDUFA VI and the 21st Century Cures Act. Other efforts to more systematically incorporate patients’ experiences and perspectives include:

 

1. Hosting patient-focused drug development public meetings to advance a more systematic way of gathering patients’ perspectives on their conditions and available treatments;

2. Encouraging patient stakeholders and others to conduct their own externally-led, patient-focused drug development meetings;

3. Providing patients, caregivers, advocates and others with more channels to provide meaningful input into drug development and regulatory decision-making, and to more easily access information provided by others; and

4. Launching pilot programs – and advancing policies, in collaboration with the medical community – that help foster the design of clinical trials that place less burden on patients.

 

The benefit-risk implementation plan issued today is part of the FDA’s ongoing commitment to advancing its mission of protecting and promoting public health. It marks another important step forward in increasing the transparency of FDA decisions and in streamlining the process by which the FDA obtains input from the patient and stakeholder communities. In the battle against disease, engaged and informed patients are our best allies and among our greatest resources.

Filed Under News, Regulatory
