Early Trial Results Show New Malaria Vaccine Stimulates Strong Immune Response
A mosquito sucking blood — The red bump and itching caused by a mosquito bite are
actually an allergic reaction to the mosquito’s saliva.
NIH.gov, March 4, 2010 — A new candidate vaccine to prevent clinical malaria has passed an important hurdle on the development path, according to researchers from the University of Bamako in Mali, West Africa, and the University of Maryland School of Medicine’s Center for Vaccine Development. In a new study of the candidate vaccine in young children in Mali, researchers found it stimulated strong and long-lasting immune responses. The antibody levels the vaccine produced in the children were as high as, or even higher than, the antibody levels found in adults who have naturally developed protective immune responses to the parasite over a lifetime of exposure to malaria.
The new candidate vaccine is based on a single strain of the Plasmodium falciparum parasite—the most common and deadliest of the human malaria parasites—and targets malaria in the blood stage. The blood stage refers to the period following initial infection by mosquito bite, when the parasite multiplies in red blood cells, causing disease and death.
The team tested the vaccine in 100 children ages 1–6 in rural Mali. It was shown to be safe and well tolerated, and strong antibody responses were sustained for at least a year. Based on results from this early trial, the research teams will conduct a larger trial in 400 Malian children to further evaluate the vaccine’s effectiveness. That study will also examine whether the vaccine—though it is based on a single strain—can protect against other strains of Plasmodium falciparum.
Thera MA, Doumbo OK, Coulibaly D, Laurens MB, Kone AK, et al. Safety and Immunogenicity of an AMA1 Malaria Vaccine in Malian Children: Results of a Phase 1 Randomized Controlled Trial. PLoS ONE, 2010; 5 (2): e9041 DOI: 10.1371/journal.pone.0009041
Caspar Controls Resistance to Plasmodium falciparum in Diverse Anopheline Species
In this study, researchers examined the interaction between Plasmodium falciparum, the parasite that causes human malaria, and its mosquito host, Anopheles gambiae. They found that a single mosquito gene called caspar, which acts as a negative regulator of the mosquito’s immune response, controls resistance to the parasite. When this gene is silenced, Plasmodium falciparum is unable to develop in three different mosquito species that carry malaria to humans. In the future, this gene could be targeted to develop novel malaria control methods that would apply to different species of Anopheline mosquitoes around the world.
Garver LS, Dong Y, Dimopoulos G. Caspar controls resistance to Plasmodium falciparum in diverse anopheline species. PLoS Pathogens. 2009.
Invasion of mosquito salivary gland requires interaction between malaria parasite and mosquito proteins
The malaria parasite Plasmodium falciparum must undergo a complex series of developmental stages inside the mosquito in order to ensure its transmission to humans. The final stage of development, which allows the parasite to be transmitted, is the sporozoite, the infectious form of P. falciparum. The sporozoite must be present in the mosquito salivary gland tissue in order to be transmitted to humans during a mosquito’s blood feeding.
NIAID-supported researchers have recently shed light on the interaction between the sporozoite stage and the mosquito salivary gland. Led by Dr. Anil Ghosh of the Johns Hopkins School of Public Health, scientists identified a protein (saglin) as the receptor for the sporozoite surface protein TRAP. An enhanced understanding of how malaria sporozoites invade mosquito salivary glands is important because if this process can be blocked, no transmission of the parasite from mosquitoes to humans will occur.
Ghosh AK, et al. Malaria parasite invasion of the mosquito salivary gland requires interaction between the Plasmodium TRAP and the Anopheles saglin proteins. PLoS Pathog. 2009 Jan;5(1):e1000265.
Two duplicated P450 genes are associated with pyrethroid resistance in Anopheles funestus, a major malaria vector
The Anopheles funestus mosquito is an important but understudied vector of malaria in Africa, where more than 300 million acute cases of the disease occur each year. Insecticides containing pyrethroids have been an effective tool for controlling mosquito populations. Unfortunately, pyrethroid resistance in mosquitoes is increasing, which has lessened the effectiveness of these insecticides in malaria control efforts. Identifying the genes involved in resistance, and developing diagnostic tests that are easily deployed in the field and sensitive enough to detect the emergence of resistance, are essential to developing more effective vector control methods. In a study supported by NIAID, a research team led by Dr. Charles Wondji of the Liverpool School of Tropical Medicine identified the major genes conferring pyrethroid resistance in An. funestus. They identified two specific P450 genes and established that these genes could serve as valid resistance markers in laboratory-raised strains of An. funestus. Studies are underway to establish whether these markers will be useful in mosquitoes found in the field.
Wondji CS, et al. Two duplicated P450 genes are associated with pyrethroid resistance in Anopheles funestus, a major malaria vector. Genome Research. February 5, 2009.
Few animals on Earth evoke the antipathy that mosquitoes do. Their itchy, irritating bites and nearly ubiquitous presence can ruin a backyard barbecue or a hike in the woods. They have an uncanny ability to sense our murderous intentions, taking flight and disappearing milliseconds before a fatal swat. And in our bedrooms, the persistent, whiny hum of their buzzing wings can wake the soundest of sleepers.
Beyond the nuisance factor, mosquitoes are carriers, or vectors, for some of humanity’s most deadly illnesses, and they are public enemy number one in the fight against global infectious disease. Mosquito-borne diseases cause millions of deaths worldwide every year with a disproportionate effect on children and the elderly in developing countries.
There are more than 3,000 species of mosquitoes, but members of three genera bear primary responsibility for the spread of human diseases. Anopheles mosquitoes are the only genus known to carry malaria; they also transmit filariasis (also called elephantiasis) and encephalitis. Culex mosquitoes carry encephalitis, filariasis, and West Nile virus. And Aedes mosquitoes, of which the voracious Asian tiger mosquito is a member, carry yellow fever, dengue, and encephalitis.
Mosquitoes use exhaled carbon dioxide, body odors and temperature, and movement to home in on their victims. Only female mosquitoes have the mouth parts necessary for sucking blood. When biting with their proboscis, they stab two tubes into the skin: one to inject an enzyme that inhibits blood clotting; the other to suck blood into their bodies. They use the blood not for their own nourishment but as a source of protein for their eggs. For food, both males and females eat nectar and other plant sugars.
Mosquitoes transmit disease in a variety of ways. In the case of malaria, parasites attach themselves to the gut of a female mosquito and enter a host as she feeds. In other cases, such as yellow fever and dengue, a virus enters the mosquito as it feeds on an infected human and is transmitted via the mosquito’s saliva to a subsequent victim.
The only silver lining to that cloud of mosquitoes in your garden is that they are a reliable source of food for thousands of animals, including birds, bats, dragonflies, and frogs. In addition, humans are actually not the first choice for most mosquitoes looking for a meal. They usually prefer horses, cattle, and birds.
All mosquitoes need water to breed, so eradication and population-control efforts usually involve removal or treatment of standing water sources. Insecticide spraying to kill adult mosquitoes is also widespread. However, global efforts to stop the spread of mosquitoes are having little effect, and many scientists think global warming will likely increase their numbers and range.
The-Scientist.com, March 4, 2010 — It’s like Hollywood in the fish room of the animal biology department at the University of Illinois, Urbana-Champaign. Dave Ernst, the lab tech, points the camera through a peep hole in a black plastic drape towards a small fish tank, while behind him, postdoc Katie McGhee dips a net into a larger tank of juvenile three-spined sticklebacks, ready to pick out the day’s first star. “C’mon, who wants to be famous?” she clucks, transferring a fish via a plastic beaker to the smaller tank to be filmed. The camera rolls.
The selected stickleback does what sticklebacks do—it swims. It circles up and down the smaller tank, pokes around the fake plants at the bottom, then moves back up the water column, its little fish mouth swishing back and forth. Meanwhile, McGhee gets into what she calls “pike position.” Crouching behind the tank, she dangles a green ceramic replica of a pike—a common stickleback predator—above the water. At the 3-minute mark, she drops the fake pike into the water and slides it back and forth along the tank’s back wall. Upon seeing the intruder, the stickleback freezes in the bottom right corner of the tank. But after a few minutes the fish gets positively cheeky, swimming right up to the pike’s head, before seemingly losing interest and meandering off nearby.
After 9 minutes, the stickleback’s time in the spotlight is over. There are still about 130 more fish to film before going through the footage to assess individual differences in factors like how actively each fish foraged in the tank before the pike appeared and how it responded to the predator’s presence. Scientists are finding that fish raised by a father (the species’ sole caretaker) tend to take fewer risks with predators than fish raised in incubators. Ultimately, the scientists also hope to probe the genomic underpinnings of this behavioral variation. Or, as they’re calling it, the differences in stickleback personality.
Though it sounds like an almost heretical term to use for fish, “personality,” say McGhee and her boss, Alison Bell, is nothing more than consistent, individual differences in behavior. And in any species—even surprising ones such as squid, birds, and insects—one can find such variability in spades. Meaning that, even if their environment is the same, one individual will consistently act differently from another. Classic traits include shyness or boldness in response to threats such as the presence of a predator, and aggression to conspecifics, but there’s also how actively an individual explores a new environment—curiosity, one might say—or how sociable it is, or its general level of activity. While these are the traits most widely studied so far, Bell and others say they’re probably just a start. In some cases, personality traits might be heritable, while in others they might develop as a learned response to differences in conditions of an organism’s life—the kind of parental care it receives, for example, as in McGhee’s experiment, or an influx of predators into its habitat.
Despite the environmental component, personality can be more than just a learned response to environment, since a learned behavior can be forgotten relatively quickly. Bell likens personality to factors such as height or weight, which clearly can have both a genetic and an environmental component, while being more stable. “I would say the key thing to personality is that there is individual variation and individual consistency,” she says. That variation and consistency might also explain why some individuals might learn from their environment faster than others.
It’s not at all obvious from an evolutionary standpoint why these consistent individual differences should appear in many species—why one stickleback might repeatedly venture out in search of its next meal while a pike lurks nearby, for instance, but another consistently hides behind a rock until the danger passes. “People say there’s no way a fish can be smart enough” to have what humans call personality, says Bell. “But having personality is actually a stupid thing to do.” Consider a species of spider in which particularly aggressive females have a leg up on both fighting off predators and competing for food. That aggression, though, spills over into another context: those females may not be able to hold back from cannibalizing a potential mate, cutting off their chance to reproduce. This is clearly not optimal behavior. The central question for animal personality researchers, then, is how such a range of differences might have evolved. And if these differences are maintained in a population, they must carry some adaptive value. If so, what is it?
Of course, there are many other reasons besides personality why individual animals behave differently in the same environment. Ants become workers versus drones, and bees or naked mole rats take on their roles within the colony, based on their genetics or on strictly environmental cues, such as how much nutrition an organism received during development. Alternatively, an organism might come by its role over time, as in bees, which start out as workers and become foragers later in life. A couple of key caveats illustrate how personality is different. First, while selection pressures on personality traits act on individuals, in social organisms they more often act on the group as a whole, since only some individuals are able to reproduce. Additionally, while colony roles tend to be distinct, personality traits fall on a continuous gradient; in other words, you can be more or less shy or bold, but you’re categorically either a worker or a drone.
The field is not without naysayers. Even scientists in its inner circle can be skittish about applying the term “personality” to the likes of fish and spiders. Others say the data don’t necessarily support the concept that these consistent, individual differences in behavior reflect a distinct phenomenon, whether it’s called “personality” or not. The field has become a fad, claims Luc-Alain Giraldeau, a behavioral ecologist at the University of Quebec at Montréal, with few novel ideas to offer behavioral and evolutionary biology at large. “Animal personality does not have the foundations of theory,” he says. “So when we find it, we don’t know exactly what we’ve learned about biology.”
It’s no surprise that animals of all kinds react differently to their surroundings, but for the most part, researchers have historically assumed that an individual organism’s behavior follows the rules of maximum fitness in any given situation, changing its behavior as the situation dictates. If the amount of food available in a habitat suddenly drops, for example, or changes location, an animal ups its foraging activity or changes its search strategy. Variation around this ideal response, in classical models of behavior, was considered to be simply noise.
But variability among individual animals that persists regardless of circumstances—whether that variability benefits survival or not—suggests there may be more to the story. One of the earliest observations by behavioral ecologists of this type of variability was published by Felicity Huntingford, a functional ecologist at the University of Glasgow, in 1976. Sticklebacks who were bold, she observed—that is, who didn’t shy away from predators—also tended to pick fights with members of their own species. Furthermore, individuals measured at different points in their lives tended to maintain the same level of aggression and boldness relative to other members of their group. That consistency in individuals implied that some fish behaved more boldly and aggressively than others not because their circumstances dictated those responses, but because that’s just how they were.
Huntingford’s 1976 study lay dormant and rarely cited in the literature, perhaps at best an observational curiosity, for almost 20 years. But in the early to mid-1990s, researchers began to study the continuum of traits such as boldness and shyness in species in which you wouldn’t necessarily expect to see them, perhaps most notably David Sloan Wilson, who studied fish at Binghamton University in New York. Meanwhile, other labs began observing individual differences in foraging, risk-taking, and other types of behavior in the wild. Piet Drent’s group at the Netherlands Institute of Ecology found that great tits that were quick to explore novel environments—so-called fast explorers—were not very good at picking up on changes to those environments, such as when the researchers moved the food. Birds that were slower to explore, however, were better at switching up the routine. By following a natural population over 5 years, the group provided evidence that these so-called personality traits were heritable. The researchers found that different personality types also seemed to be differentially affected by selection pressures depending on the circumstances. In a year when extra food wasn’t present, fast explorers fared worse than slow ones, perhaps because they were slower to pick up on this change to their environment.
A more recent example from the avian world also highlights how individual behavioral differences can sometimes be adaptive. As a postdoc at Harvard, Renée Duckworth wanted to study how a factor such as choice of habitat changes the make-up of the population of two sister species of bluebirds in the northwest United States. “I never intended to work on animal personality,” recalls Duckworth, an evolutionary biologist now at the University of Arizona. “I actually started from the viewpoint of assuming I was going to find a lot of phenotypic plasticity.” The two species she studied don’t coexist, and when western bluebirds colonize areas already inhabited by mountain bluebirds, they force the latter to find new homes. In tracking the animals over several breeding seasons, she found that certain western birds, specifically the aggressive ones, go off to colonize new regions. Those animals also aren’t as attentive parents, so they have fewer offspring. After the colonizers have lived there a while, the more consistently mellow western bluebirds, who also tend to have more offspring overall, come in to take advantage of the newly captured land. The link between aggression, dispersal, and parental behavior is “a whole suite of traits that’s basically integrated into [the animals’] life history strategy,” Duckworth says.
Meanwhile, Denis Réale, a behavioral ecologist at the University of Quebec at Montréal, was also encountering individual behavioral differences that reflected animals’ life histories. While he was conducting a population genetics study on a group of bighorn sheep on a mountaintop in Alberta, the students tagging the animals once per year told Réale they could predict from how a sheep had behaved the year before whether it would be easy to tag or not. “At first I thought it was kind of funny—an anecdote,” he says. “The more I thought about it, the more I wondered, why do we have this variation in what we would call docility?” They started to track how easily the animals could be handled. The team found that males who were more aggressive also reproduced early in life, while those who were more docile sired offspring later. The aggressive males often sired lambs even at 2 or 3 years of age, but rarely survived past 8 years. Docile animals generally didn’t start reproducing until they were 5 or 6, but lived much longer and ultimately became dominant in the group. Overall, though, the two types attained the same level of fitness, in terms of number of offspring. Studies such as Duckworth’s, Drent’s, Réale’s, and Huntingford’s showed “that there was a relationship between [traits that] was kind of surprising,” says Judy Stamps, a behavioral ecologist at the University of California, Davis. “Basically, it says that for whatever reason, these things are tied together.”
So when is it personality, and when just “noise” around an optimal response? “My pet idea,” says Bell, “is that sticklebacks will behave consistently under high predation pressure but not low [predation pressure].” In a recent study, she took a population of the fish in which aggression and boldness were not correlated and exposed them for a day to hungry rainbow trout predators, which were allowed to eat half the prey population. Voilà—in the remaining sticklebacks, the pair of personality traits became linked, with some fish consistently bolder than others, and the bolder ones also consistently more aggressive.
The association between such traits seems to be heritable, according to Niels Dingemanse, an evolutionary ecologist at the Max Planck Institute for Ornithology in Seewiesen. Offspring of a population of sticklebacks that had a history of predation showed more consistent personalities than offspring whose parents were naïve to predator threats. The study needs to be replicated before firm conclusions can be drawn, says Dingemanse, who did his PhD in Drent’s great tit lab, but he suggests that the effect might be due to genetic variation in some complex of genes, which are expressed differently when the population experiences predators. Alternatively, he says that the traits might be regulated by a single gene with a pleiotropic effect—that is, acting on a handful of phenotypes at once. “If there’s no predator, that gene is silenced somehow,” he speculates, but when the population is threatened it switches on, and “all these things become correlated.”
Animal personality researchers say they’ve observed meaningful and consistent individual differences in organisms including hermit crabs, squirrels, sheep, spiders, and lizards, to name a few, not to mention primates. But calling those differences “personality” remains controversial.
At a behavior meeting in Oxford last December, Bell recalls, one postdoc gave a talk on individual differences in the behavior of aphids, plant-eating insects, in response to their ladybug predators. In the presence of a ladybug, an aphid will either drop from its leaf or hang on; the speaker suggested that some individuals consistently take one route, while others take the other, referring to this variance as a personality trait throughout her talk. When it came time for questions, “she got ripped,” says Bell, by audience members who claimed the word “personality” shouldn’t be used in an insect for such a mundane behavior. But Bell can’t see what the fuss is about; if you define personality as consistent individual differences in behavior, that’s a definition that applies to more than just complex species. “If that’s the definition we want to use, then why do we want to restrict ourselves to one kind of organism?”
However, some researchers who study such individual differences in behavior eschew the word “personality,” instead favoring monikers such as “behavioral syndromes” or “temperament,” or simply “variability,” in part to sidestep any semblance of anthropomorphizing. “I don’t really use the term,” says Duckworth. “I think it invokes too much about humans.”
Terminology, though, isn’t the only objection. Evolutionary biologists have studied variation and how it persists for decades, and the same explanations for the persistence of variation in traits like morphology and color should also apply to behavior, says Alex Kacelnik, a behavioral ecologist at Oxford University. The focus on behavioral variation is producing some interesting findings, he notes, but he’s not convinced that the novel explanations some personality researchers invoke are warranted. Researchers often claim that certain correlated traits reflect personality, but those correlated traits might have a common cause that doesn’t have to do with personality, he says. For example, an animal might be willing to forage farther from its home and eat more diverse food for a reason related to personality—because it is generally more receptive to new environments—or simply because it is more mobile, and by nature of that fact, encounters a more varied diet.
What’s more, says Giraldeau, animal personality researchers haven’t come up with a good theoretical framework that explains why the consistent differences they describe have evolved or predicts which behaviors can be called personality and which are, in fact, merely noise or some other phenomenon. One species might be aggressive in several contexts, from mating to responding to predators, he says, and so might another, but it might not be for the same reason. “I’m not convinced there is a unique phenomenon being addressed here,” he says.
Of course, says Dingemanse, studying variation and its adaptive value is nothing new for evolutionary biologists and ecologists. But for the most part, he notes, researchers have looked at traits as averages from within the population or species—that is, some species might be said to be on the whole bolder than others. Previously, scientists have also assumed that traits are fully flexible, meaning that any individual can be as shy or as bold as the situation demands. By focusing on variation between individuals, not between species or populations, the animal personality field “asks questions that have not been asked,” he says. “The study of animal personalities is just the logical continuation of looking at variation in more detail.”
Also, existing models in evolutionary theory aren’t very good at dealing with correlated clusters of traits, notes Max Wolf, a postdoc at the Centre for Adaptive Behavior and Cognition at the Max Planck Institute for Human Development in Berlin. Aggressive individuals might be bolder and have different styles of exploration than nonaggressive ones, or they might parent differently; what interests personality researchers is how one behavioral trait develops over time and how different behavioral traits interact with each other. Previous scientists “traditionally took a much more simplified approach [of] one trait at a time,” he says.
On top of that, says Bell, studies of animal personality have exposed problems with long-held evolutionary models. Take, for example, models of mate selection, which are based on the assumption that females of a single population all have the same preferences and that they are all equally choosy. But her lab recently published a meta-analysis of studies on a wide variety of organisms that found the opposite. Different females don’t all pick the largest, fittest male; just as with humans, there often seems to be no rhyme or reason to mate choice. The fact that classical models are static in this way leads to an inaccurate picture of selection pressures, Bell says.
Animal personality researchers agree that for the field to move forward, it must develop theoretical models explaining how or why stable individual differences might have evolved. One of the first computational models, published by Wolf, Franjo Weissing, and their colleagues at the University of Groningen in the Netherlands, posits that individuals differ in their risk-taking behavior, with those that have more to lose, evolutionarily speaking, developing personality traits that could be described as more cautious. Another model, proposed by Stamps but with as yet no math to back it up, links suites of personality traits to individual differences in physiology: individuals with a fast metabolism need to consume more resources and thus tend to be bolder and more aggressive.
But so far, no single theory explains the range of empirical data on personality that researchers have amassed. Researchers working on these theoretical models therefore are beginning to test their predictions in the field and the lab. Those tests, says Weissing, will determine whether the concept of animal personality has much to add to behavioral biology. “In 10 years we will know much more.”
Fish with True Person-alities
If They Have Human-Created AI, Do They Also Have Person-alities?
BountyFishing.com — Scientists in the UK plan to release a school of autonomous robotic fish into the sea off northern Spain to help detect hazardous pollutants in the water.
The robots are designed to look like carp and swim like real fish so they won’t scare the local meat-based wildlife while patrolling the port of Gijon. Each robo-carp costs upwards of £20,000 to make, measures 1.5 meters long (about the size of a seal), and can swim at a maximum speed of about one meter per second (~2.24 mph).
Five fish are being built by a robotics team at the University of Essex’s school of computer science and electronic engineering. The project has been funded by the European Commission and is being coordinated by the engineering consultancy firm, BMT Group.
“In using robotic fish we are building on a design created by hundreds of millions of years’ worth of evolution which is incredibly energy efficient,” said BMT senior research scientist Rory Doyle. “This efficiency is something we need to ensure that our pollution detection sensors can navigate in the underwater environment for hours on end.”
Each robot fish is equipped with autonomous navigation capabilities, allowing it to swim around the port without the need for human intervention. The fish can also return automatically to a charging station when their batteries run low after about eight hours of use.
When the fish return to robo-carp central for a charge, they beam water quality data to boffins via Wi-Fi. Scientists hope to use tiny chemical sensors on the fish to find sources of potentially hazardous pollutants in the water, such as underwater pipeline leaks.
Assuming the trial run goes well, scientists hope to use the robo-carp to detect pollution in rivers, lakes, and seas across the world.
Image: John Henry Fuseli, The Nightmare
Wired.com, by Alexis Madrigal — You wake up, but you can’t move a muscle. Lying in bed, you’re totally conscious, and you realize that strange things are happening. There’s a crushing weight on your chest that’s humanoid. And it’s evil.
You’ve awakened into the dream world.
This is not the conceit for a new horror movie starring a ragged, middle-aged Freddie Prinze Jr.; it’s a standard description of the experience of a real medical condition: sleep paralysis. It’s a strange phenomenon that seems to happen to about half the population at least once.
People who experience it find themselves awake in the dream world for anywhere from a few seconds to 10 minutes, often experiencing hallucinations with dark undertones. Cultures everywhere from Newfoundland to the Caribbean to Japan have come up with spiritual explanations for the phenomenon. Now, a new article in The Psychologist suggests sleep researchers are finally figuring out the neurological basis of the condition.
“This research strongly suggests that sleep paralysis is related to REM sleep, and in particular REM sleep that occurs at sleep onset,” write researchers Julia Santomauro and Christopher C. French of the Anomalistic Psychology Research Unit, Goldsmiths, at the University of London. “Shift work, jet lag, irregular sleep habits, overtiredness and sleep deprivation are all considered to be predisposing factors to sleep paralysis; this may be because such events disrupt the sleep–wake cycle, which can then cause [sleep-onset REM periods].”
In other words, you experience just a piece of REM sleep.
As David McCarty, a sleep researcher at Louisiana State University Health Sciences Center’s Sleep Medicine Program, explained it, humans tend to think about the elements of the different stages of sleep as packaged nicely together. So, in REM sleep, you’re unconscious, experiencing a variety of sensory experiences, and almost all of your muscles are paralyzed (that’s called atonia).
“But in reality you can disassociate those elements,” McCarty said.
In sleep paralysis, two of the key REM sleep components are present, but you’re not unconscious.
Narcolepsy, which can be linked with sleep paralysis, has a similar pathology. For narcoleptics, some of the elements of rapid eye movement can “come out of nowhere,” McCarty said.
Sleep paralysis was first identified within the scientific community by the neurologist Weir Mitchell in 1876. He laid down this syntactically old-school, but accurate, description of how it works: “The subject awakes to consciousness of his environment but is incapable of moving a muscle; lying to all appearance still asleep. He is really engaged in a struggle for movement fraught with acute mental distress; could he but manage to stir, the spell would vanish instantly.”
But the condition lived in folklore long before anyone tried to subject it to even semi-rigorous study. The various responses have fascinated some researchers, and they were cataloged in the 2007 book, Tall Tales About the Mind and Brain. In Japan, the problem was termed kanashibari. In Newfoundland, people called it “the old hag.” In China, “ghost oppression” was the preferred nomenclature.
A study released earlier this year found that more than 90 percent of Mexican adolescents know the phrase “a dead body climbed on top of me” to describe the disorder. More than 25 percent of them had experienced it themselves.
Having an element of REM sleep mix with your consciousness is scarier than it sounds. I experienced sleep paralysis on several occasions when I was in college. I can testify: It’s run-to-your-mama scary.
In my case, it would happen right as I was falling asleep on the two twin beds that I had taped together. The most vivid time, I “woke up” with the uneasy feeling that something awful was to my left, on the border of my peripheral vision. I couldn’t really see it, but I knew that it was evil and coming closer to me. I felt true terror, like you experience when you are about to get in a car crash. I was sure it was going to hurt me.
After a few minutes, I could finally move, and I took the opportunity to run across campus to a friend’s house and ask to sleep on the couch. With the lights on. It happened a few more times.
Then, it just stopped. It hasn’t ever happened again.
The good news, McCarty said, is that my experience is actually pretty standard. Sleep paralysis rarely persists or causes serious life damage.
“It’s very common, way more common than people realize, but usually it doesn’t recur,” he said. “It’s not frequent enough to make people come in and ask the doctor for help.”
NeuronCulture.com, by David Dobbs — Like so many things that work in medicine, adjuvants were discovered more or less by accident — and were in fact a “dirty little secret” in a fairly literal sense. As the Wikipedia entry puts it, neatly summarizing some material from a paywalled The Scientist article from a couple of years back:
“Adjuvants have been whimsically called the dirty little secret of vaccines in the scientific community. This dates from the early days of commercial vaccine manufacture, when significant variations in the effectiveness of different batches of the same vaccine were observed, correctly assumed to be due to contamination of the reaction vessels. However, it was soon found that more scrupulous attention to cleanliness actually seemed to reduce the effectiveness of the vaccines, and that the contaminants – “dirt” – actually enhanced the immune response.”
At that point, vaccine geeks started trying various additives to see (in animals) how to boost vaccine effectiveness — and had fair luck, which they didn’t quite understand. As a fine account of this work by Iayork, of the fabulous blog Mystery Rays from Outer Space, puts it:
no one knew how adjuvants worked. They just … worked. There were a myriad of choices (for animals; in the US and Canada there’s only one adjuvant, alum, that’s licensed for humans), and they all mostly worked, and sometimes one worked better and sometimes another worked better, or differently; but there was no understanding of how, or why. Sometimes toe of newt was the best choice, and sometimes you were better off with eye of toad, and it depended on the phase of the moon and on which malign vapours were influencing your system.
Sounds scary, and I suppose it is — but then again, a lot of things in medicine work this way. But don’t get skeered; we use not the eye of newt. Early on in that run of adjuvant experimentation, immunologists recognized that one adjuvant in particular, the above-mentioned alum (or alum salts), dissolved in mineral oil, was both effective and safe to use in humans. While a few new adjuvants are coming online (most notably MF59, the adjuvant used in seasonal flu vaccines in the EU, as well as in many of the swine-flu vaccines now being made), the most common adjuvant for human vaccines remains alum, and alum is, at this point, the only adjuvant approved for use in the U.S.
Now we get to the “Eureka” part of the tale. In 1989, Yale immunologist Charles Janeway, one of a long line of distinguished physicians in his family (his dad was a noted pediatrician), gave a startling lecture at an annual symposium at Cold Spring Harbor in which he proposed a solution to the adjuvant mystery — and to the larger mystery of vaccines. Asked by Cold Spring Harbor director James Watson, of double helix fame, to write the introductory essay to a summer symposium, Janeway “agreed,” he later recalled,
“with the proviso that [I] could write about anything [I] wanted to.”
What he wrote was “Approaching the Asymptote: Revolution and Evolution in Immunology,” which laid out the ‘pattern recognition’ theory, now dominant, by which the immune system mobilizes when it recognizes conserved features (that is, typical features that are conserved through evolutionary time because they work well) of pathogens. Accordingly, as Iayork puts it,
adjuvants work because they mimic these conserved pathogen-associated molecular patterns. (Polly Matzinger [another giant of immunology] also proposed a related model, in which immune responses start because cells are damaged — the danger-signal hypothesis.) Since then, many of the pathogen-associated patterns have been identified, and many of the pattern receptors have been identified; adjuvants are no longer magic, they’re science.
In rough terms, the pattern-recognition and danger-signal theories can make room for each other. (Though people argue about this.) They describe two different triggers for the immune system. One, pattern recognition, is a threat-detection alarm that mobilizes the immune system simply because a stranger enters the house. The other, the danger-signal response, rallies the troops because the stranger — someone who didn’t look nasty, apparently — has begun breaking up the furniture.
These theories seemed to explain how many adjuvants worked, and they have helped (and are helping) scientists design new adjuvants now. But as Iayork notes, there remains a weird exception to this understanding. These theories account for all adjuvants …
except for one: Alum, the most important one of all (because it’s the main adjuvant used for human vaccines).
It alone remains unexplained. Which is why, as Vincent Racaniello recently told me, “We still don’t really understand how most adjuvants work.”
As Iayork notes, a recent paper argued that alum’s activity comes from uric acid, which is released by dying or damaged cells (and is a powerful natural adjuvant), and that alum thus works along the lines proposed by Matzinger’s danger hypothesis: alum, mimicking uric acid, sends a danger signal that accelerates the body’s immune response. The jury’s still out on that one, though, so alum’s action still remains unexplained. (Oct 2, 2009: Alert reader passionlessdrone notes another paper, this one from Nature, arguing that alum sets off the danger signal via another route.)
This puts me in mind of two things: that, as every ER doctor knows, kids who spend more time on floors develop stronger immune systems; and that, as every ER doc and surgeon knows, a ragged incision (a tear) will heal faster (if not prettier) than a clean, straight incision made by a scalpel.
A little sloppiness can draw a stronger response. And we often don’t know why something that works, works.
Mass Extinctions and Genomics
ByteSizeBio.net — The geological signs of mass extinctions are very distinct: the photo shows the boundary of the Cretaceous–Tertiary (KT) extinction, which happened ~65 million years ago (Mya) and killed some 70% of the species on Earth, most famously the dinosaurs. This was the last mass extinction, and its effects on Earth’s life are very clear and dramatic. Mammals evolved and spread (radiated is the term used in evolutionary biology) to occupy many of the ecological niches the dinosaurs had left vacant. The dinosaurs that remained are now birds (yes, a superficial explanation, I know, but basically true), while one mammalian group, primates, evolved an intelligence which ultimately led to smartphones and blogging. Plant life has changed as well, with many more flowering plant species, and fewer ferns and conifers. The marks of the KT extinction are therefore found everywhere: in fossils, in geological records, and in extant life.
Badlands near Drumheller, Alberta where erosion has exposed the KT boundary. From Wikimedia Commons
Everywhere? Can we also find marks of the KT extinction in genomes? A recently published study claims so. The study appeared in the first issue of a new open access journal, Genome Biology and Evolution, from a group at Indiana University, Bloomington. To understand what they discovered, some background information is needed.
Mobile DNA Gain and Loss
Organisms can acquire DNA from other organisms through the insertion of bits of foreign DNA, known as mobile DNA, into the genome. One way this happens is by viral infection. Some viruses integrate genomic material of their own, and sometimes of other host organisms, into the hosts they infect. If those viruses happen to also infect germ cells – sperm or ova – those insertions, or retrotransposons, are passed on to subsequent generations. It is quite easy to identify these viral insertions: they are flanked by characteristic DNA stretches called Long Terminal Repeats, or LTRs. During the infection and insertion process, LTRs serve as “insertion hooks,” if you will, allowing the virus to insert its genome, along with whatever other genomic elements from former hosts happened to hitch a ride with the virus. Once the LTRs are in place in the host’s genome, they serve no purpose. Over generations, the left-side LTR and the right-side LTR acquire mutations and drift away from their original sequences. For evolutionary biologists, paired LTRs thus serve as a molecular clock. Since the LTRs were identical at the time of insertion, the amount of dissimilarity between the paired LTRs can tell us how long ago they were inserted. We can also look at the total number of LTRs in a species and see how many are newly acquired and how many are older, which gives us a picture of LTR acquisition in the lineage leading to that species over millions of years. Of course, after a very long time, paired LTRs will no longer be recognizable as such, since they have diverged too far from each other. But we can recognize LTRs up to a divergence of 50%: a pretty high divergence rate.
Upon insertion: CCCAAAGGG——————-CCCAAAGGG
Generations later: CCGAATGGG——————-CCCAGAGAG
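The molecular-clock idea above can be sketched as a short calculation. This is a toy illustration only: the 9-bp sequences are the ones from the example, while the per-site substitution rate is a made-up number (real analyses use proper substitution models, lineage-specific rates, and much longer LTRs). Since both LTRs were identical at insertion and each accumulates mutations independently, the observed divergence between them grows at roughly twice the per-LTR rate.

```python
def divergence(a: str, b: str) -> float:
    """Fraction of mismatched sites between two equal-length, aligned sequences."""
    assert len(a) == len(b), "sequences must be aligned and equal length"
    return sum(x != y for x, y in zip(a, b)) / len(a)


def insertion_age(ltr_left: str, ltr_right: str,
                  rate_per_site_per_myr: float) -> float:
    """Estimate time since insertion, in millions of years.

    The paired LTRs were identical at insertion, and each copy mutates
    independently, so divergence accumulates at twice the per-copy rate.
    """
    return divergence(ltr_left, ltr_right) / (2 * rate_per_site_per_myr)


left = "CCGAATGGG"   # the "generations later" pair from the example above
right = "CCCAGAGAG"

# Assume a hypothetical neutral rate of 0.002 substitutions/site/Myr:
age = insertion_age(left, right, rate_per_site_per_myr=0.002)
# 4 mismatches out of 9 sites -> divergence 4/9 -> age ~ 111 Myr
# with these toy numbers.
```

With only nine sites the estimate is statistically meaningless, of course; the point is just the arithmetic: divergence divided by twice the mutation rate gives the time since the pair was identical.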
Therefore, looking at a complete genome, we can see old LTRs, young LTRs, and many in between. It is like looking at an old house in which every new owner has changed something: this one added a patio, that one carved out a window, and a third installed a porch swing. We can tell the window is fairly old because of its design, while the porch swing is new because the brand name did not exist five years ago.
LTRs are also lost, not just gained. There are many mechanisms by which LTRs can be removed from a lineage: an LTR may be lost by “fading out” through an accumulation of mutations, or by being excised from the genome through the loss of a section. An LTR may also have a deleterious effect, such as increasing the possibility of cancer or decreasing the viability of the immune system. After all, an LTR is an uninvited guest in the genome, and we all know that uninvited guests are not the most desirable ones… Those LTRs will therefore be selected against in a Darwinian fashion, as they reduce their host’s fitness. Using the house analogy, the owners may revert the house to its original design in some places, fill in the pool the previous owner had dug, or simply sell the porch swing.
The LTR loss rate versus the gain rate can be modeled statistically, and from that model the expected distribution of LTRs of different ages in the genome can be inferred. Basically, the model predicts many LTRs that were gained fairly recently and fewer and fewer older LTRs, because the probability that any single LTR has been lost from the genome grows with time: the number of survivors of a given age decays exponentially. Indeed, the authors looked at fly (Drosophila), plant (Arabidopsis), and fish (Fugu) genomes, and found that the distribution of LTRs fit the expected statistical model quite well.
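That expected age distribution can be sketched numerically. The sketch below is not the paper’s model, just a minimal illustration of the idea, and both rates are made-up numbers: with a constant gain rate and a constant per-interval probability of loss, the expected count of surviving LTRs in each age bin decays geometrically with age, so the youngest elements dominate.

```python
def expected_age_distribution(gain_per_myr: float,
                              loss_prob_per_myr: float,
                              max_age_myr: int) -> list[float]:
    """Expected number of surviving LTRs in each 1-Myr age bin.

    A cohort of elements inserted t Myr ago has survived t rounds in which
    each element was lost with probability loss_prob_per_myr, so its
    expected size shrinks geometrically (i.e. exponentially) with age.
    """
    return [gain_per_myr * (1 - loss_prob_per_myr) ** t
            for t in range(max_age_myr)]


# Hypothetical rates: 100 insertions/Myr, 5% chance per Myr of losing any
# given element, looking back 80 Myr.
dist = expected_age_distribution(gain_per_myr=100,
                                 loss_prob_per_myr=0.05,
                                 max_age_myr=80)
# Under this model the youngest bin is always the largest, and counts
# fall off smoothly with age -- the pattern the non-mammalian genomes fit.
```

The mammalian anomaly in the study is precisely that the observed distribution is not shaped like this: instead of the youngest bin dominating, the counts peak at intermediate ages.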
Loss of LTRs in Mammals
But when they looked at mammalian genomes, including those of primates, things became strange. Instead of seeing a predominantly young LTR population, they saw a middle-aged population. Young LTRs were few, while there was an unexpectedly high number of old LTRs. Was it because mammalian genomes have been gaining fewer LTRs lately? Or because old LTRs were being lost at an excessive rate? A combination of both? Something else? And why only in mammals? And not even all mammals at that, because we do not see this anomaly in rodent lineages, for example.
When they looked closely at how long ago the LTR population peaked, they discovered another weirdness: almost without exception, the peak (and subsequent decline) occurred just after the KT extinction. Of course, “just after” in evolutionary terms can mean one to five million years, but the association with the KT boundary was too clear to ignore. We know why there is an iridium-rich line in the hills of south Texas, with many dinosaur fossils below it but none above it: the iridium-rich meteor that devastated Earth left an indelible mark. But a sudden peak and subsequent decline in this mobile DNA element in mammals was puzzling.
To check whether this phenomenon was due primarily to an increased rate of LTR loss or to a decreased rate of gain, the scientists looked at another type of mobile DNA element – one that, unlike LTRs, does not fade and is rarely excised. These nuclear-mitochondrial genes, or numts, stem from the slow migration of genes from mitochondrial genomes to nuclear genomes. Mitochondria are organelles, present in all animal and plant life, that have their own much-reduced genomes: most of the genes encoding the proteins active in mitochondria were lost from the mitochondria and gained in the cell nucleus. But unlike most LTRs, these genes are essential; therefore mitochondrial gene loss from the nuclear genome is rare. Examining mitochondrial gene insertions serves as a control: if there are more old mitochondrial elements than young ones, compared to non-mammals, that means the general acquisition rate of mobile DNA elements into the genome is declining, not that elements are being removed faster. They found that with mammalian numts, the rate of migration from the mitochondria to the nucleus has indeed slowed down.
So it seems that mammalian genomes have been purging themselves of mobile DNA elements (or rather: not taking in new elements) since just around the KT boundary, give or take a couple of million years. Why is that? One hypothesis is selective advantage: mobile DNA elements can disrupt the genome, decreasing a host’s fitness. But mammals had existed for millions of years before the KT extinction. According to the fossil record they were small carnivores: dinosaurs took up the large (and XXL) herbivore niches, and the large carnivore niches. Once the dinosaurs were gone, mammals started to radiate, fill those niches, and a whole new level of competition arose. The selective advantage of not having a genome encumbered by potentially damaging mobile DNA elements probably became critical at this “be ye fruitful and multiply; bring forth abundantly in the earth, and multiply therein” stage. In effect, the genomes of mammals have been shrinking by removing mobile DNA elements since just after the KT boundary. And according to the model presented in this study, this process is still ongoing: mammalian genomes are not at an equilibrium size. Unlike flies, mammals are still cleaning up.
Mammals Rule: Time to Clean House (Genome)?
Many questions are left unanswered: why would this genome cleaning take place as a result of a sudden reshuffling of the evolutionary deck and the opportunity it handed mammals (if indeed the shrinking genome is a consequence of the KT extinction)? Why would mobile DNA elements become more of a detriment then than before the KT extinction, when mammals lived in the shade of the dinosaurs? If this genomic housecleaning and shrinking is a result of the rapid speciation that followed the KT extinction, wouldn’t we expect to see it in other groups besides mammals? To the authors’ credit, the paper is written very cautiously, and the authors are careful to present all the caveats and controls they could muster. This makes it something of a long read, but a fascinating one at that.
This article has been slashdotted. Exercise extreme caution.
Rho, M., Zhou, M., Gao, X., Kim, S., Tang, H., & Lynch, M. (2009). Independent Mammalian Genome Contractions Following the KT Boundary. Genome Biology and Evolution, 2009, 2–12. DOI: 10.1093/gbe/evp007
WorldHealth.net, March 4, 2010
Oats have long been proposed to have heart-healthy benefits, and US Agricultural Research Service (ARS) researchers have elucidated a mechanism for this association. Previously, Mohsen Meydani of Tufts University (Massachusetts, USA) and colleagues showed that phenolic antioxidants in oats obstruct the ability of blood cells to stick to artery walls. In new research, the team has found that another class of oat compounds, avenanthramides, decreases the expression of pro-inflammatory cytokines. The study provides additional evidence of the potential health benefit of oat consumption in the prevention of coronary heart disease, beyond its known effect of lowering blood cholesterol.
Close-up of a chewy granola bar showing the detail of its pressed shape.
Start your day with a steaming bowl of oats, which are full of omega-3 fatty acids, folate, and potassium. This fiber-rich superfood can lower levels of LDL (or bad) cholesterol and help keep arteries clear.
Opt for coarse or steel-cut oats, which contain more fiber than instant varieties, and top your bowl off with a banana for another 4 grams of fiber, and/or blueberries or strawberries.
Oatmeal cookies with raisins