Psychology professor Stanka Fitneva. (Credit: Image courtesy of Queen’s University)

November 1, 2010 — Seven-year-old children need to interact with a person only once to learn whom to trust and seek information from, according to a study by Queen’s University researchers.

“It shows that kids really pay attention to people’s accuracy and they don’t forget it, even after interacting with that person one time,” says psychology professor Stanka Fitneva, who conducted the study with graduate student Kristen Dunfield.

The study tested adults, seven-year-olds and four-year-olds by asking a question and then having two people on a computer screen answer it, one correctly and one incorrectly.

When a second question was asked and participants were told they could ask only one person for the answer, the adults and seven-year-olds always chose to ask the person who had previously given the right answer. The results for four-year-olds varied with the way the question was asked, showing that four-year-olds generally need more than a single encounter to affect the way they seek information from people.

While there have been earlier studies on how kids react to multiple exposures to a person, this study focused on how a single sentence from that person affects the way children seek information.

The researchers conducted three separate experiments as part of the study.

The findings are published in the September issue of Developmental Psychology.

Visualization of the brain’s network: A new computer program shows how the brain’s complex fiber tracts mature as we grow up. (Credit: Image courtesy of Ecole Polytechnique Fédérale de Lausanne)

November 1, 2010 — The brain’s inner network becomes increasingly efficient as humans mature. Now, for the first time without invasive measures, a joint study from the Ecole Polytechnique Fédérale de Lausanne (EPFL) and the University of Lausanne (UNIL), in collaboration with Harvard Medical School, has verified these gains with a powerful new computer program.

Reported in the Proceedings of the National Academy of Sciences early online edition, the soon-to-be-released software allows for individualized maps of vital brain connectivity that could aid in epilepsy and schizophrenia research.

“The computer program brings together a series of processes in a ‘pipeline’ beginning with individual MRIs and ending with a personalized map of the fiber optics-like network in the brain. It takes a whole team of engineers, mathematicians, physicists, and medical doctors to come up with this type of neurobiological understanding,” explains Jean-Philippe Thiran, an EPFL professor and head of the Signal Processing Laboratory 5.

A young child’s brain is similar to the early Internet with isolated, poorly linked hubs and inefficient connections, say the researchers from EPFL and UNIL. An adult brain, on the other hand, is more like a modern day, fully integrated fiber optic network. The scientists hypothesized that while the brain does not undergo significant topographical changes in childhood, its white matter — the bundles of nerve cells connecting different parts of the brain — transitions from weak and inefficient connections to powerful neuronal highways. To test their idea, the team worked with colleagues at Harvard Medical School and Indiana University to map the brains of 30 children between the ages of two and 18.

With MRI, they tracked the diffusion of water in the brain and, in turn, the fibers that carry this water. Thiran and UNIL professor Patric Hagmann, in the Department of Radiology, then created a database of the various fiber cross-sections and graphed the results. In the end, they had a 3D model of each brain showing the thousands of strands that connect different regions.
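The end product of such a pipeline is, in effect, a weighted network that can be scored for how well its regions are interconnected. The sketch below is not the EPFL software; it is a minimal, hypothetical illustration (using the open-source networkx library and a made-up fiber-density matrix) of how a connectome matrix can be turned into a graph and scored with global efficiency, the kind of graph-theoretic measure used to compare weakly wired and densely wired networks.

```python
# A minimal sketch of the final "connectome" stage of such a pipeline:
# given a region-by-region fiber-density matrix (random here, standing in
# for values extracted from diffusion MRI tractography), build a graph and
# score how efficiently the network is wired. All names, sizes, and
# thresholds are illustrative assumptions, not from the EPFL software.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_regions = 66  # e.g., cortical parcels in some anatomical atlas

# Symmetric "fiber density" matrix standing in for tractography output.
density = rng.random((n_regions, n_regions))
density = (density + density.T) / 2
np.fill_diagonal(density, 0.0)

def efficiency_at_threshold(threshold: float) -> float:
    """Keep only connections at or above `threshold` and score the graph."""
    adjacency = (density >= threshold).astype(int)
    graph = nx.from_numpy_array(adjacency)
    # Global efficiency: average inverse shortest-path length over all
    # region pairs; higher values mean a better-integrated network.
    return nx.global_efficiency(graph)

# A sparser network (fewer strong connections, loosely "child-like")
# typically scores lower than a denser ("adult-like") one.
print("sparse:", efficiency_at_threshold(0.9))
print("dense :", efficiency_at_threshold(0.5))
```

On random data the two numbers are arbitrary; the point is only that a single, comparable score of network integration can be computed per subject, which is what makes developmental comparisons across brains possible.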

These individual models provide insight not only into how a child’s brain develops but also into structural differences in the brain, for example between left-handed and right-handed people, or between a control subject and someone with schizophrenia or epilepsy. The models may also help inform brain surgeons of where, or where not, to cut to relieve epilepsy symptoms. Thiran and Hagmann plan to make the tool available free of charge to hospitals around the world early next year.

November 1, 2010 — With Election Day right around the corner, political egos are on full display. One might even think that possessing a “big ego” is a prerequisite for success in politics, or in any position of leadership. High achievers (CEOs, top athletes, rock stars, prominent surgeons, scientists) often seem to be well endowed with ego.

But when does a “healthy ego” cross the line into unhealthy territory? Where is the line between confident, positive self-image and grandiose self-importance, which might signal a personality disorder or other psychiatric illness? More fundamentally, what do we mean by ego, from a neural perspective? Is there a brain circuit or neurotransmitter system underlying ego that is different in some people, giving them too much or too little?

What is Ego?

What ego is depends largely on who you ask. Philosophical and psychological definitions abound. Popularly, ego is generally understood as one’s sense of self-identity or how we view ourselves. It may encompass self-confidence, self-esteem, pride, and self-worth, and is therefore influenced by many factors, including genes, early upbringing, and stress.

The popular concept of ego is a far cry from what Sigmund Freud elaborated at the turn of the 20th century in his seminal work on psychoanalytic theory. Freud distinguished between primary (id) and secondary (ego) cognitive systems and proposed that the id, or unconscious, was characterized by a free exchange of neural energy and more primitive or animistic thinking. It was the job of the ego, the conscious mind, to minimize that free energy, to “bind” it and thereby regulate the impulses of the unconscious. It was Freud’s attempt to “link the workings of the unconscious mind to behavior,” says Joseph T. Coyle, M.D., chair of psychiatry and neuroscience at Harvard Medical School/McLean Hospital and a Dana Alliance for Brain Initiatives member.

Ego constructs continue to be used in some psychoanalytical therapies, but beyond that, the term seems to be falling out of favor in modern psychiatry. (“Ego is so last century,” quips Coyle.) Dana Alliance member Jerome Kagan, Ph.D., professor emeritus of psychology at Harvard, says: “Ego is a terrible word. In Freudian theory, ego has a meaning, not a very precise one, but a meaning. But you can’t take the word ego out of Freudian theory and apply it in non-Freudian ways. It just doesn’t work.”

According to psychiatrist John M. Oldham, M.D., chief of staff at Baylor College of Medicine’s Menninger Clinic and president-elect of the American Psychiatric Association (APA), terms like sense of self or self-identity are more common today. The new diagnostic criteria for personality disorders being developed for the revised APA Diagnostic and Statistical Manual of Mental Disorders (DSM-5) will reflect this newer language, he says.

Where’s the Ego in Neuroscience?

If ego is loosely defined in psychiatric circles, a neural definition is virtually nonexistent. “Ego doesn’t exist in the brain,” says Kagan. What does exist, he explains, is a brain circuit that controls the intrusiveness of feelings of self-doubt and anxiety, which can modulate self-confidence. But, Kagan says, “We are nowhere near naming the brain circuit that might mediate the feeling of ‘God, I feel great; I can conquer the world.’ I believe it’s possible to do, but no one knows that chemistry or that anatomy.”

Dana Alliance member Joseph LeDoux, Ph.D., a neurobiologist at New York University, has argued that psychological constructs such as ego are not incompatible with modern neuroscience; scientists just need to come up with better ways of thinking about the self and its relation to the brain. “For many people, the brain and the self are quite different,” he writes in The Synaptic Self, a book in which he goes on to argue the opposite. For LeDoux, it’s a truism that our personality — who we are in totality — is represented in the brain as a complex pattern of synaptic connectivity, because synapses underlie everything the brain does. “We are our synapses,” he says.

Researchers are increasingly applying the tools of modern neuroscience to try to understand how the brain represents self and other aspects of ego as popularly defined — they just don’t call it ego. Brain-imaging studies have used self-reference experiments to investigate the neurobiology of self: for example, asking a subject to judge a statement about himself, such as “I am a good friend,” versus a self-neutral statement, such as “water is necessary for life.” Others have looked at brain pathology in people with disorders of self. These studies have fairly consistently linked self-referential mental activity to the medial prefrontal cortex, a subregion of the frontal lobe where higher-order cognitive functions are processed.

The medial prefrontal cortex is the locus of the brain’s “default mode” network, where metabolic activity is highest when the brain is not actively engaged in a task. During task performance, default mode activity decreases. Washington University neuroimaging pioneer and Dana Alliance member Marcus E. Raichle, M.D., first reported the default mode and has argued that default-state activity may hold clues to the neurobiology of self.[i]

Could Raichle’s default mode state be Freud’s ego? Robin Carhart-Harris and Karl Friston of Imperial College London explored that question in a recent article in Brain,[ii] where they proposed that the Freudian ideas of primary and secondary cognitive processes (corresponding to the id and the ego, respectively) “fit comfortably with modern notions of functional brain architecture, at both a computational and neurophysiological level.” Acknowledging the “ambitious” nature of that thesis, the authors reviewed a large body of evidence to support it. Freud’s theory that ego represses id is consistent, they argued, both with the default mode’s characteristic ebb and flow of neuronal activity in opposition to neuronal firing in other brain areas and with theories about the hierarchy of brain systems (e.g., the cortical “thinking” brain is higher-order and therefore regulates the subcortical “primitive” brain).

The Disordered Self

Clues about the neurobiological underpinnings of self can also be seen in psychopathology. “There are a whole range of disorders in which self-identity is affected, in the sense of ‘who am I?’ and ‘how am I distinguished from those around me and things occurring around me?’” says Coyle.

The delusions of schizophrenia, for example, have been described as a loss of ego boundaries. Patients may interpret neutral events as being self-referential or may be unable to distinguish what’s happening “in here” from “out there,” as in the case of auditory hallucinations. These disruptions are thought to be linked to structural changes seen in the brains of people with schizophrenia, including smaller cortical neurons that have fewer connections than normal.[iii]

In frontotemporal dementia (FTD), a key feature is loss of self-awareness or self-identity, sometimes to the point of a complete shift in personality.[iv] Imaging studies have revealed severe abnormalities in frontal regions among FTD patients with the most dramatic changes, further supporting the frontal lobe’s role in mediating self.[v]

Narcissistic Personality Disorder is characterized by grandiose self-importance and such extreme preoccupation with self that “you lose the capacity to see things through other people’s eyes,” says Oldham. In contrast, people with Borderline Personality Disorder characteristically lack a strong sense of identity and sometimes get intrusively close to other people, “as if they’re putting on the costume of somebody else’s personality,” he says. In autism, the representation of self may appear to be wholly absent or greatly exaggerated, to the extent that others are under-recognized.[vi]

The manic phase of bipolar disorder is often marked by grandiosity, which represents “the extreme of what we would call egocentricity, a logarithmic multiplication of extreme narcissism,” says Oldham. Depression, conversely, often goes hand in hand with extremely low self-esteem.

All personality traits exist on a continuum, Oldham points out, with extremes at either end that sometimes cross the line into psychopathological behavior. The key determinant of whether that line has been crossed is the degree of disruption to interpersonal relations and daily activities. Who goes over the line and who doesn’t involves a complex interplay of genetic factors — accounting for up to 50 percent of the risk — and environmental triggers, mostly related to stress. Beyond that, there are many more questions than answers.

“We’re just beginning to understand this,” says Kagan. “There are no firm facts yet. We have some hints, but at this point everything is up for grabs.”

A bubonic plague smear, prepared from lymph removed from an adenopathic lymph node, or bubo, of a plague patient, demonstrates the presence of the Yersinia pestis bacterium that causes the plague.

The New York Times, November 1, 2010, by Nicholas Wade  —  The great waves of plague that twice devastated Europe and changed the course of history had their origins in China, a team of medical geneticists reported Sunday, as did a third plague outbreak that struck less harmfully in the 19th century.

And in separate research, a team of biologists reported conclusively this month that the causative agent of the most deadly plague, the Black Death, was the bacterium known as Yersinia pestis. This agent had always been the favored cause, but a vigorous minority of biologists and historians have argued the Black Death differed from modern cases of plague studied in India, and therefore must have had a different cause.

The Black Death began in Europe in 1347 and carried off an estimated 30 percent or more of the population of Europe. For centuries the epidemic continued to strike every 10 years or so, its last major outbreak being the Great Plague of London from 1665 to 1666. The disease is spread by rats and transmitted to people by fleas or, in some cases, directly by breathing.

One team of biologists, led by Barbara Bramanti of the Institut Pasteur in Paris and Stephanie Haensch of Johannes Gutenberg University in Germany, analyzed ancient DNA and proteins from plague pits, the mass burial grounds across Europe in which the dead were interred. Writing in the journal PLoS Pathogens this month, they say their findings put beyond doubt that the Black Death was brought about by Yersinia pestis.

Dr. Bramanti’s team was able to distinguish two strains of the Black Death plague bacterium, which differ both from each other and from the three principal strains in the world today. They infer that medieval Europe must have been invaded by two different sources of Yersinia pestis. One strain reached the port of Marseilles on France’s southern coast in 1347, spread rapidly across France and by 1349 had reached Hereford, a busy English market town and pilgrimage center near the Welsh border.

The strain of bacterium analyzed from the bones and teeth of a Hereford plague pit dug in 1349 is identical to that from a plague pit of 1348 in southern France, suggesting a direct route of travel. But a plague pit in the Dutch town of Bergen op Zoom has bacteria of a different strain, which the researchers infer arrived from Norway.

The Black Death is the middle of three great waves of plague that have hit in historical times. The first appeared in the 6th century during the reign of the Byzantine emperor Justinian, reaching his capital, Constantinople, on grain ships from Egypt. The Justinian plague, as historians call it, is thought to have killed perhaps half the population of Europe and to have eased the Arab takeover of Byzantine provinces in the Near East and Africa.

The third great wave of plague began in China’s Yunnan province in 1894, emerged in Hong Kong and then spread via shipping routes throughout the world. It reached the United States through a plague ship from Hong Kong that docked at Hawaii, where plague broke out in December 1899, and then San Francisco, whose plague epidemic began in March 1900.

The three plague waves have now been tied together in a common family tree by a team of medical geneticists led by Mark Achtman of University College Cork in Ireland. By looking at genetic variations in living strains of Yersinia pestis, Dr. Achtman’s team has reconstructed a family tree of the bacterium. By counting the number of genetic changes, which accumulate at a generally steady rate, they have dated the branch points of the tree, which enables the major branches to be correlated with historical events.
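The dating step rests on simple molecular-clock arithmetic: if mutations accumulate at a roughly constant rate, the number of differences between two strains is proportional to the time since their lineages split. Below is a minimal sketch of that calculation; the rate and mutation counts are placeholder numbers chosen for illustration, not values from the Achtman study.

```python
# A minimal molecular-clock sketch: dating the split between two lineages
# from the number of genetic differences between them. All numbers here
# are illustrative assumptions, not data from the Nature Genetics paper.

def divergence_time_years(snp_differences: int,
                          mutations_per_year: float) -> float:
    """Estimate years since two lineages split from a common ancestor.

    The observed differences are shared between the two branches, so each
    branch accumulated roughly half of them since the split.
    """
    per_branch = snp_differences / 2
    return per_branch / mutations_per_year

# Hypothetical example: 60 SNP differences between two strains, with a
# clock rate of one mutation per lineage per 20 years (0.05 per year).
print(divergence_time_years(60, 0.05))  # -> 600.0 years
```

In practice the clock rate itself is calibrated against branch points that can be anchored to known historical events, which is what lets a tree of living strains be read as a timeline.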

In the issue of Nature Genetics published online Sunday, they conclude that all three of the great waves of plague originated in China, where the root of their tree is situated. Plague would have reached Europe across the Silk Road, they say. An epidemic of plague that reached East Africa was probably spread by the voyages of the Chinese admiral Zheng He, who led a fleet of 300 ships to Africa in 1409.

“What’s exciting is that we are able to reconstruct the historical routes of bacterial disease over centuries,” Dr. Achtman said.

Lester K. Little, an expert on the Justinian plague at Smith College, said in an interview from Bergamo, Italy, that the epidemic was first reported by the Byzantine historian Procopius in 541 A.D. from the ancient port of Pelusium, near Suez in Egypt. Historians had assumed it arrived there from the Red Sea or Africa, but the Chinese origin now suggested by the geneticists is possible, Dr. Little said.

The geneticists’ work is “immensely impressive,” Dr. Little said, and adds a third leg to the studies of plague by historians and by archaeologists.

The likely origin of the plague in China has nothing to do with its people or crowded cities, Dr. Achtman said. The bacterium has no interest in people, whom it slaughters by accident. Its natural hosts are various species of rodent such as marmots and voles, which are found throughout China.