Gregor Mendel (1822-1884)

This photo is from a book published in 1913 by R.C. Punnett, of Punnett Square fame, on Mendelism. Private Collection, Jules T. Mitchel. ©Target Health Inc.

 

 

Gregor Johann Mendel was a scientist, Augustinian friar and abbot of St. Thomas’ Abbey in Brno, Margraviate of Moravia. He was born into a German-speaking family in the Silesian part of the Austrian Empire (today’s Czech Republic) and gained posthumous recognition as the founder of the modern science of genetics. Though farmers had known for millennia that crossbreeding of animals and plants could favor certain desirable traits, Mendel’s pea plant experiments, conducted between 1856 and 1863, established many of the rules of heredity, now referred to as the laws of Mendelian inheritance.

 

Mendel worked with seven characteristics of pea plants: plant height, pod shape and color, seed shape and color, and flower position and color. Taking seed color as an example, Mendel showed that when a true-breeding yellow pea and a true-breeding green pea were cross-bred, their offspring always produced yellow seeds. However, in the next generation, the green peas reappeared at a ratio of 1 green to 3 yellow. To explain this phenomenon, Mendel coined the terms “recessive” and “dominant” in reference to certain traits. (In the preceding example, the green trait, which seems to have vanished in the first filial generation, is recessive and the yellow is dominant.) He published his work in 1866, demonstrating the actions of invisible “factors” – now called genes – in predictably determining the traits of an organism. The profound significance of Mendel’s work was not recognized until the turn of the 20th century (more than three decades later) with the rediscovery of his laws. Erich von Tschermak, Hugo de Vries, Carl Correns, and William Jasper Spillman independently verified several of Mendel’s experimental findings, ushering in the modern age of genetics.

 

Mendel was the son of Anton and Rosine (Schwirtlich) Mendel, and had one older sister, Veronika, and one younger, Theresia. They lived and worked on a farm which had been owned by the Mendel family for at least 130 years. During his childhood, Mendel worked as a gardener and studied beekeeping. Later, as a young man, he attended gymnasium in Opava (called Troppau in German). He had to take four months off during his gymnasium studies due to illness. From 1840 to 1843, he studied practical and theoretical philosophy and physics at the Philosophical Institute of the University of Olomouc, taking another year off because of illness. He also struggled financially to pay for his studies, and Theresia gave him her dowry. Later he helped support her three sons, two of whom became doctors. He became a friar in part because it enabled him to obtain an education without having to pay for it himself. As the son of a struggling farmer, the monastic life, in his words, spared him the “perpetual anxiety about a means of livelihood.“

 

When Mendel entered the Faculty of Philosophy, the Department of Natural History and Agriculture was headed by Johann Karl Nestler, who conducted extensive research on the hereditary traits of plants and animals, especially sheep. Upon the recommendation of his physics teacher Friedrich Franz, Mendel entered the Augustinian St Thomas’s Abbey in Brno (called Brünn in German) and began his training as a priest. Born Johann Mendel, he took the name Gregor upon entering religious life. Mendel worked as a substitute high school teacher. In 1850, he failed the oral part, the last of three parts, of his exams to become a certified high school teacher. In 1851, he was sent to the University of Vienna to study under the sponsorship of Abbot C. F. Napp so that he could get a more formal education. At Vienna, his professor of physics was Christian Doppler. Mendel returned to his abbey in 1853 as a teacher, principally of physics. In 1856, he took the exam to become a certified teacher and again failed the oral part. After Napp died in 1867, Mendel was elected in 1868 to replace him as abbot of the monastery. Once elevated as abbot, his scientific work largely ended, as Mendel became overburdened with administrative responsibilities, especially a dispute with the civil government over its attempt to impose special taxes on religious institutions. Mendel died on 6 January 1884, at the age of 61, in Brno, Moravia, Austria-Hungary (now Czech Republic), from chronic nephritis. The Czech composer Leoš Janáček played the organ at his funeral. After his death, the succeeding abbot burned all papers in Mendel’s collection, to mark an end to the disputes over taxation.

 

Gregor Mendel, who is known as the “father of modern genetics”, was inspired by both his professors at the Palacký University, Olomouc (Friedrich Franz and Johann Karl Nestler), and his colleagues at the monastery (such as Franz Diebl) to study variation in plants. In 1854, Napp authorized Mendel to carry out a study in the monastery’s 2-hectare (4.9-acre) experimental garden, which was originally planted by Napp in 1830. Unlike Nestler, who studied hereditary traits in sheep, Mendel focused on plants. Mendel carried out his experiments with the common edible pea in his small garden plot in the monastery. These experiments were begun in 1856 and completed some eight years later. In 1865, he described his experiments in two lectures at a regional scientific conference. In the first lecture he described his observations and experimental results. In the second, which was given one month later, he explained them. After initial experiments with pea plants, Mendel settled on studying seven traits that seemed to be inherited independently of other traits: seed shape, flower color, seed coat tint, pod shape, unripe pod color, flower location, and plant height. He first focused on seed shape, which was either angular or round. Between 1856 and 1863 Mendel cultivated and tested some 28,000 plants, the majority of which were pea plants (Pisum sativum). This study showed that, when different true-breeding varieties were crossed with each other (e.g., tall plants fertilized by short plants), in the second generation one in four pea plants had purebred recessive traits, two out of four were hybrids, and one out of four were purebred dominant. His experiments led him to make two generalizations, the Law of Segregation and the Law of Independent Assortment, which later came to be known as Mendel’s Laws of Inheritance.

 

A specific illustration: Crossing tall and short plants clarifies some of Mendel’s key observations and deductions.

 

At the time, gardeners could obtain true-breeding pea varieties from commercial seed houses. For example, one variety was guaranteed to give only tall pea plants (2 meters or so); another, only short plants (about 1/3 of a meter in height). If a gardener crossed one tall plant to itself or to another tall plant, collected the resultant seeds some three months later, planted them, and observed the height of the progeny, he would observe that all would be tall. Likewise, only short plants would result from a cross between true-breeding short peas. However, when Mendel crossed tall plants to short plants, collected the seeds, and planted them, all the offspring were just as tall, on average, as their tall parents. This led Mendel to the conclusion that the tall characteristic was dominant, and the short recessive. Mendel then crossed these first-generation hybrid tall plants to each other. The actual results from this cross were: 787 plants of the next generation (“grandchildren” of the original true-breeding tall and short plants) were tall, and 277 were short. Thus, the short characteristic – which disappeared from sight in the first filial generation – resurfaced in the second, suggesting that two factors (now known as genes) determined plant height. In other words, although the factor which caused short stature ceased to exert its influence in the first filial generation, it was still present. Note also that the ratio between tall and short plants was 787/277, or 2.84 to 1 (approximately 3 to 1), again suggesting that plant height is determined by two factors. Mendel obtained similar results for six other pea traits, suggesting that a general rule is at work here: that each of these characteristics of pea plants is determined by a pair of factors (genes in contemporary biology), of which one is dominant and the other is recessive.
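For readers who want to see the arithmetic made concrete, here is a minimal Python sketch. It is illustrative only: the random-sampling simulation and the function name are ours, not Mendel’s procedure. It crosses two first-generation hybrid plants under the Law of Segregation and compares the simulated tall-to-short ratio with the reported 787 to 277.

import random

def cross_f1_hybrids(n_plants, seed=1):
    """Cross Tt x Tt plants n_plants times; return (tall, short) counts.
    T (tall) is assumed dominant over t (short)."""
    rng = random.Random(seed)
    tall = short = 0
    for _ in range(n_plants):
        # Each parent contributes one allele at random (Law of Segregation).
        alleles = {rng.choice("Tt"), rng.choice("Tt")}
        if "T" in alleles:   # at least one dominant allele -> tall phenotype
            tall += 1
        else:                # tt -> short phenotype
            short += 1
    return tall, short

observed_tall, observed_short = 787, 277            # Mendel's reported counts
sim_tall, sim_short = cross_f1_hybrids(observed_tall + observed_short)
print(f"Reported ratio:  {observed_tall / observed_short:.2f} to 1")   # about 2.84 to 1
print(f"Simulated ratio: {sim_tall / sim_short:.2f} to 1")             # close to 3 to 1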

 

Mendel presented his paper, “Versuche über Pflanzen-Hybriden” (“Experiments on Plant Hybridization”), at two meetings of the Natural History Society of Brno in Moravia on 8 February and 8 March 1865. It generated a few favorable reports in local newspapers, but was ignored by the scientific community. When Mendel’s paper was published in 1866 in Verhandlungen des naturforschenden Vereins Brünn, it was seen as essentially about hybridization rather than inheritance, had little impact, and was only cited about three times over the next thirty-five years. His paper was criticized at the time, but is now considered a seminal work. Notably, Charles Darwin was unaware of Mendel’s paper, and it is envisaged that had he read it, genetics as we know it now might have taken hold much earlier. Mendel’s scientific biography thus provides one more example of the failure of obscure, highly original innovators to receive the attention they deserve.

 

Mendel began his studies on heredity using mice. He was at St. Thomas’s Abbey, but his bishop did not like one of his friars studying animals, so Mendel switched to plants. Mendel also bred bees in a bee house that was built for him, using bee hives that he designed. He also studied astronomy and meteorology, founding the ‘Austrian Meteorological Society’ in 1865. The majority of his published works were related to meteorology. Mendel also experimented with hawkweed (Hieracium) and honeybees. He published a report on his work with hawkweed, a group of plants of great interest to scientists at the time because of their diversity. However, the results of Mendel’s inheritance study in hawkweeds were unlike his results for peas; the first generation was very variable and many of their offspring were identical to the maternal parent. In his correspondence with Carl Nägeli, he discussed his results but was unable to explain them. It was not appreciated until the end of the nineteenth century that many hawkweed species are apomictic, producing most of their seeds through an asexual process. None of his results on bees survived, except for a passing mention in the reports of the Moravian Apiculture Society. All that is known definitely is that he used Cyprian and Carniolan bees, which were particularly aggressive, to the annoyance of other monks and visitors to the monastery, such that he was asked to get rid of them. Mendel, on the other hand, was fond of his bees, and referred to them as “my dearest little animals”.

 

During Mendel’s own lifetime, most biologists held the idea that all characteristics were passed to the next generation through blending inheritance, in which the traits from each parent are averaged. Instances of this phenomenon are now explained by the action of multiple genes with quantitative effects. Charles Darwin tried unsuccessfully to explain inheritance through a theory of pangenesis. It was not until the early twentieth century that the importance of Mendel’s ideas was realized. By 1900, research aimed at finding a successful theory of discontinuous inheritance rather than blending inheritance led to independent duplication of his work by Hugo de Vries and Carl Correns, and the rediscovery of Mendel’s writings and laws. Both acknowledged Mendel’s priority, and it is thought probable that de Vries did not understand the results he had found until after reading Mendel. Though Erich von Tschermak was originally also credited with rediscovery, this is no longer accepted because he did not understand Mendel’s laws. Though de Vries later lost interest in Mendelism, other biologists started to establish modern genetics as a science. All three of these researchers, each from a different country, published their rediscovery of Mendel’s work within a two-month span in the spring of 1900. Mendel’s results were quickly replicated, and genetic linkage was quickly worked out. Biologists flocked to the theory; even though it was not yet applicable to many phenomena, it sought to give a genotypic understanding of heredity, which they felt was lacking in previous studies that had focused on phenotypic approaches. Most prominent of these previous approaches was the biometric school of Karl Pearson and W. F. R. Weldon, which was based heavily on statistical studies of phenotype variation. The strongest opposition to this school came from William Bateson, who perhaps did the most in the early days of publicizing the benefits of Mendel’s theory (the word “genetics”, and much of the discipline’s other terminology, originated with Bateson). This debate between the biometricians and the Mendelians was extremely vigorous in the first two decades of the twentieth century, with the biometricians claiming statistical and mathematical rigor, whereas the Mendelians claimed a better understanding of biology. (Modern genetics shows that Mendelian heredity is in fact an inherently biological process, though not all genes of Mendel’s experiments are yet understood.) In the end, the two approaches were combined, especially by work conducted by R. A. Fisher as early as 1918. The combination, in the 1930s and 1940s, of Mendelian genetics with Darwin’s theory of natural selection resulted in the modern synthesis of evolutionary biology.

 

In 1936, R.A. Fisher, a prominent statistician and population geneticist, reconstructed Mendel’s experiments, analyzed results from the F2 (second filial) generation and found the ratio of dominant to recessive phenotypes (e.g. yellow versus green peas; round versus wrinkled peas) to be implausibly and consistently too close to the expected ratio of 3 to 1. Fisher asserted that “the data of most, if not all, of the experiments have been falsified so as to agree closely with Mendel’s expectations.” Mendel’s alleged observations, according to Fisher, were “abominable”, “shocking”, and “cooked”. Other scholars agree with Fisher that Mendel’s various observations come uncomfortably close to his expectations. The statistician A.W.F. Edwards, for instance, remarks: “One can applaud the lucky gambler; but when he is lucky again tomorrow, and the next day, and the following day, one is entitled to become a little suspicious”. Three other lines of evidence likewise lend support to the assertion that Mendel’s results are indeed too good to be true. Fisher’s analysis gave rise to the Mendelian Paradox, a paradox that remains unsolved to this very day. Thus, on the one hand, Mendel’s reported data are, statistically speaking, too good to be true; on the other, “everything we know about Mendel suggests that he was unlikely to engage in either deliberate fraud or in unconscious adjustment of his observations.” A number of writers have attempted to resolve this paradox. One attempted explanation invokes confirmation bias. Fisher described Mendel’s experiments as “biased strongly in the direction of agreement with expectation to give the theory the benefit of doubt”. This might arise if he detected an approximate 3 to 1 ratio early in his experiments with a small sample size, and, in cases where the ratio appeared to deviate slightly from this, continued collecting more data until the results conformed more nearly to an exact ratio.
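As an illustration of the statistical reasoning (a sketch only: Fisher combined chi-square statistics across all of Mendel’s experiments, which is not reproduced here), the goodness-of-fit calculation for a single 3-to-1 experiment can be written as follows in Python.

def chi_square_3_to_1(dominant, recessive):
    """Chi-square statistic for observed counts against an expected 3:1 ratio."""
    total = dominant + recessive
    expected = (0.75 * total, 0.25 * total)
    observed = (dominant, recessive)
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Mendel's reported tall vs. short counts from the cross described above.
statistic = chi_square_3_to_1(787, 277)
print(f"chi-square = {statistic:.3f} on 1 degree of freedom")  # about 0.607
# A small statistic means the observed counts sit close to the 3:1 expectation;
# Fisher's argument was that, combined across all of Mendel's experiments, the
# fits were collectively closer to expectation than chance alone would predict.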

 

In a 2004 paper, J.W. Porteous concluded that Mendel’s observations were indeed implausible. However, reproduction of the experiments has demonstrated that there is no real bias towards Mendel’s data. Another attempt to resolve the Mendelian Paradox notes that a conflict may sometimes arise between the moral imperative of a bias-free recounting of one’s factual observations and the even more important imperative of advancing scientific knowledge. Mendel might have felt compelled “to simplify his data in order to meet real, or feared, editorial objections.” Such an action could be justified on moral grounds (and hence provide a resolution to the Mendelian Paradox), since the alternative – refusing to comply – might have retarded the growth of scientific knowledge. Similarly, like so many other obscure innovators of science, Mendel, a little-known researcher of working-class background, had to “break through the cognitive paradigms and social prejudices of his audience”. If such a breakthrough “could be best achieved by deliberately omitting some observations from his report and adjusting others to make them more palatable to his audience, such actions could be justified on moral grounds.”

 

Daniel L. Hartl and Daniel J. Fairbanks reject outright Fisher’s statistical argument, suggesting that Fisher incorrectly interpreted Mendel’s experiments. They find it likely that Mendel scored more than 10 progeny, and that the results matched the expectation. They conclude: “Fisher’s allegation of deliberate falsification can finally be put to rest, because on closer analysis it has proved to be unsupported by convincing evidence.” In 2008 Hartl and Fairbanks (with Allan Franklin and A.W.F. Edwards) wrote a comprehensive book in which they concluded that there were no reasons to assert Mendel fabricated his results, nor that Fisher deliberately tried to diminish Mendel’s legacy. Reassessment of Fisher’s statistical analysis, according to these authors, also disproves the notion of confirmation bias in Mendel’s results.

 

Circumcision

Customs of Central Asians. Circumcision. Photograph shows a group of men seated on the ground near a small boy who is being circumcised. Album print. Illus. in: Turkestanskii al’bom, chast’ etnograficheskaia, 1871-1872, part 2, vol. 1, pl. 71. Batga [sic] buri translated from Persian as circumcision. Photo credit: Unknown – Library of Congress, Public Domain, Wikipedia Commons

 

There is a huge amount of information regarding the history of circumcision, dating to well before the Bible was written; the Bible itself has many references to circumcision. Vast historical references include religious and social customs as well as superstitions and taboos, in addition to medical evidence. There is only room here for a glimpse of the medical point of view.

 

Sir Jonathan Hutchinson MD – Photo credit: Unknown; Wikipedia Commons

 

 

Jonathan Hutchinson (1828-1913), an eminent English physician, was the first prominent medical advocate for circumcision. Hutchinson’s activity in the cause of scientific surgery and in advancing the study of the natural sciences was unwearying. He published more than 1,200 medical articles and also produced the quarterly Archives of Surgery from 1890 to 1900, being its only contributor. His lectures on neuropathogenesis, gout, leprosy, diseases of the tongue, etc., were full of original observation; but his principal work was connected with the study of syphilis, on which he became the first living authority. He was the first to describe the triad of medical signs for congenital syphilis that now bears his name: notched incisor teeth, labyrinthine deafness and interstitial keratitis, which was very useful for providing a firm diagnosis long before Treponema pallidum was identified or the Wassermann test developed. Hutchinson was the founder of the Medical Graduates’ College and Polyclinic; and both in his native town of Selby and at Haslemere, Surrey, he started educational museums for popular instruction in natural history. He published several volumes on his own subjects and was given honorary LL.D. degrees by both the University of Glasgow and the University of Cambridge. He received a knighthood in 1908.

 

In 1855, Hutchinson published a study in which he compared the rate of contraction of venereal disease amongst the gentile and Jewish populations of London. His study appeared to demonstrate that circumcised men were significantly less vulnerable to venereal diseases. In fact, a 2006 systematic review concluded that the evidence strongly indicates that circumcised men are at lower risk of chancroid and syphilis. Clearly, Dr. Hutchinson was ahead of his time. Hutchinson also became a notable leader in the campaign for medical circumcision for the next fifty years, publishing A Plea for Circumcision in the British Medical Journal (1890). In that article, he contended that “the foreskin constitutes a harbor for filth, and is a constant source of irritation. It conduces to [self-eroticism], and adds to the difficulties of continence. It increases the risk of syphilis in early life, and of cancer in the aged.” In an 1893 article, On circumcision as a preventive of self-eroticism, he wrote: “I am inclined to believe that circumcision may often accomplish much, both in breaking the habit as an immediate result, and in diminishing the temptation to it subsequently.”

 

Nathaniel Heckford, a pediatrician at the East London Hospital for Children, wrote Circumcision as a Remedial Measure in Certain Cases of Epilepsy, Chorea, etc. (1865), in which he argued that circumcision acted as an effective remedial measure in the prevention of certain cases of epilepsy and chorea. These increasingly common medical beliefs were even applied to females. The controversial obstetrical surgeon Isaac Baker Brown founded the London Surgical Home for Women in 1858, where he worked on advancing surgical procedures. In 1866, Baker Brown described the use of clitoridectomy as a cure for several conditions, including epilepsy, catalepsy and mania, which he attributed to self-stimulation. In On the Curability of Certain Forms of Insanity, Epilepsy, Catalepsy, and Hysteria in Females, he claimed a 70% success rate using this treatment. However, during 1866, Baker Brown began to receive negative feedback from within the medical profession, from doctors who questioned the validity of his claims of success. An article appeared in The London Times which was favorable towards Baker Brown’s work but suggested that he had treated women of unsound mind. He was also accused of performing procedures without the consent or knowledge of his patients or their families. In 1867 he was expelled from the Obstetrical Society of London for carrying out the operations without consent. Baker Brown’s ideas were more accepted in the United States, where, from the 1860s, the operation was being used to cure hysteria and, in young girls, what was called “rebellion” or “unfeminine aggression”.

 

Lewis Sayre, a New York orthopedic surgeon, became a prominent advocate for circumcision in America. In 1870, he examined a five-year-old boy who was unable to straighten his legs, and whose condition had so far defied treatment. Upon noting that the boy’s genitals were inflamed, Sayre hypothesized that chronic irritation of the boy’s foreskin had paralyzed his knees via reflex neurosis. Sayre circumcised the boy, and within a few weeks the boy recovered from his paralysis. After several additional incidents in which circumcision also appeared effective in treating paralyzed joints, Sayre began to promote circumcision as a powerful orthopedic remedy. Sayre’s prominence within the medical profession allowed him to reach a wide audience. As more practitioners tried circumcision as a treatment for otherwise intractable medical conditions, sometimes achieving positive results, the list of ailments reputed to be treatable through circumcision grew. By the 1890s, hernia, bladder infections, kidney stones, insomnia, chronic indigestion, rheumatism, epilepsy, asthma, bedwetting, Bright’s disease, erectile dysfunction, syphilis, insanity, and skin cancer had all been linked to the foreskin, and many physicians advocated universal circumcision as a preventive health measure.

 

Specific medical arguments aside, several hypotheses have been raised in explaining the public’s acceptance of infant circumcision as preventive medicine. The success of the germ theory of disease had not only enabled physicians to combat many of the postoperative complications of surgery, but had made the wider public deeply suspicious of dirt and bodily secretions. Accordingly, the smegma that collects under the foreskin was viewed as unhealthy, and circumcision readily accepted as good hygiene. In this Victorian climate, circumcision could be employed as a means of discouraging self-stimulation. All About the Baby, a popular parenting book of the 1890s, recommended infant circumcision for precisely this purpose. As hospitals proliferated in urban areas, childbirth, at least among the upper and middle classes, was increasingly under the care of physicians in hospitals rather than with midwives in the home. It has been suggested that once a critical mass of infants were being circumcised in the hospital, circumcision became a class marker of those wealthy enough to afford a hospital birth.

 

During the same time period, circumcision was becoming easier to perform. William Stewart Halsted’s 1885 discovery of hypodermic cocaine as a local anesthetic made it easier for doctors without expertise in the use of chloroform and other general anesthetics to perform minor surgeries. Also, several mechanically aided circumcision techniques, forerunners of modern clamp-based circumcision methods, were first published in the medical literature of the 1890s, allowing surgeons to perform circumcisions more safely and successfully. By the 1920s, advances in the understanding of disease had undermined much of the original medical basis for preventive circumcision. Doctors continued to promote it, however, as good penile hygiene and as a preventive for a handful of conditions such as balanitis, phimosis, and cancer.

 

By 2014, the American Academy of Pediatrics had found that the health benefits of newborn male circumcision outweigh the risks.

 

Circumcision in English-speaking countries arose in a climate of antiquated, negative attitudes towards relationships. In her 1978 article The Ritual of Circumcision, Karen Erickson Paige writes: “The current medical rationale for circumcision developed after the operation was in wide practice. The original reason for the surgical removal of the foreskin, or prepuce, was to control ‘insanity’ – the range of mental disorders that people believed were caused by the ‘polluting’ practice of self-abuse.”

 

Editor’s note: Paige is pointing out how hard it is to believe that anyone could have such outrageous ideas, completely lacking in any scientific evidence and so harshly punitive.

 

Self-abuse was a term commonly used to describe self-stimulation in the 19th century. According to Paige, treatments ranged from diet, moral exhortations, hydrotherapy, and marriage, to such drastic measures as surgery, physical restraints, frights, and punishment. Some doctors recommended using plaster of Paris, leather, or rubber; cauterization; making boys wear chastity belts or spiked rings; and in extreme cases, castration. Paige details how circumcision became popular as a remedy:

 

In the 1890s, it became a popular technique to prevent, or cure, insanity. In 1891 the president of the Royal College of Surgeons of England published On Circumcision as a Preventive, and two years later another British doctor wrote Circumcision: Its Advantages and How to Perform It, which listed the reasons for removing the vestigial prepuce. Evidently the foreskin could cause nocturnal incontinence, hysteria, epilepsy, and irritation that might give rise to erotic stimulation. Another physician, P.C. Remondino, added that circumcision is like a substantial and well-secured life annuity as it insures better health, greater capacity for labor, longer life, less nervousness, sickness, loss of time, and less doctor bills. No wonder it became a popular remedy.

 

One of the leading advocates of circumcision was John Harvey Kellogg. He advocated the consumption of Kellogg’s corn flakes as a remedy, and he believed that circumcision would be an effective way to eliminate stimulation in males.

 

Editor’s note: Talk about plain old Puritanical meanness; one can hardly believe some of this, but it’s true. Eighteenth- and nineteenth-century solutions: Covering the organs with a cage had been practiced with entire success. A remedy which is almost always successful in small boys is circumcision, especially when there is any degree of phimosis. The operation should be performed by a surgeon without administering an anesthetic, as the brief pain attending the operation will have a salutary effect upon the mind, especially if it be connected with the idea of punishment, as it may well be in some cases. The soreness which continues for several weeks interrupts the practice, and if it had not previously become too firmly fixed, it may be forgotten and not resumed. If any attempt is made to watch the child, he should be so carefully surrounded by vigilance that he cannot possibly transgress without detection. If he is only partially watched, he soon learns to elude observation, and thus the effect is only to make him cunning in his vice.

 

Robert Darby (2003), writing in the Medical Journal of Australia, noted that some 19th-century circumcision advocates – and their opponents – believed that the foreskin was highly erotic and sensitive:

 

In the 19th century the role of the foreskin in erotic sensation was well understood by physicians who wanted to cut it off precisely because they considered it the major factor leading boys to self-stimulation. The Victorian physician and venereologist William Acton (1814-1875) damned it as a source of serious mischief, and most of his contemporaries concurred. Both opponents and supporters of circumcision agreed that the significant role the foreskin played in responses was the main reason why it should be either left in place or removed. William Hammond, a Professor of Mind in New York in the late 19th century, commented that circumcision, when performed in early life, generally lessens the voluptuous sensations of intimacy, and both he and Acton considered the foreskin necessary for optimal reproductive function, especially in old age. Jonathan Hutchinson, English surgeon and pathologist (1828-1913), and many others, thought this was the main reason why it should be excised.

 

Born in the United Kingdom during the late nineteenth century, John Maynard Keynes and his brother Geoffrey were both circumcised in boyhood due to their parents’ concern about their habits. Mainstream pediatric manuals continued to recommend circumcision as a deterrent until the 1950s.

 

Wikipedia; http://www.nytimes.com/1997/04/02/us/study-is-adding-to-doubts-about-circumcision.html

 

Freudian Psychoanalysis – Two Other Branches, Out of Many

Graphic image: by historicair 16:56, 16 December 2006 (UTC) – en:Image:Structural-Iceberg.png by en:User:Jordangordanier, Public Domain; Wikipedia Commons

 

 

Freudian psychoanalytic theory spawned other creative approaches to the practice of psychoanalysis, which built upon Freud’s theories of psychic development.

 

Object Relations and The Basic Fault

 

Michael Balint (1896-1970) was a Hungarian psychoanalyst who spent most of his adult life in England. He was a proponent of the Object Relations school.

 

Balint was born Mihály Maurice Bergsmann, the son of a practicing physician in Budapest. It was against his father’s will that he changed his name to Bálint Mihály. He also changed religion, from Judaism to Unitarian Christianity. During World War I Balint served at the front, first in Russia, then in the Dolomites. He completed his medical studies in Budapest in 1918. On the recommendation of his future wife, Alice Székely-Kovács, Balint read Sigmund Freud’s “Drei Abhandlungen zur Sexualtheorie” (1905) and “Totem und Tabu”. He also began attending the lectures of Sándor Ferenczi, who in 1919 became the world’s first university professor of psychoanalysis. In 1920, Balint married and then moved to Berlin, where he worked in the biochemical laboratory of Otto Heinrich Warburg (1883-1970), who won the Nobel Prize in 1931. Balint worked on his doctorate in biochemistry, while also working half time at the Berlin Institute of Psychoanalysis. In 1924 the Balints returned to Budapest, where he assumed a leading role in Hungarian psychoanalysis. During the 1930s the political conditions in Hungary made the teaching of psychotherapy practically impossible, and they emigrated to England in 1938, settling in Manchester. In early 1939, Balint became Clinical Director of the Child Guidance Clinic. In 1944, his parents, about to be arrested by the Nazis in Hungary, committed suicide. That year Balint moved from Manchester to London, where he was attached to the Tavistock Clinic and began learning about group work from W.R. Bion; he also obtained a Master of Science degree in psychology. In 1949, Balint became the leader of the Tavistock Institute of Human Relations and developed what is now known as the “Balint group”: a group of physicians sharing the problems of general practice, focusing in particular on the responses of the doctors to their patients. The first such group of practicing physicians was established in 1950. In 1968 Balint became president of the British Psychoanalytical Society. In Hamburg, Germany, the Michael-Balint-Institut für Psychoanalyse, Psychotherapie und analytische Kinder- und Jugendlichen-Psychotherapie is named for him.

 

Balint took an early interest in the mother-infant relationship, and a key paper on “Primary Object-Love” was received with approval by other Freudian psychoanalysts. One respected psychoanalyst wrote that “Michael Balint has analyzed in a thoroughly penetrating way the intricate interaction of theory and technique in the genesis of a new conception of analysis, of a ‘two-body psychology’”. On that basis, Balint explored the idea of what he called “the basic fault”: this was that there was often the experience in the early two-person relationship that something was wrong or missing, and this carried over into the Oedipal period (age 2-5). By 1968, then, Balint had distinguished three levels of experience, each with its particular ways of relating, its own ways of thinking, and its own appropriate therapeutic procedures. Balint’s three-person level, or level 3, was the level at which a person is capable of a three-sided experience, primarily the Oedipal problems between self, mother, and father. By contrast, the area of the basic fault is characterized by a very peculiar, exclusively two-person relationship; while a third area, level 1, is characterized by the fact that there are no external objects in it.

 

Therapeutic failure is attributed by Balint to the analyst’s inability to “click in” to the mute needs of the patient who has descended to the level of the basic fault; and he maintained that the basic fault can only be overcome if the patient is allowed to regress to a state of oral dependence on the analyst and experience a new beginning. Balint developed a process of brief psychotherapy he termed “focal psychotherapy”, in which one specific problem presented by the patient is chosen as the focus of interpretation. The therapy was carefully targeted around that key area to avoid (in part) the risk that the focal therapy would degenerate into long-term psychotherapy or psychoanalysis. Here, as a rule, interpretation remained entirely on the whole-person adult level; the intention was to reduce the intensity of the feelings in the therapeutic relationship. In accordance with the thinking of other members of what is known as the British independent perspective, such as W. R. D. Fairbairn and D. W. Winnicott, great stress was laid upon the creative role of the patient in focal therapy: to their minds, an “independent discovery” by the patient has the greatest dynamic power. It has been suggested that it was in fact this work of Michael Balint and his colleagues which led to time-limited therapies being rediscovered.

 

Michael Balint, as part of the independent tradition in British psychoanalysis, was influential in setting up groups (now known as “Balint groups”) for medical doctors to discuss psychodynamic factors in relation to patients. Instead of repeating futile investigations of increasing complexity and cost, Balint taught active search for the causes of anxiety and unhappiness, and treatment by remedial education aimed at insight by the patient. Such seminars provided opportunities for GPs to discuss with each other and with him aspects of their work with patients for which they had previously felt ill equipped. Since his death the continuance of this work has been assured by the formation of the Balint Society.

 

Psychoanalysis and Low Dose LSD

 

The term anaclitic (from the Greek anaklinein, to lean upon) refers to various early infantile needs and tendencies directed toward a pregenital love object. Anaclitic therapy with LSD was developed in the 1950s by two London Freudian psychoanalysts, Joyce Martin MD and Pauline McCririck MD. It is based on clinical observations of deep age regression occurring in the LSD sessions of psychiatric patients. During these periods many of them relive episodes of early infantile frustration and emotional deprivation. This is typically associated with agonizing cravings for love, physical contact, and other instinctual needs experienced on a very primitive level. The technique of LSD therapy practiced by Martin and McCririck was based on psychoanalytic understanding and interpretation of all the situations and experiences occurring in drug sessions, and in this sense is very close to psycholytic approaches. The critical difference distinguishing this therapy from any other was the element of direct satisfaction of the anaclitic needs of the patients. In contrast to the traditional detached attitude characteristic of psychoanalysis and psycholytic treatment, Martin and McCririck assumed an active mothering role and entered into close physical contact with their patients to help them satisfy primitive infantile needs reactivated by the drug.

 

More superficial aspects of this approach involve holding the patients and feeding them warm milk from a bottle, caressing and offering reassuring touches, holding their heads in one’s lap, or hugging and rocking. The extreme of psychodramatic involvement of the therapist is the so-called “fusion technique,” which consists of full body contact with the client. The patient lies on the couch covered with a blanket and the therapist lies beside his or her body, in close embrace, usually simulating the gentle comforting movements of a mother caressing her baby. The subjective reports of patients about these periods of “fusion” with the therapist are quite remarkable. They describe authentic feelings of symbiotic union with the nourishing mother image, experienced simultaneously on the level of the “good breast” and “good womb.” In this state, patients can experience themselves as infants receiving love and nourishment at the breast of the nursing mother and at the same time feel totally identified with a fetus in the oceanic paradise of the womb. This state can simultaneously involve archetypal dimensions and elements of mystical rapture, and the above situations can be experienced as contact with the Great Mother or Mother Nature. It is not uncommon that the deepest form of this experience involves feelings of oneness with the entire cosmos and the ultimate creative principle, or God. The fusion technique seems to provide an important channel between the psychodynamic, biographical level of the LSD experience and the transcendental states of consciousness. Patients in anaclitic therapy relate that during their nourishing exchange with the mother image, the milk seemed to be “coming directly from the Milky Way.” In the imaginary re-enactment of the placental circulation, the life-giving blood can be experienced as sacramental communion, not only with the maternal organism, but with the divine source. Repeatedly, the situations of “fusion” have been described in all their psychological and spiritual ramifications as fulfillment of the deepest needs of human nature, and as extremely healing experiences. Some patients described this technique as offering the possibility of a retroactive intervention in their deprived childhood. When the original traumatic situations from childhood become reenacted in all their relevance and complexity with the help of the “psychedelic time-machine,” the therapist’s affection and loving care can fill the vacuum caused by deprivation and frustration.

 

The dosages used in this treatment technique ranged between 100 and 200 micrograms of LSD, sometimes with the addition of Ritalin in later hours of the sessions. Martin and McCririck described good and relatively rapidly achieved results in patients with deep neuroses or borderline psychotic disorders who had experienced severe emotional deprivation in childhood. Their papers, presentations at scientific meetings, and a film documenting the anaclitic technique stirred up an enormous amount of interest among LSD therapists and generated a great deal of fierce controversy. The reactions of colleagues to this treatment modality ranged from admiration and enthusiasm to total condemnation. Since most of the criticism from the psychoanalytically oriented therapists revolved around the violation of the psychoanalytic taboo against touching and the possible detrimental consequences of the fusion technique for transference-countertransference problems, it is interesting to describe the authors’ response to this serious objection. Both Martin and McCririck seemed to concur that they had experienced much more difficulty with transference relationships before they started using the fusion technique. According to them, it is the lack of fulfillment in the conventional therapeutic relationship that foments and perpetuates transference. The original traumatic situations are continuously reenacted in the therapeutic relationship and the patient essentially experiences repetitions of the old painful rejections. When the anaclitic needs are satisfied in the state of deep regression induced by the drug, the patients are capable of detaching themselves emotionally from the therapist and looking for more appropriate objects in their real life. This situation has a parallel in the early developmental history of the individual. Those children whose infantile emotional needs were adequately met and satisfied by their parents find it relatively easy to give up the affective ties to their family and develop an independent existence. By comparison, those individuals who experienced emotional deprivation and frustration in childhood tend to get trapped during their adult life in symbiotic patterns of interaction, destructive and self-destructive clinging behavior, and life-long problems with dependence-independence. According to Martin and McCririck, the critical issue in anaclitic therapy is to use the fusion technique only during periods of deep regression, and to keep the experience strictly on the pregenital level. It should not be used in the termination periods of the sessions, when the anaclitic elements could easily become confused with adult sexual patterns.

 

The anaclitic technique never achieved wide acceptance; its use seemed to be closely related to unique personality characteristics in its authors. Most other therapists, particularly males, found it emotionally difficult and uncomfortable to enter into the intimate situation of fusion with their clients. However, the importance of physical contact in LSD psychotherapy is unquestionable and many therapists have routinely used various less-intense forms of body contact.

 

Sources: History of LSD Therapy by Stanislav Grof, M.D.; Wikipedia

Moritz Kaposi, MD

Moritz Kaposi – Photo credit: Unknown – Images from the History of Medicine (NLM), Public Domain; Wikipedia Commons

 

According to his biographer, Dr. J.D. Oriel, “in his lifetime, Moritz Kaposi, MD, was acknowledged as one of the great masters of the Vienna School of Dermatology, a superb clinician and renowned teacher”. While his mentor, Ferdinand von Hebra, is considered the “father of dermatology”, Kaposi was one of the first to establish dermatology on a scientific basis of anatomical pathology. He became the chairman of the Vienna School of Dermatology after Hebra’s death in 1880.

 

Moritz Kaposi, a Hungarian physician, was born on 23 October 1837 in Kaposvár, Austria-Hungary and died on 6 March 1902 in Vienna. This well-known physician is best known as the dermatologist who discovered the skin tumor that received his name (Kaposi’s sarcoma). Kaposi was born to a Jewish family whose original surname was Kohn, but with his conversion to the Catholic faith he changed it to Kaposi in 1871, in reference to his town of birth. One purported reason behind this is that he wanted to marry a daughter of the then dermatology chairman, Ferdinand Ritter von Hebra, and advance in society, which he could not have done being of the Jewish faith. This seems unlikely because he married Martha Hebra and converted to Catholicism several years prior to changing his name, by which time he was already well established in the Vienna University faculty and a close associate of von Hebra. A more plausible explanation is based on his own comments to colleagues that he changed his name to avoid confusion with five other similarly named physicians on the Vienna faculty. Rumors about the sincerity of both his marriage and his concerns about his Jewish ancestry may have arisen through professional jealousy. According to William Dubreuilh (1857-1935), first professor and chairman of dermatology in Bordeaux: “On disait de Kaposi qu’il avait pris la fille de Hebra, sa maison, sa chaire et sa clientele, laissant le reste a son beau-frere Hans Hebra.” – “It was said of Kaposi that he had taken the daughter of Hebra, his home, his chair and his clientele, leaving the rest to his brother-in-law, Hans Hebra.”

 

In 1855, Kaposi began to study medicine at the University of Vienna and attained a doctorate in 1861. In his dissertation, titled Dermatologie und Syphilis (1866), he made an important contribution to the field. Kaposi was appointed as professor at the University of Vienna in 1875, and in 1881 he became a member of the board of the Vienna General Hospital and director of its clinic of skin diseases. Together with his mentor, Ferdinand Ritter von Hebra, he authored the book Lehrbuch der Hautkrankheiten (Textbook of Skin Diseases) in 1878. Kaposi’s main work, however, was Pathologie und Therapie der Hautkrankheiten in Vorlesungen für praktische Ärzte und Studierende (Pathology and Therapy of the Skin Diseases in Lectures for Practical Physicians and Students), published in 1880, which became one of the most significant books in the history of dermatology and was translated into several languages. Kaposi is credited with the description of xeroderma pigmentosum, a rare genetic disorder now known to be caused by defects in nucleotide excision repair (“Ueber Xeroderma pigmentosum”, Medizinische Jahrbücher, Wien, 1882: 619-633). Among other diseases, Kaposi was the first to study lichen scrofulosorum and lupus erythematosus. In all, he published over 150 books and papers and is widely credited with advancing the use of pathologic examination in the diagnosis of dermatologic diseases.

 

Kaposi’s name entered the history of medicine in 1872, when he described for the first time Kaposi’s sarcoma, a cancer of the skin, which he had discovered in five elderly male patients and which he initially named “idiopathic multiple pigmented sarcoma”. More than a century later, the appearance of this disease in young gay men in New York City, San Francisco and other coastal cities in the United States was one of the first indications that a new disease, now called AIDS, had appeared. In 1993, the discovery that Kaposi’s sarcoma was associated with a herpesvirus sparked considerable controversy and scientific in-fighting until sufficient data had been collected to show that indeed KSHV was the causative agent of Kaposi’s sarcoma. The virus is now known to be a widespread infection of people living in sub-Saharan Africa; intermediate levels of infection occur in Mediterranean populations (including Israel, Saudi Arabia, Italy and Greece) and low levels of infection occur in most Northern European and North American populations. Kaposi’s sarcoma is now the most commonly reported cancer in parts of sub-Saharan Africa. Kaposi’s sarcoma is usually a localized tumor that can be treated either surgically or through local irradiation. Chemotherapy with drugs such as liposomal anthracyclines or paclitaxel may be used, particularly for invasive disease. Antiviral drugs, such as ganciclovir, that target the replication of herpesviruses such as KSHV have been used to successfully prevent development of Kaposi’s sarcoma, although once the tumor develops these drugs are of little or no use.

 

Michael D. Gershon, MD

Michael Gershon MD: “Serotonin is a sword and a shield of the bowel: serotonin plays offense and defense.“ Photo credit: Columbia University Medical School, MD/PhD Program

 

Michael D. Gershon is Professor of Pathology and Cell Biology at Columbia University Medical School and Center. Gershon has been called the “father of neurogastroenterology” because, in addition to his seminal work on neuronal control of gastrointestinal (GI) behavior and development of the enteric nervous system (ENS), his classic trade book, The Second Brain, has made physicians, scientists, and the lay public aware of the significance of the unique ability of the ENS to regulate GI activity in the absence of input from the brain and spinal cord. Gershon’s demonstration that serotonin is an enteric neurotransmitter was the first indication that the ENS is more than a collection of cholinergic relay neurons transmitting signals from the brain to the bowel. He was the first to identify intrinsic primary afferent neurons that initiate peristaltic and secretory reflexes and he demonstrated that these cells are activated by the mucosal release of serotonin. Dr. Gershon has published almost 400 peer-reviewed papers, including major contributions relating to disorders of GI motility, including irritable bowel syndrome, the identification of serotonin as a GI neurotransmitter and the initial observation in the gut of intrinsic sensory nerve cells that trigger propulsive motor activity. Dr. Gershon also discovered that the serotonin transporter (SERT) is expressed by enterocytes (cells that line the lumen of the gut) as well as by enteric neurons and is critical in the termination of serotonin-mediated effects.

 

Dr. Gershon has identified the roles that specific subtypes of serotonin receptor play in GI physiology, and he has provided evidence that serotonin is not only a neurotransmitter and a paracrine factor that initiates motile and secretory reflexes, but also a hormone that affects bone resorption and inflammation. He has called serotonin “a sword and shield of the bowel” because it is simultaneously proinflammatory and neuroprotective. Mucosal serotonin triggers inflammatory responses that oppose microbial invasion, while neuronal serotonin protects the ENS from the damage that inflammation would otherwise cause. Neuron-derived serotonin also mobilizes precursor cells, which are present in the adult gut, to initiate the genesis of new neurons, an adult function that reflects a similar essential activity of early-born serotonergic neurons in late fetal and early neonatal life to promote development of late-born sets of enteric neurons.

 

Dr. Gershon has made many additional contributions to ENS development, including the identification of necessary guidance molecules, adhesion proteins, growth and transcription factors; his observations suggest that defects that occur late in ENS development lead to subtle changes in GI physiology that may contribute to the pathogenesis of functional bowel disorders. More recently, Drs. Michael and Anne Gershon have demonstrated that varicella zoster virus (VZV) infects, becomes latent, and reactivates in enteric neurons, including those of humans. They have demonstrated that “enteric zoster (shingles)“ occurs and may thus be an unexpected cause of a variety of gastrointestinal disorders, the pathogenesis of which is currently unknown.

 

Born in New York City in 1938, Dr. Michael D. Gershon received his B.A. degree in 1958 with distinction from Cornell University and his M.D. in 1963, again from Cornell. Gershon received postdoctoral training with Edith Bülbring in Pharmacology at Oxford University before returning to Cornell as an Assistant Professor of Anatomy in 1967. He was promoted to Professor before leaving Cornell to chair the Department of Anatomy & Cell Biology at Columbia University’s College of P&S, which he did from 1975 to 2005. Gershon is now a Professor of Pathology & Cell Biology at Columbia.

 

Gershon’s contributions to the identification, location, and functional characterization of enteric serotonin receptors have been important in the design of drugs to treat irritable bowel syndrome, chronic constipation, and chemotherapy-associated nausea. Gershon’s discovery that the serotonin transporter (SERT), which terminates serotonergic signaling, is expressed in the bowel both by enterocytes and neurons opened new paths for research into the pathophysiology of irritable bowel syndrome and inflammatory bowel disease. He has linked mucosal serotonin to inflammation and neuronal serotonin to neuroprotection and the generation of new neurons from adult stem cells. These discoveries have led to the new idea that the function of serotonin is not limited to paracrine signaling and neurotransmission in the service of motility and secretion, but is also a sword and a shield of the gut.

 

Gershon has teamed with his wife, Anne Gershon, to show that the mannose 6-phosphate receptor plays critical roles in the entry and exit of varicella zoster virus (VZV). The Gershons have also developed the first animal model of VZV disease, which enables lytic and latent infection as well as reactivation to be studied in isolated enteric neurons. The Gershons have also shown that following varicella, VZV establishes latency in the human ENS. Finally, Gershon has made major contributions to understanding the roles played by a number of critical transcription and growth factors in enabling émigrés from the neural crest to colonize the bowel, undergo regulated proliferation, find their appropriate destinations in the gut wall, and terminally differentiate into the most phenotypically diverse component of the peripheral nervous system.

 

Dr. Michael Gershon has devoted his career to understanding the human bowel (the stomach, esophagus, small intestine, and colon). His thirty years of research have led to an extraordinary rediscovery: nerve cells in the gut that act as a brain. This “second brain“ can control our gut all by itself. Our two brains — the one in our head and the one in our bowel — must cooperate. If they do not, then there is chaos in the gut and misery in the head — everything from “butterflies“ to cramps, from diarrhea to constipation.

 

Gershon’s groundbreaking book, The Second Brain, represents a quantum leap in medical knowledge and is already benefiting patients whose symptoms were previously dismissed as neurotic or “it’s all in your head.” Dr. Gershon’s research clearly demonstrates that the human gut actually has a brain of its own. This remarkable scientific breakthrough offers fascinating proof that “gut instinct” is biological, a function of the second brain. An alarming number of people suffer from heartburn, nausea, abdominal pain, cramps, diarrhea, constipation, or related problems. Often thought to be caused by a “weakness” of the mind, these conditions may actually be a reflection of a disorder in the second brain. The second brain, located in the bowel, normally works smoothly with the brain in the head, enabling the head-brain to concentrate on the finer pursuits of life while the gut-brain attends to the messy business of digestion. A breakdown in communication between the two brains can lead to stomach and intestinal trouble, causing sufferers great abdominal grief and too often labeling them as neurotic complainers. Dr. Gershon’s research into the second brain provides understanding for those who suffer from gut-related ailments and offers new insight into their origin, extent, and management. The culmination of his work is an extraordinary contribution to the understanding of gastrointestinal illnesses, as well as a fascinating glimpse into how our gut really works.

 

A light touch: The irreplaceable, indomitable Stephen Colbert interviews the great Michael Gershon MD about the Second Brain in the gut

 

Michael Gershon clearly explains some of his research. This is video one of seven; you can find the other six videos on YouTube.

 

Very short student note regarding Dr Gershon

 

Serotonin Research & Three Great Scientists’ Contributions

Vittorio Erspamer MD, Photo credit: Unknown; Public Domain, Wikipedia Commons

 

Dr. Vittorio Erspamer (1909-1999) was an Italian pharmacologist and chemist known for the identification, synthesis, and pharmacological study of more than sixty new chemical compounds, most notably serotonin and octopamine.

 

Erspamer was born in 1909 in Malosco, a small village in the Val di Non in Trentino, northern Italy. He attended school in the Roman Catholic Archdiocese of Trento and then moved to Pavia, where he studied at Ghislieri College, graduating in medicine and surgery in 1935. He then took the post of assistant professor in anatomy and physiology at the University of Pavia – one of the oldest universities in Europe, founded in 1361. In 1936, he obtained a scholarship to study at the Institute of Pharmacology at the University of Berlin. After returning to Italy in 1939, he moved to Rome, where he took up the position of professor in pharmacology. In Rome, the focus of his research shifted to drugs, and he used his past biological experience to focus on compounds isolated from animal tissues. In 1947 he became professor of pharmacology at the Faculty of Medicine of the University of Bari. In 1955, he moved from Bari to Parma to assume the equivalent position of professor of pharmacology at the Faculty of Medicine, University of Parma. Erspamer was one of the first Italian pharmacologists to realize that fruitful scientific research benefits from building a relationship with the chemical and pharmaceutical industries. In the late 1950s, he established a collaboration with chemists at the Farmitalia company. The collaboration was useful not only for the analysis of the structure of new molecules which he isolated and characterized pharmacologically, but also for the subsequent industrial synthesis of these chemicals and their synthetic analogs.

 

Thanks to funding received from Farmitalia, over the years Erspamer collected more than five hundred animal species from all around the world, including amphibians, shellfish, sea anemones and other marine organisms. For this purpose, he spent much time traveling, and was known among his colleagues for his careful preparation of expeditions and knowledge of geography. Using these world-wide observations he developed a theory of geo-phylogenetic correlations among the different amphibian species of the world, which was based on analysis of the peptides and amines in their skin.

 

The research activities of Erspamer spanned more than 60 years and resulted in the isolation, identification, synthesis and pharmacological study of more than sixty new chemical compounds, especially polypeptides and biogenic amines, but also some alkaloids. Most of these compounds were isolated from animals, predominantly amphibians. In the late fifties his research shifted to peptides. In the laboratories of the Institute of Medical Pharmacology, University of Rome, he isolated more than fifty new bioactive peptides from amphibians and mollusks. These became the subjects of numerous studies in other laboratories in Europe and North America. In 1979, he focused on opioid peptides specific to Phyllomedusa, a genus of tree frog from Central and South America. Secretions from the skin of these frogs were used by the native Indians in initiation rites, to increase their prowess as “hunters” and make them feel “invincible”; applied to the skin, the secretions produced euphoric and analgesic effects. The peptides studied by Erspamer have become essential tools for characterizing the functional roles of opioid receptors.

 

Erspamer retired from administrative positions in 1984 because of age limits, but continued his research and writing until his death in Rome in 1999. His last, unfinished review was completed by his collaborators and published in 2002. During his lifetime he was twice nominated for the Nobel Prize.

 

Between 1933 and 1934, while still a college student, Erspamer published his first work on the histochemical characteristics of enterochromaffin cells using advanced techniques not normally used at that time, such as diazo reactions, Wood’s lamp and fluorescence microscopy. In 1935, he showed that an extract prepared from enterochromaffin cells made intestinal tissue contract. Other chemists believed the extract contained adrenaline, but two years later Erspamer demonstrated that it was a previously unknown chemical, an amine, which he named enteramine and which was later renamed serotonin. In 1948, Maurice M. Rapport, Arda Green, and Irvine Page of the Cleveland Clinic discovered a vasoconstrictor substance in blood serum, and since it was a serum agent affecting vascular tone, they named it serotonin. In 1952 it was shown that enteramine was the same substance as serotonin. Another important chemical, also an amine, was discovered by Erspamer in 1948 in the salivary glands of the octopus, and was therefore named by him octopamine.

 

Maurice Rapport (1919-2011) was a biochemist best known for his work with the neurotransmitter serotonin. Rapport, Irvine H. Page, and Arda A. Green worked together to isolate and name the chemical. Working alone, Rapport went on to identify its structure, publishing his findings in 1949. Research since its discovery has implicated serotonin in mood regulation, appetite, reproductive drives, and sleep, as well as in gastrointestinal function. After his work with serotonin, Rapport did important research on cancer, cardiovascular disease, connective-tissue disease and demyelinating diseases.

 

Maurice Rapoport was born on September 23, 1919 in Atlantic City, New Jersey. His mother changed the spelling of the family name to Rapport. His father was a furrier from Russia who left the family when Rapport was a small child. Rapport graduated from DeWitt Clinton High School in the Bronx, New York and went on to earn a bachelor’s degree in chemistry from the City College of New York in 1940. He obtained his doctorate in organic chemistry in 1946 from the California Institute of Technology. That same year, Rapport began working at the Cleveland Clinic Foundation in the research division directed by Irvine H. Page. Since the 1860s it had been known that blood serum contained a substance that constricted blood vessels. Rapport was assigned the project of isolating this substance, and Arda A. Green, a physical biochemist, was enlisted to help. The substance was obtained by leaving a test tube of the reagents in a cold room while Rapport went on vacation; when he returned, he isolated crystals of the desired substance. In a paper published in 1948, they gave it a name: serotonin, derived from “serum“ and “tonic“.

 

In 1948, Rapport left the Cleveland Clinic for a position at Columbia University and continued searching for serotonin’s structure. In May 1949, the structure of serotonin was shown to be 5-hydroxytryptamine (5-HT). Serotonin was found to be the same substance that Dr. Vittorio Erspamer had been studying since the 1930s under the name “enteramine“. Enteramine already had a substantial place in the scientific literature thanks to Erspamer’s research into its role in smooth muscle contraction in the intestinal tract. Erspamer’s research contributed to Rapport’s discovery of serotonin’s structure and allowed other researchers to synthesize the substance and further study its role in the body.

 

The structure of serotonin was given to the Upjohn Drug Company, where researchers focused on the role of serotonin in bodily processes such as blood vessel constriction. In 1954, Betty Twarog discovered the distribution of serotonin in the brain. Further research illustrated how serotonin plays a major role in the central nervous system and digestive tract. The understanding of serotonin has led to a progression in our view of mental illness and allowed the development of antidepressants and other drugs for hypertension and migraines. After his work with serotonin, Rapport worked at the Sloan-Kettering Institute for Cancer Research. His contributions involved the activity and structures of lipids in relation to immunological activity. Specifically, he isolated cytolipin H from human cancer tissue in 1958. This led to a better understanding of our immune system. He also was a professor at the Albert Einstein College of Medicine. There he isolated two glycosphingolipids and studied antibodies to gangliosides. These findings were useful to further pharmacological studies relating these substances to demyelinating diseases and to Amyotrophic Lateral Sclerosis (ALS).

In 1968, Rapport returned to Columbia University as chief of pharmacology and professor of biochemistry. The next year, he became the chief of the new neuroscience division which combined the chemistry, pharmacology, and bacteriology divisions. He retired in 1986 and remained in the neurology department of the Albert Einstein College of Medicine as a visiting professor.

 

Betty Mack Twarog (1927-2013) was an American biochemist who was the first to find serotonin in the mammalian brain. She attended Swarthmore College from 1944 to 1948, focusing on mathematics. While studying for an M.Sc. at Tufts College she heard a lecture on mollusc muscle neurology, and in 1949 she enrolled under John Welsh in the PhD program at Harvard to study this area. By 1952 she had submitted a paper showing that serotonin had a role as a neurotransmitter in mussels. In the autumn of 1952 Twarog moved for family reasons to the Kent State University area, and chose the Cleveland Clinic as a place to continue testing her hypothesis that invertebrate neurotransmitters would also be found in mammals. Although her supporter there, Irvine Page, did not believe serotonin would be found in the brain, he nevertheless gave Twarog a laboratory and technician. By June 1953 a paper had been submitted announcing the identification of serotonin in the mammalian brain. Twarog left the Cleveland Clinic in 1954 and continued to work on invertebrate smooth muscle at Tufts, Harvard and SUNY at Stony Brook. In later years, at the Bigelow Laboratory for Ocean Sciences in Boothbay Harbor, Maine, she worked on how shellfish evade phytoplankton poisons. Twarog died on February 6, 2013, at the age of 85 in Damariscotta, Maine.

 

Twarog’s identification of serotonin in the brain established its potential as a neurotransmitter and thus a modulator of brain action. Her discovery was an essential precursor to the later development of the SSRI antidepressants, such as fluoxetine and sertraline.

 

Medicine and the Philosophy of Rene Descartes; Cogito ergo sum

Rene Descartes at work Credit: Public Domain, Wikipedia Commons

 

The French philosopher and mathematician Rene Descartes (1596-1650) gave a high priority to medicine and dedicated a great deal of his life to medical studies. Nevertheless, his relation to medicine has always been debated. A number of recent works have contributed to reassessing the earlier critique which nearly wrote him out of medical history. The biographical dismissal of a number of earlier allegations, together with new interpretations of the medical content of his collected writings, ought to result in Descartes’ reinstatement in medical history.

 

Painting of Rene Descartes by Frans Hals – Credit: After Frans Hals – Andre Hatala [e.a.] (1997) De eeuw van Rembrandt, Bruxelles: Credit communal de Belgique, ISBN 2-908388-32-4., Public Domain; Wikipedia Commons

 

His novel anti-Aristotelian methodology had a crucial influence on the medicine of the subsequent decades, as did his early defense of Harvey’s theory of the circulation of the blood. Especially influential were his thoughts about a mechanical physiology by means of which the functions of the body could be explained without the involvement of “occult faculties.“ His empirical mistakes, including the central role which he ascribed to the corpus pineale (the pineal gland), are offset by his brilliant thoughts about the function and importance of the brain. Although he did not make any really new empirical discoveries within medicine, he advanced a number of concrete ideas which later led to actual discoveries such as visual accommodation, the reflex concept and the reciprocal innervation of antagonistic muscles. Descartes’ psychosomatic view of the importance of the interplay between sensations, “the passions of the soul,“ and the free will in the preservation of health shows in addition that his fundamental soul-body dualism was far more nuanced than is often claimed. Descartes developed a system of dualism which distinguishes between the “mind,“ whose essence is thinking, and “matter,“ whose essence is extension in space. This dualism informed his mechanical interpretation of nature and therefore of the human body. He believed that the laws of physics and mathematics explain human physiology.

 

According to One Hundred Books Famous in Medicine, De homine “is the first work in the history of science and medicine to construct a unified system of human physiology that presents man as a purely material and mechanical being: man as machine de terre.“ This concept helped free the study of physiology from the constraints of religion and culture. De homine is an important early textbook of physiology, but empirically flawed because Descartes’ practical knowledge of his subject was inadequate. With extraordinary courage, Descartes refused to accept the authority of previous philosophers. He frequently set his views apart from those of his predecessors. In the opening section of the Passions of the Soul, a treatise on the early modern version of what are now commonly called emotions, Descartes goes so far as to assert that he will write on this topic “as if no one had written on these matters before.“ His best known philosophical statement is “Cogito ergo sum“ (I think, therefore I am). The thrilling nature of this stance is not only that Descartes separated the study of man (philosophy and medicine) from religious dogma, but that he created new pathways of medical and scientific inquiry, deviating from nearly two thousand years of unquestioned adherence to the medical knowledge of Hippocrates (c. 460-370 BCE) and Galen (129 CE – 200 CE).

 

Human ideas die hard. The history of science and medicine gives clear proof of this. Ideas change fast in the 21st century; it is therefore hard to believe that the approach to medicine barely changed over approximately 2,000 years and that the teachings of Hippocrates and Galen lasted right up to the 17th century. At this point, the great genius of Rene Descartes asserted, “No, I am different.“ His creativity changed the history of human thought. Descartes originally planned to publish De homine in 1633, but hearing of Galileo’s condemnation by the Church, he became concerned for his own safety and refused to have it printed. Consequently, the first edition of this work appeared 12 years after Descartes’ death. The French edition, L’homme, also includes La formation du foetus, which explains reproductive generation in physiological terms. Sources: ncbi.nlm.nih.gov/pubmed; Wikipedia; virginia.edu/treasures/rene-descartes-1596-1650/

 

Short History of Fasting

The Buddha emaciated after undergoing severe ascetic practices, including fasting. Gandhara, 2nd to 3rd Century CE. British Museum. Credit: User:World Imaging – Own work, Public Domain; Wikipedia Commons

 

Once when the Buddha was touring in the region of Kasi together with a large sangha of monks he addressed them saying:

 

I, monks, do not eat a meal in the evening. Not eating a meal in the evening I, monks, am aware of good health and of being without illness and of buoyancy and strength and living in comfort. Come, do you too, monks, not eat a meal in the evening. Not eating a meal in the evening you too, monks, will be aware of good health and … and living in comfort.

 

Used for thousands of years, fasting is one of the oldest therapies in medicine. Many of the great doctors of ancient times and many of the oldest healing systems have recommended it as an integral method of healing and prevention. Hippocrates, the father of Western medicine, believed fasting enabled the body to heal itself. Paracelsus, another great healer in the Western tradition, wrote 500 years ago that “fasting is the greatest remedy, the physician within.“ Ayurvedic medicine has long advocated fasting as a major treatment, and in ancient Greece, Pythagoras was among many who extolled its virtues. During the 14th century, fasting was practiced by St Catherine of Siena. Indeed, fasting in one form or another is a distinguished tradition, and throughout the centuries devotees have claimed it brings physical and spiritual renewal.

 

In primitive cultures, a fast was often demanded before going to war, or as part of a coming-of-age ritual. It was used to assuage an angry deity and, by Native North Americans, as a rite to avoid catastrophes such as famine. Fasting has played a key role in all the world’s major religions (apart from Zoroastrianism, which prohibits it), being associated with penitence and other forms of self-control. Judaism has several annual fast days, including Yom Kippur, the Day of Atonement; in Islam, Muslims fast during the holy month of Ramadan, while Roman Catholics and Eastern Orthodox Christians observe a 40-day fast during Lent, commemorating the 40 days Christ fasted in the desert.

 

Women in particular seem to have had a proclivity for religious fasting, known as “anorexia mirabilis“ (miraculous lack of appetite); surviving for periods without nourishment was regarded as a sign of holiness and chastity. Julian of Norwich, an English anchoress and mystic who lived in the 14th century, used it as a means of communicating with Christ. In other belief systems, the gods were thought to reveal their divine teaching in dreams and visions only after a fast by the temple priests. Fasting has also long been used as a gesture of political protest, classic examples being the Suffragettes and Mahatma Gandhi, who undertook 17 fasts during the struggle for Indian independence; his longest fast lasted 21 days. Gandhi famously led Indians in challenging the British-imposed salt tax with the 250 mile Dandi Salt March in 1930, and later in calling for the British to Quit India in 1942. He was imprisoned on many occasions, for many years in total, in both South Africa and India. Gandhi attempted to practice nonviolence and truth in all situations, and advocated that others do the same. He lived modestly in a self-sufficient residential community and wore the traditional Indian dhoti and shawl, woven with yarn hand-spun on a charkha. He ate simple vegetarian food, and also undertook long fasts as a means of both self-purification and social protest.

 

The practice of fasting has had its dark side, having been exploited by exhibitionists and fraudsters and foisted on the gullible. Take “Doctor“ Linda Burfield Hazzard, from Minnesota, thought to have caused the deaths of over 40 patients whom she put on strict fasts before being convicted of manslaughter in 1912; she died from her own fasting regime in 1938. Then there were the Victorian “fasting girls“ who claimed to be able to survive indefinitely without food; one of them, Sarah Jacobs, was allowed to starve to death at the age of 12 as doctors tested her claims in hospital.

 

Therapeutic fasting – in which fasting is used, under medical supervision, to either treat or prevent ill health – became popular in the 19th century as part of the “Natural Hygiene Movement“ in the US. Dr Herbert Shelton (1895-1985) was one revered pioneer, opening “Dr Shelton’s Health School“ in San Antonio, Texas, in 1928. He claimed to have helped 40,000 patients recover their health with a water fast. Shelton wrote: “Fasting must be recognized as a fundamental and radical process that is older than any other mode of caring for the sick organism, for it is employed on the plane of instinct.“ Shelton was an advocate of alternative medicine, an author, pacifist, vegetarian, and supporter of raw foodism and fasting. He was nominated by the American Vegetarian Party to run as its candidate for President of the United States in 1956. He saw himself as the champion of original natural hygiene ideas from the 1830s. His ideas have been described as quackery by critics.

 

In the UK, too, fasting became part of the “Nature Cure“, an approach which also stressed the importance of exercise, diet, sunshine, fresh air and “positive thinking“. “Fasting in Great Britain was at its most popular in the 1920s,“ according to Tom Greenfield, a naturopath who now runs a clinic in Canterbury, England. “The first Nature Cure clinic to offer fasting opened in Edinburgh and I still have one or two patients who fasted there many decades ago.“ Other clinics which offered therapeutic fasting included the legendary Tyringham Hall in Buckinghamshire, now closed, and Champneys in Tring, Hertfordshire – in those days a naturopathic center, now a destination spa. “Fasting was used to treat heart disease, high blood pressure, obesity, digestive problems, allergies, headaches – pretty much everything,“ says Greenfield. “Fasts were individually tailored and could be anything from a day or two to three months, for obese patients. The clinics would take a full case history to see if people were suitable and they would be closely monitored.“ Eventually, he says, “scientific“ medicine became dominant as better drugs were developed, and fasting and the “Nature Cure“ fell out of favor in Britain.

 

By contrast, in Germany, where fasting was pioneered by Dr Otto Buchinger, therapeutic fasting is still popular and offered at various centers. Many German hospitals now run fasting weeks, funded by health insurance programs, to help manage obesity. Fasting holidays, held at centers and spas throughout Europe, including Hungary, the Czech Republic and Austria, are growing in popularity. “In Germany fasting is part of the naturheilkunde – natural health practice,“ says Greenfield. “It has remained popular because it became integrated into medical practice so patients could be referred for a fast by their doctors.“ More recently, interest in fasting has revived in the UK and in the United States, with millions trying intermittent fasting such as the 5:2 diet, or modified fasts where only certain foods or juices are taken for a period of time. According to Greenfield, “If people can do a one day fast for a minimum of twice a year – maybe one in spring and one in the autumn and setting aside a day they can rest, when they just drink water – this will help mitigate the toxic effects of daily living.“

 

Fasting has been used in Europe as a medical treatment for years. Many spas and treatment centers, particularly those in Germany, Sweden, and Russia, use medically supervised fasting. Fasting has gained popularity in American alternative medicine over the past several decades, and many doctors feel it is beneficial. Fasting is a central therapy in detoxification, a healing method founded on the principle that the buildup of toxic substances in the body is responsible for many illnesses and conditions.

 

First Contributors to an Understanding of the Blood-Brain Barrier

 

Paul Ehrlich

 

Paul Ehrlich MD (1854-1915): Photo credit: Harris & Ewing – This image is available from the United States Library of Congress’s Prints and Photographs division under the digital ID hec.04709. Public Domain, Wikipedia Commons

 

Paul Ehrlich’s work illuminated the existence of the blood-brain barrier, and in 1908 he was awarded the Nobel Prize in Physiology or Medicine for his work on immunity.

 

Paul Ehrlich, a German Jewish physician and bacteriologist, studied staining, a procedure that is used in many microscopic studies to make fine biological structures visible using chemical dyes. When Ehrlich injected some of these dyes (notably the aniline dyes that were then widely used), the dye stained all of the organs of some kinds of animals except for their brains. At that time, Ehrlich attributed this lack of staining to the brain simply not picking up as much of the dye. However, in a later experiment in 1913, Edwin Goldmann (one of Ehrlich’s students) injected the dye directly into the cerebrospinal fluid of animals. He found that in this case the brains did become dyed, but the rest of the body did not, clearly demonstrating the existence of some sort of compartmentalization between the two. At that time, it was thought that the blood vessels themselves were responsible for the barrier, since no obvious membrane could be found. The concept of the blood-brain barrier (then termed the hematoencephalic barrier) was proposed in 1900 by a Berlin physician, Lewandowsky. It was not until the introduction of the scanning electron microscope to the medical research fields in the 1960s that the actual membrane could be observed and proved to exist.

 

Edwin Goldmann

 

Edwin Goldmann MD (1862-1913) – Credit: Unknown – [1] München. med. Wchnschr. lx, 2735, 1913, PD-alt-100, https://de.wikipedia.org/w/index.php?curid=5721774

 

Edwin Ellen Goldmann (born November 12, 1862 in Burghersdorp, South Africa) was a German Jewish surgeon. He studied medicine in London, and in 1888 he received the Doctor of Medicine and PhD degrees. He took his first position with Karl Weigert in Frankfurt, stayed there for six months, and then went to Freiburg to join Eugen Baumann, where he devoted himself to physiological-chemical studies. In this work he dealt with cystine, sulfur-containing compounds of urine, and iodothyrine. His Habilitationsschrift from the year 1895 dealt with the doctrine of the neuron. In 1898 he became an extraordinary professor and later a full honorary professor. He headed the surgical department of the Diakonissenkrankenhaus in Freiburg and worked mainly in the field of cancer research.

 

Goldmann made a significant contribution to the discovery of the blood-brain barrier. In 1913, he injected Trypan blue, a water-soluble azo dye first synthesized by Paul Ehrlich in 1904, directly into the cerebrospinal fluid of dogs. The result showed staining of the entire central nervous system (brain and spinal cord) but no other organ.

In 1913 Goldmann died of cancer in Freiburg.

 

Rudolph Virchow

 

Rudolph Virchow (1821-1902); Photo credit: Unknown – http://ihm.nlm.nih.gov; Public Domain, Wikipedia Commons

 

The appearance of perivascular spaces was first noted in 1843 by Durant-Fardel. In 1851, Rudolph Virchow was the first to provide a detailed description of these microscopic spaces between the outer and inner/middle lamina of the brain vessels. Charles-Philippe Robin confirmed these findings in 1859 and was the first to describe the perivascular spaces as channels that existed in normal anatomy. The spaces were called Virchow-Robin spaces and are still known as such. Their immunological significance was discovered by Wilhelm His, Sr. in 1865, based on his observations of the flow of interstitial fluid through the spaces to the lymphatic system. For many years after Virchow-Robin spaces were first described, it was thought that they were in free communication with the cerebrospinal fluid in the subarachnoid space. It was later shown with the use of electron microscopy that the pia mater serves as a separation between the two. With the application of MRI, measurements of the difference in signal intensity between the perivascular spaces and cerebrospinal fluid supported these findings. As research technologies continued to expand, so too did information regarding their function, anatomy and clinical significance.

 

A perivascular space, also known as a Virchow-Robin space, is an immunological space between an artery or a vein (not capillaries) and the pia mater that can be expanded by leukocytes. The spaces are formed when large vessels take the pia mater with them as they dive deep into the brain; the pia mater is reflected from the surface of the brain onto the surface of blood vessels in the subarachnoid space. Perivascular cuffs are regions of leukocyte aggregation in the spaces, usually found in patients with viral encephalitis. Perivascular spaces are extremely small and can usually only be seen on MRI images when dilated. While many normal brains will show a few dilated spaces, an increase in these has been shown to correlate with the incidence of several neurodegenerative diseases, making the spaces a popular topic of research. One of the most basic roles of the perivascular space is the regulation of fluid movement in the central nervous system and its drainage. The spaces ultimately drain fluid from neuronal cell bodies to the cervical lymph nodes. In particular, the “tide hypothesis“ suggests that cardiac contraction creates and maintains pressure waves that modulate the flow to and from the subarachnoid space and the perivascular space. By acting as a sort of sponge, the spaces are essential for signal transmission and the maintenance of extracellular fluid. Another function is as an integral part of the blood-brain barrier (BBB). While the BBB is often described as the tight junctions between the endothelial cells, this is an oversimplification that neglects the intricate role that perivascular spaces take in separating the venous blood from the parenchyma of the brain. Often, cell debris and foreign particles, which are impermeable to the BBB, will get through the endothelial cells, only to be phagocytosed in the perivascular spaces. This holds true for many T and B cells, as well as monocytes, giving this small fluid-filled space an important immunological role. Perivascular spaces also play an important role in immunoregulation; they not only contain interstitial and cerebrospinal fluid, but they also have a constant flux of macrophages, which are replenished by blood-borne mononuclear cells but do not pass the basement membrane of the glia limitans. Similarly, as part of its role in signal transmission, the perivascular space contains vasoactive neuropeptides (VNs), which, aside from regulating blood pressure and heart rate, have an integral role in controlling microglia. VNs serve to prevent inflammation by activating the enzyme adenylate cyclase, which then produces cAMP.

 

Chronic Pain

Descartes’ pain pathway: “Particles of heat“ (A) activate a spot of skin (B) attached by a fine thread (cc) to a valve in the brain (de) where this activity opens the valve, allowing the animal spirits to flow from a cavity (F) into the muscles causing them to flinch from the stimulus, turn the head and eyes toward the affected body part, and move the hand and turn the body protectively. Illustration of the pain pathway in Rene Descartes’ Traite de l’homme (Treatise of Man) 1664. The long fiber running from the foot to the cavity in the head is pulled by the heat and releases a fluid that makes the muscles contract. Graphic credit: Rene Descartes – Copied from a 345 year old book, Traite de l’homme, Public Domain; Wikipedia Commons

 

Pain has accompanied human beings since the moment this species appeared on Earth. From that moment on, and throughout its long history, mankind has tried not only to look for the causes of pain but also to find remedies to relieve it. The concept of pain has remained a topic of long debate since its emergence in ancient times. The initial ideas of pain were formulated in both the East and the West before 1800. Since 1800, with the development of the experimental sciences, different theories of pain have emerged and become central topics of debate. However, the existing theories of pain may be appropriate for the interpretation of some aspects of pain, but they are not yet comprehensive. The history of pain problems is as long as that of human beings; however, the understanding of pain mechanisms is still far from sufficient, and intensive research is required. This historical review mainly focuses on the development of pain theories and the fundamental discoveries in this field; other historical events associated with pain therapies and remedies are beyond its scope. As long as humans have experienced pain, they have given explanations for its existence and sought soothing agents to dull or cease the painful sensation. Archaeologists have uncovered clay tablets dating back as far as 5,000 BCE which reference the cultivation and use of the opium poppy to bring joy and cease pain. In 800 BCE, the Greek writer Homer wrote in his epic, The Odyssey, of Telemachus, a man who used opium to soothe his pain and forget his worries. While some cultures researched analgesics and allowed or encouraged their use, others perceived pain to be a necessary, integral sensation. Physicians of the 19th century used pain as a diagnostic tool, theorizing that a greater amount of personally perceived pain was correlated to a greater internal vitality, and as a treatment in and of itself, inflicting pain on their patients to rid them of evil and unbalanced humors. This article focuses not only on the history of how pain has been perceived across time and culture, but also on how malleable an individual’s perception of pain can be due to factors such as situation, visual perception of the pain, and previous history with pain.

 

Because neurons, and the way they conduct and interpret signals such as pain, were discovered only relatively recently, various theories have been proposed over the centuries as to the causes of pain and its role or function. Even within seemingly limited groups, such as the ancient Greeks, there were competing theories as to the root cause of pain. Aristotle did not include a sense of pain when he enumerated the five senses; he, like Plato before him, saw pain and pleasure not as sensations but as emotions (“passions of the soul“). Alternatively, Hippocrates believed that pain was caused by an imbalance in the vital fluids of a human. At this time, neither Aristotle nor Hippocrates believed that the brain had any role to play in pain processing, but rather implicated the heart as the central organ for the sensation of pain. In the 11th century, Avicenna theorized that there were a number of feeling senses, including touch, pain and titillation.

 

Portrait of Rene Descartes – Credit: After Frans Hals – Andre Hatala [e.a.] (1997) De eeuw van Rembrandt, Bruxelles: Credit communal de Belgique, ISBN 2-908388-32-4, Public Domain, Wikipedia Commons

 

Even just prior to the scientific Renaissance in Europe, pain was not well understood, and it was theorized that pain existed outside of the body, perhaps as a punishment from God, with the only management treatment being prayer. Again, even within the confined group of religious, practicing Christians, more than one theory arose. Alternatively, pain was also theorized to exist as a test or trial imposed on a person. In this case, pain was inflicted by God onto a person to reaffirm their faith, or, in the example of Jesus, to lend legitimacy and purpose to a trial through suffering. In his 1664 Treatise of Man, Rene Descartes theorized that the body was more similar to a machine, and that pain was a disturbance that passed down along nerve fibers until the disturbance reached the brain. This theory transformed the perception of pain from a spiritual, mystical experience to a physical, mechanical sensation, meaning that a cure for such pain could be found by researching and locating pain fibers within the body rather than searching for an appeasement of God. It also moved the center of pain sensation and perception from the heart to the brain. Descartes proposed his theory by presenting an image of a man’s hand being struck by a hammer. In between the hand and the brain, Descartes described a hollow tube with a cord beginning at the hand and ending at a bell located in the brain. The blow of the hammer would induce pain in the hand, which would pull the cord and cause the bell located in the brain to ring, indicating that the brain had received the painful message. Researchers began to pursue physical treatments such as cutting specific pain fibers to prevent the painful signal from cascading to the brain.

 

 

Scottish anatomist Charles Bell proposed in 1811 that there exist different kinds of sensory receptors, each adapted to respond to only one stimulus type. In 1839 Johannes Muller, having established that a single stimulus type (e.g., a blow, electric current) can produce different sensations depending on the type of nerve stimulated, hypothesized that there is a specific energy, peculiar to each of five nerve types that serve Aristotle’s five senses, and that it is the type of energy that determines the type of sensation each nerve produces. He considered feelings such as itching, pleasure, pain, heat, cold and touch to be varieties of the single sense he called “feeling and touch.“ Muller’s doctrine killed off the ancient idea that nerves carry actual properties or incorporeal copies of the perceived object, marking the beginning of the modern era of sensory psychology, and prompted others to ask, do the nerves that evoke the different qualities of touch and feeling have specific characteristics?

 

Filippo Pacini had isolated receptors in the nervous system which detect pressure and vibrations in 1831. Georg Meissner and Rudolf Wagner described receptors sensitive to light touch in 1852; and Wilhelm Krause found a receptor that responds to gentle vibration in 1860. Moritz Schiff was first to definitively formulate the specificity theory of pain when, in 1858, he demonstrated that touch and pain sensations traveled to the brain along separate spinal cord pathways. In 1882 Magnus Blix reported that specific spots on the skin elicit sensations of either cold or heat when stimulated, and proposed that “the different sensations of cool and warm are caused by stimulation of different, specific receptors in the skin.“ Max von Frey found and described these heat and cold receptors and, in 1896, reported finding “pain spots“ on the skin of human subjects. Von Frey proposed there are low threshold cutaneous spots that elicit the feeling of touch, and high threshold spots that elicit pain, and that pain is a distinct cutaneous sensation, independent of touch, heat and cold, and associated with free nerve endings.

 

In the first volume of his 1794 Zoonomia; or the Laws of Organic Life, Erasmus Darwin supported the idea advanced in Plato’s Timaeus, that pain is not a unique sensory modality, but an emotional state produced by stronger than normal stimuli such as intense light, pressure or temperature. Wilhelm Erb, in 1874, also argued that pain can be generated by any sensory stimulus, provided it is intense enough, and his formulation of the hypothesis became known as the intensive theory. Alfred Goldscheider (1884) confirmed the existence of distinct heat and cold sensors, by evoking heat and cold sensations using a fine needle to penetrate to and electrically stimulate different nerve trunks, bypassing their receptors. Though he failed to find specific pain-sensitive spots on the skin, Goldscheider concluded in 1885 that the available evidence supported pain specificity, and held the view until a series of experiments was conducted in 1889 by Bernhard Naunyn. Naunyn had rapidly (60-600 times/second) prodded the skin of tabes dorsalis patients, below their touch threshold (e.g., with a hair), and in 6-20 seconds produced unbearable pain. He obtained similar results using other stimuli, including electricity, to produce rapid, sub-threshold stimulation, and concluded pain is the product of summation. In 1894 Goldscheider extended the intensive theory, proposing that each tactile nerve fiber can evoke three distinct qualities of sensation – tickle, touch and pain – the quality depending on the intensity of stimulation; and extended Naunyn’s summation idea, proposing that, over time, activity from peripheral fibers may accumulate in the dorsal horn of the spinal cord, and “spill over“ from the peripheral fiber to a pain-signaling spinal cord fiber once a threshold of activity has been crossed. The British psychologist Edward Titchener pronounced in his 1896 textbook, “excessive stimulation of any sense organ or direct injury to any sensory nerve occasions the common sensation of pain.“

 

By the mid-1890s, specificity was mainly backed by physiologists (prominently by von Frey) and clinicians; and the intensive theory received most support from psychologists. But after Henry Head in England published a series of clinical observations between 1893 and 1896, and von Frey’s experiments between 1894 and 1897, the psychologists migrated to specificity almost en masse, and by century’s end, most textbooks on physiology and psychology were presenting pain specificity as fact, with Titchener in 1898 now placing “the sensation of pain“ alongside that of pressure, heat and cold. Though the intensive theory no longer featured prominently in textbooks, Goldscheider’s elaboration of it nevertheless stood its ground in opposition to von Frey’s specificity at the frontiers of research, and was supported by some influential theorists well into the mid-twentieth century. William Kenneth Livingston advanced a summation theory in 1943, proposing that high intensity signals, arriving at the spinal cord from damage to nerve or tissue, set up a reverberating, self-exciting loop of activity in a pool of interneurons, and once a threshold of activity is crossed, these interneurons then activate “transmission“ cells which carry the signal to the brain’s pain mechanism.  The reverberating interneuron activity also spreads to other spinal cord cells that trigger a sympathetic nervous system and somatic motor system response; and these responses, as well as fear and other emotions elicited by pain, feed into and perpetuate the reverberating interneuron activity. A similar proposal was made by RW Gerard in 1951, who proposed also that intense peripheral nerve signaling may cause temporary failure of inhibition in spinal cord neurons, allowing them to fire as synchronized pools, with signal volleys strong enough to activate the pain mechanism. Building on John Paul Nafe’s 1934 suggestion that different cutaneous qualities are the product of different temporal and spatial patterns of stimulation, and ignoring a large body of strong evidence for receptor fiber specificity, DC Sinclair and G Weddell’s 1955 “peripheral pattern theory“ proposed that all skin fiber endings (with the exception of those innervating hair cells) are identical, and that pain is produced by intense stimulation of these fibers. In 1953, Willem Noordenbos had observed that a signal carried from the area of injury along large diameter “touch, pressure or vibration“ fibers may inhibit the signal carried by the thinner “pain“ fibers – the ratio of large fiber signal to thin fiber signal determining pain intensity; hence, we rub a smack. This was taken as a demonstration that pattern of stimulation (of large and thin fibers in this instance) modulates pain intensity.

 

Ronald Melzack and Patrick Wall introduced their “gate control“ theory of pain in the 1965 Science article “Pain Mechanisms: A New Theory“. The authors proposed that both thin (pain) and large diameter (touch, pressure, vibration) nerve fibers carry information from the site of injury to two destinations in the dorsal horn of the spinal cord: transmission cells that carry the pain signal up to the brain, and inhibitory interneurons that impede transmission cell activity. Activity in both thin and large diameter fibers excites transmission cells. Thin fiber activity impedes the inhibitory cells (tending to allow the transmission cell to fire) and large diameter fiber activity excites the inhibitory cells (tending to inhibit transmission cell activity). So, the greater the large fiber (touch, pressure, vibration) activity relative to thin fiber activity at the inhibitory cell, the less pain is felt. The authors had drawn a neural “circuit diagram“ to explain why we rub a smack. They pictured not only a signal traveling from the site of injury to the inhibitory and transmission cells and up the spinal cord to the brain, but also a signal traveling from the site of injury directly up the cord to the brain (bypassing the inhibitory and transmission cells) where, depending on the state of the brain, it may trigger a signal back down the spinal cord to modulate inhibitory cell activity (and so pain intensity). The theory offered a physiological explanation for the previously observed effect of psychology on pain perception. In 1975, well after the time of Descartes, the International Association for the Study of Pain sought a consensus definition for pain, adopting “an unpleasant sensory and emotional experience associated with actual or potential tissue damage, or described in terms of such damage“ as its definition. It is clear from this definition that while pain is understood to be a physical phenomenon, the emotional state of a person, as well as the context or situation associated with the pain, also impacts the perception of the nociceptive or noxious event. For example, if a human experiences a painful event associated with any form of trauma (an accident, disease, etc.), a recurrence of similar physical pain will inflict not only physical trauma but also the emotional and mental trauma first associated with the original event. Research has shown that, should a similar injury occur to two people, one who attaches large emotional consequence to the pain and one who does not, the person who attaches a large consequence to the pain event will feel more intense physical pain than the person who does not.
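For readers who like a compact summary, the gate idea in the preceding paragraph can be written as a schematic relation. This is only an illustrative sketch using our own shorthand (T for transmission-cell drive, S for thin-fiber activity, L for large-fiber activity, I for inhibitory-interneuron activity); it is not notation taken from Melzack and Wall’s 1965 paper.

% Illustrative sketch only; symbols are our own shorthand, not from Melzack & Wall (1965).
% T : transmission-cell drive (roughly, the pain signal sent toward the brain)
% S : thin-fiber ("pain") activity,  L : large-fiber (touch, pressure, vibration) activity
% I : inhibitory-interneuron activity, excited by L and impeded by S
\[
  T = f\bigl(S + L - I(S, L)\bigr),
  \qquad \frac{\partial I}{\partial L} > 0,
  \qquad \frac{\partial I}{\partial S} < 0 .
\]
% Other things being equal, raising L relative to S increases I, which lowers T:
% more "rubbing" input at the gate means less pain signaled, which is why we rub a smack.

In this sketch, descending signals from the brain would simply modulate I, which is how the theory accommodates psychological influences on pain intensity.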

 

Modern research has gathered considerable evidence supporting the theory that pain is not only a physical phenomenon but rather a biopsychosocial phenomenon, encompassing culture, nociceptive stimuli, and the environment in the experience and perception of pain. For example, the Sun Dance is a ritual performed by traditional groups of Native Americans. In this ritual, cuts are made into the chest of a young man. Strips of leather are slipped through the cuts, and poles are tied to the leather. This ritual lasts for hours and undoubtedly generates large amounts of nociceptive signaling; however, the pain may not be perceived as noxious, or even perceived at all, because the ritual is designed around overcoming and transcending the effects of pain, where pain is either welcomed or simply not perceived. Additional research has shown that the experience of pain is shaped by a plethora of contextual factors, including vision. Researchers have found that when subjects view the area of their body that is being stimulated, they report a lower amount of perceived pain. For example, one research study applied a heat stimulus to subjects’ hands. When the subjects were directed to look at their hand while the painful heat stimulus was applied, they experienced an analgesic effect and reported a higher temperature pain threshold. Additionally, when the view of the hand was magnified, the analgesic effect also increased, and vice versa. This research demonstrated how the perception of pain relies on visual input. The use of fMRI to study brain activity confirms the link between visual perception and pain perception: the brain regions that convey the perception of pain are the same regions that encode the size of visual inputs. One specific area, the magnitude-related insula of the insular cortex, functions to perceive the size of a visual stimulus and to integrate the concept of that size across various sensory systems, including the perception of pain. This area also overlaps with the nociceptive-specific insula, the part of the insula that selectively processes nociception, leading to the conclusion that there is an interaction and interface between the two areas. This interaction tells the individual how much relative pain they are experiencing, leading to the subjective perception of pain based on the current visual stimulus.

 

Humans have always sought to understand why they experience pain and how that pain comes about. While pain was previously thought to be the work of evil spirits, it is now understood to be a neurological signal. However, the perception of pain is not absolute and can be impacted by various factors, including the context surrounding the painful stimulus, the visual perception of the stimulus, and an individual’s personal history with pain.
