The Biological Clock

A mosaic of Hippocrates on the floor of the Asclepieion of Kos, with Asklepius in the middle, 2nd-3rd century. Photo credit: This file is licensed under the Creative Commons Attribution-Share Alike 2.5 Generic license. Wikimedia Commons

 

 

Hippocratic medicine was humble and passive. The therapeutic approach was based on the healing power of nature. "If we could give every individual the right amount of nourishment and exercise, not too little and not too much, we would have found the safest way to health." (Hippocrates, born c. 460 BCE)

 

The earth rotates on its axis every 24 hours, with the result that any position on the earth's surface alternately faces toward or away from the sun: day and night. That the metabolism, physiology, and behavior of most organisms change profoundly between day and night is obvious to even the most casual observer. These biological oscillations are apparent as diurnal rhythms. It is less obvious that most organisms have the innate ability to measure time. Indeed, most organisms do not simply respond to sunrise but, rather, anticipate the dawn and adjust their biology accordingly. When deprived of exogenous time cues, many of these diurnal rhythms persist, indicating their generation by an endogenous biological circadian clock. Until recently, the molecular mechanisms by which organisms function in this fourth dimension, time, remained mysterious. However, over the last 30 or so years, the powerful approaches of molecular genetics have revealed the molecular underpinnings of a cellular circadian clockwork as complicated and as beautiful as the wonderful chronometers developed in the 18th century.

 

CHARACTERISTICS OF CIRCADIAN RHYTHMS

 

Circadian rhythms are the subset of biological rhythms with a period, defined as the time to complete one cycle, of ~24 hours. This defining characteristic inspired Franz Halberg in 1959 to coin the term circadian, from the Latin words "circa" (about) and "dies" (day). A second defining attribute of circadian rhythms is that they are endogenously generated and self-sustaining, so they persist under constant environmental conditions, typically constant light (or dark) and constant temperature. Under these controlled conditions, the organism is deprived of external time cues, and the free-running period of ~24 h is observed. A third characteristic of all circadian rhythms is temperature compensation: the period remains relatively constant over a range of ambient temperatures. This is thought to be one facet of a general mechanism that buffers the clock against changes in cellular metabolism.

 

The first writings, at least in the western canon, to recognize diurnal rhythms come from the fourth century BCE. Androsthenes described the daily leaf movements of the tamarind tree, Tamarindus indica, observed on the island of Tylos (now Bahrain) in the Persian Gulf during the marches of Alexander the Great. There was no suggestion at the time that these rhythms had an endogenous origin, and it took more than two millennia for this to be experimentally tested. The scientific literature on circadian rhythms began in 1729, when the French astronomer de Mairan reported that the daily leaf movements of the sensitive heliotrope plant (probably Mimosa pudica) persisted in constant darkness, demonstrating their endogenous origin. Presciently, de Mairan suggested that these rhythms were related to the sleep rhythms of bedridden humans. It took 30 years before de Mairan's observations were independently repeated. These later studies excluded temperature variation as a possible zeitgeber driving the leaf movement rhythms.

 

The observation of a circadian or diurnal process in humans is mentioned in Chinese medical texts dated to around the 13th century, including the Noon and Midnight Manual and the Mnemonic Rhyme to Aid in the Selection of Acu-points According to the Diurnal Cycle, the Day of the Month and the Season of the Year. As early as 1880, Charles and Francis Darwin suggested the heritability of circadian rhythms, as opposed to the imprinting of a 24-hour period by exposure to diurnal cycles during development. This was initially explored in the 1930s by two strategies. In one, plants or animals were raised in constant conditions for multiple generations. One of the most grueling among such studies demonstrated the retention of stable rhythms among fruit flies reared in constant conditions for 700 generations. In a second strategy, seedlings or animals were exposed to cycles that differed from 24 hours in an effort to imprint novel periods; such studies could sometimes impose the novel period length during the imposed cycles, but upon release into continuous conditions, the endogenous circadian period was restored. The inheritance of period length among progeny from crosses of parents with distinct period lengths was first reported in Phaseolus; hybrids had period lengths intermediate between those of the parents.

In 1896, Patrick and Gilbert observed that during a prolonged period of sleep deprivation, sleepiness increases and decreases with a period of approximately 24 hours. In 1918, J.S. Szymanski showed that animals are capable of maintaining 24-hour activity patterns in the absence of external cues such as light and changes in temperature. In the early 20th century, circadian rhythms were noticed in the rhythmic feeding times of bees. Extensive experiments were done by Auguste Forel, Ingeborg Beling, and Oskar Wahl to see whether this rhythm was due to an endogenous clock. The existence of circadian rhythms was independently discovered in the fruit fly Drosophila melanogaster in 1935 by two German zoologists, Hans Kalmus and Erwin Bünning. In 1954, an important experiment was reported by Colin Pittendrigh, who showed that eclosion (the process by which the pupa turns into an adult) in D. pseudoobscura was a circadian behavior. He demonstrated that temperature played a vital role in the eclosion rhythm: the period of eclosion was delayed, but not stopped, when temperature was decreased. This was an indication that circadian rhythm is controlled by an internal biological clock. The term circadian was coined by Franz Halberg in 1959.

Genetic analysis identifying components of circadian clocks began in the 1970s. Although it now seems axiomatic that circadian clocks are composed of the products of genes, just how this might be so was the source of considerable controversy. It was argued that forward genetic efforts would be fruitless because clocks were sufficiently complex to reasonably be expected to exhibit polygenic inheritance and would not yield easily to standard genetic approaches. However, mutations conferring altered period length were identified and characterized in the fruit fly Drosophila melanogaster, the green alga Chlamydomonas reinhardtii, and the filamentous fungus Neurospora crassa. It took more than a decade to clone the first clock gene, the Drosophila period (per) gene, and another 5 years to clone the second, the Neurospora frequency gene.
However, the decade of the 1990s saw rapid progress toward the identification of clock components and the elucidation of oscillator mechanisms central to the circadian clock in a number of organisms, most notably Drosophila, Neurospora, and mice.

 

Ron Konopka and Seymour Benzer identified the first clock mutant in Drosophila in 1971 and named the gene "period" (per), the first discovered genetic determinant of behavioral rhythmicity. The per gene was isolated in 1984 by two teams of researchers. In 1977, the International Committee on Nomenclature of the International Society for Chronobiology formally adopted the definition, which states:

 

Circadian: relating to biologic variations or rhythms with a frequency of 1 cycle in 24 ± 4 h; circa (about, approximately) and dies (day or 24 h). Note: term describes rhythms with an about 24-h cycle length, whether they are frequency-synchronized with (acceptable) or are desynchronized or free-running from the local environmental time scale, with periods slightly yet consistently different from 24 h.

 

Joseph Takahashi discovered the first mammalian circadian clock mutation using mice in 1994. However, recent studies show that deletion of Clock does not lead to a behavioral phenotype (the animals still have normal circadian rhythms), which calls into question its importance in rhythm generation. Konopka, Jeffrey Hall, Michael Rosbash and their team showed that the per locus is the center of the circadian rhythm, and that loss of per stops circadian activity. At the same time, Michael W. Young's team reported similar effects of per, and that the gene covers a 7.1-kilobase (kb) interval on the X chromosome and encodes a 4.5-kb poly(A)+ RNA. They went on to discover the key genes and neurons of the Drosophila circadian system, for which Hall, Rosbash and Young received the 2017 Nobel Prize in Physiology or Medicine. Sources: nih.gov; Wikipedia

 

Migraines

The Head Ache, George Cruikshank (1819); Painting credit: George Cruikshank – http://metmuseum.org/art/collection/search/393320, Public Domain, https://commons.wikimedia.org/w/index.php?curid=251827

 

An early description consistent with migraines is contained in the Ebers papyrus, written around 1500 BCE in ancient Egypt. In 200 BCE, writings from the Hippocratic school of medicine described the visual aura that can precede the headache and a partial relief occurring through vomiting. A second-century description by Aretaeus of Cappadocia divided headaches into three types: cephalalgia, cephalea, and heterocrania. Galen of Pergamon used the term hemicrania (half-head), from which the word migraine was eventually derived. He also proposed that the pain arose from the meninges and blood vessels of the head. Migraines were first divided into the two types now used – migraine with aura (migraine ophthalmique) and migraine without aura (migraine vulgaire) – in 1887 by Louis Hyacinthe Thomas, a French librarian.

 

Trepanation, the deliberate drilling of holes into a skull, was practiced as early as 7,000 BCE. While some people survived, many would have died from the procedure due to infection. It was believed to work via "letting evil spirits escape". William Harvey recommended trepanation as a treatment for migraines in the 17th century. While many treatments for migraines have been attempted, it was not until 1868 that a substance which eventually turned out to be effective came into use. This substance was the fungus ergot, from which ergotamine was isolated in 1918. Methysergide was developed in 1959, and the first triptan, sumatriptan, was developed in 1988. During the 20th century, with better study design, effective preventive measures were found and confirmed.

 

Reminiscences of Oliver Sacks MD

 

Oliver Sacks MD (1933 – 2015); Photo credit: Luigi Novi / Wikimedia Commons, CC BY 3.0, https://commons.wikimedia.org/w/index.php?curid=7815388

 

 

I have had migraines for most of my life; the first attack I remember occurred when I was 3 or 4 years old. I was playing in the garden when a brilliant, shimmering light appeared to my left – dazzlingly bright, almost as bright as the sun. It expanded, becoming an enormous shimmering semicircle stretching from the ground to the sky, with sharp zigzagging borders and brilliant blue and orange colors. Then, behind the brightness, came a blindness, an emptiness in my field of vision, and soon I could see almost nothing on my left side. I was terrified – what was happening? My sight returned to normal in a few minutes, but these were the longest minutes I had ever experienced. I told my mother what had happened, and she explained to me that what I had had was a migraine – she was a doctor, and she, too, was a migraineur. It was a “visual migraine,“ she said, or a migraine “aura.“ The zigzag shape, she would later tell me, resembled that of medieval forts, and was sometimes called a “fortification pattern.“ Many people, she explained, would get a terrible headache after seeing such a “fortification“ – but, if I were lucky, I would be one of those who got only the aura, without the headache. I was lucky here, and lucky, too, to have a mother who could reassure me that everything would be back to normal within a few minutes, and with whom, as I got older, I could share my migraine experiences. She explained that auras like mine were due to a sort of disturbance like a wave passing across the visual parts of the brain. A similar “wave“ might pass over other parts of the brain, she said, so one might get a strange feeling on one side of the body, or experience a funny smell, or find oneself temporarily unable to speak. A migraine might affect one’s perception of color, or depth, or movement, might make the whole visual world unintelligible for a few minutes. Then, if one were unlucky, the rest of the migraine might follow: violent headaches, often on one side, vomiting, painful sensitivity to light and noise, abdominal disturbances, and a host of other symptoms.

Source: Patterns by Oliver Sacks MD

 

On 30 September 1858, Sir John Herschel (1792-1871), mathematician, astronomer, chemist and photographer, gave a lecture on 'Sensorial Vision' to the gathered members of the Leeds Philosophical Society. He told how one morning, at his breakfast table, he had watched a 'singular shadowy appearance' at the outside corner of his left field of vision. The pattern appeared 'in straight-lined angular forms, very much in general aspect like the drawing of a fortification, with salient and re-entering angles, bastions and ravelins, with some suspicion of faint lines of color between the dark lines'. Herschel was not alone in publicly recording such a personal experience. In 1865, David Brewster, natural philosopher and inventor of the kaleidoscope, discussed his experiences of ocular spectra, this time in the Philosophical Magazine as a way to contribute to theories about the structure of the optic nerve. Later the same year, the Astronomer Royal, George Biddell Airy, also published an account of his periodic attacks of hemiopsy. In these papers, these men of science did not tend to dwell on other forms of pain or suffering associated with their visual disturbance. Brewster, for example, noted that his attacks 'were never accompanied either with headache or gastric disturbance' while Airy, too, observed that 'in general, I feel no further inconvenience from it', although his friends found their attacks 'followed by oppressive head-ache'. These men were less interested in illness per se than they were in using their personal experiences to advance discussions about optics, vision and light that got to the heart of their beliefs about scientific authority, and indeed the very possibilities of seeing accurately and objectively with the naked eye. Elizabeth Green Musselman has written in detail about the ill health suffered by nineteenth-century men and women of science. She argues that they found meaning in nervous illnesses and failings such as hemiopsy, color blindness and hallucinations by applying their ideas about refined, well-managed, efficient nervous systems to science and society in general. While these discussions about 'half-blindness' contributed to contemporary scientific questions, their public airing also contributed to making these transitory visual attacks socially acceptable, even desirable.

 

In a subsequent essay in the Philosophical Transactions of the Royal Society of London (1870), Hubert Airy, the physician son of the Astronomer Royal, drew together these accounts to propose that all of them (including himself) had experienced a phenomenon that he termed 'transient teichopsia'. Hubert did admit to suffering from terrible headache after the 'blindness', but did not consider that these visual phenomena were 'merely' a disease: such disorders would be 'hardly deserving of the attention of scientific men'. Rather, these functional disturbances should be regarded as 'a veritable "Photograph" of a morbid process going on in the brain', in which case, he thought, 'their interest and importance cannot be too strongly insisted upon'. Airy accompanied his paper with arguably some of the most beautiful imagery in the history of medicine: a series of sketches illustrating his own experiences of the disturbance across his visual field. With 'changing gleams' of red, blue, yellow, green and orange, at its height the vision 'seemed like a fortified town with bastions all round it'. The whole experience lasted half an hour. Despite the variety of words these men employed to discuss hemiopsy, it is important to emphasize that they made virtually no association between their discussions and any of the contemporary terms for migraine (including megrim, hemicrania, sick, nervous or bilious headache). A young Cambridge physician named Edward Liveing was in the audience for Airy's presentation of his paper to the Royal Society. He was impressed by Airy's careful observation and minute descriptions, as well as his 'excellent' drawings of the spectral appearances. For several years, Liveing had been collecting his own information and patient case notes about a group of ailments including sick, blind and bilious headaches, as well as hemicrania and hemiopsy (Liveing used the term 'hemiopia'). Liveing believed that these were all closely related. This was not an entirely new argument, but Liveing felt that English physicians needed to better understand these disorders as a family in order to catch up with the more comprehensive knowledge enjoyed by their French and German peers. Liveing brought these disorders under the umbrella term of megrim, which he explained was part of a larger family of functional disorders that included epilepsy, asthma and angina pectoris, all of which were characterized by paroxysms or fits. Liveing's book On Megrim, Sick Headache and Some Allied Disorders (1873) is now often considered to be a founding contribution to modern understandings of migraine in the English language. Memorably, Liveing proposed that attacks of megrim in all their manifestations were the result of an event in the body analogous to 'nerve-storm': a periodic dispersion of accumulated nervous energy. In the same year, another Cambridge physician, Peter W. Latham, published a second short book of two lectures On Nervous or Sick-Headache. In contrast to Liveing's nerve-storm theory, Latham explained the cause of migraine's visual aura as a contraction of the blood vessels of the brain, diminishing the blood supply and disturbing vision, followed by the vessels' 'dilatation' to bring on headache. Despite their theoretical differences, both books dedicated substantial space to reprinting the discussions that men of science had been having about visual disorders.
In so doing, Liveing and Latham took migraine from the largely ignored realms of domestic recipe books, patent remedies and the classified pages of newspapers and infused it with a new authority, relevance and cachet through a firm connection with vision. These discussions were highly gendered and classed. 'Perhaps in a University town', Latham commented, 'it may be more prevalent among males than in other places'. Liveing also explained the disorder as a result of strained intellectual faculties, attacking over-worked students, 'literary men' or men who entered 'the more serious business of life'. Women were not exempt, but by contrast to men, Liveing's kind of megrim seemed to attack seamstresses or 'poor women exhausted from over-suckling'. Throughout the century, medical men had identified lower-class working women, their bodies and nerves broken down by exhaustion, overwork and poor diet, as the main sufferers of migrainous headaches, but megrim's emerging social visibility and acceptability in the late nineteenth century derived in large part from the central role that new theories about nerves gave to the personal testimony of scientific men, not to the experiences of seamstresses and nurses. In the remaining decades of the nineteenth century physicians from Britain and beyond continued to elaborate their own experiences of migraine in medical journals on both sides of the Atlantic. Sources: nih.gov; Wikipedia

 

Virginia Apgar

Dr. Virginia Apgar Photograph from Public Information Department, The National Foundation (the March of Dimes). Forms part of New York World-Telegram and the Sun Newspaper Photograph Collection (Library of Congress). Photo credit: March of Dimes – Library of Congress, Public Domain, https://commons.wikimedia.org/w/index.php?curid=43770603

 

Virginia Apgar (June 7, 1909 – August 7, 1974) was an American obstetrical anesthesiologist, best known as the inventor of the Apgar score, a way to quickly assess the health of a newborn child immediately after birth. She was a leader in the fields of anesthesiology and teratology, and introduced obstetrical considerations to the established field of neonatology. The youngest of three children, Apgar was born and raised in Westfield, New Jersey to a musical family, the daughter of Helen May (Clarke) and Charles Emory Apgar. Her father was an insurance executive, and also an amateur inventor and astronomer. She graduated from Westfield High School in 1925, knowing that she wanted to be a doctor.

 

Apgar graduated from Mount Holyoke College in 1929, where she studied zoology with minors in physiology and chemistry. In 1933, she graduated fourth in her class from Columbia University College of Physicians and Surgeons (P&S) and completed a residency in surgery at P&S in 1937. She was discouraged by Dr. Allen Whipple, the chairman of surgery at Columbia-Presbyterian Medical Center, from continuing her career as a surgeon because he had seen many women attempt to be successful surgeons and ultimately fail. He instead encouraged her to practice anesthesiology because he felt that advancements in anesthesia were needed to further advance surgery and felt that she had the “energy and ability“ to make a significant contribution. Deciding to continue her career in anesthesiology, she trained for six months under Dr. Ralph Waters at the University of Wisconsin-Madison, where he had established the first anesthesiology department in the United States. She then studied for a further six months under Dr. Ernest Rovenstine in New York at Bellevue Hospital. She received a certification as an anesthesiologist in 1937, and returned to P&S in 1938 as director of the newly formed division of anesthesia. She later received a Master’s Degree in Public Health at Johns Hopkins School of Hygiene and Public Health, graduating in 1959.

 

As the first woman to head a specialty division at Columbia-Presbyterian Medical Center (now New York-Presbyterian Hospital) and Columbia University College of Physicians and Surgeons, Apgar faced many obstacles. In conjunction with Dr. Allen Whipple, she started P&S’s anesthesia division. Apgar was placed in charge of the division’s administrative duties and was also tasked with coordinating the staffing of the division and its work throughout the hospital. Throughout much of the 1940s, she was an administrator, teacher, recruiter, coordinator and practicing physician. It was often difficult to find residents for the program, as anesthesiology had only recently been converted from a nursing specialty to a physician specialty. New anesthesiologists also faced scrutiny from other physicians, specifically surgeons, who were not used to having an anesthesia-specialized MD in the operating room. These difficulties led to issues in gaining funding and support for the division. With America’s entrance into World War II in 1941, many medical professionals enlisted in the military to help the war effort, which created a serious staffing problem for domestic hospitals, including Apgar’s division. When the war ended in 1945, interest in anesthesiology was renewed in returning physicians, and the staffing problem for Apgar’s division was quickly resolved. The specialty’s growing popularity and Apgar’s development of its residency program prompted P&S to establish it as an official department in 1949. Due to her lack of research, Apgar was not made the head of the department as was expected and the job was given to her colleague, Dr. Emmanuel Papper. Apgar was given a faculty position at P&S. In 1949, Apgar became the first woman to become a full professor at P&S, where she remained until 1959. During this time, she also did clinical and research work at the affiliated Sloane Hospital for Women, still a division of New York-Presbyterian Hospital. In 1953, she introduced the first test, called the Apgar score, to assess the health of newborn babies.

 

Between the 1930s and the 1950s, the United States infant mortality rate decreased, but the number of infant deaths within the first 24 hours after birth remained constant. Apgar noticed this trend and began to investigate methods for decreasing the infant mortality rate specifically within the first 24 hours of the infant's life. As an obstetric anesthesiologist, Apgar was able to document trends that could distinguish healthy infants from infants in trouble. This investigation led to a standardized scoring system used to assess a newborn's health after birth, with the result referred to as the newborn's "Apgar score". Each newborn is given a score of 0, 1, or 2 (a score of 2 meaning the newborn is in optimal condition, 0 being in distress) in each of the following categories: heart rate, respiration, color, muscle tone, and reflex irritability. Compiled scores for each newborn can range between 0 and 10, with 10 being the best possible condition for a newborn. The scores were to be given to a newborn one minute after birth, and additional scores could be given in five-minute increments to guide treatment if the newborn's condition did not sufficiently improve. By the 1960s, many hospitals in the United States were using the Apgar score consistently. Entering the 21st century, the score continues to provide an accepted and convenient method for reporting the status of the newborn infant immediately after birth.
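
Because the scoring arithmetic is simple, a minimal illustration may help. The Python sketch below (the apgar_score helper and the category names are hypothetical labels chosen for this example, not a clinical tool) simply totals the five 0-2 component scores described above:

```python
# Illustrative sketch of the Apgar scoring arithmetic described above.
# Function and category names are hypothetical; this is not a clinical tool.

CATEGORIES = ("heart_rate", "respiration", "color", "muscle_tone", "reflex_irritability")

def apgar_score(components):
    """Sum the five component scores (each 0, 1, or 2) into a total of 0-10."""
    total = 0
    for name in CATEGORIES:
        value = components[name]
        if value not in (0, 1, 2):
            raise ValueError(f"{name} must be 0, 1, or 2, got {value}")
        total += value
    return total

# Example: a newborn assessed one minute after birth.
one_minute = {"heart_rate": 2, "respiration": 1, "color": 1,
              "muscle_tone": 2, "reflex_irritability": 2}
print(apgar_score(one_minute))  # prints 8; scoring may be repeated at five-minute intervals
```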

 

In 1959, Apgar left Columbia and earned a Master of Public Health degree from the Johns Hopkins School of Hygiene and Public Health. From 1959 until her death in 1974, Apgar worked for the March of Dimes Foundation, serving as vice president for Medical Affairs and directing its research program to prevent and treat birth defects. As gestational age is directly related to an infant’s Apgar score, Apgar was one of the first at the March of Dimes to bring attention to the problem of premature birth, now one of the March of Dimes’ top priorities. During this time, she wrote and lectured extensively, authoring articles in popular magazines as well as research work. In 1967, Apgar became vice president and director of basic research at The National Foundation-March of Dimes.

 

During the rubella pandemic of 1964-65, Apgar became an advocate for universal vaccination to prevent mother-to-child transmission of rubella. Rubella can cause serious congenital disorders if a woman becomes infected while pregnant. Between 1964 and 1965, the United States had an estimated 12.5 million rubella cases, which led to 11,000 miscarriages or therapeutic abortions and 20,000 cases of congenital rubella syndrome. These led to 2,100 deaths in infancy, 12,000 cases of deafness, 3,580 cases of blindness due to cataracts and/or microphthalmia, and 1,800 cases of intellectual disability. In New York City alone, congenital rubella affected 1% of all babies born at that time. Apgar also promoted effective use of Rh testing, which can identify women who are at risk for transmission of maternal antibodies across the placenta where they may subsequently bind with and destroy fetal red blood cells, resulting in fetal hydrops or even miscarriage.

 

Apgar traveled thousands of miles each year to speak to widely varied audiences about the importance of early detection of birth defects and the need for more research in this area. She proved an excellent ambassador for the National Foundation, and the annual income of that organization more than doubled during her tenure there. She also served the National Foundation as Director of Basic Medical Research (1967-1968) and Vice-President for Medical Affairs (1971-1974). Her concerns for the welfare of children and families were combined with her talent for teaching in the 1972 book Is My Baby All Right?, written with Joan Beck. Apgar was also a lecturer (1965-1971) and then clinical professor (1971-1974) of pediatrics at Cornell University School of Medicine, where she taught teratology (the study of birth defects). She was the first to hold a faculty position in this new area of pediatrics. In 1973, she was appointed a lecturer in medical genetics at the Johns Hopkins School of Public Health.

 

Apgar published over sixty scientific articles and numerous shorter essays for newspapers and magazines during her career, along with her book, Is My Baby All Right? She received many awards, including honorary doctorates from the Woman’s Medical College of Pennsylvania (1964) and Mount Holyoke College (1965), the Elizabeth Blackwell Award from the American Medical Women’s Association (1966), the Distinguished Service Award from the American Society of Anesthesiologists (1966), the Alumni Gold Medal for Distinguished Achievement from Columbia University College of Physicians and Surgeons (1973), and the Ralph M. Waters Award from the American Society of Anesthesiologists (1973). In 1973 she was also elected Woman of the Year in Science by the Ladies Home Journal. Apgar was equally at home speaking to teens as she was to the movers and shakers of society. She spoke at March of Dimes Youth Conferences about teen pregnancy and congenital disorders at a time when these topics were considered taboo.

 

Throughout her career, Apgar maintained that “women are liberated from the time they leave the womb“ and that being female had not imposed significant limitations on her medical career. She avoided women’s organizations and causes, for the most part. Though she sometimes privately expressed her frustration with gender inequalities (especially in the matter of salaries), she worked around these by consistently pushing into new fields where there was room to exercise her considerable energy and abilities. Apgar never married nor had children, and died of cirrhosis on August 7, 1974, at Columbia-Presbyterian Medical Center. She is buried at Fairview Cemetery in Westfield.

 

Music was an integral part of family life, with frequent family music sessions. Apgar played the violin and her brother played piano and organ. She traveled with her violin, often playing in amateur chamber quartets wherever she happened to be. During the 1950s a friend introduced her to instrument-making, and together they made two violins, a viola, and a cello. She was an enthusiastic gardener, and enjoyed fly-fishing, golfing, and stamp collecting. In her fifties, Apgar started taking flying lessons, stating that her goal was to someday fly under New York’s George Washington Bridge.

 

Apgar has continued to earn posthumous recognition for her contributions and achievements. In 1994, she was honored by the United States Postal Service with a 20 cent Great Americans series postage stamp. In November 1995 she was inducted into the National Women’s Hall of Fame in Seneca Falls, New York. In 1999, she was designated a Women’s History Month Honoree by the National Women’s History Project. On June 7, 2018, Google celebrated Apgar’s 109th birthday with a Google Doodle.

 

Tears

The Crying Boy, by the Italian painter Giovanni Bragolin

Source: Fair use, https://en.wikipedia.org/w/index.php?curid=53447130

 

The original Angel of Grief in Rome, an 1894 sculpture by William Wetmore Story which serves as the grave stone of the artist and his wife Emelyn at the Protestant Cemetery, Rome. Photo credit: Lucius, Wikimedia Commons; released into the public domain worldwide by the copyright holder.

 

Angel of Grief or the Weeping Angel is an 1894 sculpture by William Wetmore Story for the grave of his wife Emelyn Story at the Protestant Cemetery in Rome. Its full title, bestowed by the creator, was The Angel of Grief Weeping Over the Dismantled Altar of Life. This was Story's last major work prior to his death, which came one year after his wife's. The statue's creation was documented in an 1896 issue of Cosmopolitan Magazine: according to this account, his wife's death so devastated Story that he lost interest in sculpture but was inspired to create the monument by his children, who recommended it as a means of memorializing the woman. Unlike the typical angelic grave art, "this dramatic life-size winged figure speaks more of the pain of those left behind" by appearing "collapsed, weeping and draped over the tomb." The term is now used to describe multiple grave stones throughout the world erected in the style of the Story stone. A feature in The Guardian called the design "one of the most copied images in the world". Story himself wrote that "It represents the angel of Grief, in utter abandonment, throwing herself with drooping wings and hidden face over a funeral altar. It represents what I feel. It represents Prostration. Yet to do it helps me."

 

Humans are the only living creatures that weep.

In Hippocratic and medieval medicine, tears were associated with the bodily humors, and crying was seen as purgation of excess humors from the brain. William James thought of emotions as reflexes prior to rational thought, believing that the physiological response, as if to stress or irritation, is a precondition to cognitively becoming aware of emotions such as fear or anger. This connection between weeping and excretion was common in Europe in 1586, when the English clergyman and physician Timothie Bright wrote an influential Treatise of Melancholie, whose many readers probably included Shakespeare, which described tears as a 'kinde of excrement not much unlike' urine. In a poem called 'A Lady Who P-st at the Tragedy of Cato', Alexander Pope lampooned Joseph Addison's celebrated play, Cato: A Tragedy (1712), by describing a woman who responds to the drama with copious urine rather than the expected tears:

 

While maudlin Whigs deplor’d their Cato’s Fate,

Still with dry Eyes the Tory Celia sate,

But while her Pride forbids her Tears to flow,

The gushing Waters find a Vent below:

Tho’ secret, yet with copious Grief she mourns,

Like twenty River-Gods with all their Urns.

Let others screw their Hypocritick Face,

She shews her Grief in a sincerer Place;

There Nature reigns, and Passion void of Art,

For that Road leads directly to the Heart.

 

This old idea has been reinforced by modern science in the last century and a half. In recent decades, the most widely quoted theorist of tears has been the American biochemist William H Frey II who, since the 1980s, has been arguing that the metaphor of weeping as excretion should be taken quite literally. In an interview with The New York Times in 1982, Frey claimed that crying is 'an exocrine process' which, 'like exhaling, urinating, defecating and sweating', releases toxic substances from the body – in this case, so-called 'stress hormones'.

 

An anonymous British pamphlet from 1755, Man: A Paper for Ennobling the Species, proposed a number of ideas for human improvement, and among them was the idea that something called “moral weeping“ would help:

 

We may properly distinguish weeping into two general kinds, genuine and counterfeit; or into physical crying and moral weeping. Physical crying, while there are no real corresponding ideas in the mind, nor any genuine sentimental feeling of the heart to produce it, depends upon the mechanism of the body: but moral weeping proceeds from, and is always attended with, such real sentiments of the mind, and feeling of the heart, as do honour to human nature; which false crying always debases.

 

In his "Confessions," St. Augustine implored God to explain "why weeping is sweet to the miserable." Religious traditions honor the gift of tears and have found ways to ritualize it. During the Passover Seder, when Jews remember their escape from Egypt, they bring salt water to their lips to symbolize the tears of bondage. In ancient times, when a person died, mourners put their tears in bottles and sometimes even wore them around their necks. Over the ages, the weeping of tears has been a sign of the mystical experiences of saints and repentant sinners. These transcendent moments go beyond what the mind can comprehend; tears are a response of the heart.

Charles Darwin, in "The Expression of the Emotions in Man and Animals" (1872), speculated that we weep in times of mental distress because of an accidental mechanical relationship between the act of squalling and the production of tears. When an infant wails, Darwin argued, it causes such an engorgement of blood vessels and general pressure around the eye that the lachrymal glands are affected "by reflex action." Several millennia's worth of babies later, "it has come to pass that suffering readily causes the secretion of tears, without being necessarily accompanied by any other action." Though Darwin was aware that emotional weeping produces a sense of solace, he believed the tears themselves to be "an incidental result; purposeless." Darwin noted that tears could not be neatly associated with any single kind of mental state. They can be secreted 'in sufficient abundance to roll down the cheeks', he wrote, 'under the most opposite emotions, and under no emotion at all'. A tear on its own means nothing. A tear shed in a particular mental, social, and narrative context can mean anything. 'Tears, idle tears,' wrote the English poet Alfred Tennyson, 'I know not what they mean.' Yet he, and we, continue to feel compelled to interpret them, to try to distil their meaning. The intellectual climate was not conducive to further inquiry. Crying was, at least in part, a social behavior; and according to prevailing notions of cultural relativism then in vogue, it was therefore a matter of social conditioning. "One weeps," wrote French sociologist Emile Durkheim in 1915, "not simply because he is sad but because he is forced to weep. It is a ritual attitude he is forced to adopt out of respect for custom."

 

There have been many attempts to differentiate between the two distinct types of crying: positive and negative. Different perspectives have been broken down into three dimensions to examine the emotions being felt and also to grasp the contrast between the two types. The spatial perspective explains sad crying as reaching out to be "there", such as at home or with a person who may have just died. In contrast, joyful crying is acknowledging being "here"; it emphasizes the intense awareness of one's location, such as at a relative's wedding. The temporal perspective explains crying slightly differently: sorrowful crying is due to looking to the past with regret or to the future with dread. This frames crying as a result of losing someone and regretting not spending more time with them, or of being nervous about an upcoming event. Crying as a result of happiness would then be a response to a moment as if it were eternal; the person is frozen in a blissful, immortalized present. The last dimension is known as the public-private perspective. This describes the two types of crying as ways to imply details about the self as known privately or about one's public identity. For example, crying due to a loss is a message to the outside world that pleads for help with coping with internal sufferings. Or, as Arthur Schopenhauer suggested, sorrowful crying is a method of self-pity or self-regard, a way one comforts oneself. Joyful crying, in contrast, is in recognition of beauty, glory, or wonderfulness.

 

Anthropologist Ashley Montagu found such explanations inadequate. As for Darwin’s squeeze-reflex theory, he wrote in 1959, the same evolutionary outcome “might have occurred in any number of other species possessing the necessary lacrimal and orbicular muscles. How, then, has it come about that weeping occurs in man alone?“ Montagu noted that “as is well known, human infants do not usually cry with tears until they are about six weeks of age. Weeping, then, would appear to be both phylogenetically and ontogenetically a late development in the human species“ — that is, it came about as late in our evolution as a species as it does in each individual’s growth. (Though well before laughter, which arrives at about five months.) That timing suggested that weeping was somehow an adaptive trait. Working from Darwin’s own notion of natural selection, Montagu then postulated an evolutionary argument: We cry now because our ancient forebears tended to live longer the more abundantly they wept. Babies breathe heavily when they cry, Montagu argued, and consequently “even a short session of tearless crying in a young infant is likely to dry out the mucous membranes of the nose and throat, rendering the child vulnerable to the invasion of harmful bacteria and, probably, viruses.“

 

Tears, however, contain an enzyme called lysozyme (discovered by Alexander Fleming in 1922), which within five or ten minutes will destroy the cell walls of as much as 95% of those bacteria. And thanks to an intricate plumbing scheme, the liquid drains directly onto the imperiled membranes: from the glands under the upper eyelid, down into the canal at the inside corner of the eye, and thence into the nasolacrimal duct, which empties into the nasal cavity. Thus, Montagu argued, those primordial infants who were least able to produce tears would have been the most prone to infection and early death, and therefore the least likely to pass on their genetic characteristics, leaving "the perpetuation of the species increasingly to those who could weep." Biochemist William H. Frey PhD, director of the Dry Eye and Tear Research Center at the St. Paul-Ramsey Medical Center, first began studying tears in the 1980s. When he came across Montagu's explanation, he wondered: If emotional weeping serves such an essential life-sustaining purpose, then "why hasn't nature provided this protection during the first critical days and weeks of life?" In addition, how would one explain the fact that humans frequently cry without an increase in breathing rate? Or that tears customarily precede gasping or sobbing? Frey had been intrigued by stress researcher Hans Selye's notion of homeostasis: the process whereby the body attempts to maintain an internal biochemical equilibrium in the face of disruptive stimuli and hostile shocks. Might not tears serve this purpose by purging certain chemicals produced by emotional stress? In 1959, psychiatrist Thomas Szasz MD postulated that weeping represented an unconscious regression to the prenatal state in which the body is bathed in amniotic fluid. Weeping, then, was a regressive fantasy of return to the saline wetness of the womb.

 

Frey's analyses showed that emotional tears were chemically different from non-emotional tears (say, from chopping an onion), containing 21% more protein, among other substances. Since then, research has concentrated on three of them: leucine-enkephalin, a brain chemical of the family called endorphins, which are thought to affect pain sensations; a pituitary hormone known as ACTH; and another pituitary hormone, prolactin, which stimulates milk production in mammals. The implication was that when you cry for emotional reasons, you are involved in a healing process. Tears may thus serve a therapeutic role, though researchers say the supposedly cathartic role of "a good cry" has been overstated.

 

The inability to cry:

Psychologists have also gleaned new insights into people who can't produce tears at all – either emotional or the basal tears that keep eyes lubricated. Some say that ophthalmologists have typically treated 'dry eye' as a medical issue, completely missing the fact that emotional communication is impaired when you lack tears. Patients with Sjogren's syndrome, for example, have great difficulty producing tears. A study found that 22% of patients with the syndrome had significantly more difficulty identifying their own feelings than control participants did.


Sources: http://www.apa.org; www.washingtonpost.com; https://aeon.com;

www.Nih.gov; Wikipedia

 

For our readers:

 

For your listening pleasure, a work of pure brilliance:

 

An artist, beyond genius, enabling the mysteries of the universe to become manifest in glorious music and, through us, the listeners, to complete a circuit still not explained by neuroscience.

 

W.A. Mozart: Requiem, Lacrimosa

Full Requiem: Mozart, REQUIEM KV 626, conducted by Leonard Bernstein in Salzburg, Austria

David Garrett, violin version – Lacrimosa

Mozart, Lacrimosa, Organ

Oskar Hertwig, Developmental Biologist

Oskar Hertwig: Photo credit: Erik Nordenskiold, The history of biology: a survey. Knopf, New York, 1935, S. 594. Online: archive.org, Public Domain, https://commons.wikimedia.org/w/index.php?curid=1020140

 

Oscar Hertwig (21 April 1849 – 25 October 1922) was a German zoologist and professor, who also wrote about the theory of evolution circa 1916, over 55 years after the publication of Charles Darwin's The Origin of Species. He was the elder brother of zoologist-professor Richard Hertwig (1850-1937). The Hertwig brothers were the most eminent scholars of Ernst Haeckel (and Carl Gegenbaur) from the University of Jena. They were independent of Haeckel's philosophical speculations but took his ideas in a positive way to widen their concepts in zoology. Initially, between 1879 and 1883, they performed embryological studies, especially on the theory of the coelom (1881), the fluid-filled body cavity. These problems were based on the phylogenetic theorems of Haeckel, i.e. the biogenetic law (German: biogenetisches Grundgesetz), and the "gastraea theory".

 

Within 10 years, the two brothers moved apart to the north and south of Germany. Oscar Hertwig became a professor of anatomy in Berlin in 1888; Richard Hertwig had moved three years earlier, becoming a professor of zoology at Ludwig Maximilians Universität in Munich, where he served from 1885 to 1925, the last 40 years of a 50-year career as a professor at four universities. Richard's research focused on protists (the relationship between the nucleus and the plasm = "Kern-Plasma-Relation"), as well as on developmental physiological studies on sea urchins and frogs. He also wrote a leading zoology textbook and discovered mitosis and meiosis.

 

Oscar Hertwig was a leader in the field of comparative and causal animal-developmental history. He also wrote a leading textbook. By studying sea urchins he proved that fertilization occurs due to the fusion of a sperm and egg cell. He recognized the role of the cell nucleus during inheritance and chromosome reduction during meiosis: in 1876, he published his findings that fertilization includes the penetration of a spermatozoon into an egg cell. Oscar Hertwig's experiments with frog eggs revealed the 'long axis rule', or Hertwig rule, according to which a cell divides along its long axis (1884). In 1885 Oscar wrote that nuclein (later called nucleic acid) is the substance responsible not only for fertilization but also for the transmission of hereditary characteristics. This early suggestion was proven correct much later, in 1944, by the Avery–MacLeod–McCarty experiment, which showed that this is indeed the role of the nucleic acid DNA. While Oscar was interested in developmental biology, he was opposed to chance as assumed in Charles Darwin's theory. His most important theoretical book was "Das Werden der Organismen, eine Widerlegung der Darwinschen Zufallslehre" (Jena, 1916) (translation: "The Origin of Organisms – a Refutation of Darwin's Theory of Chance").

 

Hertwig was elected a member of the Royal Swedish Academy of Sciences in 1903. Oscar Hertwig appears as "Oscar Hedwig" in the book "Who Discovered What When" by David Ellyard. A history of the discovery of fertilization in mammals, including the work of Hertwig and other scientists, is given in the book "The Mammalian Egg" by Austin.

Patricia Bath MD, Inventor (1942 to Present)

Patricia Bath MD – Inventor of Laserphaco Probe – Photo credit: National Library of Medicine; www.nlm.nih.gov/changingthefaceofmedicine; Public Domain, Wikipedia Commons

 

Patricia Era Bath is an American ophthalmologist, inventor, and academic. She broke ground for women and African Americans in a number of areas. Bath was the first African American to serve as a resident in ophthalmology at New York University. She is also the first African American woman to serve on staff as a surgeon at the UCLA Medical Center. And finally, Bath is the first African-American woman physician to receive a patent for a medical purpose. The holder of four patents, she also founded the non-profit American Institute for the Prevention of Blindness in Washington, D.C.

 

Dr. Bath was born on November 4, 1942, in Harlem, Manhattan. Her father, an immigrant from Trinidad, was a newspaper columnist, a merchant seaman and the first African American to work for the New York City Subway as a motorman. Her father inspired her love for culture and encouraged Bath to explore different cultures. Her mother descended from African slaves. Bath's teachers saw that she was a gifted student and pushed her to explore her strengths in school, particularly in science. With the help of a microscope set she was given as a young child, Bath knew she had a love for math and science. Bath attended Charles Evans Hughes High School, where she excelled at such a rapid pace that she obtained her diploma in just two and a half years.

 

Inspired by Albert Schweitzer's work in medicine, Bath applied for and won a National Science Foundation Scholarship while attending high school; this led her to a research project at Yeshiva University and Harlem Hospital Center on the connection between cancer, nutrition and stress, which helped shift her interest from science to medicine. The head of the research program recognized the significance of her findings and incorporated them into a scientific paper that he later presented. In 1960, still a teenager, Bath won the "Merit Award" of Mademoiselle magazine for her contribution to the project. Bath received her Bachelor of Arts in chemistry from Manhattan's Hunter College in 1964 and relocated to Washington, D.C. to attend Howard University College of Medicine, where she received her doctoral degree in 1968. During her time at Howard, she was President of the Student National Medical Association and received fellowships from the National Institutes of Health and the National Institute of Mental Health.

 

Bath interned at Harlem Hospital Center, subsequently serving as a fellow at Columbia University. Bath traveled to Yugoslavia in 1967 to study children's health, which made her aware that the practice of eye care was uneven among racial minorities and poor populations, with a much higher incidence of blindness among her African American and poor patients. She determined that, as a physician, she would help address this issue. It had also not been easy for her to attend medical school, since her family did not have the funds for it. She persuaded her professors from Columbia to operate at no cost on blind patients at Harlem Hospital Center, which had not previously offered eye surgery. Bath pioneered the worldwide discipline of "community ophthalmology", a volunteer-based outreach to bring necessary eye care to underserved populations.

 

After completing her education, Bath served briefly as an assistant professor at the Jules Stein Eye Institute at UCLA and at Charles R. Drew University of Medicine and Science before becoming the first woman on faculty at the Eye Institute. In 1978, Bath co-founded the American Institute for the Prevention of Blindness, for which she served as president. In 1983, she became the head of a residency program in her field at Charles R. Drew, the first woman ever to head such a program. In 1993, she retired from UCLA, which subsequently elected her the first woman on its honorary staff. She served as a professor of Ophthalmology at Howard University's School of Medicine and as a professor of Telemedicine and Ophthalmology at St. George's University. She was among the co-founders of the King-Drew Medical Center ophthalmology training program.

 

In 1981, she conceived the Laserphaco Probe, a medical device that improves on the use of lasers to remove cataracts "for ablating and removing cataract lenses". The device was completed in 1986, after Bath conducted research on lasers in Berlin, and patented in 1988, making her the first African-American woman to receive a patent for a medical purpose. The device, which quickly and nearly painlessly dissolves the cataract with a laser, irrigates and cleans the eye and permits the easy insertion of a new lens, is used internationally to treat the condition. Bath has continued to improve the device and has successfully restored vision to people who had been unable to see for decades. Three of Bath's four patents relate to the Laserphaco Probe. In 2000, she was granted a patent for a method she devised for using ultrasound technology to treat cataracts. Bath has been honored by two of her universities. Hunter College placed her in its "hall of fame" in 1988 and Howard University declared her a "Howard University Pioneer in Academic Medicine" in 1993. A children's picture book on her life and science work, The Doctor with an Eye for Eyes: The Story of Dr. Patricia Bath (The Innovation Press, ISBN 9781943147311), was published in 2017 and was named to best-of-the-year children's book lists by both the National Science Teachers Association and the Chicago Public Library.

 

Cataract Surgery

A cataract surgery. Dictionnaire Universel de Medecine (1746-1748).

Graphic credit: Robert James (1703-1776); Wikipedia Commons; This work is in the public domain in its country of origin and other countries and areas where the copyright term is the author’s life plus 100 years or less. This file has been identified as being free of known restrictions under copyright law, including all related and neighboring rights.

 

 

Cataract surgery is one of the most frequently performed operations in the world. Recent advances in techniques and instrumentation have resulted in earlier intervention, improved surgical outcomes, and reduced dependence on spectacles.

 

The first record of cataract being surgically treated is by Susruta, who carried out the procedure in 600 BCE. Cataracts were treated using a technique known as couching, in which the opaque lens is pushed into the vitreous cavity to remove it from the visual axis. Couching is still performed in some parts of Africa and the Middle East. In 1753, Samuel Sharp performed the first intracapsular cataract extraction (ICCE) through a limbal incision. He used pressure from his thumb to extract the lens. In 1961, Polish surgeon Tadeusz Krwawicz developed a cryoprobe which could be used to grasp and extract cataracts during ICCE surgery. However, an aphakic spectacle correction was still required. When the first edition of the Community Eye Health Journal was published, ICCE was still the most widely practiced method of cataract extraction in low- and middle-income countries. However, in high-income countries, ICCE had been superseded by extracapsular surgery with an IOL implant.

 

Modern extracapsular cataract extraction (ECCE) gained acceptance in high-income countries after the introduction of operating microscopes during the 1970s and 1980s made it possible to perform microsurgery. The microscopes offered better intraocular visibility and the ability to safely place multiple corneal sutures. ECCE has the advantage of leaving the posterior capsule intact; this reduces the risk of potentially blinding complications and makes it possible to implant a lens in the posterior chamber. Phacoemulsification was introduced in 1967 by Dr Charles Kelman. Since then, there have been significant improvements in the fluidics, energy delivery, efficiency and safety of this procedure. Advantages include small incision size, faster recovery and a reduced risk of complications.

 

Manual small-incision cataract surgery (MSICS) is a small-incision form of ECCE with a self-sealing wound which is mainly used in low-resource settings. MSICS has several advantages over phacoemulsification, including shorter operative time, less need for technology and a lower cost. It is also very effective in dealing with advanced and hard cataracts. As with modern ECCE techniques, MSICS also allows for a lens to be implanted. A recent introduction is femtosecond laser-assisted cataract surgery, during which a laser is used to dissect tissue at a microscopic level. Initial results from the recent FEMCAT trial suggest little or no improvement in safety and accuracy compared to standard phacoemulsification, and the procedure brings with it new clinical and financial challenges. Today, although phacoemulsification is considered the gold standard for cataract removal in high-income countries, MSICS is hugely popular and practiced widely in many countries of the world because of its universal applicability, efficiency and low cost.

 

Over the three decades since the first issue of the Community Eye Health Journal was published, the availability of microsurgery and high-quality intraocular lenses (IOLs) at an acceptable cost has made a positive global impact on visual results after cataract surgery. IOLs can be placed in the anterior chamber or posterior chamber, or be supported by the iris. The preferred location is the posterior chamber, where the posterior chamber IOL (or PCIOL) is supported by the residual lens capsule. Sir Harold Ridley is credited with the first intraocular lens implantation in 1949, using a material known as PMMA (polymethyl methacrylate). Since then, numerous design and material modifications have been developed to make IOLs safer and more effective, and they have been in routine use in high-income countries since the 1980s. However, when the first edition of the CEHJ was published in 1988, an IOL cost approximately $200 and was far too expensive for widespread use in low- and middle-income countries. Thankfully, owing to the foresight and innovation of organizations such as the Fred Hollows Foundation and Aravind Eye Hospitals, IOLs are now produced at low cost in low- and middle-income countries and have become available to even the most disadvantaged patients.

 

With the introduction of the first multifocal and toric IOLs, the focus of IOL development has shifted toward improving refractive outcomes and reducing spectacle dependence. Toric lenses correct postoperative astigmatism, and multifocal lenses reduce dependency on spectacles for near vision. However, multifocal lenses may cause glare and reduced contrast sensitivity after surgery and should only be used in carefully selected patients. The accommodating lenses that are in current use are limited by their low and varied amplitude of accommodation. The light-adjustable lens is made of a photosensitive silicone material. Within two weeks of surgery, the residual refractive error (sphero-cylindrical errors as well as presbyopia) can be corrected by shining an ultraviolet light on the IOL through a dilated pupil to change the shape of the lens. Development of an intraocular lens (IOL) as a drug delivery device has been pursued for many years. Common postoperative conditions such as posterior capsular opacification (PCO), intraocular inflammation or endophthalmitis are potential therapeutic targets for a drug-eluting IOL.

Sources: British Council For Prevention of Blindness; Community Eye Health Journal is published by the International Centre for Eye Health, a research and education group based at the London School of Hygiene and Tropical Medicine (LSHTM), one of the leading Public Health training institutions in the world. Unless otherwise stated, all content is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License

 

Professor Thomas Hunt Morgan, Geneticist

Thomas Hunt Morgan (September 25, 1866 – December 4, 1945) was an American evolutionary biologist, geneticist, embryologist, and science author who won the Nobel Prize in Physiology or Medicine in 1933 for discoveries elucidating the role that the chromosome plays in heredity. Photo credit: Unknown – http://wwwihm.nlm.nih.gov/, Public Domain, https://commons.wikimedia.org/w/index.php?curid=549067; This image is one of several created for the 1891 Johns Hopkins yearbook.

 

Thomas Hunt Morgan received his Ph.D. from Johns Hopkins University in zoology in 1890. Following the rediscovery of Mendelian inheritance in 1900, Morgan began to study the genetic characteristics of the fruit fly Drosophila melanogaster. In his famous Fly Room at Columbia University, Morgan demonstrated that genes are carried on chromosomes and are the mechanical basis of heredity. These discoveries formed the basis of the modern science of genetics. As a result of his work, Drosophila became a major model organism in contemporary genetics. The Division of Biology which he established at the California Institute of Technology has produced seven Nobel Prize winners.

 

Morgan was born in Lexington, Kentucky, to Charlton Hunt Morgan and Ellen Key Howard Morgan. Part of a line of Southern planter elite on his father’s side, Morgan was a nephew of Confederate General John Hunt Morgan and his great-grandfather John Wesley Hunt had been the first millionaire west of the Allegheny Mountains. Through his mother, he was the great-grandson of Francis Scott Key, the author of the “Star Spangled Banner“, and John Eager Howard, governor and senator from Maryland. Beginning at age 16, Morgan attended the State College of Kentucky (now the University of Kentucky). He focused on science and particularly enjoyed natural history. He worked with the U.S. Geological Survey in his summers and graduated as valedictorian in 1886 with a BS degree. Following a summer at the Marine Biology School in Annisquam, Massachusetts, Morgan began graduate studies in zoology at Johns Hopkins University. After two years of experimental work with morphologist William Keith Brooks, Morgan received a master of science degree from the State College of Kentucky in 1888. The college offered Morgan a full professorship; however, he chose to stay at Johns Hopkins and was awarded a relatively large fellowship to help him fund his studies. Under Brooks, Morgan completed his thesis work on the embryology of sea spiders, to determine their phylogenetic relationship with other arthropods. He concluded that with respect to embryology, they were more closely related to spiders than crustaceans. Based on the publication of this work, Morgan was awarded his Ph.D. from Johns Hopkins in 1890, and was also awarded the Bruce Fellowship in Research. He used the fellowship to travel to Jamaica, the Bahamas and to Europe to conduct further research. Nearly every summer from 1890 to 1942, Morgan returned to the Marine Biological Laboratory to conduct research. He became very involved in governance of the institution, including serving as an MBL trustee from 1897 to 1945.

 

In 1890, Morgan was appointed associate professor (and head of the biology department) at Johns Hopkins’ sister school Bryn Mawr College. During the first few years at Bryn Mawr, he produced descriptive studies of sea acorns, ascidian worms and frogs. In 1894 Morgan was granted a year’s absence to conduct research in the laboratories of the Stazione Zoologica in Naples, where E. B. Wilson had worked two years earlier. There he worked with German biologist Hans Driesch, whose research in the experimental study of development piqued Morgan’s interest. Among other projects that year, Morgan completed an experimental study of ctenophores (commonly known as comb jellies), which live in marine waters worldwide. At the time, there was considerable scientific debate over the question of how an embryo developed. Following Wilhelm Roux’s mosaic theory of development, some believed that hereditary material was divided among embryonic cells, which were predestined to form particular parts of a mature organism. Driesch and others thought that development was due to epigenetic factors, whereby interactions between the protoplasm and the nucleus of the egg and the environment could affect development. Morgan was in the latter camp, and his work with Driesch demonstrated that blastomeres isolated from sea urchin and ctenophore eggs could develop into complete larvae, contrary to the predictions (and experimental evidence) of Roux’s supporters.

 

When Morgan returned to Bryn Mawr in 1895, he was promoted to full professor. Morgan’s main lines of experimental work involved regeneration and larval development; in each case, his goal was to distinguish internal and external causes to shed light on the Roux-Driesch debate. He wrote his first book, The Development of the Frog’s Egg (1897). He began a series of studies on different organisms’ ability to regenerate. He looked at grafting and regeneration in tadpoles, fish and earthworms; in 1901 he published his research as Regeneration. Beginning in 1900, Morgan started working on the problem of sex determination, which he had previously dismissed when Nettie Stevens discovered the impact of the Y chromosome on gender. He also continued to study the evolutionary problems that had been the focus of his earliest work. In 1904, E. B. Wilson invited Morgan to join him at Columbia University. This move freed him to focus fully on experimental work. When Morgan took the professorship in experimental zoology, he became increasingly focused on the mechanisms of heredity and evolution. He had published Evolution and Adaptation (1903); like many biologists at the time, he saw evidence for biological evolution (as in the common descent of similar species) but rejected Darwin’s proposed mechanism of natural selection acting on small, constantly produced variations. Embryological development posed an additional problem in Morgan’s view, as selection could not act on the early, incomplete stages of highly complex organs such as the eye. The common solution of the Lamarckian mechanism of inheritance of acquired characters, which featured prominently in Darwin’s theory, was increasingly rejected by biologists.

 

Around 1908 Morgan started working on the fruit fly Drosophila melanogaster and encouraged students to do so as well. In a typical Drosophila genetics experiment, male and female flies with known phenotypes are put in a jar to mate; the females must be virgins. Eggs are laid in porridge, which the larvae feed on; when the life cycle is complete, the progeny are scored for inheritance of the trait of interest. With Fernandus Payne, he mutated Drosophila through physical, chemical, and radiational means. Morgan began cross-breeding experiments to find heritable mutations, but they had no significant success for two years. Castle had also had difficulty identifying mutations in the tiny Drosophila flies. Finally, in 1909, a series of heritable mutants appeared, some of which displayed Mendelian inheritance patterns; in 1910 Morgan noticed a white-eyed mutant male among the red-eyed wild types. When white-eyed flies were bred with a red-eyed female, their progeny were all red-eyed. A second-generation cross produced white-eyed males – a gender-linked recessive trait, the gene for which Morgan named white. Morgan also discovered a pink-eyed mutant that showed a different pattern of inheritance. In a paper published in Science in 1911, he concluded that (1) some traits were gender-linked, (2) the trait was probably carried on one of the X or Y chromosomes, and (3) other genes were probably carried on specific chromosomes as well. Morgan proposed that the amount of crossing over between linked genes differs and that crossover frequency might indicate the distance separating genes on the chromosome. The English geneticist J. B. S. Haldane later suggested that the unit of measurement for linkage be called the morgan. Morgan’s student Alfred Sturtevant developed the first genetic map in 1913.

 

Morgan’s fly-room at Columbia became world-famous, and he found it easy to attract funding and visiting academics. In 1927, after 25 years at Columbia and nearing the age of retirement, he received an offer from George Ellery Hale to establish a school of biology in California. Morgan moved to California to head the Division of Biology at the California Institute of Technology in 1928. In 1933 Morgan was awarded the Nobel Prize in Physiology or Medicine. As an acknowledgement of the group nature of his discovery, he gave his prize money to Bridges’s, Sturtevant’s, and his own children. Morgan declined to attend the awards ceremony in 1933, instead attending in 1934. The 1933 rediscovery of the giant polytene chromosomes in the salivary gland of Drosophila may have influenced his choice: until that point, the lab’s results had been inferred from phenotypic observations, and the visible polytene chromosomes enabled the group to confirm those results on a physical basis. Morgan’s Nobel acceptance speech, entitled “The Contribution of Genetics to Physiology and Medicine“, downplayed the contribution genetics could make to medicine beyond genetic counselling. In 1939 he was awarded the Copley Medal by the Royal Society.

 

Morgan eventually retired in 1942, becoming professor and chairman emeritus. George Beadle returned to Caltech to replace Morgan as chairman of the department in 1946. Although he had retired, Morgan kept offices across the road from the Division and continued laboratory work. In his retirement, he returned to the questions of sexual differentiation, regeneration, and embryology. Morgan had suffered from a chronic duodenal ulcer throughout his life. In 1945, at age 79, he experienced a severe heart attack and died from a ruptured artery.

 

Below is Thomas Hunt Morgan’s Drosophila melanogaster genetic linkage map. This was the first successful gene-mapping work and provides important evidence for the chromosome theory of inheritance. The map shows the relative positions of allelic characteristics on the second Drosophila chromosome. The distance between genes (in map units) is equal to the percentage of crossing-over events that occurs between the different alleles.

 

This gene linkage map shows the relative positions of allelic characteristics on the second Drosophila chromosome; the alleles form a linkage group because of their tendency to be transmitted together into gametes. The diagram is based on the findings of Thomas Hunt Morgan’s Drosophila crosses and provides important evidence for the Boveri-Sutton chromosome theory of inheritance. Graphic credit: Twaanders17 – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=40694655
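To make the map-unit arithmetic concrete, here is a minimal Python sketch (not part of Morgan’s or Sturtevant’s original work) that estimates the distance between two linked genes from the progeny of a two-point test cross. The function name and the progeny counts are invented for illustration; the calculation simply applies the rule stated above, that map distance equals the percentage of offspring showing a crossover between the two genes.

# Minimal sketch: estimate map distance (map units, or centimorgans) between
# two linked genes from a hypothetical two-point test cross.
# Rule applied: map distance = (recombinant progeny / total progeny) * 100.

def map_distance(parental_counts, recombinant_counts):
    """Return the estimated map distance in map units (centimorgans)."""
    total = sum(parental_counts) + sum(recombinant_counts)
    return 100.0 * sum(recombinant_counts) / total

# Hypothetical progeny counts (invented for illustration)
parental = [410, 390]      # offspring with parental allele combinations
recombinant = [105, 95]    # offspring with recombined (crossed-over) alleles

print(f"Estimated distance: {map_distance(parental, recombinant):.1f} map units")
# Prints: Estimated distance: 20.0 map units

In this example, 200 of the 1,000 offspring are recombinant, so the two genes would be placed about 20 map units apart, the same kind of reasoning Sturtevant used to order genes along the chromosome.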

 

Source: https://www.ncbi.nlm.nih.gov; Wikipedia

 

Nori Seaweed

Toasting a sheet of nori. 1864, Japanese painting; Wikipedia, Public Domain, https://commons.wikimedia.org/w/index.php?curid=40283081

 

Nori is the Japanese name for edible seaweed species of the red algae genus Pyropia, including P. yezoensis and P. tenera. It is used chiefly as an ingredient (wrap) of sushi. Finished products are made by a shredding and rack-drying process that resembles papermaking. Originally, the term nori was generic and referred to seaweeds, including hijiki. One of the oldest descriptions of nori is dated to around the 8th century. In the Taiho Code, enacted in ca. 701, nori was already included as a form of taxation. Local people were described as drying nori in the Hitachi Province Fudoki (ca. 721-721), and nori was harvested in the Izumo Province Fudoki (ca. 713-733), showing that nori was used as food from ancient times. In Utsubo Monogatari, written around 987, nori was recognized as a common food. Nori had been consumed in paste form until the sheet form was invented in Asakusa, Edo (contemporary Tokyo), around 1750 in the Edo period, through the method of Japanese paper-making. The word “nori“ first appeared in an English-language publication in C.P. Thunberg’s Trav., published in 1796. It was used in conjunction as “Awa nori“, probably referring to what is now called aonori.

 

The Japanese nori industry was in decline after WW II, when Japan was in need of all the food that could be produced. The decline was due to a lack of understanding of nori’s three-stage life cycle, such that local people did not understand why traditional cultivation methods were not effective. The industry was rescued by knowledge deriving from the work of British phycologist Kathleen Mary Drew-Baker, who had been researching the organism Porphyra umbilicalis, which grew in the seas around Wales and was harvested for food, as in Japan. Her work was discovered by Japanese scientists who applied it to artificial methods of seeding and growing the nori, rescuing the industry. Kathleen Baker was hailed as the “Mother of the Sea“ in Japan and a statue was erected in her memory; she is still revered as the savior of the Japanese nori industry. In the 21st century, the Japanese nori industry faces a new decline due to increased competition from seaweed producers in China and Korea and domestic sales tax hikes.

 

The word nori started to be used widely in the United States, and the product (imported in dry form from Japan) became widely available at natural food stores and Asian-American grocery stores in the 1960s due to the macrobiotic movement, and in the 1970s with the increase of sushi bars and Japanese restaurants. In a study by Jan-Hendrik Hehemann, subjects of Japanese descent were shown to be able to digest the seaweed’s polysaccharides after their gut microbes acquired the necessary enzyme from marine bacteria. Gut microbes from the North American subjects lacked these enzymes.

 

Production and processing of nori is an advanced form of agriculture. The biology of Pyropia, although complicated, is well understood, and this knowledge is used to control the production process. Farming takes place in the sea, where the Pyropia plants grow attached to nets suspended at the sea surface and where the farmers operate from boats. The plants grow rapidly, requiring about 45 days from “seeding“ until the first harvest. Multiple harvests can be taken from a single seeding, typically at about ten-day intervals. Harvesting is accomplished using mechanical harvesters of a variety of configurations. Processing of the raw product is mostly accomplished by highly automated machines that accurately duplicate traditional manual processing steps, but with much improved efficiency and consistency. The final product is a paper-thin, black, dried sheet of approximately 18 cm x 20 cm (7 in x 8 in) and 3 grams (0.11 oz.) in weight. Several grades of nori are available in the United States. The most common, and least expensive, grades are imported from China, costing about six cents per sheet. At the high end, ranging up to 90 cents per sheet, are “delicate shin-nori“ (nori from the first of the year’s several harvests), cultivated in the Ariake Sea off the island of Kyushu in Japan. In Japan, over 600 square kilometres (230 sq mi) of coastal waters are given over to producing 340,000 tons of nori, worth over a billion dollars. China produces about a third of this amount.

 

Nori is commonly used as a wrap for sushi and onigiri. It is also a garnish or flavoring in noodle preparations and soups. It is most typically toasted prior to consumption (yaki-nori). A common secondary product is toasted and flavored nori (ajitsuke-nori), in which a flavoring mixture (variable, but typically soy sauce, sugar, sake, mirin, and seasonings) is applied in combination with the toasting process. It is also eaten by making it into a soy sauce-flavored paste, nori no tsukudani. Nori is also sometimes used as a form of food decoration or garnish. A related product, prepared from the unrelated green algae Monostroma and Enteromorpha, is called aonori (literally “blue/green nori“) and is used like herbs on everyday meals, such as okonomiyaki and yakisoba.

 

Since nori sheets easily absorb water from the air and degrade, a desiccant is indispensable when storing them for any significant time.

Harry Harlow PhD

Rhesus Macaque (Macaca mulatta). This file is licensed under the Creative Commons Attribution 2.0 Generic license.

 

Editor’s note: Much of Dr. Harry Harlow’s research has made an incredible impact on the world of infant and child psychology. The work can be shocking, and we do not condone these kinds of experiments. However, because of the profound effect Harlow’s research has had on our understanding of early influences on children, our cultural mores have changed.

 

The work of Harry Harlow and Abraham Maslow was highly influential regarding the importance of “touch“ and “rocking“ in normal child development. Equally profound are theories that trace the origins of violence to a lack of touching and rocking in early infant and child rearing. When we viewed the 1970 Time-Life film describing Harlow’s monkey experiments in detail (link below), we found them shocking. Thankfully, such experimentation has since been outlawed, owing in large part to the efforts of animal rights groups. Keep in mind, however, that the monkeys were not shocked or physically harmed in any way, and no drugs were used. This film reveals only the deeply sad results of psychological deprivation. You could say that Harlow’s experiments gave even more credence to the theories of Sigmund Freud.

 

 

Harry Frederick Harlow (October 31, 1905 – December 6, 1981) was an American psychologist best known for his maternal-separation, dependency-needs, and social-isolation experiments on rhesus monkeys, which demonstrated the importance of caregiving and companionship to social and cognitive development. He conducted most of his research at the University of Wisconsin-Madison, where humanistic psychologist Abraham Maslow worked with him. Harlow’s experiments were controversial: they included creating inanimate surrogate mothers for the rhesus infants from wire and wood (described in detail below) and, later in his career, keeping infant monkeys in isolation chambers for up to 24 months, from which they emerged intensely disturbed. Some researchers cite the experiments as a factor in the rise of the animal liberation movement in the United States. A Review of General Psychology survey, published in 2002, ranked Harlow as the 26th most cited psychologist of the 20th century.

 

Harlow was born on October 31, 1905, to Mabel Rock and Alonzo Harlow Israel, and was raised in Fairfield, Iowa. After a year at Reed College in Portland, Oregon, Harlow obtained admission to Stanford University through a special aptitude test and became a psychology major. Harlow entered Stanford in 1924 and subsequently became a graduate student in psychology, working directly under Calvin Perry Stone, a well-known animal behaviorist, and Walter Richard Miles, a vision expert, both of whom were supervised by Lewis Terman. Harlow studied largely under Terman, the developer of the Stanford-Binet IQ Test, and Terman helped shape Harlow’s future. After receiving a PhD in 1930, Harlow changed his name from Israel to Harlow. The change was made at Terman’s prompting, for fear of the negative consequences of having a seemingly Jewish last name, even though his family was not Jewish.

 

After completing his doctoral dissertation, Harlow accepted a professorship at the University of Wisconsin-Madison. Harlow was unsuccessful in persuading the Department of Psychology to provide him with adequate laboratory space. As a result, Harlow acquired a vacant building down the street from the University, and, with the assistance of his graduate students, renovated the building into what later became known as the Primate Laboratory, one of the first of its kind in the world. Under Harlow’s direction, it became a place of cutting-edge research at which some 40 students earned their PhDs.

 

After obtaining his doctorate in 1930, at Stanford University, Harlow began his career with nonhuman primate research at the University of Wisconsin. He worked with the primates at Henry Vilas Zoo, where he developed the Wisconsin General Testing Apparatus (WGTA) to study learning, cognition, and memory. It was through these studies that Harlow discovered that the monkeys he worked with were developing strategies for his tests. What would later become known as learning sets, Harlow described as “learning to learn.“ Harlow exclusively used rhesus macaques in his experiments. In order to study the development of these learning sets, Harlow needed access to developing primates, so he established a breeding colony of rhesus macaques in 1932. Due to the nature of his study, Harlow needed regular access to infant primates and thus chose to rear them in a nursery setting, rather than with their protective mothers. This alternative rearing technique, also called maternal deprivation, is highly controversial to this day, and is used, in variants, as a model of early life adversity in primates. Research with and caring for infant rhesus monkeys further inspired Harlow, and ultimately led to some of his best-known experiments: the use of surrogate mothers. Although Harlow, his students, contemporaries, and associates soon learned how to care for the physical needs of their infant monkeys, the nursery-reared infants remained very different from their mother-reared peers. Psychologically speaking, these infants were slightly strange: they were reclusive, had definite social deficits, and clung to their cloth diapers. For instance, babies that had grown up with only a mother and no playmates showed signs of fear or aggressiveness. Noticing their attachment to the soft cloth of their diapers and the psychological changes that correlated with the absence of a maternal figure, Harlow sought to investigate the mother-infant bond. This relationship was under constant scrutiny in the early twentieth century, as B. F. Skinner and the behaviorists took on John Bowlby in a discussion of the mother’s importance in the development of the child, the nature of their relationship, and the impact of physical contact between mother and child.

 

The studies were motivated by John Bowlby’s World Health Organization-sponsored study and report, “Maternal Care and Mental Health“ in 1950, in which Bowlby reviewed previous studies on the effects of institutionalization on child development, and the distress experienced by children when separated from their mothers, such as Rene Spitz’s and his own surveys on children raised in a variety of settings. In 1953, his colleague, James Robertson, produced a short and controversial documentary film, titled A Two-Year-Old Goes to Hospital, demonstrating the almost-immediate effects of maternal separation. Bowlby’s report, coupled with Robertson’s film, demonstrated the importance of the primary caregiver in human and non-human primate development. Bowlby de-emphasized the mother’s role in feeding as a basis for the development of a strong mother-child relationship, but his conclusions generated much debate. It was the debate concerning the reasons behind the demonstrated need for maternal care that Harlow addressed in his studies with surrogates. Physical contact with infants was considered harmful to their development, and this view led to sterile, contact-less nurseries across the country. Bowlby disagreed, claiming that the mother provides much more than food to the infant, including a unique bond that positively influences the child’s development and mental health. To investigate the debate, Harlow created inanimate surrogate mothers for the rhesus infants from wire and wood. Each infant became attached to its particular mother, recognizing its unique face and preferring it above all others. Harlow next chose to investigate if the infants had a preference for bare-wire mothers or cloth-covered mothers. For this experiment, he presented the infants with a clothed mother and a wire mother under two conditions. In one situation, the wire mother held a bottle with food, and the cloth mother held no food. In the other situation, the cloth mother held the bottle, and the wire mother had nothing. Overwhelmingly, the infant macaques preferred spending their time clinging to the cloth mother. Even when only the wire mother could provide nourishment, the monkeys visited her only to feed. Harlow concluded that there was much more to the mother-infant relationship than milk, and that this “contact comfort“ was essential to the psychological development and health of infant monkeys and children. It was this research that gave strong, empirical support to Bowlby’s assertions on the importance of love and mother-child interaction.

 

Successive experiments concluded that infants used the surrogate as a base for exploration, and a source of comfort and protection in novel and even frightening situations. In an experiment called the “open-field test“, an infant was placed in a novel environment with novel objects. When the infant’s surrogate mother was present, it clung to her, but then began venturing off to explore. If frightened, the infant ran back to the surrogate mother and clung to her for a time before venturing out again. Without the surrogate mother’s presence, the monkeys were paralyzed with fear, huddling in a ball and sucking their thumbs. In the “fear test“, infants were presented with a fearful stimulus, often a noise-making teddy bear. Without the mother, the infants cowered and avoided the object. When the surrogate mother was present, however, the infant did not show great fearful responses and often contacted the device – exploring and attacking it. Another study looked at the differentiated effects of being raised with only either a wire-mother or a cloth-mother. Both groups gained weight at equal rates, but the monkeys raised on a wire-mother had softer stool and trouble digesting the milk, frequently suffering from diarrhea. Harlow’s interpretation of this behavior, which is still widely accepted, was that a lack of contact comfort is psychologically stressful to the monkeys, and the digestive problems are a physiological manifestation of that stress.

 

The importance of these findings is that they contradicted both the traditional pedagogic advice of limiting or avoiding bodily contact in an attempt to avoid spoiling children, and the insistence of the predominant behaviorist school of psychology that emotions were negligible. Feeding was thought to be the most important factor in the formation of a mother-child bond. Harlow concluded, however, that nursing strengthened the mother-child bond because of the intimate body contact that it provided. He described his experiments as a study of love. He also believed that contact comfort could be provided by either mother or father. Though widely accepted now, this idea was revolutionary at the time in provoking thoughts and values concerning the studies of love. Some of Harlow’s final experiments explored social deprivation in the quest to create an animal model for the study of depression. This study is the most controversial and involved isolation of infant and juvenile macaques for various periods of time. Monkeys placed in isolation exhibited social deficits when introduced or re-introduced into a peer group. They appeared unsure of how to interact with their conspecifics, and mostly stayed separate from the group, demonstrating the importance of social interaction and stimuli in forming the ability to interact with conspecifics in developing monkeys, and, comparatively, in children. Critics of Harlow’s research have observed that clinging is a matter of survival in young rhesus monkeys, but not in humans, and have suggested that his conclusions, when applied to humans, overestimate the importance of contact comfort and underestimate the importance of nursing.

 

Harlow first reported the results of these experiments in “The Nature of Love“, the title of his address to the sixty-sixth Annual Convention of the American Psychological Association in Washington, D.C., August 31, 1958. Beginning in 1959, Harlow and his students began publishing their observations on the effects of partial and total social isolation. Partial isolation involved raising monkeys in bare wire cages that allowed them to see, smell, and hear other monkeys, but provided no opportunity for physical contact. Total social isolation involved rearing monkeys in isolation chambers that precluded any and all contact with other monkeys. Harlow et al. reported that partial isolation resulted in various abnormalities such as blank staring, stereotyped repetitive circling in their cages, and self-mutilation. These monkeys were then observed in various settings. For the study, some of the monkeys were kept in solitary isolation for 15 years. In the total isolation experiments, baby monkeys would be left alone for three, six, 12, or 24 months of “total social deprivation“. The experiments produced monkeys that were severely psychologically disturbed. Harlow wrote:

 

No monkey has died during isolation. When initially removed from total social isolation, however, they usually go into a state of emotional shock, characterized by autistic self-clutching and rocking. One of six monkeys isolated for 3 months refused to eat after release and died 5 days later. The autopsy report attributed death to emotional anorexia. The effects of 6 months of total social isolation were so devastating and debilitating that we had assumed initially that 12 months of isolation would not produce any additional decrement. This assumption proved to be false; 12 months of isolation almost obliterated the animals socially.

 

Harlow tried to reintegrate the monkeys who had been isolated for six months by placing them with monkeys who had been raised normally. The rehabilitation attempts met with limited success. Harlow wrote that total social isolation for the first six months of life produced “severe deficits in virtually every aspect of social behavior.“ Isolates exposed to monkeys the same age who were reared normally “achieved only limited recovery of simple social responses.“ Some monkey mothers reared in isolation exhibited “acceptable maternal behavior when forced to accept infant contact over a period of months, but showed no further recovery.“ Isolates given to surrogate mothers developed “crude interactive patterns among themselves.“ In another trial, the surrogate mother was designed to “reject“ the infant monkey: rejection was delivered through strong jets of air or blunt spikes that forced the baby away. The reactions of the babies after rejection were striking: the monkeys clung to the mothers even more tightly than they had before. These trials suggested that nurture involves more than just feeding, and that the bond between a mother and child rests not solely on feeding but on the time spent together.

 

Since Harlow’s pioneering work on touch in development, recent work in rats has found evidence that touch during infancy resulted in a decrease in corticosterone, a steroid hormone involved in stress, and an increase in glucocorticoid receptors in many regions of the brain. Schanberg and Field found that even short-term interruption of mother-pup interaction in rats markedly affected several biochemical processes in the developing pup: a reduction in ornithine decarboxylase (ODC) activity, a sensitive index of cell growth and differentiation; a reduction in growth hormone release (in all body organs, including the heart and liver, and throughout the brain, including the cerebrum, cerebellum, and brain stem); an increase in corticosterone secretion; and suppressed tissue ODC responsivity to administered growth hormone. Additionally, it was found that animals who are touch-deprived have weakened immune systems. Investigators have measured a direct, positive relationship between the amount of contact and grooming an infant monkey receives during its first six months of life and its ability to produce antibody titers (IgG and IgM) in response to an antigen challenge (tetanus) at a little over one year of age. Trying to identify a mechanism for the “immunology of touch“, some investigators point to modulations of arousal and associated CNS-hormonal activity. Touch deprivation may cause stress-induced activation of the pituitary-adrenal system, which, in turn, leads to increased plasma cortisol and adrenocorticotropic hormone. Likewise, researchers suggest, regular and “natural“ stimulation of the skin may moderate these pituitary-adrenal responses in a positive and healthful way.

 

Harlow was well known for refusing to use conventional terminology, instead choosing deliberately outrageous terms for the experimental apparatus he devised. This came from an early conflict with the conventional psychological establishment in which Harlow used the term “love“ in place of the popular and archaically correct term, “attachment“. Such terms and respective devices included a forced-mating device he called the “rape rack“, tormenting surrogate-mother devices he called “Iron maidens“, and an isolation chamber he called the “pit of despair“, developed by him and a graduate student, Stephen Suomi. In the last of these devices, alternatively called the “well of despair“, baby monkeys were left alone in darkness for up to one year from birth, or repetitively separated from their peers and isolated in the chamber. These procedures quickly produced monkeys that were severely psychologically disturbed and used as models of human depression.

 

Many of Harlow’s experiments are now considered unethical – in their nature as well as Harlow’s descriptions of them – and they both contributed to heightened awareness of the treatment of laboratory animals and helped propel the creation of today’s ethics regulations. The monkeys in the experiment were deprived of maternal affection, potentially leading to what humans refer to as “panic disorders.“ University of Washington professor Gene Sackett, one of Harlow’s doctoral students, stated that Harlow’s experiments provided the impetus for the animal liberation movement in the U.S.

 

The monkeys used in these experiments eventually became mothers themselves and were observed to see the effect their “childhood“ had on them. All the mothers tended to be either indifferent towards their babies or abusive. The indifferent mothers did not nurse, comfort, or protect their babies; however, they did not harm them either. The abusive mothers would violently bite or otherwise injure their infants, and many of the babies of the abusive mothers died as a result. This demonstrated that how a monkey is mothered has a major impact on how she will mother her own infants.

 

1970 Time-Life Documentary Examines the Theories and Experiments of Dr. Harry Harlow.

 

Sources: nih.gov; Wikipedia; http://sites.psu.edu/dps16/2016/03/03/harlows-monkeys/

 
