Date:
June 23, 2015

Source:
Stanford University

Summary:
Scientists have invented a low-cost water splitter that uses a single catalyst to produce both hydrogen and oxygen gas 24 hours a day, seven days a week. The device could provide a renewable source of clean-burning hydrogen fuel for transportation and industry.

 

20150624-1

Stanford scientists have invented a device that produces clean-burning hydrogen from water 24 hours a day, seven days a week. Unlike conventional water splitters, the Stanford device uses a single low-cost catalyst to generate hydrogen bubbles on one electrode and oxygen bubbles on the other.
Credit: L.A. Cicero/Stanford News Service

 

 

Stanford University scientists have invented a low-cost water splitter that uses a single catalyst to produce both hydrogen and oxygen gas 24 hours a day, seven days a week.

The device, described in a study published June 23 in Nature Communications, could provide a renewable source of clean-burning hydrogen fuel for transportation and industry.

‘We have developed a low-voltage, single-catalyst water splitter that continuously generates hydrogen and oxygen for more than 200 hours, an exciting world-record performance,’ said study co-author Yi Cui, an associate professor of materials science and engineering at Stanford and of photon science at the SLAC National Accelerator Laboratory.

In an engineering first, Cui and his colleagues used lithium-ion battery technology to create one low-cost catalyst that is capable of driving the entire water-splitting reaction.

‘Our group has pioneered the idea of using lithium-ion batteries to search for catalysts,’ Cui said. ‘Our hope is that this technique will lead to the discovery of new catalysts for other reactions beyond water splitting.’

Clean hydrogen

Hydrogen has long been promoted as an emissions-free alternative to gasoline. Despite its sustainable reputation, most commercial-grade hydrogen is made from natural gas, a fossil fuel that contributes to global warming. As an alternative, scientists have been trying to develop a cheap and efficient way to extract pure hydrogen from water.

A conventional water-splitting device consists of two electrodes submerged in a water-based electrolyte. A low-voltage current applied to the electrodes drives a catalytic reaction that separates molecules of H2O, releasing bubbles of hydrogen on one electrode and oxygen on the other.

Each electrode is embedded with a different catalyst, typically platinum and iridium, two rare and costly metals. But in 2014, Stanford chemist Hongjie Dai developed a water splitter made of inexpensive nickel and iron that runs on an ordinary 1.5-volt battery.

Single catalyst

In the new study, Cui and his colleagues advanced that technology further.

‘Our water splitter is unique, because we only use one catalyst, nickel-iron oxide, for both electrodes,’ said graduate student Haotian Wang, lead author of the study. ‘This bifunctional catalyst can split water continuously for more than a week with a steady input of just 1.5 volts of electricity. That’s an unprecedented water-splitting efficiency of 82 percent at room temperature.’

In conventional water splitters, the hydrogen and oxygen catalysts often require different electrolytes with different pH — one acidic, one alkaline — to remain stable and active. ‘For practical water splitting, an expensive barrier is needed to separate the two electrolytes, adding to the cost of the device,’ Wang said. ‘But our single-catalyst water splitter operates efficiently in one electrolyte with a uniform pH.’

Wang and his colleagues discovered that nickel-iron oxide, which is cheap and easy to produce, is actually more stable than some commercial catalysts made of precious metals.

‘We built a conventional water splitter with two benchmark catalysts, one platinum and one iridium,’ Wang said. ‘At first the device only needed 1.56 volts of electricity to split water, but within 30 hours we had to increase the voltage nearly 40 percent. That’s a significant loss of efficiency.’
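
For readers who want to check the arithmetic behind these figures, the short Python sketch below treats "efficiency" as simple voltage efficiency, i.e. the 1.23-volt thermodynamic potential of water splitting divided by the voltage actually applied. That reading is our assumption, not something stated in the study, but it reproduces the 82 percent figure at 1.5 volts and shows how a benchmark cell that drifts roughly 40 percent above its starting 1.56 volts fares by the same measure.

# Back-of-the-envelope check of the efficiency figures quoted above.
# Assumption (not stated in the article): "efficiency" here means voltage
# efficiency, the 1.23 V thermodynamic potential of water splitting divided
# by the cell voltage actually applied.
E_THERMODYNAMIC = 1.23  # volts, minimum potential needed to split water at 25 C
def voltage_efficiency(applied_volts: float) -> float:
    """Return the fraction of the applied voltage doing thermodynamic work."""
    return E_THERMODYNAMIC / applied_volts
# Stanford single-catalyst cell: steady 1.5 V input
print(f"Nickel-iron oxide cell at 1.50 V: {voltage_efficiency(1.5):.0%}")  # ~82%
# Benchmark platinum/iridium cell: started at 1.56 V, drifted up ~40% within 30 hours
degraded_volts = 1.56 * 1.40
print(f"Benchmark cell after drift ({degraded_volts:.2f} V): {voltage_efficiency(degraded_volts):.0%}")  # ~56%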

Marriage of batteries and catalysis

To find catalytic material suitable for both electrodes, the Stanford team borrowed a technique used in battery research called lithium-induced electrochemical tuning. The idea is to use lithium ions to chemically break the metal oxide catalyst into smaller and smaller pieces.

‘Breaking down metal oxide into tiny particles increases its surface area and exposes lots of ultra-small, interconnected grain boundaries that become active sites for the water-splitting catalytic reaction,’ Cui said. ‘This process creates tiny particles that are strongly connected, so the catalyst has very good electrical conductivity and stability.’

Wang used electrochemical tuning — putting lithium in, taking lithium out — to test the catalytic potential of several metal oxides.

‘Haotian eventually discovered that nickel-iron oxide is a world-record performing material that can catalyze both the hydrogen and the oxygen reaction,’ Cui said. ‘No other catalyst can do this with such great performance.’

Using one catalyst made of nickel and iron has significant implications in terms of cost, he added.

‘Not only are the materials cheaper, but having a single catalyst also reduces two sets of capital investment to one,’ Cui said. ‘We believe that electrochemical tuning can be used to find new catalysts for other chemical fuels beyond hydrogen. The technique has been used in battery research for many years, but it’s a new approach for catalysis. The marriage of these two fields is very powerful.’

Other Stanford co-authors of the study are postdoctoral scholar Hyun-Wook Lee, visiting student Zhiyi Lu, and graduate students Yong Deng, Po-Chun Hsu, Yayuan Liu and Dingchang Lin.

A video of the water-splitting device is available at: https://www.youtube.com/watch?v=wsWUoCxjXJQ


Story Source:

The above post is reprinted from materials provided by Stanford University. The original item was written by Mark Shwartz. Note: Materials may be edited for content and length.


Journal Reference:

  1. Haotian Wang, Hyun-Wook Lee, Yong Deng, Zhiyi Lu, Po-Chun Hsu, Yayuan Liu, Dingchang Lin, Yi Cui. Bifunctional non-noble metal oxide nanoparticle electrocatalysts through lithium-induced conversion for overall water splitting. Nature Communications, 2015; 6: 7261 DOI: 10.1038/ncomms8261

 

Source: Stanford University. “Single-catalyst water splitter produces clean-burning hydrogen 24/7.” ScienceDaily. ScienceDaily, 23 June 2015. <www.sciencedaily.com/releases/2015/06/150623113836.htm>.

Date:
June 19, 2015

Source:
University of California – Los Angeles

Summary:
Over billions of years, the total carbon content of the outer part of the Earth — in its mantle lithosphere, crust, oceans, and atmosphere — has gradually increased, scientists say. The new analyses represent an important advance in refining our understanding of Earth’s deep carbon cycle.

20150623-1

Major fluxes of carbon estimated by Craig Manning and Peter Kelemen.
Credit: Courtesy of Josh Wood

Over billions of years, the total carbon content of the outer part of the Earth — in its upper mantle, crust, oceans, and atmosphere — has gradually increased, scientists reported this month in the journal Proceedings of the National Academy of Sciences.

 

 

Craig Manning, a professor of geology and geochemistry at UCLA, and Peter Kelemen, a geochemistry professor at Columbia University, present new analyses that represent an important advance in refining our understanding of Earth’s deep carbon cycle.

Manning and Kelemen studied how carbon, the chemical basis of all known life, behaves in a variety of tectonic settings. They assessed, among other factors, how much carbon is added to Earth’s crust and how much carbon is released into the atmosphere. The new model combines measurements, predictions and calculations.

Their research includes analysis of existing data on samples taken at sites around the world as well as new data from Oman.

The carbon ‘budget’ near the Earth’s surface exerts important controls on global climate change and our energy resources, and has important implications for the origin and evolution of life, Manning said. Yet much more carbon is stored in the deep Earth. The surface carbon that is so important to us is made available chiefly by volcanic processes originating deep in the planet’s interior.

Today carbon can return to Earth’s deep interior only by subduction — the geologic process by which one tectonic plate moves under another tectonic plate and sinks into the Earth’s mantle. Previous research suggested that roughly half of the carbon stored in subducted oceanic mantle, crust and sediments makes it into the deep mantle. Kelemen and Manning’s new analysis suggests instead that subduction may return almost no carbon to the mantle, and that ‘exchange between the deep interior and surface reservoirs is in balance.’

Some carbon must make it past subduction zones. Diamonds form in the mantle both from carbon that has never traveled to Earth’s surface, known as primordial carbon, and from carbon that has cycled from the mantle to the surface and back again, known as recycled carbon. Manning and Kelemen corroborated their findings with a calculation based on the characteristics of diamonds, which form from carbon in the earth’s mantle.

Deep carbon is important because the carbon at the Earth’s surface, on which we depend, ‘exists only by permission of the deep Earth,’ Manning said, quoting a friend. At times in the Earth’s history, the planet has been warmer (in the Cretaceous period, for example), and shallow seas covered North America. The new research sheds light on the Earth’s climate over geologic time scales.


Story Source:

The above post is reprinted from materials provided by University of California – Los Angeles. Note: Materials may be edited for content and length.


Journal Reference:

  1. Peter B. Kelemen, Craig E. Manning. Reevaluating carbon fluxes in subduction zones, what goes down, mostly comes up. Proceedings of the National Academy of Sciences, 2015; 201507889 DOI: 10.1073/pnas.1507889112

 

Source: University of California – Los Angeles. “Earth science: New estimates of deep carbon cycle.” ScienceDaily. ScienceDaily, 19 June 2015. <www.sciencedaily.com/releases/2015/06/150619103524.htm>.

 

Target Health Excels in the Orphan Drug/Product Space

 

Target Health excels in regulatory affairs and represents close to 40 clients at FDA from all over the world. The following is a list of Orphan Drug/Product Designations that we have obtained for our clients; 3 have ultimately reached the market. A 4th product was approved in Cystic Fibrosis, but without the orphan designation. Congratulations to Glen Park PharmD and his team of experts in this area, Adam Harris and Tony Pinto:

 

Approvals (n=3): 1) Gaucher Disease (NDA Approved); 2) Hereditary Angioedema (NDA Approved); 3) Debridement in Hospitalized Patients with 3rd Degree Burns (EMA Approved)

 

Other designations include (n=13): 1) Alagille Syndrome; 2) Behcet’s Disease; 3) Burn Progression in Hospitalized Patients; 4) Caries Prevention, Head and Neck Cancer; 5) Cushing’s Syndrome Secondary to Ectopic ACTH Secretion; 6) Edema-Related Effects in Hospitalized patients with 3rd Degree Burns; 7) Hematopoietic Stem Cells for Aplastic Anemia; 8) Fibrolamellar Carcinoma (FLC); 9) Growth Hormone; 10) Multiple Myeloma; 11) Osteonecrosis of the Jaw; 12) Rabies and 13) Scleroderma

 

Pending submissions include (n=2): 1) Rett Syndrome; 2) Huntington’s disease

 

Birds of a Feather

 

Our dear friend and colleague Jonathan Kagan (NIH) guessed that the bird wading in the Central Park reservoir last week was a heron. Here is what Glen Park (Birdman of THI) had to say: "It is either a great egret or a snowy egret. The difference is that the snowy egret has greenish-yellow feet, which are not visible in this picture. The great egret usually has a completely yellow bill and is the larger bird. The snowy egret usually has a mostly black bill. Either way, Jonathan, all egrets are herons."

 

Another Masterpiece by James Farley

 

The world according to Farley: "Photography is often about persistence and opportunity. This Green Heron was so concentrated on getting a crayfish that – though it did see me – it cared very little for me! Any other time, the heron would have flown over to the other side of the pond! When I took this shot, I was only about 20-25 feet away with a 400mm focal length! I have several shots, and I must say it was very interesting to look through them and see how many times this heron flipped this crayfish and crushed it every which way until it swallowed it whole."

 

20150622-16

©JFarley Photography 2015

 

ON TARGET is the newsletter of Target Health Inc., a NYC-based, full-service, contract research organization (eCRO), providing strategic planning, regulatory affairs, clinical research, data management, biostatistics, medical writing and software services to the pharmaceutical and device industries, including the paperless clinical trial.

 

For more information about Target Health contact Warren Pearlson (212-681-2100 ext. 104). For additional information about software tools for paperless clinical trials, please also feel free to contact Dr. Jules T. Mitchel or Ms. Joyce Hays. The Target Health software tools are designed to partner with both CROs and Sponsors. Please visit the Target Health Website.

 

Joyce Hays, Founder and Editor in Chief of On Target

Jules Mitchel, Editor

 

QUIZ


Gut Microbes and Brain Disorders

20150622-15

Gut bacteria influence anxiety-like behavior through alterations in the way the brain is wired

 

The community of microbes that inhabits the body, known as the microbiome, has a powerful influence on the brain and may offer a pathway to new therapies for psychiatric and neurological disorders, according to researchers. The trillions of microbes that inhabit the human body, collectively the 1) ___, are estimated to weigh two to six pounds — up to twice the weight of the average human brain. Most of them live in the gut and intestines, where they help us to digest food, synthesize vitamins and ward off infection. But recent research on the microbiome has shown that its influence extends far beyond the gut, all the way to the brain. Over the past 10 years, studies have linked the gut microbiome to a range of complex behaviors, such as mood and emotion, and appetite and satiety. Not only does the gut microbiome appear to help maintain 2) ___ function, but it may also influence the risk of psychiatric and neurological disorders, including anxiety, depression and autism.

 

Three researchers at the forefront of this emerging field recently discussed the microbiome-brain connection with The Kavli Foundation. “The big question right now is how the microbiome exerts its effects on the brain,“ said Christopher Lowry, Associate Professor of Integrative Physiology at the University of Colorado, Boulder. Lowry is studying whether beneficial microbes can be used to treat or prevent stress-related psychiatric conditions, including anxiety and 3) ___. One surprising way in which the microbiome influences the brain is during development. Tracy Bale, Professor of Neuroscience at the School of Veterinary Medicine at the University of Pennsylvania, and her team have found that the microbiome in mice is sensitive to stress and that stress-induced changes to a mother’s microbiome are passed on to her baby and alter the way her baby’s brain develops. “There are key developmental windows when the brain is more vulnerable because it’s setting itself up to respond to the world around it,“ said Bale, who has done pioneering research into the effects of maternal stress on the brain. “So, if the 4) ___ microbial ecosystem changes — due to infection, stress or diet, for example — her newborn’s gut microbiome will change too, and that can have a lifetime effect.“ Sarkis Mazmanian, Louis & Nelly Soux Professor of Microbiology at the California Institute of Technology, is exploring the link between gut bacteria, gastrointestinal disease and autism, a neurodevelopmental disorder. He has discovered that the gut microbiome communicates with the brain via molecules that are produced by gut 5) ___ and then enter the bloodstream. These metabolites are powerful enough to change the behavior of mice. “We’ve shown, for example, that a metabolite produced by gut bacteria is sufficient to cause behavioral abnormalities associated with autism and with anxiety when it is injected into otherwise healthy mice,“ said Mazmanian.

 

The work of these three researchers raises the possibility that brain disorders, including anxiety, depression and autism, may be treated through the gut, which is a much easier target for drug delivery than the brain. But there is still much more research to be done to understand the gut-microbiome-brain connection, they said. Mazmanian’s lab is also exploring whether the microbiome plays a role in neurodegenerative diseases such as Alzheimer’s and 6) ___. “There are flash bulbs going off in the dark, suggesting that very complex neurodegenerative disorders may be linked to the microbiome. But once again this is very speculative. These seminal findings, the flash bulbs, are only just beginning to illuminate our vision of the gut-microbiome-brain connection,“ said Mazmanian. Researchers at McMaster University in Ontario, Canada, discovered that the “cross-talk“ between bacteria in our gut and our brain plays an important role in the development of psychiatric illness, intestinal diseases and probably other health problems as well including obesity. “The wave of the future is full of opportunity as we think about how microbiota or bacteria influence the brain and how the bi-directional communication of the 7) ___ and the brain influence metabolic disorders, such as obesity and diabetes,“ says Jane Foster, associate professor in the Department of Psychiatry and Behavioral Neurosciences of the Michael G. DeGroote School of Medicine. Using germ-free mice, Foster’s research shows gut bacteria influences how the brain is wired for learning and memory. The research paper was published in the science journal Neurogastroenterology and Motility. The study’s results show that 8) ___ linked to learning and memory are altered in germ-free mice and, in particular, they are altered in one of the key brain regions for learning and memory — the hippocampus. “The take-home message is that gut bacteria influences anxiety-like behavior through alterations in the way the brain is wired,“ said Foster. Foster’s laboratory is located in the Brain-Body Institute, a joint research initiative of McMaster University and St. Joseph’s Healthcare in Hamilton. The institute was created to advance understanding of the relationship between the brain, nervous system and bodily disorders. “We have a hypothesis in my lab that the state of your immune system and your gut bacteria — which are in constant communication — influences your personality,“ Foster said. She said psychiatrists, in particular, are interested in her research because of the problems of side effects with current 9) ___therapy. “The idea behind this research is to see if it’s possible to develop new therapies which could target the body, free of complications related to getting into the brain,“ Foster said. “We need novel targets that take a different approach than what is currently on the market for psychiatric illness. Those targets could be the 10) ___ system, your gut function, and we could even use the body to screen patients to say what drugs might work better in their brain.“ Sources: Kavli Foundation: “Could gut microbes help treat brain disorders?; McMaster University: K. M. Neufeld, N. Kang, J. Bienenstock, J. A. Foster. Reduced anxiety-like behavior and central neurochemical change in germ-free mice. Neurogastroenterology & Motility; ScienceDaily.com

 

ANSWERS: 1) microbiome; 2) brain; 3) depression; 4) mother’s; 5) bacteria; 6) Parkinson’s; 7) body; 8) genes; 9) drug; 10) immune

 

The Brain

20150622-13

Hieroglyphic for the word “brain“ (c.1700 BCE)

 

20150622-14

One of Leonardo da Vinci’s (1452-1519) sketches of the human skull

 

From the ancient Egyptian mummifications to 18th century scientific research on “globules“ and neurons, there is evidence of neuroscience practice throughout the early periods of history. The early civilizations lacked adequate means to obtain knowledge about the human brain. Their assumptions about the inner workings of the mind, therefore, were not accurate. Early views on the function of the brain regarded it to be a form of “cranial stuffing“ of sorts. In ancient Egypt, from the late Middle Kingdom onwards, in preparation for mummification, the brain was regularly removed, for it was the heart that was assumed to be the seat of intelligence. According to Herodotus, during the first step of mummification: “The most perfect practice is to extract as much of the brain as possible with an iron hook, and what the hook cannot reach is mixed with drugs.“ Over the next five thousand years, this view came to be reversed; the brain is now known to be the seat of intelligence, although colloquial variations of the former remain as in “memorizing something by heart“.

 

The Edwin Smith Surgical Papyrus, written in the 17th century BC, contains the earliest recorded reference to the brain. The hieroglyph for brain occurs eight times in this papyrus, which describes the symptoms, diagnosis, and prognosis of two patients, wounded in the head, who had compound fractures of the skull. The assessments of the author of the papyrus (a battlefield surgeon) allude to ancient Egyptians having a vague recognition of the effects of head trauma. While the symptoms are well written and detailed, the absence of a medical precedent is apparent. The author of the passage noted "the pulsations of the exposed brain" and compared the surface of the brain to the "rippling surface of copper slag" (which indeed has a gyral-sulcal pattern). The laterality of injury was related to the laterality of symptom, and both aphasia ("he speaks not to thee") and seizures ("he shudders exceedingly") after head injury were described. Observations by ancient civilizations of the human brain suggest only a relative understanding of the basic mechanics and the importance of cranial security. Furthermore, considering that the general consensus of medical practice pertaining to human anatomy was based on myths and superstition, the thoughts of the battlefield surgeon appear to be empirical and based on logical deduction and simple observation.

 

During the second half of the first millennium BCE, the Ancient Greeks developed differing views on the function of the brain. However, because Hippocratic doctors did not practice dissection (the human body was considered sacred), Greek views of brain function were generally uninformed by anatomical study. It is said that it was the Pythagorean Alcmaeon of Croton (6th and 5th centuries BCE) who first considered the brain to be the place where the mind was located. According to ancient authorities, "he believed the seat of sensations is in the brain. This contains the governing faculty. All the senses are connected in some way with the brain; consequently they are incapable of action if the brain is disturbed – the power of the brain to synthesize sensations makes it also the seat of thought: The storing up of perceptions gives memory and belief and when these are stabilized you get knowledge." In the 4th century BCE, Hippocrates believed the brain to be the seat of intelligence (based, among others before him, on Alcmaeon’s work). During this same period, Aristotle thought that, while the heart was the seat of intelligence, the brain was a cooling mechanism for the blood. He reasoned that humans are more rational than the beasts because, among other reasons, they have a larger brain to cool their hot-bloodedness. In contrast to Greek thought regarding the sanctity of the human body, the Egyptians had been embalming their dead for centuries, and went about the systematic study of the human body.

 

During the Hellenistic period, Herophilus of Chalcedon (c. 335/330-280/250 BCE) and Erasistratus of Ceos (c. 300-240 BCE) made fundamental contributions not only to the anatomy and physiology of the brain and nervous system, but to many other fields of the bio-sciences. Herophilus not only distinguished the cerebrum and the cerebellum, but provided the first clear description of the ventricles. Erasistratus experimented on the living brain. Their works are now mostly lost, and we know about their achievements mostly through secondary sources. Some of their discoveries had to be re-discovered a millennium after their deaths.

 

During the Roman Empire, the Greek anatomist Galen dissected the brains of sheep, monkeys, dogs and swine, among other non-human mammals. He concluded that, as the cerebellum was denser than the brain, it must control the muscles, while as the cerebrum was soft, it must be where the senses were processed. Galen further theorized that the brain functioned by movement of animal spirits through the ventricles. Further, his studies of the cranial nerves and spinal cord were outstanding. He noted that specific spinal nerves controlled specific muscles, and had the idea of the reciprocal action of muscles. The next advance in understanding spinal function had to await Bell and Magendie in the 19th century. Andreas Vesalius noted many structural characteristics of both the brain and general nervous system during his dissections of human cadavers. In addition to recording many anatomical features such as the putamen and corpus callosum, Vesalius proposed that the brain was made up of seven pairs of 'brain nerves', each with a specialized function. Other scientists, including Leonardo da Vinci, furthered Vesalius’ work by adding their own detailed sketches of the human brain. Rene Descartes also studied the physiology of the brain, proposing the theory of dualism to tackle the issue of the brain’s relation to the mind. He suggested that the pineal gland was where the mind interacted with the body after recording the brain mechanisms responsible for circulating cerebrospinal fluid. Thomas Willis studied the brain, nerves, and behavior to develop neurologic treatments. He described in great detail the structure of the brainstem, the cerebellum, the ventricles, and the cerebral hemispheres.

 

The role of electricity in nerves was first observed in dissected frogs by Luigi Galvani in the second half of the 18th century. Richard Caton presented his findings in 1875 about electrical phenomena of the cerebral hemispheres of rabbits and monkeys. Studies of the brain became more sophisticated after the invention of the microscope and the development of a staining procedure by Camillo Golgi during the late 1890s that used a silver chromate salt to reveal the intricate structures of single neurons. His technique was used by Santiago Ramon y Cajal and led to the formation of the neuron doctrine, the hypothesis that the functional unit of the brain is the neuron. Golgi and Ramon y Cajal shared the Nobel Prize in Physiology or Medicine in 1906 for their extensive observations, descriptions and categorizations of neurons throughout the brain. The hypotheses of the neuron doctrine were supported by experiments following Galvani’s pioneering work in the electrical excitability of muscles and neurons. In the late 19th century, Emil du Bois-Reymond, Johannes Peter Muller, and Hermann von Helmholtz showed neurons were electrically excitable and that their activity predictably affected the electrical state of adjacent neurons. In parallel with this research, work with brain-damaged patients by Paul Broca suggested that certain regions of the brain were responsible for certain functions.

 

During the twentieth century, neuroscience began to be recognized as a distinct, unified academic discipline, rather than the nervous system being studied piecemeal within a variety of other disciplines. Broca’s hypothesis was supported by observations of epileptic patients conducted by John Hughlings Jackson, who correctly deduced the organization of the motor cortex by watching the progression of seizures through the body. Carl Wernicke further developed the theory of the specialization of specific brain structures in language comprehension and production. Modern research still uses Korbinian Brodmann’s cytoarchitectonic (referring to the study of cell structure) anatomical definitions from this era, continuing to show that distinct areas of the cortex are activated in the execution of specific tasks. Eric Kandel and collaborators have cited David Rioch, Francis O. Schmitt, and Stephen Kuffler as having played critical roles in establishing the field. Rioch originated the integration of basic anatomical and physiological research with clinical psychiatry at the Walter Reed Army Institute of Research, starting in the 1950s. During the same period, Schmitt established a neuroscience research program within the Biology Department at the Massachusetts Institute of Technology, bringing together biology, chemistry, physics, and mathematics. Kuffler started the Department of Neuroscience at Harvard Medical School in 1966, the first such freestanding department. Eric Kandel (born 1929), recipient of the 2000 Nobel Prize in Physiology or Medicine for his research on the physiological basis of memory storage in neurons, studied psychoanalysis and wanted to understand how memory works. Today, Kandel is a professor of biochemistry and biophysics at the College of Physicians and Surgeons at Columbia University and is a Senior Investigator at the Howard Hughes Medical Institute. He was also the founding director of the Center for Neurobiology and Behavior, which is now the Department of Neuroscience at Columbia University.

 

Placenta-On-A-Chip

 

The placenta is a temporary organ that develops in pregnancy and is the major interface between mother and fetus. Among its many functions is to serve as a "crossing guard" for substances traveling between mother and fetus. The placenta helps nutrients and oxygen move to the fetus and helps waste products move away. At the same time, the placenta tries to stop harmful environmental exposures, like bacteria, viruses and certain medications, from reaching the fetus. When the placenta doesn’t function correctly, the health of both mom and baby suffers. Scientists are trying to learn how the placenta manages all this traffic, transporting some substances and blocking others, with the goal of one day helping clinicians better assess placental health and ultimately improve pregnancy outcomes. However, studying the placenta in humans is challenging: it is time-consuming, subject to a great deal of variability and potentially risky for the fetus. For those reasons, previous studies on placental transport have relied largely on animal models and on laboratory-grown human cells. These methods have yielded helpful information, but are limited as to how well they can mimic physiological processes in humans.

 

According to a study published online in the Journal of Maternal-Fetal & Neonatal Medicine (18 June 2015), scientists at the NIH and their colleagues have developed a “placenta-on-a-chip“ to study the inner workings of the human placenta and its role in pregnancy. The device was designed to imitate, on a micro-level, the structure and function of the placenta and model the transfer of nutrients from mother to fetus. This prototype is one of the latest in a series of organ-on-a-chip technologies developed to accelerate biomedical advances. The study was conducted by an interdisciplinary team of researchers from the NIH’s Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD), the University of Pennsylvania, Wayne State University/Detroit Medical Center, Seoul National University and Asan Medical Center in South Korea.

 

The authors created the placenta-on-a-chip technology to address these challenges, using human cells in a structure that more closely resembles the placenta’s maternal-fetal barrier. The device consists of a semi-permeable membrane between two tiny chambers, one filled with maternal cells derived from a delivered placenta and the other filled with fetal cells derived from an umbilical cord. After designing the structure of the model, the authors tested its function by evaluating the transfer of glucose (a sugar the body derives from carbohydrates and uses for energy) from the maternal compartment to the fetal compartment. The successful transfer of glucose in the device mirrored what occurs in the body, thus validating the technology. According to the authors, the chip may allow experiments to be performed more efficiently and at lower cost than animal studies, and with further improvements, this technology may lead to a better understanding of normal placental processes and placental disorders.
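
As a rough illustration of the kind of transport the chip is built to model, the short Python sketch below applies Fick's law: the glucose flux across the membrane is proportional to the concentration difference between the maternal and fetal chambers. The permeability and concentration values are invented placeholders for illustration only; they are not measurements from the study.

# Minimal sketch of glucose moving across a semi-permeable membrane, in the
# spirit of the chip's maternal and fetal chambers. All values are invented
# placeholders, NOT measurements from the study.
def glucose_flux(permeability_cm_per_s: float, maternal_mM: float, fetal_mM: float) -> float:
    """Fick's law: flux is proportional to the concentration difference."""
    return permeability_cm_per_s * (maternal_mM - fetal_mM)
maternal, fetal = 10.0, 0.0   # mM; hypothetical: glucose starts on the maternal side only
permeability = 1e-5           # cm/s; placeholder membrane permeability
for step in range(4):
    flux = glucose_flux(permeability, maternal, fetal)
    print(f"step {step}: maternal={maternal:5.2f} mM  fetal={fetal:5.2f} mM  flux={flux:.2e}")
    transferred = 0.05 * (maternal - fetal)   # crude update: 5% of the gradient equilibrates per step
    maternal -= transferred
    fetal += transferred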

 

VIROLOGY


Monoclonal Antibody Protects Mice Against a Broad Range of Polioviruses

 

The worldwide campaign to stop the transmission of wild polioviruses may soon eliminate all sources of pathogenic virus except for an insufficiently characterized group of immunodeficient patients incapable of resolving poliovirus infection. These patients are not only at significant risk of developing paralysis, but are also a source of virulent polioviruses that could restart virus circulation. No effective treatment is known that would help the patients to clear the infection, and therefore the search for new antiviral strategies is underway.

 

Results of a study in transgenic mice, published in the Journal of Clinical Virology (2015; 65:32-37) by scientists at the U.S. Food and Drug Administration (FDA), suggest that hybrid human/chimpanzee monoclonal antibodies (mAb) that were previously isolated in collaboration with the NIH could be used in combination with drugs, a vaccine, or both, to improve their efficacy against polioviruses. The results of the study showed that a specific antibody called mAb A12 neutralized a broad range of type 1 and type 2 polioviruses, as well as vaccine-derived poliovirus. The antibody also protected susceptible mice against lethal doses of the poliovirus when they received it before being exposed to the virus by intramuscular injection. mAb A12 also prevented paralysis of some mice that were treated hours after exposure to the virus. The A12 antibody did not interfere with the ability of poliovirus to stimulate the immune systems of vaccinated mice to develop antibodies that neutralized the virus. This suggests that treatment with antibody A12 in combination with polio vaccine might prevent development of polio symptoms until the immune system responds to either the vaccine or to contact with the wild poliovirus itself. The authors also showed that mAb A12 can neutralize poliovirus strains that are resistant to the investigational anti-poliovirus drug, V-073.

 

While further study is needed, the results suggest that the antibody could be used to supplement antiviral therapy and prevent the emergence of drug-resistant viruses. The authors hypothesized that the antibody could be used to treat chronically infected individuals so that they stop shedding the virus, and for emergency post-exposure prophylaxis; studies are currently underway to test this hypothesis. According to the authors, the findings of the study provide support for initiating clinical evaluations in people of antibodies like mAb A12 for the treatment and prevention of polio.

 

New Device Helps the Blind to Process Visual Signals Via the Tongue

 

According to the National Institutes of Health’s National Eye Institute (NEI), in 2010 more than 1.2 million people in the United States were blind. NEI projects that number of Americans who are blind will rise to 2.1 million by 2030 and 4.1 million by 2050.

 

The FDA has allowed marketing of a new device that when used along with other assistive devices, like a cane or guide dog, can help orient people who are blind by helping them process visual images with their tongues. The BrainPort V100 is a battery-powered device that includes a video camera mounted on a pair of glasses and a small, flat intra-oral device containing a series of electrodes that the user holds against their tongue. Software converts the image captured by the video camera into electrical signals that are then sent to the intra-oral device and perceived as vibrations or tingling on the user’s tongue. With training and experience, the user learns to interpret the signals to determine the location, position, size, and shape of objects, and to determine if objects are moving or stationary.
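
To make the signal path concrete, here is a toy Python sketch of the general idea: a camera frame is reduced to a coarse grid of stimulation intensities, one value per electrode. The 20 x 20 grid size and the block-averaging mapping are our own illustrative assumptions; they do not describe Wicab's actual BrainPort software.

# Toy illustration of the signal path described above: a camera frame is
# reduced to a coarse grid of stimulation intensities, one per electrode.
# The 20x20 grid and block-averaging are illustrative assumptions only;
# they do not describe Wicab's actual BrainPort software.
import numpy as np
def frame_to_electrode_grid(frame: np.ndarray, grid: int = 20) -> np.ndarray:
    """Downsample a grayscale frame to a grid x grid array of 0-1 intensities."""
    h, w = frame.shape
    bh, bw = h // grid, w // grid
    blocks = frame[:bh * grid, :bw * grid].reshape(grid, bh, grid, bw)
    return blocks.mean(axis=(1, 3)) / 255.0   # average each pixel block into one electrode value
# Synthetic 480x640 frame containing one bright square "object"
frame = np.zeros((480, 640), dtype=np.uint8)
frame[180:300, 260:380] = 255
grid_values = frame_to_electrode_grid(frame)
print(grid_values.shape)        # (20, 20)
print(grid_values.round(1))     # the bright region shows up as high intensities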

 

The FDA reviewed the data for the BrainPort V100 through the de novo premarket review pathway, a regulatory pathway for some low- to moderate-risk medical devices that are not substantially equivalent to an already legally marketed device. Clinical data supporting the safety and effectiveness of the BrainPort V100 included several assessments, such as object recognition and word identification, as well as oral health exams to determine risks associated with holding the intra-oral device in the mouth. Studies showed that 69% of the 74 subjects who completed one year of training with the device were successful at the object recognition test. Some patients reported burning, stinging or a metallic taste associated with the intra-oral device. There were no serious device-related adverse events. BrainPort is manufactured by Wicab, Inc., in Middleton, Wisconsin.

 

Rum Cake with Candied Chestnuts & Marzipan Sprinkles

20150622-1

Marzipan sprinkles on top. Make the first part of your meal low calorie, so you can have this delicious cake for dessert. ©Joyce Hays, Target Health Inc.

 

20150622-2

Here is the cake before adding the marzipan sprinkles.  It would be fine to serve, just like this, or garnish with chopped candied chestnuts and/or candied cherries.  ©Joyce Hays, Target Health Inc.

 

20150622-3

Moist, luscious treat ©Joyce Hays, Target Health Inc.

 

20150622-4

When Jules came back from his DIA presentation in DC, half a cake was gone.  I knew this would be the next recipe for the ON TARGET newsletter, and quickly made a second cake, on Friday. ©Joyce Hays, Target Health Inc.

 

20150622-5

 

Spongecake Ingredients

 

Butter, for preparing pan

3/4 cup almond flour, plus more to prepare pan

4 eggs, separated

3/4 cup plus 2 Tablespoons sugar

1/2 teaspoon vanilla extract

Dash salt

1 teaspoon baking powder

1/2 stick unsalted butter, melted, at room temperature, not on the stove over heat.

 

 

20150622-6

Cake ingredients in mixing bowl    ©Joyce Hays, Target Health Inc.

 

Directions to Make Spongecake

 

Heat the oven to 350 degrees. Butter and flour a 9-inch round cake pan.

Combine the egg yolks and 3/4 cup sugar in a mixing bowl, whisking until thick and light. Stir in the vanilla.

Beat the egg whites and salt until they form soft peaks. Add the remaining 2 Tablespoons of sugar and beat until glossy, 15 to 20 seconds.

Incorporate 1/3 of the egg whites into the yolk mixture. Sift 1/4 cup of the flour and all of the baking powder over the mixture and slowly fold it in. Repeat, folding in whites and flour, until the last of the flour is nearly incorporated. Fold in the melted butter last.

Pour the batter into the pan. Bake just until the cake pulls away from the pan, 30 minutes. Cool slightly before removing it from the pan.

 

Items to Assemble

 

1 (9-inch) spongecake

1/4 cup light rum

1/2 cup chopped candied chestnuts & cherries

1 1/2 pounds ricotta cheese (2 1/2 cups)

1 cup sifted powdered sugar

1/2 teaspoon cinnamon

1/4 teaspoon almond extract

More candied chestnuts, for garnish, along with sliced candied cherries (optional)

Marzipan pieces, for garnish (optional).  Chop marzipan in food processor, if you plan to use these.

6 whole candied cherries, sliced, for garnish (optional)

 

 

20150622-7

After the cake cooled, it was cut in two. Before I put the first layer on this serving dish, I poured some (additional) rum onto the dish and put this half of the cake on top of the rum, to soak it up. ©Joyce Hays, Target Health Inc.

 

20150622-8

This will be the top layer.  ©Joyce Hays, Target Health Inc.

 

Directions on How to Put Together

 

Cut the spongecake into 2 layers.

Pour rum over the candied chestnuts and cherries.  You can do this the night before, so the rum gets well soaked up.

 

 

20150622-9

Beating the cheese, etc. to make the frosting and filling. Next, I will add the chopped chestnuts & cherries to this mixture. ©Joyce Hays, Target Health Inc.

 

Beat the cheese, powdered sugar, cinnamon and almond extract until creamy, about 2 minutes.

Drain the fruit and stir it in, blending well.  Save any of the rum, to use on individual servings.

Place 1 cake layer on a large cake plate and spread with 1/2 of the filling. Top with the second layer and spread the remaining filling over the top of the cake.

Garnish with more candied chestnuts and the cherries. Chill 1 hour.

 

 

20150622-10

Rum soaked, chopped candied chestnuts & cherries.  ©Joyce Hays, Target Health Inc.

 

20150622-11

Second cake had more rum and was even more fantastic than the first. However, if you don’t like added alcohol, be guided by the first cake. ©Joyce Hays, Target Health Inc.

 

20150622-12

He says "tom-a-to" and I say "tom-ah-to"; he drank red and I drank white, and, well, it’s our anniversary, and we’re having so much fun, we agreed to stay married for at least another 35 years.

 

The meal with the new cake recipe started out with two different wines, the red and white (above), and our favorite salad, the avocado, tomato, cucumber salad with oil, garlic and lemon dressing. Then came eggplant parmesan, followed by his favorite casserole, tuna with green peas. We finished the wine before dessert. The last course was the new rum(ier) cake, which is so rich, we started with thin slivers.

 

My rule of thumb with desserts is no wine, if the dessert is sweeter than the wine. In that case, serve a liqueur or just coffee and/or tea.

 

We celebrated our anniversary all weekend, going one night to the theater. (I beat him in Scrabble for the 5th weekend in a row.)

 

Sunday night we had a new salmon recipe, which I will share soon in the newsletter. We had a fabulous weekend, and we hope you did too!

 

From Our Table to Yours!

 

Bon Appetit!

 

Date:
June 18, 2015

Source:
University of Colorado at Boulder

Summary:
A dramatic increase in the rate of earthquakes in the central and eastern US since 2009 is associated with fluid injection wells used in oil and gas development, says a new study.

 

20150619-1

A new study ties high-rate injection wells, like this saltwater disposal well in Colorado, to an enormous increase in earthquakes.
Credit: Bill Ellsworth, USGS

 

 

A dramatic increase in the rate of earthquakes in the central and eastern U.S. since 2009 is associated with fluid injection wells used in oil and gas development, says a new study by the University of Colorado Boulder and the U.S. Geological Survey.

The number of earthquakes associated with injection wells has skyrocketed from a handful per year in the 1970s to more than 650 in 2014, according to CU-Boulder doctoral student Matthew Weingarten, who led the study. The increase included several damaging quakes in 2011 and 2012 ranging between magnitudes 4.7 and 5.6 in Prague, Oklahoma; Trinidad, Colorado; Timpson, Texas; and Guy, Arkansas.

“This is the first study to look at correlations between injection wells and earthquakes on a broad, nearly national scale,” said Weingarten of CU-Boulder’s geological sciences department. “We saw an enormous increase in earthquakes associated with these high-rate injection wells, especially since 2009, and we think the evidence is convincing that the earthquakes we are seeing near injection sites are induced by oil and gas activity.”

A paper on the subject appears in the June 18 issue of Science.

The researchers found that “high-rate” injection wells — those pumping more than 300,000 barrels of wastewater a month into the ground — were much more likely to be associated with earthquakes than lower-rate injection wells. Injections are conducted either for enhanced oil recovery, which involves the pumping of fluid into depleted oil reservoirs to increase oil production, or for the disposal of salty fluids produced by oil and gas activity, said Weingarten.
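
The 300,000-barrels-per-month cutoff is the one operational detail the summary gives, so the small Python sketch below simply flags wells above that threshold. The well records and field names are hypothetical; only the threshold comes from the article.

# Sketch of the "high-rate" cut described above: wells injecting more than
# 300,000 barrels of wastewater per month are flagged. The well records and
# field names are hypothetical; only the threshold comes from the article.
HIGH_RATE_BARRELS_PER_MONTH = 300_000
wells = [
    {"well_id": "A-101", "state": "OK", "monthly_barrels": 450_000},
    {"well_id": "B-202", "state": "TX", "monthly_barrels": 120_000},
    {"well_id": "C-303", "state": "CO", "monthly_barrels": 310_000},
]
high_rate = [w for w in wells if w["monthly_barrels"] > HIGH_RATE_BARRELS_PER_MONTH]
for w in high_rate:
    print(f"{w['well_id']} ({w['state']}): {w['monthly_barrels']:,} barrels/month -> high-rate")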

Co-authors on the study include CU-Boulder Professor Shemin Ge of the geological sciences department and Jonathan Godt, Barbara Bekins and Justin Rubinstein of the U.S. Geological Survey (USGS). Godt is based in Denver and Bekins and Rubinstein are based in Menlo Park, California.

The team assembled a database of roughly 180,000 injection wells in the study area, which ranged from Colorado to the East Coast. More than 18,000 wells were associated with earthquakes — primarily in Oklahoma and Texas — and 77 percent of associated injection wells remain active, according to the study authors.

Of the wells associated with earthquakes, 66 percent were oil recovery wells, said Ge. But active saltwater disposal wells were 1.5 times as likely as oil recovery wells to be associated with earthquakes. “Oil recovery wells involve an input of fluid to ‘sweep’ oil toward a second well for removal, while wastewater injection wells only put fluid into the system, producing a larger pressure change in the reservoir,” Ge said.

Enhanced oil recovery wells differ from hydraulic fracturing, or fracking wells, in that they usually inject for years or decades and are operated in tandem with conventional oil production wells, said Weingarten. In contrast, fracking wells typically inject for just hours or days.

The team noted that thousands of injection wells have operated during the last few decades in the central and eastern U.S. without a ramp-up in seismic events. “It’s really the wells that have been operating for a relatively short period of time and injecting fluids at high rates that are strongly associated with earthquakes,” said Weingarten.

In addition to looking at injection rates of individual wells over the study area, the team also looked at other aspects of well operations including a well’s cumulative injected volume of fluid over time, the monthly injection pressure at individual wellheads, the injection depth, and their proximity to “basement rock” where earthquake faults may lie. None showed significant statistical correlation to seismic activity at a national level, according to the study.
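
As a loose illustration of what testing for such a statistical association can look like, the Python sketch below compares two hypothetical well attributes against nearby earthquake counts using a simple correlation coefficient. The numbers are invented, and the study's actual statistical analysis is certainly more involved than this.

# Loose illustration of testing an association between a well attribute and
# nearby earthquake counts. The numbers are invented; the study's actual
# statistical analysis is more involved than a single correlation coefficient.
import numpy as np
injection_rate = np.array([350, 50, 420, 80, 610, 120, 500, 90])       # thousand barrels/month (hypothetical)
injection_depth = np.array([1.2, 2.5, 1.8, 2.2, 1.5, 2.8, 1.1, 2.0])   # km (hypothetical)
quake_count = np.array([14, 0, 22, 1, 31, 2, 18, 0])                   # associated earthquakes (hypothetical)
for name, values in [("injection rate", injection_rate), ("injection depth", injection_depth)]:
    r = np.corrcoef(values, quake_count)[0, 1]
    print(f"correlation of {name} with earthquake count: {r:+.2f}")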

Oklahoma had the most seismic activity of any state associated with wastewater injection wells. But parts of Colorado, west Texas, central Arkansas and southern Illinois also showed concentrations of earthquakes associated with such wells, said Weingarten.

In Colorado, the areas most affected by earthquakes associated with injection wells were the Raton Basin in the southern part of the state and near Greeley north of Denver.

“People can’t control the geology of a region or the scale of seismic stress,” Weingarten said. “But managing rates of fluid injection may help decrease the likelihood of induced earthquakes in the future.”

The study was supported by the USGS John Wesley Powell Center for Analysis and Synthesis, which provides opportunities for collaboration between government, academic and private sector scientists.


Story Source:

The above post is reprinted from materials provided by University of Colorado at Boulder. Note: Materials may be edited for content and length.


Journal Reference:

  1. M. Weingarten, S. Ge, J. W. Godt, B. A. Bekins, J. L. Rubinstein. High-rate injection is associated with the increase in U.S. mid-continent seismicity. Science, 2015 DOI: 10.1126/science.aab1345

 

Source: University of Colorado at Boulder. “US mid-continent seismicity linked to high-rate injection wells: Earthquake numbers skyrocket.” ScienceDaily. ScienceDaily, 18 June 2015. <www.sciencedaily.com/releases/2015/06/150618145901.htm>.
