Date:
August 27, 2015

Source:
Howard Hughes Medical Institute (HHMI)

Summary:
Scientists can now watch dynamic biological processes with unprecedented clarity in living cells using new imaging techniques. The new methods dramatically improve on the spatial resolution provided by structured illumination microscopy, one of the best imaging methods for seeing inside living cells.

 


This is a still image from a video showing the interaction of filamentous actin (mApple-F-tractin, purple) with myosin IIA bipolar head groups (EGFP-myosin IIA, green) at 20-second intervals for 100 time points, as seen with high-NA TIRF-SIM.
Credit: Betzig Lab, HHMI/Janelia Research Campus

 

 

Scientists can now watch dynamic biological processes with unprecedented clarity in living cells using new imaging techniques developed by researchers at the Howard Hughes Medical Institute’s Janelia Research Campus. The new methods dramatically improve on the spatial resolution provided by structured illumination microscopy, one of the best imaging methods for seeing inside living cells.

The vibrant videos produced with the new technology show the movement and interactions of proteins as cells remodel their structural supports or reorganize their membranes to take up molecules from outside the cell. Janelia group leader Eric Betzig, postdoctoral fellow Dong Li and their colleagues have added the two new technologies — both variations on SIM — to the set of tools available for super-resolution imaging. Super-resolution optical microscopy produces images whose spatial resolution surpasses a theoretical limit imposed by the wavelength of light, offering extraordinary visual detail of structures inside cells. But until now, super-resolution methods have been impractical for use in imaging living cells.

“These methods set a new standard for how far you can push the speed and non-invasiveness of super-resolution imaging,” Betzig says of the techniques his team described in the August 28, 2015, issue of the journal Science. “This will bring super-resolution to live-cell imaging for real.”

In traditional SIM, the sample under the lens is observed while it is illuminated by a pattern of light (more like a bar code than the light from a lamp). Several different light patterns are applied, and the resulting moiré patterns are captured from several angles each time by a digital camera. Computer software then extracts the information in the moiré images and translates it into a three-dimensional, high-resolution reconstruction. The final reconstruction has twice the spatial resolution that can be obtained with traditional light microscopy.
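In frequency-space terms — the textbook description of SIM, not anything specific to this paper — the illumination pattern at spatial frequency k_p beats against sample detail at frequency k_s, producing moiré fringes at the difference frequency:

```latex
% Moire (difference-frequency) mixing in structured illumination:
k_{\text{moire}} = k_s - k_p, \qquad
|k_s| \;\le\; |k_{\text{moire}}| + |k_p| \;\le\; k_0 + k_0 \;=\; 2k_0
```

The objective transmits only frequencies up to the diffraction limit k_0, but because the illumination pattern itself can be as fine as k_0, sample detail out to twice that limit is folded into the observable band and can be computationally unmixed — the factor-of-two gain described above.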

Betzig was one of three scientists awarded the 2014 Nobel Prize in Chemistry for the development of super-resolved fluorescence microscopy. He says SIM has not received as much attention as other super-resolution methods largely because those other methods offer more dramatic gains in spatial resolution. But he notes that SIM has always offered two advantages over alternative super-resolution methods, including photoactivated localization microscopy (PALM), which he developed in 2006 with Janelia colleague Harald Hess.

Both PALM and stimulated emission depletion (STED) microscopy, the other super-resolution technique recognized with the 2014 Nobel Prize, illuminate samples with so much light that fluorescently labeled proteins fade and the sample is quickly damaged, making prolonged imaging impossible. SIM, however, is different. “I fell in love with SIM because of its speed and the fact that it took so much less light than the other methods,” Betzig says.

Betzig began working with SIM shortly after the death in 2011 of one of its pioneers, Mats Gustafsson, who was a group leader at Janelia. Betzig was already convinced that SIM had the potential to generate significant insights into the inner workings of cells, and he suspected that improving the technique’s spatial resolution would go a long way toward increasing its use by biologists.

Gustafsson and graduate student Hesper Rego had achieved higher-resolution SIM with a variation called saturated depletion non-linear SIM, but that method trades improvements in spatial resolution for harsher conditions and a loss of speed. Betzig saw a way around that trade-off.

Saturated depletion enhances the resolution of SIM images by taking advantage of fluorescent protein labels that can be switched on and off with light. To generate an image, all of the fluorescent labels in a sample are switched on, then a wave of light is used to deactivate most of them. After exposure to the deactivating light, only molecules at the darkest regions of the light wave continue to fluoresce. These provide higher frequency information and sharpen the resulting image. An image is captured and the cycle is repeated 25 times or more to generate data for the final image. The principle is very similar to the way super-resolution is achieved in STED or a related method called RESOLFT, Betzig says.

The method is not suited to live imaging, he says, because it takes too long to switch the photoactivatable molecules on and off. What’s more, the repeated light exposure damages cells and their fluorescent labels. “The problem with this approach is that you first turn on all the molecules, then you immediately turn off almost all the molecules. The molecules you’ve turned off don’t contribute anything to the image, but you’ve just fried them twice. You’re stressing the molecules, and it takes a lot of time, which you don’t have, because the cell is moving.”

The solution was simple, Betzig says: “Don’t turn on all of the molecules. There’s no need to do that.” Instead, the new method, called patterned photoactivation non-linear SIM, begins by switching on just a subset of fluorescent labels in a sample with a pattern of light. “The patterning of that gives you some high resolution information already,” he explains. A new pattern of light is used to deactivate molecules, and additional information is read out of their deactivation. The combined effect of those patterns leads to final images with 62-nanometer resolution — better than standard SIM and a three-fold improvement over the limits imposed by the wavelength of light.
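As a rough sanity check on that three-fold figure — with illustrative numbers, since the exact wavelengths and apertures are not given here — the conventional diffraction (Abbe) limit for roughly 488-nanometer excitation light works out to about three times 62 nanometers:

```latex
d_{\text{Abbe}} = \frac{\lambda}{2\,\mathrm{NA}}
  \approx \frac{488\ \text{nm}}{2 \times 1.3}
  \approx 188\ \text{nm},
\qquad
\frac{188\ \text{nm}}{62\ \text{nm}} \approx 3
```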

“We can do it and we can do it fast,” he says. That’s important, he says, because for imaging dynamic processes, an increase in spatial resolution is meaningless without a corresponding increase in speed. “If something in the cell is moving at a micron a second and I have one micron resolution, I can take that image in a second. But if I have 1/10-micron resolution, I have to take the data in a tenth of a second, or else it will smear out,” he explains.
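Betzig's rule of thumb amounts to a simple constraint: to avoid motion blur, the acquisition time must be shorter than the time a feature moving at speed v takes to cross one resolution element Δx:

```latex
t_{\text{acq}} \;\lesssim\; \frac{\Delta x}{v},
\qquad \text{e.g.}\quad
\frac{0.1\ \mu\text{m}}{1\ \mu\text{m/s}} = 0.1\ \text{s}
```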

Patterned photoactivation non-linear SIM captures the 25 images that go into a final reconstruction in about one-third of a second. Because it does so efficiently, using low intensity light and gleaning information from every photon emitted from a sample’s fluorescent labels, labels are preserved so that the microscope can image longer, letting scientists watch more action unfold.

The team used patterned photoactivation non-linear SIM to produce videos showing structural proteins breaking down and reassembling themselves as cells move and change shape, as well as the dynamics of tiny pits on cell surfaces called caveolae.

Betzig’s team also reports in the Science paper that they can boost the spatial resolution of SIM to 84 nanometers by imaging with a commercially available microscope objective with an ultra-high numerical aperture. The aperture restricts light exposure to a very small fraction of a sample, limiting damage to cells and fluorescent molecules, and the method can be used to image multiple colors at the same time, so scientists can simultaneously track several different proteins.

Using the high numerical aperture approach, Betzig’s team was able to watch the movements and interactions of several structural proteins during the formation of focal adhesions, physical links between the interior and exterior of a cell. They also followed the growth and internalization of clathrin-coated pits, structures that facilitate the intake of molecules from outside of the cell. Their quantitative analysis answered several questions about the pits’ distribution and the relationship between pits’ size and lifespan that could not be addressed with previous imaging methods.

Finally, by combining the high numerical-aperture approach with patterned photoactivation non-linear SIM, Betzig and his colleagues could follow two proteins at a time with higher resolution than the high numerical aperture approach offered on its own.

Betzig’s team is continuing to develop its SIM technologies and says further improvements are likely. The researchers are also eager to work with biologists to continue to explore potential applications and refine the techniques’ usability.

For now, scientists who want to experiment with the new SIM methods can arrange to do so through Janelia’s Advanced Imaging Center, which provides access to cutting-edge microscopy technology at no cost. Eventually, Betzig says, it should be fairly straightforward to make the SIM technologies accessible and affordable to other labs. “Most of the magic is in the software, not the hardware,” he says.


Story Source:

The above post is reprinted from materials provided by Howard Hughes Medical Institute (HHMI). Note: Materials may be edited for content and length.


Journal Reference:

  1. D. Li, L. Shao, B.-C. Chen, X. Zhang, M. Zhang, B. Moses, D. E. Milkie, J. R. Beach, J. A. Hammer, M. Pasham, T. Kirchhausen, M. A. Baird, M. W. Davidson, P. Xu, E. Betzig. Extended-resolution structured illumination imaging of endocytic and cytoskeletal dynamics. Science, 2015; 349 (6251): aab3500. DOI: 10.1126/science.aab3500

 

Source: Howard Hughes Medical Institute (HHMI). “Imaging techniques set new standard for super-resolution in live cells.” ScienceDaily. ScienceDaily, 27 August 2015. <www.sciencedaily.com/releases/2015/08/150827143751.htm>.

Date:
August 26, 2015

Source:
The Lancet

Summary:
Global life expectancy has risen by more than six years since 1990 as healthy life expectancy grows; ischemic heart disease, lower respiratory infections, and stroke cause the most health loss around the world.

 


People around the world are living longer, even in some of the poorest countries, but a complex mix of fatal and nonfatal ailments causes a tremendous amount of health loss.
Credit: © Maksim Šmeljov / Fotolia

 

 

Global life expectancy has risen by more than six years since 1990 as healthy life expectancy grows; ischemic heart disease, lower respiratory infections, and stroke cause the most health loss around the world.

People around the world are living longer, even in some of the poorest countries, but a complex mix of fatal and nonfatal ailments causes a tremendous amount of health loss, according to a new analysis of all major diseases and injuries in 188 countries.

Thanks to marked declines in death and illness caused by HIV/AIDS and malaria in the past decade and significant advances made in addressing communicable, maternal, neonatal, and nutritional disorders, health has improved significantly around the world. Global life expectancy at birth for both sexes rose by 6.2 years (from 65.3 in 1990 to 71.5 in 2013), while healthy life expectancy, or HALE, at birth rose by 5.4 years (from 56.9 in 1990 to 62.3 in 2013).

Healthy life expectancy takes into account not just mortality but also the impact of nonfatal conditions and summarizes years lived with disability and years lost due to premature mortality. The increase in healthy life expectancy has not been as dramatic as the growth of life expectancy, and as a result, people are living more years with illness and disability.
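The widening gap is visible in the study's own numbers: the difference between total and healthy life expectancy is the expected number of years lived in poor health. A quick check, using plain arithmetic on the figures quoted above:

```python
# Global life expectancy (LE) vs. healthy life expectancy (HALE), at birth, both sexes.
le_1990, hale_1990 = 65.3, 56.9
le_2013, hale_2013 = 71.5, 62.3

# Expected years lived with illness or disability = LE - HALE.
print(round(le_1990 - hale_1990, 1))  # 8.4 years in 1990
print(round(le_2013 - hale_2013, 1))  # 9.2 years in 2013 -- about 0.8 more years in poor health
```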

“Global, regional, and national disability-adjusted life years (DALYs) for 306 diseases and injuries and healthy life expectancy (HALE) for 188 countries, 1990-2013: quantifying the epidemiological transition” examines fatal and nonfatal health loss across countries. Published in The Lancet on August 27, the study was conducted by an international consortium of researchers working on the Global Burden of Disease study and led by the Institute for Health Metrics and Evaluation (IHME) at the University of Washington.

“The world has made great progress in health, but now the challenge is to invest in finding more effective ways of preventing or treating the major causes of illness and disability,” said Professor Theo Vos of IHME, the study’s lead author.

For most countries, changes in healthy life expectancy for males and females between 1990 and 2013 were significant and positive, but in dozens of countries, including Botswana, Belize, and Syria, healthy life expectancy in 2013 was not significantly higher than in 1990. In some of those countries, including South Africa, Paraguay, and Belarus, healthy life expectancy has actually dropped since 1990. People born in Lesotho and Swaziland in 2013 could expect to live at least 10 fewer years in good health than people born in those countries two decades earlier. People in countries such as Nicaragua and Cambodia have experienced dramatic increases in healthy life expectancy since 1990, of 14.7 years and 13.9 years, respectively. The reverse was true for people in Botswana and Belize, which saw declines of 2 years and 1.3 years, respectively.

The difference between countries with the highest and lowest healthy life expectancies is stark. In 2013, Lesotho had the lowest, at 42 years, and Japan had the highest globally, at 73.4 years. Even regionally, there is significant variation. Cambodians and Laotians born in 2013 would have healthy life expectancies of only 57.5 years and 58.1 years, respectively, but people born in nearby Thailand and Vietnam could live nearly 67 years in good health.

As both life expectancy and healthy life expectancy increase, changes in rates of health loss become increasingly crucial. The study’s researchers use DALYs, or disability-adjusted life years, to compare the health of different populations and health conditions across time. One DALY equals one lost year of healthy life and is measured by the sum of years of life lost to early death and years lived with disability. The leading global causes of health loss, as measured by DALYs, in 2013 were ischemic heart disease, lower respiratory infections, stroke, low back and neck pain, and road injuries. These causes differed by gender: for males, road injuries were a top-five cause of health loss, but these were not in the top 10 for females, who lose substantially more health to depressive disorders than their male counterparts.
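In symbols, the metric described here reduces to a sum of two components:

```latex
\mathrm{DALY} = \mathrm{YLL} + \mathrm{YLD}
```

where YLL is years of life lost to premature mortality and YLD is years lived with disability, so one DALY represents one lost year of healthy life.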

Ethiopia is one of several countries that have been rising to the challenge to ensure that people live lives that are both longer and healthier. In 1990, Ethiopians could expect to live 40.8 healthy years. But by 2013, the country saw an increase in healthy life expectancy of 13.5 years, more than double the global average, to 54.3 years.

“Ethiopia has made impressive gains in health over the past two decades, with significant decreases in rates of diarrheal disease, lower respiratory infection, and neonatal disorders,” said Dr. Tariku Jibat Beyene of Addis Ababa University. “But ailments such as heart disease, COPD, and stroke are causing an increasing amount of health loss. We must remain vigilant in addressing this new reality of Ethiopian health.”

The fastest-growing global cause of health loss between 1990 and 2013 was HIV/AIDS, which increased by 341.5%. But this dramatic rise masks progress in recent years; since 2005, health loss due to HIV/AIDS has diminished by 23.9% because of global focus on the disease. Ischemic heart disease, stroke, low back and neck pain, road injuries, and COPD have also caused an increasing amount of health loss since 1990. The impact of other ailments, such as diarrheal diseases, neonatal preterm birth complications, and lower respiratory infections, has significantly declined.

Across countries, patterns of health loss vary widely. The countries with the highest rates of DALYs are among the poorest in the world, and include several in sub-Saharan Africa: Lesotho, Swaziland, Central African Republic, Guinea-Bissau, and Zimbabwe. Countries with the lowest rates of health loss include Italy, Spain, Norway, Switzerland, and Israel.

Country-level variation also plays an important role in the changing disease burden, particularly for non-communicable diseases. For communicable, maternal, neonatal, and nutritional disorders, global DALY numbers and age-standardized rates declined between 1990 and 2013. While the number of DALYs for non-communicable diseases has increased during this period, age-standardized rates have declined.

The number of DALYs due to communicable, maternal, neonatal, and nutritional disorders has declined steadily, from 1.19 billion in 1990 to 769.3 million in 2013, while DALYs from non-communicable diseases have increased steadily, rising from 1.08 billion to 1.43 billion over the same period.

The study also examines the role that socio-demographic status — a combination of per capita income, population age, fertility rates, and years of schooling — plays in determining health loss. Researchers’ findings underscore that this accounts for more than half of the differences seen across countries and over time for certain leading causes of DALYs, including maternal and neonatal disorders. But the study notes that socio-demographic status is much less responsible for the variation seen for ailments including cardiovascular disease and diabetes.

“Factors including income and education have an important impact on health but don’t tell the full story,” said IHME Director Dr. Christopher Murray. “Looking at healthy life expectancy and health loss at the country level can help guide policies to ensure that people everywhere can have long and healthy lives no matter where they live.”

Countries with highest healthy life expectancy, both sexes, 2013

1 Japan

2 Singapore

3 Andorra

4 Iceland

5 Cyprus

6 Israel

7 France

8 Italy

9 South Korea

10 Canada

Countries with lowest healthy life expectancy, both sexes, 2013

1 Lesotho

2 Swaziland

3 Central African Republic

4 Guinea-Bissau

5 Zimbabwe

6 Mozambique

7 Afghanistan

8 Chad

9 South Sudan

10 Zambia

Leading causes of DALYs or health loss globally for both sexes, 2013

1 Ischemic heart disease

2 Lower respiratory infection

3 Stroke

4 Low back and neck pain

5 Road injuries

6 Diarrheal diseases

7 Chronic obstructive pulmonary disease

8 Neonatal preterm birth complications

9 HIV/AIDS

10 Malaria


Story Source:

The above post is reprinted from materials provided by The Lancet. Note: Materials may be edited for content and length.


Journal Reference:

  1. Murray CJL et al. Global, regional, and national disability-adjusted life years (DALYs) for 306 diseases and injuries and healthy life expectancy (HALE) for 188 countries, 1990–2013: a systematic analysis for the Global Burden of Disease Study 2013. The Lancet, 2015. DOI: 10.1016/S0140-6736(15)61340-X

 

Source: The Lancet. “Life expectancy climbs worldwide but people spend more years living with illness and disability.” ScienceDaily. ScienceDaily, 26 August 2015. <www.sciencedaily.com/releases/2015/08/150826204220.htm>.

Currents of semi-liquid rock key to seismicity away from tectonic plate boundaries

Date:
August 26, 2015

Source:
University of Southern California

Summary:
Scientists have discovered the mechanism that generates earthquakes that occur away from tectonic plate boundaries.

 


St. Louis skyline in State of Missouri. Seismicity on the North American plate occurs as far afield as southern Missouri, where earthquakes between 1811 and 1812 estimated at around magnitude 7 caused the Mississippi River to flow backward for hours.
Credit: © digidreamgrafix / Fotolia

 

 

It’s not a huge mystery why Los Angeles experiences earthquakes. The city sits near a boundary between two tectonic plates — they shift, we shake. But what about places that aren’t along tectonic plate boundaries?

For example, seismicity on the North American plate occurs as far afield as southern Missouri, where earthquakes between 1811 and 1812 estimated at around magnitude 7 caused the Mississippi River to flow backward for hours.

Until now, the cause of that seismicity has remained unclear.

While earthquakes along tectonic plate boundaries are caused by motion between the plates, earthquakes away from fault lines are primarily driven by motion beneath the plates, according to a new study published by USC scientist Thorsten Becker in Nature on Aug. 27.

Just beneath the Earth’s crust is a layer of hot, semi-liquid rock that is continually flowing — heating up and rising, then cooling and sinking. That convective process, interacting with the ever-changing motion of the plates at the surface, is driving intraplate seismicity and determining in large part where those earthquakes occur. To a lesser extent, the structure of the crust above also influences the location, according to the team’s models.

“This will not be the last word on the origin of strange earthquakes. However, our work shows how imaging advances in seismology can be combined with mantle flow modeling to probe the links between seismicity and mantle convection,” said Becker, lead author of the study and professor of Earth sciences at the USC Dornsife College of Letters, Arts and Sciences.

Becker and his team used an updated mantle flow model to study the motion beneath the mountain belt that cuts north to south through the interior of the Western United States.

The area is seismically active — the reason Yellowstone has geysers is that it sits atop a volcanic hotspot. Previously, scientists had suggested that the varying density of the plates was the main cause. (Imagine a mountain’s own weight causing it to want to flow apart and thin out.)

Instead, the team found that the small-scale convective currents beneath the plate correlated with seismic events above in a predictable way. They also tried using the varying plate density or “gravitational potential energy variations” to predict seismic events and found a much poorer correlation.

“This study shows a direct link between deep convection and shallow earthquakes that we didn’t anticipate, and it charts a course for improved seismic hazard mapping in plate interiors,” said Tony Lowry, co-author of the paper and associate professor of geophysics and geodynamics at Utah State University.


Story Source:

The above post is reprinted from materials provided by University of Southern California. The original item was written by Robert Perkins. Note: Materials may be edited for content and length.


Journal Reference:

  1. Thorsten W. Becker, Anthony R. Lowry, Claudio Faccenna, Brandon Schmandt, Adrian Borsa, Chunquan Yu. Western US intermountain seismicity caused by changes in upper mantle flow. Nature, 2015; 524 (7566): 458. DOI: 10.1038/nature14867

 

Source: University of Southern California. “Mechanism behind ‘strange’ earthquakes discovered: Currents of semi-liquid rock key to seismicity away from tectonic plate boundaries.” ScienceDaily. ScienceDaily, 26 August 2015. <www.sciencedaily.com/releases/2015/08/150826135726.htm>.

Date:
August 24, 2015

Source:
University of Toronto

Summary:
A team of physicists has taken a step toward making the essential building block of quantum computers out of pure light. Their advance has to do with logic gates that perform operations on input data to create new outputs.

 


This is an artist’s rendition of what occurs when one photon goes through a carefully prepared atomic medium at the same time as a pulse containing many photons. The change in colors represents the nonlinear phase shift picked up by each pulse, which is proportional to the number of photons in the other pulse. A measurable nonlinear phase shift caused by a single photon on a pulse with many photons could enable deterministic two-qubit gates, an important missing piece of optical quantum information processing hardware.
Credit: Amir Feizpour

 

 

A team of physicists at the University of Toronto (U of T) has taken a step toward making the essential building block of quantum computers out of pure light. Their advance, described in a paper published this week in Nature Physics, has to do with a specific part of computer circuitry known as a “logic gate.”

Logic gates perform operations on input data to create new outputs. In classical computers, logic gates take the form of diodes or transistors. But quantum computer components are made from individual atoms and subatomic particles. Information processing happens when the particles interact with one another according to the strange laws of quantum physics.

Light particles — known as “photons” — have many advantages in quantum computing, but it is notoriously difficult to get them to interact with one another in useful ways. This experiment demonstrates how to create such interactions.

“We’ve seen the effect of a single particle of light on another optical beam,” said Canadian Institute for Advanced Research (CIFAR) Senior Fellow Aephraim Steinberg, one of the paper’s authors and a researcher at U of T’s Centre for Quantum Information & Quantum Computing. “Normally light beams pass through each other with no effect at all. To build technologies like optical quantum computers, you want your beams to talk to one another. That’s never been done before using a single photon.”

The interaction was a two-step process. The researchers shot a single photon at rubidium atoms that they had cooled to a millionth of a degree above absolute zero. The photon became “entangled” with the atoms, which affected the way the rubidium interacted with a separate optical beam. The photon changed the atoms’ refractive index, causing a tiny but measurable “phase shift” in the beam.
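In textbook terms — generic symbols here, not the paper's notation — a refractive-index change Δn sustained over a medium of length L imparts an optical phase shift:

```latex
\Delta\phi = \frac{2\pi}{\lambda}\,\Delta n\, L
```

The figure caption's point is that the shift acquired by one pulse scales with the photon number in the other, so even a single photon leaves a small but, with enough averaging, measurable imprint on the beam.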

This process could be used as an all-optical quantum logic gate, allowing for inputs, information-processing and outputs.

“Quantum logic gates are the most obvious application of this advance,” said Steinberg. “But being able to see these interactions is the starting point of an entirely new field of optics. Most of what light does is so well understood that you wouldn’t think of it as a field of modern research. But two big exceptions are, ‘What happens when you deal with light one particle at a time?’ and ‘What happens when there are media like our cold atoms that allow different light beams to interact with each other?'”

Both questions have been studied, he says, but never together until now.


Story Source:

The above post is reprinted from materials provided by University of Toronto. Note: Materials may be edited for content and length.


Journal Reference:

  1. Amir Feizpour, Matin Hallaji, Greg Dmochowski & Aephraim M. Steinberg. Observation of the nonlinear phase shift due to single post-selected photons. Nature Physics, 2015; DOI: 10.1038/nphys3433

 

Source: University of Toronto. “A little light interaction leaves quantum physicists beaming.” ScienceDaily. ScienceDaily, 24 August 2015. <www.sciencedaily.com/releases/2015/08/150824114255.htm>.

Date:
August 21, 2015

Source:
Oregon State University

Summary:
A recalculation of the dates at which boulders were uncovered by melting glaciers at the end of the last Ice Age has conclusively shown that the glacial retreat was due to rising levels of carbon dioxide and other greenhouse gases, as opposed to other types of forces. The data help confirm predictions of future glacial retreat and suggest that most of the world’s glaciers may disappear in the next few centuries.

 


A melting tongue of Exit Glacier near Seward, Alaska, continues to dwindle and pour water into streams below, as it has been doing for decades.
Credit: Photo courtesy of Oregon State University

 

 

A recalculation of the dates at which boulders were uncovered by melting glaciers at the end of the last Ice Age has conclusively shown that the glacial retreat was due to rising levels of carbon dioxide and other greenhouse gases, as opposed to other types of forces.

Carbon dioxide levels are now significantly higher than they were at that time, as a result of the Industrial Revolution and other human activities since then. Because of that, the study confirms predictions of future glacial retreat and suggests that most of the world’s glaciers may disappear in the next few centuries.

The findings were published today in Nature Communications by researchers from Oregon State University, Boston College and other institutions. They erase some of the uncertainties about glacial melting that had been due to a misinterpretation of data from some of these boulders, which were exposed to the atmosphere more than 11,500 years ago.

“This shows that at the end of the last Ice Age, it was only the increase in carbon dioxide and other greenhouse gases that could have caused the loss of glaciers around the world at the same time,” said Peter Clark, a professor in the OSU College of Earth, Ocean and Atmospheric Sciences, and co-author on the study.

“This study validates predictions that future glacial loss will occur due to the ongoing increase in greenhouse gas levels from human activities,” Clark said. “We could lose 80-90 percent of the world’s glaciers in the next several centuries if greenhouse gases continue to rise at the current rate.”

Glacial loss in the future will contribute to rising sea levels and, in some cases, have impacts on local water supplies.

As the last Ice Age ended during a period of about 7,000 years, starting around 19,000 years ago, the levels of carbon dioxide in the atmosphere increased from 180 parts per million to 280 parts per million. But just in the past 150 years, they have surged from 280 to about 400 parts per million, far higher than what was required to put an end to the last Ice Age.
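Those figures imply a striking difference in rates — simple arithmetic on the numbers quoted above:

```python
# CO2 growth rates implied by the article's numbers (illustrative arithmetic only).
deglacial_rate = (280 - 180) / 7000   # ~0.014 ppm per year over ~7,000 years
modern_rate    = (400 - 280) / 150    # ~0.8 ppm per year over ~150 years
print(round(modern_rate / deglacial_rate))  # the modern rise is roughly 56x faster
```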

The new findings, Clark said, were based on a recalculation of the ages at which more than 1,100 glacial boulders from 159 glacial moraines around the world were exposed to the atmosphere after being buried for thousands of years under ice.

The exposure of the boulders to cosmic rays produced cosmogenic nuclides, which had been previously measured and used to date the event. But advances have been made in how to calibrate ages based on that data. Based on the new calculations, the rise in carbon dioxide levels — determined from ancient ice cores — matches up nicely with the time at which glacial retreat took place.
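For readers unfamiliar with the method: surface-exposure dating converts a measured cosmogenic-nuclide concentration into an age using a local production rate. Below is a minimal sketch of the textbook zero-erosion age equation — the study's actual recalibration involves far more sophisticated production-rate calibration, and the sample values here are purely illustrative:

```python
import math

def exposure_age_yr(n_conc, prod_rate, half_life_yr):
    """Zero-erosion surface-exposure age.

    Solves N = (P / lam) * (1 - exp(-lam * t)) for t, where N is the measured
    nuclide concentration (atoms/g), P the local production rate (atoms/g/yr),
    and lam the decay constant of the nuclide.
    """
    lam = math.log(2) / half_life_yr
    return -math.log(1 - n_conc * lam / prod_rate) / lam

# Hypothetical Be-10 measurement: the concentration and production rate are assumed.
print(round(exposure_age_yr(n_conc=6.9e4, prod_rate=6.0, half_life_yr=1.387e6)))  # ~11,500 yr
```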

“There had been a long-standing mystery about why these boulders were uncovered at the time they were, because it didn’t properly match the increase in greenhouse gases,” said Jeremy Shakun, a professor at Boston College and lead author on the study. “We found that the previous ages assigned to this event were inaccurate. The data now show that as soon as the greenhouse gas levels began to rise, the glaciers began to melt and retreat.”

There are other forces that can also cause glacial melting on a local or regional scale, the researchers noted, such as changes in the Earth’s orbit around the sun, or shifts in ocean heat distribution. These factors probably did have localized effects. But the scientists determined that only the change in greenhouse gas levels could have explained the broader global retreat of glaciers all at the same time.

In the study of climate change, glaciers have always been of considerable interest, because their long-term behavior is a more reliable barometer that helps sort out the ups-and-downs caused by year-to-year weather variability, including short-term shifts in temperature and precipitation.

Other collaborators on this research were from the University of Wisconsin, Purdue University, and the National Center for Atmospheric Research. The work was supported by the National Oceanic and Atmospheric Administration and the National Science Foundation.


Story Source:

The above post is reprinted from materials provided by Oregon State University. Note: Materials may be edited for content and length.


Journal Reference:

  1. Jeremy D. Shakun, Peter U. Clark, Feng He, Nathaniel A. Lifton, Zhengyu Liu, Bette L. Otto-Bliesner. Regional and global forcing of glacier retreat during the last deglaciation. Nature Communications, 2015; 6: 8059 DOI: 10.1038/ncomms9059

Source: Oregon State University. “Greenhouse gases caused glacial retreat during last Ice Age.” ScienceDaily. ScienceDaily, 21 August 2015. <www.sciencedaily.com/releases/2015/08/150821082727.htm>.

Scientists say increasing heat drives moisture from ground

Date:
August 20, 2015

Source:
The Earth Institute at Columbia University

Summary:
A new study says that global warming has measurably worsened the ongoing California drought. While scientists largely agree that natural weather variations have caused a lack of rain, an emerging consensus says that rising temperatures may be making things worse by driving moisture from plants and soil into the air. The new study is the first to estimate how much worse: as much as a quarter.

 


Drought in California.
Credit: © Tupungato / Fotolia

 

 

A new study says that global warming has measurably worsened the ongoing California drought. While scientists largely agree that natural weather variations have caused a lack of rain, an emerging consensus says that rising temperatures may be making things worse by driving moisture from plants and soil into the air. The new study is the first to estimate how much worse: as much as a quarter. The findings suggest that within a few decades, continually increasing temperatures and resulting moisture losses will push California into even more persistent aridity. The study appears this week in the journal Geophysical Research Letters.

“A lot of people think that the amount of rain that falls out the sky is the only thing that matters,” said lead author A. Park Williams, a bioclimatologist at Columbia University’s Lamont-Doherty Earth Observatory. “But warming changes the baseline amount of water that’s available to us, because it sends water back into the sky.”

The study adds to growing evidence that climate change is already bringing extreme weather to some regions. California is the world’s eighth-largest economy, ahead of most countries, but many scientists think that the nice weather it is famous for may now be in the process of going away. The record-breaking drought is now in its fourth year; it is drying up wells, affecting major produce growers and feeding wildfires now sweeping over vast areas.

The researchers analyzed multiple sets of month-by-month data from 1901 to 2014. They looked at precipitation, temperature, humidity, wind and other factors. They could find no long-term rainfall trend. But average temperatures have been creeping up — about 2.5 degrees Fahrenheit over the 114-year period — in step with building fossil-fuel emissions. Natural weather variations have made California unusually hot over the last several years; added to this was the background trend. Thus, when rainfall declined in 2012, the air sucked already scant moisture from soil, trees and crops harder than ever. The study did not look directly at snow, but in the past, gradual melting of the high-mountain winter snowpack has helped water the lowlands in warm months. Now, melting has accelerated, or the snowpack has not formed at all, helping make warm months even drier, according to other researchers.

Due to the complexity of the data, the scientists could put only a range, not a single number, on the proportion of the drought caused by global warming. The paper estimates 8 to 27 percent, but Williams said that somewhere in the middle–probably 15 to 20 percent–is most likely.

Last year, the U.S. National Oceanic and Atmospheric Administration sponsored a study that blamed the rain deficit on a persistent ridge of high-pressure air over the northeast Pacific, which has been blocking moisture-laden ocean air from reaching land. Lamont-Doherty climatologist Richard Seager, who led that study (and coauthored the new one), said the blockage probably has nothing to do with global warming; normal weather patterns will eventually push away the obstacle, and rainfall will return. In fact, most projections say that warming will eventually increase California’s rainfall a bit. But the new study says that evaporation will overpower any increase in rain, and then some. This means that by around the 2060s, more or less permanent drought will set in, interrupted only by the rainiest years. More intense rainfall is expected to come in short bursts, then disappear.

Many researchers believe that rain will resume as early as this winter. “When this happens, the danger is that it will lull people into thinking that everything is now OK, back to normal,” said Williams. “But as time goes on, precipitation will be less able to make up for the intensified warmth. People will have to adapt to a new normal.”

This study is not the first to make such assertions, but it is the most specific. A paper by scientists from Lamont-Doherty and Cornell University, published this February, warned that climate change will push much of the central and western United States into the driest period for at least 1,000 years. A March study out of Stanford University said that California droughts have been intensified by higher temperatures and gave similar warnings for the future.

A further twist was introduced in a 2010 study by researchers at the NASA Goddard Institute for Space Studies. They showed that massive irrigation from underground aquifers has been offsetting global warming in some areas, because the water cools the air. The effect has been especially sharp in California’s heavily irrigated Central Valley–possibly up to 3.5 degrees Fahrenheit during some seasons. Now, aquifers are dropping fast, sending irrigation on a downward trajectory. If irrigation’s cooling effect declines, this will boost air temperatures even higher, which will dry aquifers further, and so on. Scientists call this process “positive feedback.”

Climatologist Noah Diffenbaugh, who led the earlier Stanford research, said the new study is an important step forward. It has “brought together the most comprehensive set of data for the current drought,” he said. “It supports the previous work showing that temperature makes it harder for drought to break, and increases the long-term risk.”

Jonathan Overpeck, co-director of the Institute of the Environment at the University of Arizona, said, “It’s important to have quantitative estimates of how much human-caused warming is already making droughts more severe.” But, he said, “it’s troubling to know that human influence will continue to make droughts more severe until greenhouse gas emissions are cut back in a big way.”


Story Source:

The above post is reprinted from materials provided by The Earth Institute at Columbia University. Note: Materials may be edited for content and length.


Journal Reference:

  1. A.P. Williams et al. Contribution of anthropogenic warming to California drought during 2012–2014. Geophysical Research Letters, 2015 DOI: 10.1002/2015GL064924

 

Source: The Earth Institute at Columbia University. “Warming climate is deepening California drought: Scientists say increasing heat drives moisture from ground.” ScienceDaily. ScienceDaily, 20 August 2015. <www.sciencedaily.com/releases/2015/08/150820090713.htm>.

Date:
August 19, 2015

Source:
University of East Anglia

Summary:
New research could one day help build computers from DNA. Scientists have found a way to ‘switch’ the structure of DNA using copper salts and EDTA (Ethylenediaminetetraacetic acid) — an agent commonly found in shampoo and other household products. The applications for this discovery include nanotechnology — where DNA is used to make tiny machines, and in DNA-based computing — where computers are built from DNA rather than silicon.

 


Scientists have found a way to “switch” the structure of DNA using copper salts and EDTA (Ethylenediaminetetraacetic acid) — an agent commonly found in shampoo and other household products.
Credit: University of East Anglia

 

 

New research from the University of East Anglia could one day help build computers from DNA.

Scientists have found a way to ‘switch’ the structure of DNA using copper salts and EDTA (Ethylenediaminetetraacetic acid) — an agent commonly found in shampoo and other household products.

It was previously known that the structure of a piece of DNA could be changed using acid, which causes it to fold up into what is known as an ‘i-motif’.

But new research published today in the journal Chemical Communications reveals that the structure can be switched a second time into a hair-pin structure using positively-charged copper (copper cations). This change can also be reversed using EDTA.

The applications for this discovery include nanotechnology — where DNA is used to make tiny machines, and in DNA-based computing — where computers are built from DNA rather than silicon.

It could also be used for detecting the presence of copper cations, which are highly toxic to fish and other aquatic organisms, in water.

Lead researcher Dr Zoë Waller, from UEA’s School of Pharmacy, said: “Our research shows how the structure of our genetic material — DNA — can be changed and used in a way we didn’t realise.

“A single switch was possible before — but we show for the first time how the structure can be switched twice.

“A potential application of this finding could be to create logic gates for DNA based computing. Logic gates are an elementary building block of digital circuits — used in computers and other electronic equipment. They are traditionally made using diodes or transistors which act as electronic switches.

“This research expands how DNA could be used as a switching mechanism for a logic gate in DNA-based computing or in nano-technology.”
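One way to picture the double switch as a computing element is as a tiny state machine whose inputs are chemical signals. The sketch below is a toy abstraction of the chemistry reported here — the state and signal names are ours, not the study's:

```python
# Toy state machine for the reported two-step DNA switch.
# Acid folds the strand into an i-motif; copper cations switch it to a hairpin;
# EDTA chelates the copper and reverses that second switch.
TRANSITIONS = {
    ("unfolded", "acid"): "i-motif",
    ("i-motif", "Cu2+"): "hairpin",
    ("hairpin", "EDTA"): "i-motif",
}

def switch(state, signal):
    # Unrecognized (state, signal) pairs leave the structure unchanged.
    return TRANSITIONS.get((state, signal), state)

state = "unfolded"
for signal in ["acid", "Cu2+", "EDTA", "Cu2+"]:
    state = switch(state, signal)
    print(signal, "->", state)  # i-motif, hairpin, i-motif, hairpin
```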


Story Source:

The above post is reprinted from materials provided by University of East Anglia. Note: Materials may be edited for content and length.


Journal Reference:

  1. Henry Albert Day, Elisé Patricia Wright, Colin John MacDonald, Andrew James Gates, Zoë Ann Ella Waller. Reversible DNA i-motif to hairpin switching induced by copper(ii) cations. Chem. Commun., 2015; DOI: 10.1039/C5CC05111H

 

Source: University of East Anglia. “Building computers from DNA?.” ScienceDaily. ScienceDaily, 19 August 2015. <www.sciencedaily.com/releases/2015/08/150819083421.htm>.

Date:
August 18, 2015

Source:
Brown University

Summary:
A cooling, drying climate over the last 40 million years turned North America from a warm and wooded place into the drier, open plains we know today. A new study shows how dogs evolved in response to those changes, demonstrating that predators are sensitive to climate change because it alters the hunting opportunities in their habitat.

 


Two early dogs, Hesperocyon (left) and the later Sunkahetanka, were both ambush-style predators. As climate change transformed their habitat, dogs evolved pursuit hunting styles and forelimb anatomy to match.
Credit: Mauricio Anton

 

 

Old dogs can teach humans new things about evolution. In Nature Communications, a new study of North American dog fossils as old as 40 million years suggests that the evolutionary path of whole groups of predators can be a direct consequence of climate change.

“It’s reinforcing the idea that predators may be as directly sensitive to climate and habitat as herbivores,” said Christine Janis, professor of ecology and evolutionary biology at Brown University, who worked with lead author Borja Figueirido, a former Brown Fulbright postdoctoral researcher who is now a professor at the Universidad de Málaga in Spain. “Although this seems logical, it hadn’t been demonstrated before.”

The climate in North America’s heartland back around 40 million years ago was warm and wooded. Dogs are native to North America. The species of the time, fossils show, were small animals that would have looked more like mongooses than any dogs alive today and were well-adapted to that habitat. Their forelimbs were not specialized for running, retaining the flexibility to grapple with whatever meal unwittingly walked by.

But beginning just a few million years later, the global climate began cooling considerably and in North America the Rocky Mountains had reached a threshold of growth that made the continental interior much drier. The forests slowly gave way to open grasslands.

Pups of the plains

Did this transition affect the evolution of carnivores? To find out, Figueirido and the research team, including Jack Tseng of the American Museum of Natural History in New York, examined the elbows and teeth of 32 species of dogs spanning the period from ca. 40 million years ago to 2 million years ago. They saw clear patterns in those bones at the museum: At the same time that climate change was opening up the vegetation, dogs were evolving from ambushers to pursuit-pounce predators like modern coyotes or foxes — and ultimately to those dogged, follow-a-caribou-for-a-whole-day pursuers like wolves in the high latitudes.

“The elbow is a really good proxy for what carnivores are doing with their forelimbs, which tells their entire locomotion repertoire,” Janis said.

The telltale change in those elbows has to do with the structure of the base where the humerus articulates with the forearm, changing from one where the front paws could swivel (palms can be inward or down) for grabbing and wrestling prey to one with an always downward-facing structure specialized for endurance running. Modern cats still rely on ambush rather than the chase (cheetahs are the exception) and have the forelimbs to match, Janis said, but canines signed up for lengthier pursuits.

In addition, the dogs’ teeth trended toward greater durability, Figueirido’s team found, consistent perhaps with the need to chow down on prey that had been rolled around in the grit of the savannah, rather than a damp, leafy forest floor.

Not an ‘arms race’ of limbs

The study, with some of Janis’ prior research, suggests that predators do not merely evolve as an “arms race” response to their prey. They don’t develop forelimbs for speedy running just because the deer and the antelope run faster. While the herbivores of this time were evolving longer legs, the predator evolution evident in this study tracked in time directly with the climate-related changes to habitat rather than to the anatomy of their prey species.

After all, it wasn’t advantageous to operate as a pursuit-and-pounce predator until there was room to run.

“There’s no point in doing a dash and a pounce in a forest,” Janis quipped. “They’ll smack into a tree.”

If predators evolved with climate change over the last 40 million years, the authors argue, then they likely will have to continue adapting in response to the human-created climate change underway now. The new results could help predict the effects we are setting in motion.

“Now we’re looking into the future at anthropogenic changes,” Janis said.


Story Source:

The above post is reprinted from materials provided by Brown University. Note: Materials may be edited for content and length.


Journal Reference:

  1. B. Figueirido, A. Martín-Serra, Z. J. Tseng, C. M. Janis. Habitat changes and changing predatory habits in North American fossil canids. Nature Communications, 2015; 6: 7976. DOI: 10.1038/ncomms8976

 

Source: Brown University. “Fossil study: Dogs evolved with climate change.” ScienceDaily. ScienceDaily, 18 August 2015. <www.sciencedaily.com/releases/2015/08/150818120539.htm>.

Date:
August 17, 2015

Source:
University of California, Irvine

Summary:
A woman’s weight at birth, education level and marital status pre-pregnancy can have repercussions for two generations, putting her children and grandchildren at higher risk of low birth weight, according to a new study. The findings are the first to tie social and biological factors together using population data in determining causes for low birth weight.

 


Education level pre-pregnancy can be transmitted from mothers to daughters across at least three generations, and this intergenerational transmission appears to affect birth weight of future generations, Kane said.
Credit: © Victoria Andreas / Fotolia

 

 

A woman’s weight at birth, education level and marital status pre-pregnancy can have repercussions for two generations, putting her children and grandchildren at higher risk of low birth weight, according to a new study by Jennifer B. Kane, assistant professor of sociology at the University of California, Irvine. The findings are the first to tie social and biological factors together using population data in determining causes for low birth weight.

“We know that low-birth-weight babies are more susceptible to later physical and cognitive difficulties and that these difficulties can sharpen the social divide in the U.S. But knowing more about what causes low birth weight can help alleviate the intergenerational perpetuation of social inequality through poor infant health,” said Kane, formerly a postdoctoral scholar at The University of North Carolina at Chapel Hill, where the research was conducted. She joined UCI in July.

Published in the June issue of the Journal of Health and Social Behavior, the study is based on both the National Longitudinal Survey of Youth 1979 and the Children of the National Longitudinal Survey of Youth 1979. The former yielded birth weights and pre-pregnancy physical and social health data on female respondents as well as social data on their mothers, while the latter captured this data on the previous survey participants’ daughters. In total, Kane looked at 1,580 mother-daughter pairs, focusing on their weight at birth, marital status and education level.

“The odds of having a low-birth-weight baby were one and a half to two times greater for mothers who themselves were born low birth weight compared to mothers who were not born low birth weight,” she said. “But also important are social factors, including education and marital status. Putting all of these factors — both intergenerational and intragenerational — together in a single model can tell us even more.”
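A note on reading those numbers: "one and a half to two times greater odds" is a statement about odds, not probabilities, and translating it requires assuming a baseline rate. The sketch below uses an assumed 8 percent baseline — a typical US low-birth-weight rate, not a figure from the study:

```python
def apply_odds_ratio(p_baseline, odds_ratio):
    """Probability implied by applying an odds ratio to a baseline probability."""
    odds = p_baseline / (1 - p_baseline)
    new_odds = odds * odds_ratio
    return new_odds / (1 + new_odds)

p0 = 0.08  # assumed baseline probability of low birth weight (illustrative)
for odds_ratio in (1.5, 2.0):
    print(odds_ratio, round(apply_odds_ratio(p0, odds_ratio), 3))  # ~0.115, ~0.148
```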

For example, education level pre-pregnancy can be transmitted from mothers to daughters across at least three generations, and this intergenerational transmission appears to affect birth weight of future generations, Kane said.

“And knowing that biological factors perpetuate the cycle — being a low-birth-weight baby makes a woman more susceptible to delivering the same — we start to see that we can’t look at these two factors separately,” she said.

This means that causes of low birth weight extend much further back than the time frame that’s typically focused on: pregnancy. Kane’s work shows that key factors can be traced to the mother’s own early life experiences, in addition to factors dating back multiple generations.

“This really makes a difference in how we think about planning future population-level policies or programs that intend to reduce social inequalities in birth weight,” she said.


Story Source:

The above post is reprinted from materials provided byUniversity of California, Irvine. Note: Materials may be edited for content and length.


Journal Reference:

  1. J. B. Kane. An Integrative Model of Inter- and Intragenerational Preconception Processes Influencing Birthweight in the United States. Journal of Health and Social Behavior, 2015; 56 (2): 246. DOI: 10.1177/0022146515582043

 

Source: University of California, Irvine. “Woman’s health, education and marital status pre-pregnancy affect birth weight of her daughters, granddaughters.” ScienceDaily. ScienceDaily, 17 August 2015. <www.sciencedaily.com/releases/2015/08/150817132707.htm>.

Target Health is a CRO Finalist for the SCRS Eagle Award

 

In terms of peer recognition, on August 13th the Society for Clinical Research Sites (SCRS), an advocacy group for clinical sites, announced that Target Health is a CRO finalist for the 2015 SCRS Eagle Award. The SCRS Eagle Award recognizes outstanding worldwide leadership, professionalism, integrity, and dedication to advancing the clinical research profession through a strong site partnership. If you are a clinical research site, go to SCRS – Eagle Award Survey 2015 in order to vote.

 

The 2015 sponsor nominees are: Amgen, Astellas, AstraZeneca, Eli Lilly & Company, Ferring Pharmaceuticals Inc., Gilead, GlaxoSmithKline, Janssen Pharmaceuticals, Merck & Co., Nektar, Novavax, Novo Nordisk, and Pfizer. We are very pleased to announce that 2 of the sponsors mentioned above are using our eSource software fully integrated with Target e*CRF.

 

Congratulations to the other 2015 CRO finalists: INC Research, inVentiv Health, Medpace, PAREXEL International, PRA Health Sciences and Quintiles. Clearly, by looking at the competition, one does not have to be big to be one of the best.

 
