By David Secko – The first inklings of genetic theory can be traced back to a common human experience: the recognition that a child has features similar to those of its parents. This ancient observation is one of the cornerstones of genetics and its subsequent offspring, molecular biology.

For centuries there was little evidence beyond the anecdotal that transmitted inheritance was a reasonable theory. Though it seemed sensible that a child with the same appearance as its parents likely received these characteristics from them, little evidence supported the notion and instead a good deal of confusion surrounded it. Part of the confusion arose from years of traditional breeding programs that sought to improve the quality of domestic plants and animals. The results were often highly unpredictable, with traits like sterility and disease susceptibility arising in the offspring, apparently from nowhere. Where were these characteristics coming from?

The beginnings of an answer first appeared in a monastery garden in Brno, in what is now the Czech Republic. It was there that an Augustinian monk named Gregor Mendel (1822-1884) planted the pea plant, Pisum sativum, with the intention of carrying out a painstakingly long breeding experiment (see Figure 1).

Figure 1. Mendel’s experiment

After analyzing 21,000 hybrid plants, Mendel conceived of the idea that individual units of inheritance existed, that they were discrete, and that two such units (one from the female parent and one from the male parent) combined to produce a characteristic of an offspring [1]. The concept of a unit of inheritance, later to be called a gene, was born.

Modern molecular biology has flowed from Mendel’s concept of transmissible genes. It was a starting point that led biologists to the identification of DNA as the primary genetic material, the uncovering of the biochemical structure of genes, an understanding of how DNA stores and regulates the flow of genetic material, and ultimately the development of techniques that allow for the manipulation of DNA.

Hunting for the Molecular Nature of Elusive Genes

While Mendel grew peas in his garden (around the year 1866) many biologists were focused instead on the use of a microscope to document the appearance of the smallest components of a living organism, the cell. A large variety of cells from different organisms were examined and in each case a similar morphological region called the nucleus was seen. Interestingly, certain dyes were found to stain small discrete bodies in all the different nuclei. These small bodies became known as chromosomes (meaning ‘coloured body’). By the turn of the century the insights Mendel had provided concerning transmissible units of inheritance began to be appreciated, and a fascinating possibility arose: could genes be located on the chromosomes that resided in the nucleus of a cell and somehow be transmitted to the next generation?

A few eyebrows were raised by the question, as it suddenly seemed possible that Mendel’s transmissible genes were actually cellular structures, which meant that they were both physically identifiable and could be subject to experimentation. Understandably, excitement about the subject grew. The wait for further breakthroughs was not long, as discoveries began to trickle out of Columbia University (USA) between 1905 and 1915. There, careful microscopic observation detected chromosomal differences between the sexes: the presence of two X chromosomes in cells from a female, and one X chromosome and one smaller chromosome shaped like a Y in the cells of a male [2]. Not only was it found that these chromosomes determined sex (XX = female, XY = male), but it was also shown that certain traits (and thus their genes) were transmitted only with the X chromosome [3] (see Figure 2).

Figure 2. T.H. Morgan’s Experiment

With chromosomes implicated in carrying the genes responsible for inheritance, the question crossing every scientist’s mind was: what were chromosomes made of?

It was here that a turning point was achieved. As biologists turned to the nucleus, trying to define the molecular nature of the chromosome, a shift in the field of biology occurred. In the years to come, the molecular biologist would take centre stage as the hunt for the molecular structure of a gene continued.

Methods available at the time made it very difficult to obtain a pure preparation of chromosomes; they were always contaminated with other cellular components. Nevertheless, it was discovered that chromosomes contained two components: (1) deoxyribonucleic acid (DNA), a nucleic acid, and (2) basic proteins called histones. Even though DNA was present in much higher quantities than protein in these preparations, it was hotly debated whether DNA or the histones carried the genes biologists were looking for. A crucial point that kept biological circles divided was the relative structural simplicity of DNA, which was made up of four building blocks, as compared to the complexity of proteins, which were made up of 20 building blocks. Scientific opinion differed on what kind of structural complexity genes would require to dictate the intricacies of a cell, and thus on whether the cell used a “genetic protein” or “genetic DNA.” It took significant effort to resolve this debate, but in 1952 (many years after the 1915 studies on chromosomes) Alfred Hershey and Martha Chase [4] were able to use different radioisotopes to label proteins (35S) and DNA (32P). This technique allowed them to reveal that bacterial viruses, which are composed only of protein and DNA, reproduce themselves within bacteria by using only their DNA component. Thus, the debate was resolved: genes were made of DNA.

The Biochemical Structure of DNA is Unraveled

While the debate raged over “genetic protein” versus “genetic DNA” in the 1920s, much about the chemical nature of nucleic acid was elucidated [5]. It was found to be composed of regularly repeating subunits called nucleotides. Only a limited number of nucleotides were found to exist in nature, and all contained three elements: (i) a phosphate group linked to (ii) a sugar, which was joined to (iii) a flat ring molecule commonly called a base (see Figure 3). The limited number of natural nucleotides is partially a result of the fact that only five types of natural bases exist: guanine (G), adenine (A), cytosine (C), thymine (T), and uracil (U). Each nucleotide was found to possess the ability to link to others to form chains. Only two such types of chains exist, DNA and RNA. The most obvious difference between the two is that the base uracil is found only in RNA, while the base thymine is found only in DNA.

Figure 3. The DNA double helix.

With the relatively simple chemical composition of DNA understood, a more philosophical question still remained: how did DNA govern and dictate the natural variety of life on Earth? This question boiled down to a crucial missing link: how the chemical structure of DNA, essentially a chain of nucleotides linked together, enabled it to act as a carrier of inheritance.

As this question was being asked, another interesting fact was obtained about the chemical characteristics of nucleotides: their bases (G, A, C, T, U) could chemically bind to each other. Not only that, they did so in an exceptionally specific manner. Adenine bound only to thymine in DNA (and uracil in RNA), while guanine bound only to cytosine. As a consequence, the amount of adenine equaled the amount of thymine, and the amount of guanine equaled the amount of cytosine, in a DNA molecule [5]. This became known as Chargaff’s rule, after the Austrian chemist Erwin Chargaff.
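Chargaff’s rule follows directly from the pairing rules. As a minimal Python sketch (using a made-up DNA sequence for illustration), pairing a strand with its complement always yields a duplex containing equal amounts of A and T, and of G and C:

```python
from collections import Counter

# One strand of a hypothetical DNA fragment (sequence chosen for illustration).
strand = "ATGCGGATCCTA"

# Build the complementary strand with the pairing rules: A<->T, G<->C.
pair = {"A": "T", "T": "A", "G": "C", "C": "G"}
complement = "".join(pair[base] for base in strand)

# Count bases across the whole double helix (both strands together).
counts = Counter(strand) + Counter(complement)

# Chargaff's rule: in duplex DNA, A equals T and G equals C.
print(counts["A"] == counts["T"], counts["G"] == counts["C"])  # True True
```

Whatever strand is chosen, every A on one strand contributes a T on the other, and vice versa, so the equalities hold for any duplex.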

Soon after this was discovered, X-ray diffraction studies showed that DNA adopted a regular and precise helical structure. Enough data was now in place for a famous leap of scientific faith to be taken. In 1953, Watson and Crick [6] correctly deduced that DNA forms a double helix with two strands of nucleotides wrapped around each other (see Figure 3). The binding rules for nucleotides ensured that each strand was a complementary copy of the other (for example an adenine in one strand was always bound to thymine in the other strand). Thus, the two strands were complementary anti-parallel chains of nucleotides wound around each other to form a double helix. Our understanding of the molecular nature of inheritance took a step forward with Watson and Crick’s leap of logic, for it was immediately understood that such a structure would provide DNA with a simple mechanism to accurately reproduce itself: just pull the two strands apart and use one strand to create a complementary copy of the other using the nucleotide binding rules [7]. If done for both strands, two exact copies of the original DNA molecule would be created, a process eventually shown to be exactly the way DNA is copied in a cell.
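The copying mechanism implicit in the structure can be sketched in a few lines of Python. The sequence below is hypothetical, and real replication involves many enzymes this illustration ignores; the point is only that separating the strands and completing each one by the pairing rules yields two duplexes identical to the original:

```python
pair = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand):
    """Return the complementary strand using the base-pairing rules."""
    return "".join(pair[base] for base in strand)

# A hypothetical parent duplex: one strand plus its complement.
parent_top = "ATTGCAC"
parent_bottom = complement(parent_top)

# "Pull the two strands apart" and use each as a template for a new partner.
daughter1 = (parent_top, complement(parent_top))
daughter2 = (complement(parent_bottom), parent_bottom)

# Both daughter duplexes are identical to the parent duplex.
print(daughter1 == daughter2 == (parent_top, parent_bottom))  # True
```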

The realization that DNA formed a double helix solved a large part of the question of how DNA was involved in all lifeforms by revealing how to make endless copies of a DNA molecule. But it would still be years before the molecular mechanism of inheritance was fully understood.

Cracking the Genetic Code

At the turn of the 20th century many scientists were beginning to turn their attention to the biochemical basis of heredity. Investigators in many disciplines wanted to understand the underlying biochemistry that dictated the physical appearance of an organism. The study of proteins took center stage. Of particular interest was an important class of proteins, termed enzymes. These proteins were able to catalyze biochemical reactions and were soon found to be responsible for biochemical function. During this period it was determined that proteins were composed of 20 naturally occurring amino acids linked together in a chain (called a polypeptide), much like DNA.

Even before the composition of genes was known, a link between proteins and genes was evident from the study of diseases in which cells fail to perform known biochemical reactions. An example was alkaptonuria, a rare genetic disease resulting from a failure to correctly break down two amino acids (phenylalanine and tyrosine) found in a regular diet. The build-up of by-products from this blocked pathway produces the black urine characteristic of a patient with the disease. In 1908, Archibald Garrod [8] correctly surmised that the absence or deficiency of a given enzyme required for normal cellular biochemistry (in the case of alkaptonuria, an enzyme required to break down these amino acids) resulted in a metabolic disease. Since such defects in proteins could be inherited, it appeared that genes could dictate the production of the proteins in an organism. It took almost three decades for adequate knowledge of cellular metabolism to be gained to determine whether Garrod’s link between genes and proteins was true. In the end, it was once again studies of defects in well-known metabolic reactions that showed that a gene directed the production of a single protein, a fact which is now generalized as the “one gene = one protein” rule.

The “one gene = one protein” rule and the understanding that genetic information is specified in the four nucleotide bases of DNA (A, C, G and T) led to a period of scientific excitement in which scientists wondered how four bases of DNA could encode the 20 known amino acids that make up proteins. By the 1950s, scientists simply assumed that the linear sequence of nucleotides in a DNA strand corresponded to the linear sequence of amino acids in a protein polypeptide. The first biochemical evidence for this assumption was that the position of mutations in a protein correlated with the position of mutations in a gene (i.e. they appeared in the same relative places in the molecules). A co-linear relationship seemed to exist between the two. A quick mathematical calculation determined that at least three nucleotides would be required to specify each of the 20 natural amino acids: single nucleotides offer only four possibilities, and blocks of two nucleotides offer only 16 combinations, too few for the 20 known amino acids. Random combination of four nucleotides produces 64 possible triplets, but it was not clear at the time how these 64 combinations would code for 20 amino acids.
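The quick mathematical calculation is simple combinatorics. A short Python sketch enumerates the possible nucleotide “words” of length one, two and three, showing that only triplets give enough combinations for 20 amino acids:

```python
from itertools import product

bases = "ACGT"

# How many distinct "words" of length k can four bases spell?
# 4^1 = 4 and 4^2 = 16 fall short of 20 amino acids; 4^3 = 64 is enough.
for k in (1, 2, 3):
    words = ["".join(p) for p in product(bases, repeat=k)]
    print(k, len(words))  # 1 4 / 2 16 / 3 64
```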

In 1961, a historic set of experiments was begun by Marshall Nirenberg and Heinrich Matthaei [9] that solved this question and marked the beginning of modern molecular biological techniques. They were able to create a synthetic nucleotide chain composed only of uracil (i.e. UUUUU), which they went on to add to cells that had been broken apart. When they did this they witnessed the production of a polypeptide chain. Even more interesting, it was composed of only a single amino acid, phenylalanine. Next, they began adding defined lengths of uracil chains to the extracts. By doing this they found that only multiples of three nucleotides produced amino acid chains. For example, a chain of three uracils (UUU) gave a single phenylalanine; similarly, a chain of four or five uracils (UUU-U or UUU-UU) also produced only one phenylalanine, while a chain of six (UUU-UUU) produced two phenylalanines linked together. Hence it was determined that a triplet of uracils in a gene coded for the amino acid phenylalanine in a protein. Production of all the possible combinations of three nucleotides (called codons) soon revealed which triplets coded for which amino acids. It was found that 61 combinations coded for the 20 amino acids, while the three remaining codons were used as “stop” signals marking the end of a protein.
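The arithmetic behind the defined-length experiments can be sketched as a reading-frame calculation (this illustrates the counting logic only, not the actual cell-extract procedure): each complete triplet yields one amino acid, and leftover nucleotides yield nothing.

```python
def count_complete_codons(chain):
    """Number of complete three-nucleotide codons read from a chain."""
    return len(chain) // 3

# Poly-U chains of defined length: only complete triplets produce
# a phenylalanine, so UUU, UUU-U and UUU-UU each give one.
for chain in ("UUU", "UUUU", "UUUUU", "UUUUUU"):
    print(chain, "->", count_complete_codons(chain), "phenylalanine(s)")
```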

The genetic code was now broken. Scientists understood how a protein was encoded in the molecular structure of DNA. With this information it was not long before the underlying mechanisms of how a cell used DNA to make protein became clear.

Producing Genetic Messages from DNA

All living organisms depend upon the production of proteins encoded by the information held within their DNA. Despite the variations that exist between organisms, it was soon found that all cells make use of the same general mechanism for decoding the information in DNA into proteins, termed gene expression. Even though the amino acid sequence of proteins is dictated by the nucleotide sequence of genes, proteins are not directly synthesized from DNA. Instead, genes produce proteins in two discrete stages, which involve many different types of enzymes, proteins and RNA molecules.

The first stage is called transcription (see Figure 4), in which an RNA copy (or transcript) of a specific gene is produced. This RNA copy of the gene is called a messenger RNA (mRNA), since it is the genetic message that will produce a protein. Production of mRNA requires an enzyme called RNA polymerase. It begins the process by binding to a specific nucleotide sequence in the DNA (called a promoter), located just upstream of the gene that specifies a protein. A complex process unwinds the DNA in this area so that the polymerase can begin to move along the DNA strand like a train along a rail. As the RNA polymerase moves along it synthesizes an RNA copy according to the nucleotide sequence it encounters (done by pairing a new nucleotide to a complementary nucleotide in the DNA using the base pairing rules, followed by linking them together). This procedure continues until the polymerase hits a defined sequence called a terminator, which causes the polymerase to fall off the gene and release the mRNA. Once released, an mRNA is free to float through the cell bearing its genetic message and ultimately engages in the second stage of producing a protein from a gene.
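The copying step itself follows the base-pairing rules, with uracil standing in for thymine. A minimal Python sketch (the sequence is hypothetical, and promoter binding, unwinding and termination are omitted) shows how a template strand dictates its mRNA:

```python
# RNA polymerase pairs each template base with its RNA partner:
# A pairs with U, T with A, G with C, and C with G.
rna_pair = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(template_strand):
    """Return the mRNA produced by reading a DNA template strand."""
    return "".join(rna_pair[base] for base in template_strand)

# A hypothetical six-nucleotide template.
print(transcribe("TACGGA"))  # AUGCCU
```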

Figure 4. DNA transcription and translation.

Decoding Genetic Messages

The process of decoding mRNA transcripts is termed translation and once again involves many types of proteins and RNA molecules. In particular, translation requires two types of RNA termed ribosomal RNA (rRNA) and transfer RNA (tRNA). Ribosomal RNA is intimately involved in the synthesis of proteins through the interaction of various types of rRNA to form a complex called a ribosome. This is the cellular machine that creates proteins from mRNA. A ribosome forms a donut structure with the mRNA passing through its center; a specific site within the mRNA (called the ribosome binding site or RBS) then binds to the ribosome, causing the second type of RNA, the tRNA, to spring into action. tRNAs are universal adaptor molecules that carry amino acids and a complementary triplet of nucleotides, called an anti-codon, that recognizes a codon in an mRNA. As mentioned before, codon is the name given to triplets of nucleotides in mRNA that code for particular amino acids. An anti-codon is simply the complement of a codon. Through this method each triplet in an mRNA molecule will bind to a tRNA that bears the complementary anti-codon. For example, a string of A-U-G nucleotides in an mRNA will bind a tRNA that has a U-A-C triplet, all of which is based on the nucleotide binding rules. Since each tRNA molecule carries an amino acid, each codon results in a specific amino acid being brought to the ribosome. At the ribosome these amino acids are bound together into a polypeptide chain (i.e. a protein) in exactly the linear order that the mRNA dictates, a sequential process that will produce a protein corresponding directly to the nucleotides in the mRNA.
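Putting the codon table and the reading frame together, the decoding step can be sketched in Python. The table below is a tiny slice of the full 64-codon table (these particular assignments are real), and the mRNA sequence is made up for illustration:

```python
# A small slice of the genetic code: the full table maps 61 codons to
# amino acids and reserves 3 codons as "stop" signals.
codon_table = {
    "AUG": "Met", "UUU": "Phe", "CCU": "Pro",
    "UAA": "stop", "UAG": "stop", "UGA": "stop",
}

def translate(mrna):
    """Read an mRNA three nucleotides at a time until a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = codon_table[mrna[i:i + 3]]
        if amino_acid == "stop":
            break
        protein.append(amino_acid)
    return "-".join(protein)

# A hypothetical message: three codons followed by a stop signal.
print(translate("AUGUUUCCUUAA"))  # Met-Phe-Pro
```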

Central Dogma of Molecular Biology

Over a century of work has gone into the current framework of molecular biology, but at its core, our understanding can be broken down to what has become known as “the central dogma” of molecular biology: DNA is transcribed into genetic messages (mRNA), which are then translated into proteins that go on to perform the underlying biochemistry of an organism.

The hunt to understand inheritance has been an important scientific journey. It has required a broad range of disciplines, from chemistry to microbiology to zoology, as well as the development of many new and exciting technologies. This integration of disciplines, combined with the resolution of many engineering difficulties, eventually resulted in the emergence of the field of biotechnology, an exciting enterprise that is bringing molecular biology to the forefront of society and causing us to reshape the way we see the world.

Shareholder Rebellion Ruffling Feathers at Exxon Mobil

By Clifford Krauss, May 27, 2008, Houston, The New York Times — The Rockefeller family built one of the great American fortunes by supplying the nation with oil. Now history has come full circle: some family members say it is time to start moving beyond the oil age.

The family members have thrown their support behind a shareholder rebellion that is ruffling feathers at Exxon Mobil, the giant oil company descended from John D. Rockefeller’s Standard Oil Trust.

Three of the resolutions, to be voted on at the company’s shareholder meeting on Wednesday, are considered unlikely to pass, even with Rockefeller family support.

The resolutions ask Exxon to take the threat of global warming more seriously and look for alternatives to spewing greenhouse gases into the air.

One resolution would urge the company to study the impact of global warming on poor countries, another would encourage Exxon to reduce its emissions and a third would encourage it to do more research on renewable energy sources like solar panels and wind turbines.

A fourth resolution, which the Rockefellers are most united in supporting, is considered more likely to pass. It would strip Rex W. Tillerson of his position as chairman of Exxon’s board, forcing the company to separate that job from the chief executive’s job.

A shareholder vote in favor of that idea would be a rebuke of Mr. Tillerson, who is widely perceived as more resistant than other oil chieftains to investing in alternative energy.

The Rockefellers say they are not trying to embarrass Mr. Tillerson, also Exxon’s chief executive, but think it is time for the company to spend more of its funds helping the nation chart a new energy future.

“Exxon Mobil needs to reconnect with the forward-looking and entrepreneurial vision of my great-grandfather,” Neva Rockefeller Goodwin, a Tufts University economist, said in a statement to reporters.

“The truth is that Exxon Mobil is profiting in the short term from investments and decisions made many years ago, and by focusing on a narrow path that ignores the rapidly shifting energy landscape around the world,” she added.

The resolution on Exxon’s chairmanship was offered for several years before the Rockefellers became publicly involved and last year was supported by 40 percent of shareholders who voted. Royal Dutch Shell and BP already separate the positions of chairman and chief executive, as do many other companies.

“You need a board asking the tough questions,” Peter O’Neill, a private equity investor and great-great-grandson of John D. Rockefeller, said in an interview. “We expect the company to figure out how in this changing world to adjust.”

Kenneth P. Cohen, vice president for public affairs at Exxon, said the shareholders pushing the resolutions were “starting from a false premise.” He added that the company was already concerned about “how to provide the world the energy it needs while at the same time reducing fossil fuel use and greenhouse gas emissions.”

Fifteen members of the family are sponsoring or co-sponsoring the four resolutions, but it appears that some have much more solid support in the sprawling family than others.

Mr. O’Neill said that 73 out of 78 adult descendants of John D. Rockefeller were supporting the family effort to divide the chief executive and chairman positions. The goal of that resolution is to improve the management of the company, which could strengthen its environmental policies and improve more traditional pursuits like exploring more aggressively for new oil reserves.

David Rockefeller, retired chairman of Chase Manhattan Bank and patriarch of the family, issued a statement saying, “I support my family’s efforts to sharpen Exxon Mobil’s focus on the environmental crisis facing all of us.”

The Rockefeller family has always been identified with oil and the legacy of Standard Oil, but for several generations, it has also been active in environmental causes and acquiring land for preservation. John D. Rockefeller’s grandsons devoted themselves to conservation issues, and Rockefeller charitable organizations have long promoted efforts to fight pollution.

Ms. Goodwin, one of the most vocal Rockefellers on the environment today, is co-director of the Global Development and Environment Institute at Tufts.

In recent years, family members have quietly encouraged Exxon executives to take global warming seriously, but their private efforts did not go far. Until now, they have avoided publicity in their efforts, and the youngest Rockefeller generations have generally shunned attention.

Exxon executives said the company spent $2 billion over the last five years on programs to reduce emissions and improve efficiencies and had plans to spend $800 million on similar initiatives over the next three years. They said the company reduced the release of greenhouse gases from its operations last year by 3 percent, and it was working with Stanford to research biofuels and solar and hydrogen energy.

Since taking over the company two years ago, Mr. Tillerson has gradually shifted the company’s positions away from those of his predecessor, Lee R. Raymond, who was considered a skeptic on the science of global warming.

But with gasoline prices soaring and concern growing over global warming, Exxon, the biggest of the investor-owned oil companies, is a target for politicians and environmentalists. Chevron, BP and Shell, Exxon’s largest competitors, have given their investments in renewable fuels a much higher profile.

Similar or identical environmental proposals have not passed at previous Exxon shareholder meetings, but the public support of the Rockefeller family has given old efforts new energy.

The involvement of the Rockefellers, said Robert A. G. Monks, a shareholder who has been urging a separation of the chairman and chief executive jobs for years, shows that “this is not just a matter of the self-appointed good guys against the cavemen, but also a matter of the capitalists wanting to make money.”

Nineteen institutional investors with 91 million shares announced last week that they would support resolutions asking Exxon to separate the top executive positions and tackle global warming. They included the California Public Employees’ Retirement System, the California State Teachers’ Retirement System and the New York City Employees’ Retirement System.

California’s treasurer, Bill Lockyer, who serves on the boards of the two California funds, said the company’s “go-slow approach” on global warming “places long-term shareholder value at risk.”

Under Exxon’s rules, a shareholder proposal that passes is not binding without the support of the board. But Andrew Logan, director of the oil program at Ceres, a coalition of institutional investors and environmentalists, said, “boards tend to strongly consider proposals that get significant support.”

Paul Sankey, an oil analyst at Deutsche Bank, said that he thought a separation of the chief executive and chairman jobs might be a good management move and that “we might see a mild benefit to Exxon’s public image.” But he added, “On balance, we wouldn’t expect any change in strategy.”

The Fraternal Order of Police, which represents public safety officers, whose pensions are invested in Exxon, has publicly opposed the shareholder effort to change company policy.

“The Rockefeller resolution threatens to degrade the value of Exxon Mobil,” the organization wrote in a letter to Mr. Tillerson that criticized the splitting of the top executive jobs.

A new material could increase the availability of corneal transplants

Seeing clearly: This hydrogel-based artificial cornea developed by researchers at Stanford University contains microscopic pores that were patterned using photolithography. Once implanted in a patient, cells migrate through the pores and help integrate the artificial cornea with the surrounding tissue.
Credit: David Myung

By Alexandra M. Goho, May 22, 2008, MIT Technology Review – Millions of people around the world are blind due to corneal disease or damage. In hopes of making corneal transplants more widely available, researchers have designed an artificial cornea made from a water-filled polymer that closely resembles the eye’s natural cornea. Compared with existing commercially available artificial corneas, the new implant could reduce the likelihood of infection and other complications that arise from surgery.

Approximately 40,000 patients undergo corneal transplant surgery in the United States every year. The vast majority of these people receive a replacement cornea from a human donor. Although the surgery has a high success rate, the supply of donor tissue is limited, and wait lists can be long. In the developing world, access to donor tissue is even more difficult. And yet “most cases of corneal blindness are in developing countries,” says Tueng Shen, an expert in cornea and refractive surgery at the University of Washington Medical Center, in Seattle.

To overcome this problem, researchers have been developing artificial corneas using synthetic materials. The most successful of these to date is the Dohlman-Doane keratoprosthesis, which received approval from the U.S. Food and Drug Administration in 1992 and has been used in hundreds of patients. It consists of a hard, clear plastic core surrounded by human donor tissue to help attach the cornea to the eye.

However, because the implant is prone to infection and other complications, patients must take a lifelong course of antibiotics. As a result, the artificial cornea is used only as a last resort in patients who have repeatedly rejected natural donor tissue or who are otherwise not eligible for such transplant surgery.

Instead of using hard plastic, Stanford University chemical engineer Curtis Frank and former graduate student David Myung have created an artificial cornea based on a soft hydrogel. The water-swollen gel is made of a mesh of two polymer networks. The first network is made of polyethylene glycol, the second of polyacrylic acid. “It’s like filling up the holes in the sponge with a second material,” says Frank. “You can’t separate one from the other. They become inextricably intertwined.”

The resulting clear material is mechanically robust, despite being 80 percent water. The high water content, explains Stanford ophthalmologist Christopher Ta, is critical for allowing glucose and other nutrients to diffuse through the cornea and encourage the growth of epithelial cells on the implant’s surface. “We think this is important for minimizing risk of infection,” says Ta. “In the natural cornea, the epithelial layer is very important for protection.”

For instance, one type of artificial cornea currently marketed under the name AlphaCor is also based on a hydrogel. Yet the material contains only half as much water as the Stanford implant. As a result, it can’t support the growth of epithelial cells, which many researchers say could explain AlphaCor’s high failure rate.

Because the Stanford hydrogel is inert, cells don’t normally stick to it. So, with the help of Stanford bioengineer Jennifer Cochran, the researchers devised a way of tethering collagen to the artificial cornea’s surface. The collagen, in turn, binds to the epithelial cells. Cochran is working on incorporating growth factors and other components of the cell’s natural environment into the material.

Using photolithography, Frank’s team can also create patterns of microscopic pores around the edges of the implant. That way, he says, when the cornea is implanted in the patient’s eye, cells will migrate through the pores, anchor the cornea, and help integrate the material with the native tissue. This will also reduce the number of sutures required to keep the artificial cornea in place, says Frank.

Shen, who was not involved in the Stanford effort, says that the development of new artificial corneas will be important for solving a critical health problem. However, she wonders whether the design of these new implants is well suited for use in the developing world. For instance, hydrogel-based implants might require relatively complicated surgery. “That could be difficult in terms of training surgeons abroad,” Shen says. She is also concerned about the potentially high cost of the materials, whether they can be applied to large populations, and whether they will require a lot of follow-up care.

So far, the Stanford group has shown that the diffusion of glucose across the material is equal to that of the human cornea, and preliminary studies in rabbits show that implants can support the growth of epithelial cells. The researchers say that studies in human patients are still several years away.



By Louise Story, May 21, 2008, The New York Times – Arjun N. Murti remembers the pain of the oil shocks of the 1970s. But he is bracing for something far worse now: He foresees a “super spike” — a price surge that will soon drive crude oil to $200 a barrel.

Mr. Murti, who has a bit of a green streak, is not bothered much by the prospect of even higher oil prices, figuring it might finally prompt America to become more energy efficient.

An analyst at Goldman Sachs, Mr. Murti has become the talk of the oil market by issuing one sensational forecast after another. A few years ago, rivals scoffed when he predicted oil would breach $100 a barrel. Few are laughing now. Oil shattered yet another record on Tuesday, touching $129.60 on the New York Mercantile Exchange. Gas at $4 a gallon is arriving just in time for those long summer drives.

Mr. Murti, 39, argues that the world’s seemingly unquenchable thirst for oil means prices will keep rising from here and stay above $100 into 2011. Others disagree, arguing that prices could abruptly tumble if speculators in the market rush for the exits. But the grim calculus of Mr. Murti’s prediction, issued in March and reconfirmed two weeks ago, is enough to give anyone pause: in an America of $200 oil, gasoline could cost more than $6 a gallon.

That would be fine with Mr. Murti, who owns not one but two hybrid cars. “I’m actually fairly anti-oil,” says Mr. Murti, who grew up in New Jersey. “One of the biggest challenges our country faces is our addiction to oil.”


Mr. Murti is hardly alone in predicting higher oil prices. Boone Pickens, the oilman turned corporate raider, said Tuesday that crude would hit $150 this year. But many analysts are no longer so sure where oil is going, at least in the short term. Some say prices will fall as low as $70 a barrel by year-end, according to Thomson Financial.

Experts disagree over the supply of oil, the demand for it and whether recent speculation in the commodities markets has artificially raised prices. As Tim Evans, an energy analyst at Citigroup, reportedly put it, trading commodities these days is like “sticking your hand in a blender.”

Whatever the case, oil analysts like Mr. Murti have suddenly taken on the aura that enveloped technology analysts in the 1990s.

“It’s become a very fashionable area to write about,” said Kevin Norrish, a commodity analyst at Barclays Capital, which began predicting high oil prices around the same time as Goldman. “And to try to get attention from people, people are coming out with all sorts of numbers.”

This was not always the case. In the 1990s, oil research was a sleepy area at banks. Many analysts assumed oil prices would hover near $15 to $20 a barrel forever. If prices rose much above those levels, they figured, consumers would start conserving, suppliers would raise production, or both, causing prices to decline.

But around the turn of the century, one oil company after another began falling short of its production forecasts. Mr. Murti, who covers oil companies like ConocoPhillips and Valero Energy, decided to study the oil spikes of the 1970s.

Since starting his career at Petrie Parkman & Company, a Denver-based investment firm acquired by Merrill Lynch in 2006, he had been conservative in his calls on oil. But by 2004, he concluded the world was headed for a long supply shock that would push prices through the roof. That summer, as oil traded for about $40 a barrel, Mr. Murti coined what has become his signature phrase: super spike.

The following March, he drew attention by predicting prices would soar to $105, sending shock waves through the market. Angry investors questioned whether Goldman’s own oil traders benefited from the prediction. At Goldman’s annual meeting, Henry M. Paulson Jr., then the bank’s chief executive and now Treasury secretary, found himself defending Mr. Murti.

“Our traders were as surprised as everyone else was,” Mr. Paulson reportedly said. “Our research department is totally independent. Our trading departments have no say about this.”

Over time, Mr. Murti was proved right again. Oil crossed $100 in February. Mr. Murti’s forecasts now feed into many of Goldman’s economic and corporate forecasts, affecting research of companies like Ford and Procter & Gamble. His research is distributed widely among investors.

“Even if you disagree with their views, the problem is that Goldman does carry so much credibility,” said Nauman Barakat, senior vice president for global energy futures at Macquarie Futures USA. “There are a lot of traders who are going to buy based on their reports.”

His sudden fame unsettles Mr. Murti. He rarely grants interviews, citing concerns about privacy, and he declined to be photographed for this article. He is not the bank’s only gas prognosticator: Jeffrey R. Currie predicts oil prices out of London.

Mr. Murti, for his part, discounts suggestions that his reports affect market prices. “Whenever an analyst upgrades a stock or downgrades a stock, sometimes you get a reaction that day, but beyond a day, fundamentals win out,” he said.

Mr. Murti falls into the camp of oil analysts who believe that supply is likely to remain tight because of geopolitical factors. These analysts predict higher prices because production is declining in non-OPEC countries like Britain, Norway and Mexico.

The analysts who predict lower prices say there are supplies of oil that the bullish analysts are missing. “This year will be a year in which supply will be put into the market by stealth by OPEC and by countries we call black-hole countries,” said Edward L. Morse, chief energy economist at Lehman Brothers. China is one example, he said.

But while oil and gas prices have been rising for a while now, Americans have only just begun to reduce gasoline consumption, so their efforts to conserve have not dragged down oil prices.

“The fact that the U.S. gasoline demand can be down and that the U.S. gasoline consumer is no longer driving world oil prices is a monumental event,” Mr. Murti says. He spends most of his time talking to money managers and analysts, many of whom keep asking whether oil prices will stay high if speculators abandon the market. He applauds investors for driving up oil prices, he says, since that will spur investment in alternative sources of energy.

High prices, he says, “send a message to consumers that you should try your best to buy fuel-efficient cars or otherwise conserve on energy.” Washington should create tax incentives to encourage people to buy hybrid cars and develop more nuclear energy, he said.

Of course, if lawmakers heed his advice, oil analysts like him might one day be a thing of the past. That’s fine with Mr. Murti.

“The greatest thing in the world would be if in 15 years we no longer needed oil analysts,” he says.

Hong Kong, May 21, 2008 — Share indexes in Hong Kong and Shanghai closed higher on Wednesday, as rumors that Beijing might soon implement a policy to assist the country’s oil refiners sparked a late rally in that sector.

The Hang Seng index closed up by 1.2%, or 290.83 points, to 25,460.29, supported mainly by speculation that China’s central government will help struggling oil companies by permitting higher prices for refined products or reducing windfall taxes on their operations. H-shares of PetroChina rose 2.2%, to 11.36 Hong Kong dollars ($1.46), while CNOOC soared 5.9%, to 15.90 Hong Kong dollars ($2.04). The country’s biggest refiner, China Petroleum and Chemical Corp., commonly known as Sinopec, surged 4.3%, to 7.57 Hong Kong dollars (97 cents).
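The parenthetical U.S. dollar figures follow from dividing each share price by the Hong Kong dollar's pegged exchange rate. A minimal sketch, assuming a rate of about 7.78 HKD per USD (inferred from the quoted prices, not stated in the article; the function name is illustrative):

```python
# Hong Kong pegs its dollar near 7.78 HKD per USD; this rate is an
# assumption inferred from the quoted conversions, not from the article.
HKD_PER_USD = 7.78

def hkd_to_usd(price_hkd, rate=HKD_PER_USD):
    """Convert a Hong Kong dollar share price to U.S. dollars, rounded to cents."""
    return round(price_hkd / rate, 2)

# The three refiner prices quoted above.
for name, price in [("PetroChina", 11.36), ("CNOOC", 15.90), ("Sinopec", 7.57)]:
    print(f"{name}: {price} HKD = ${hkd_to_usd(price):.2f}")
# PetroChina: 11.36 HKD = $1.46
# CNOOC: 15.9 HKD = $2.04
# Sinopec: 7.57 HKD = $0.97
```

The same division, at the day's AUD and yen rates, reproduces the dollar figures quoted for the Australian and Japanese stocks below.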

China’s stock markets also staged a late rebound, with the Shanghai Composite index closing 2.9% higher, at 3,544.19. The average had been in negative territory at midday.

Other Asian equity markets finished lower Wednesday, as record oil prices and data showing rising prices in the United States discouraged investors.

A rise in crude oil futures to a record of $129.07 overnight in New York sent the Dow down 199.48 points, to 12,828.68. The U.S. Labor Department’s latest report on inflation provided another cause for investors to abandon stocks, as the core Producer Price Index, which excludes food and energy prices, ticked 0.4% higher, prompting a fresh round of inflation fears. (See: “Producer Inflation Stays Strong — Or Maybe Not”)

The drop on Wall Street darkened the mood along the Pacific Rim. The S&P/ASX200 index in Australia was down 0.9%, to 5,853.90, while the broader All Ordinaries fell 0.8%, to 5,945.10. Mining stocks led the way to the down side in Australia, after Morgan Stanley predicted that weaker Chinese demand may lead to softer metals prices in coming weeks. BHP Billiton fell 3.6%, to 46.87 Australian dollars ($45.00), and Rio Tinto fell 3.2%, to 150.05 Australian dollars ($144.05). On the up side, though, Macarthur Coal soared 14.1%, to 20.98 Australian dollars ($19.96), after Arcelor Mittal, the world’s largest steel maker, took a 14.9% stake in the company.

Australian financial stocks also fell, with Insurance Australia Group sinking 5.7%, to 3.99 Australian dollars ($3.83), after QBE Insurance Group abandoned its $8.3 billion takeover bid for the nation’s largest auto and home insurer. (See: “Insurance Australia Plays Hard To Get”) QBE Insurance shares were down nearly 2%, to 25.16 Australian dollars ($24.15). Australia’s largest brokerage, Macquarie Group, lost 4.5%, to 58.50 Australian dollars ($56.16), after it warned Tuesday that the challenging credit market would make it difficult to maintain its earnings this year, despite posting 23% growth in post-tax profit, to 1.8 billion Australian dollars ($1.7 billion), for the previous year.

In Japan, the Nikkei 225 index tumbled 1.7%, to close at 13,926.30. The broader Topix was off 2.1%, at 1,370.09. Exporters saw strong selling, with the yen strengthening to 103.31 against the dollar on Wednesday morning. Toyota Motor fell 3.3%, to 5,240 yen ($50.68), while Canon slipped 3.5%, to 5,640 yen ($54.65).

Banking stocks fell in Tokyo, after Mitsubishi UFJ Financial Group, the country’s largest lender, forecast flat profits this year, although a turnaround in its consumer credit unit boosted the bank’s fourth-quarter profit by 71%. Mitsubishi UFJ fell 4.5%, to 1,008 yen ($9.76). No. 2 bank Mizuho Financial Group slid 5.0%, to 514,000 yen ($4,976), and Sumitomo Mitsui Financial Group fell 4.4%, to 821,000 yen ($7,949).

Elsewhere in Asia, South Korea’s KOSPI index fell 1.4%, to 1,847.51. The Straits Times index in Singapore inched down 0.1%, to 3,196.90. Taiwan’s Taiex weighted index ended the day 0.6% lower, at 9,015.57.

By Charley Blaine

I’m really not here to scare you, but, get ready, I AM going to scare you.

The news got lots of attention: Goldman Sachs analyst Arjun Murti predicted Tuesday that the price of crude oil could hit $150 to $200 a barrel in six to 24 months.

Crude oil promptly jumped to as high as $122.73 a barrel in New York before closing at $121.84. And, as I write this, crude was trading slightly lower in electronic trading. The prediction also had the perverse effect of pushing the stock market higher: the biggest winners in Tuesday’s stock market were oil and gas production companies and natural gas companies. (But not refiners; crude oil is rising faster than refiners can push their prices up.)

So, if crude jumps to $150 or $200, how does that translate into prices at the gas pump? Here’s the scary part.

If crude hits $150 a barrel, we could be looking at $5 a gallon or so for the retail price of gasoline. That’s based on Tuesday’s $3.61-a-gallon national average and the rule of thumb that, for every $1 increase in crude oil, the pump price rises 5 cents a gallon.

If crude hits $200, the retail price of gas jumps to $7.52 a gallon (plus or minus a few cents). To fill the 10-gallon gas tank on my Honda Civic would cost $75.20, and probably more, because I live in Washington state, which has relatively high gasoline taxes.
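The arithmetic behind these figures can be sketched in a few lines, using the baseline numbers and the 5-cents-per-dollar rule of thumb quoted above (the function and constant names are illustrative):

```python
BASE_CRUDE = 121.84       # Tuesday's closing crude price, $/barrel
BASE_PUMP = 3.61          # Tuesday's national average pump price, $/gallon
CENTS_PER_DOLLAR = 0.05   # rule of thumb: pump price rises 5 cents per $1 of crude

def pump_price(crude):
    """Estimate the retail gasoline price implied by a given crude price."""
    return BASE_PUMP + (crude - BASE_CRUDE) * CENTS_PER_DOLLAR

print(round(pump_price(150), 2))  # 5.02 -- "about $5 a gallon"
print(round(pump_price(200), 2))  # 7.52 -- matching the $7.52 figure
```

Rounding to $7.52 and multiplying by a 10-gallon tank gives the $75.20 fill-up figure, before state taxes.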

Sure, one could say Murti is a nut, but, as Barry Ritholtz noted on The Big Picture, Murti did suggest in 2005 that crude would hit $105 a barrel.

Nobody should be in denial about gasoline at $7.50 a gallon, because prices at the levels I’ve suggested would bring big problems, including:

Will there be any U.S.-based auto manufacturers left? The answer depends entirely on how fast they can transform their product lines. Chrysler is in deep trouble already. That probably means more stress for the Midwest.

Will there be any domestic airlines left? The so-called legacy airlines (American, United, Northwest, Delta and Continental) would either try to combine into one big carrier or simply disappear. They’re having serious troubles surviving as it is. This means big troubles for cities where these airlines operate hubs that generate thousands of jobs like Atlanta, Cleveland, Newark, Houston, Chicago, Denver, Dallas, Memphis and Minneapolis-St. Paul.

How will big convention cities survive? Places like Las Vegas, New Orleans, Atlanta, Chicago, New York, San Francisco and Houston have thriving convention industries, all built around the capacity of airlines to transport conventioneers to and from the destinations relatively cheaply. Emphasis on the word “cheaply.”

How will tourist destinations like Florida or Hawaii cope? Add to that places like, say, Williamstown, Mass., whose Williamstown Theater Festival is a big draw, or Ashland, Ore., home of the Oregon Shakespeare Festival. They’re not close to major cities.

Although, as Douglas McIntyre noted on Blogging Stocks, gasoline at $3.50 a gallon has not cut demand enough to force prices lower, there are signs that adjustments are being made. Sales of big, gas-guzzling SUVs and pickups are slumping. Consumption of gasoline in California fell 4.5% in January from a year ago.

The Department of Energy believes that domestic consumption is likely to fall more steeply than expected this year, the New York Times reported Tuesday. It is forecasting that domestic gasoline consumption will fall slightly, from 9.29 million barrels a day in 2007 to 9.23 million barrels a day this year. (That’s about 140 billion gallons a year, enough to fill my Honda for, well, a very long time.)
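The parenthetical conversion can be checked directly: barrels per day, times 42 U.S. gallons per barrel, times 365 days. A quick sketch (the names are illustrative):

```python
GALLONS_PER_BARREL = 42   # a U.S. oil barrel holds 42 gallons
DAYS_PER_YEAR = 365

def annual_gallons(barrels_per_day):
    """Convert daily consumption in barrels to total gallons per year."""
    return barrels_per_day * GALLONS_PER_BARREL * DAYS_PER_YEAR

billions = annual_gallons(9.23e6) / 1e9  # the DOE's 2008 forecast
print(round(billions, 1))  # 141.5, i.e. "about 140 billion gallons a year"
```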

Sales of homes in outer suburbs are falling and not just because of the credit crunch and the subprime mortgage mess. Look at the stock prices of U.S. airlines, down 90% in the last 10 years.

Many commentators have wondered at the ability of Americans to grin and bear higher gas prices. But grinning and bearing it is losing any sense of fun. It’s just gotten expensive: Over the first four months of 2008, as Peter Beutel of Cameron-Hanover noted this week, gasoline has cost the United States $757.24 million a day more than in the first four months of 2002.

That’s more than the estimated $720 million a day spent in Iraq.

In Parts of California Gas is Now Over $5.40 Per Gallon


Think the prices at your local pump are high? If you aren’t in California, don’t feel so bad. Sure, you may be paying $4 per gallon, but whatever. Because as our auto-loving friends on the Cali coast know, yes, it really does cost $5.40 per gallon. You non-Americans may scoff, what with European prices being around eight thousand dollars per gallon, but for us this cost is simply outrageous. Don’t oil companies know this is America? We’ll never stand for such prices. Or at least we’ll just sit here in our cars and wait it out. [CNN]

Although AAA of California is reporting some drivers are now paying $4 a gallon for regular unleaded gasoline, local Cali station KSBW found gas stations in Gorda, south of Big Sur, currently charging $5 per gallon for gas. While that’s obviously an isolated occurrence, the average price is getting pretty high up the cost meter. For instance in Salinas, AAA recorded an average of $3.39 per gallon. Santa Cruz is at an average of $3.37 per gallon. Yes, the gas prices in California are always higher than elsewhere in the country, but this is getting ridiculous. Something must be done! Oh, Toyota — please come and save us. If only they could get every individual in the United States to drive a Prius.

C.J. Gunther for The New York Times
MUSIC MAN Dr. Claudius Conrad has studied how the mechanisms of Mozart’s music seemed to ease the pain of some patients.
IN TUNE Dr. Conrad, a pianist and surgeon, says that he works better when he listens to music and that music is helpful to patients.

A Musician Who Performs With a Scalpel

By David Dobbs, May 20, 2008, The New York Times – For Claudius Conrad, a 30-year-old surgeon who has played the piano seriously since he was 5, music and medicine are entwined — from the academic realm down to the level of the fine-fingered dexterity required at the piano bench and the operating table.

“If I don’t play for a couple of days,” said Dr. Conrad, a third-year surgical resident at Harvard Medical School who also holds doctorates in stem cell biology and music philosophy, “I cannot feel things as well in surgery. My hands are not as tender with the tissue. They are not as sensitive to the feedback that the tissue gives you.”

Like many surgeons, Dr. Conrad says he works better when he listens to music. And he cites studies, including some of his own, showing that music is helpful to patients as well — bringing relaxation and reducing blood pressure, heart rate, stress hormones, pain and the need for pain medication.

But to the extent that music heals, how does it heal? The physiological pathways responsible have remained obscure, and the search for an underlying mechanism has moved slowly.

Now Dr. Conrad is trying to change that. He recently published a provocative paper suggesting that music may exert healing and sedative effects partly through a paradoxical stimulation of a growth hormone generally associated with stress rather than healing.

This jump in growth hormone, said Dr. John Morley, an endocrinologist at St. Louis University Medical Center who was not involved with the study, “is not what you’d expect, and it’s not precisely clear what it means.”

But he said it raised “some wonderful new possibilities about the physiology of healing,” and added: “And of course it has a nice sort of metaphorical ring. We used to talk about the neuroendocrine system being a sort of neuronal orchestra conductor directing the immune system. Here we have music stimulating this conductor to get the healing process started.”

Born in Munich, Dr. Conrad took up the piano when he was 5 and trained in elite music schools in Munich, Augsburg and Salzburg, Austria. After high school he served his obligatory military service as a sniper in the German Army’s mountain corps, where his commander found every opportunity to fly him out of the Alps for some piano time.

After his service he decided to pursue medicine while continuing to study music. He earned a bachelor’s degree at the University of Munich and then, more or less simultaneously, two doctorates and a medical degree.

Dr. Conrad’s music dissertation examined why and how Mozart’s music seemed to ease the pain of intensive-care patients. He concentrated not on physiological mechanisms but on mechanisms within Mozart’s music.

“It is still a controversial idea,” he said recently, “whether Mozart has more of this sort of effect than other composers. But as a musician I wanted to look at how it might.”

Dr. Conrad noted that Mozart used distinctive phrases that are fairly short, often only four or even two measures long, and then repeated these phrases to build larger sections. Yet he changed these figures often in ways the listener may not notice — a change in left-hand arpeggios or chord structures, for instance, that slips by unremarked while the ear attends the right hand’s melody, which itself may be slightly embellished.

These intricate variations are absorbed as part of a melodic accessibility so well organized that even a sonata for two pianos never feels crowded in the ear, even when it grows dense on the page. The melody lulls and delights while the underlying complexity stimulates.

But even if this explains the music’s power to stimulate and relax, “an obvious question that comes up,” Dr. Conrad said, “is why Mozart would write music that is so soothing.”

Mozart’s letters and biographies, Dr. Conrad said, portray a man almost constantly sick, constantly fending off one infection or ailment after another.

“Whether he did it intentionally or not,” Dr. Conrad said, “I think he composed music the way he did partly because it made him feel better.”

Recently, Dr. Conrad has focused on specific mechanisms that may help explain music’s effects on the body.

In a paper published last December in the journal Critical Care Medicine, he and colleagues revealed an unexpected element in distressed patients’ physiological response to music: a jump in pituitary growth hormone, which is known to be crucial in healing. “It’s a sort of quickening,” he said, “that produces a calming effect.” Accelerando produces tranquillo.

The study itself was fairly simple. The researchers fitted 10 postsurgical intensive-care patients with headphones, and in the hour just after the patients’ sedation was lifted, 5 were treated to gentle Mozart piano music while 5 heard nothing.

The patients listening to music showed several responses that Dr. Conrad expected, based on other studies: reduced blood pressure and heart rate, less need for pain medication and a 20 percent drop in two important stress hormones, epinephrine and interleukin-6, or IL-6. Amid these expected responses was the study’s new finding: a 50 percent jump in pituitary growth hormone.

No one conducting these studies had yet measured growth hormone, whose work includes driving growth, responding to threats to the immune system and promoting healing. Dr. Conrad included it because research over the last five years has shown that growth hormone generally rises with stress and falls with relaxation.

“This means you would expect G.H., like epinephrine and IL-6, to go down in this case,” Dr. Morley, of St. Louis University, said of growth hormone. “Yet here it goes up.”

He added, “The question is whether the jump in growth hormone actually drives the sedative effect or is part of something else going on.”

Dr. Conrad argues that the growth hormone does have a sedative effect. In his paper he cites a 2005 study showing that growth hormone releasing factor, a chemical messenger that essentially calls growth hormone to duty, reduced activity of interleukin-6. This suggests, he said, that growth hormone itself may reduce the interleukin-6 and epinephrine levels that produce inflammation that in turn causes pain and raises blood pressure and the heart rate.

This explanation gets a mixed reception among stress researchers. “The two dynamics aren’t necessarily the same,” said Dr. Keith W. Kelley, an endocrinologist at the University of Illinois at Urbana-Champaign and an expert on inflammatory responses. “I personally don’t buy the particular cellular mechanism he’s proposing.”

Yet Dr. Kelley and other stress-response experts, including Dr. Morley and Dr. Bruce S. McEwen of Rockefeller University in New York, say Dr. Conrad’s study clearly suggests that a rise in growth hormone may somehow dampen inflammation and stress responses.

“This is a really intriguing possibility that bears a closer look,” Dr. McEwen said.

For Dr. Conrad, the finding offers a sort of scientifico-musical elegance: Here, it seems, may be a hormonal parallel to music’s power to simultaneously rouse and soothe.

He hopes to expand his study of music’s effects on growth hormone in intensive-care patients. He is also planning roughly similar studies of how music affects a surgeon’s performance. That line of study goes way back — at least to 1914, when The Journal of the American Medical Association published “The Phonograph in the Operating Room,” by E. Kane.

More recent studies have shown that surgeons perform math calculations faster and more accurately when they listen to music they like. Dr. Conrad hopes to find neurophysiological dynamics related to this performance enhancer.

In short, he will continue to carry his study of music into the operating room — along with his music itself.

“When I was a resident, you just picked a radio station,” said Dr. Randall Gaz, an attending surgeon at Massachusetts General Hospital who is one of Dr. Conrad’s teachers in the operating room, and an amateur pianist, oboist and church organist as well.

“This new wave of surgeons bring their iPods,” he continued. “They bring whole mixes. It’s like they have the whole thing choreographed.”

When Dr. Conrad operates, he brings an iPod stocked not just with Mozart, Liszt and Scarlatti but also with gigabytes of European techno-rap bands his colleagues have never heard of (and cannot understand), including Klee, M.C. Solaar and Armin van Buuren.

Asked if he could actually work with that kind of music, he replied, slightly sheepishly: “Well, that’s not the music you want when you’re in the middle of a delicate procedure. But once you’re through that part and you’re closing up” — he shrugged — “it’s a good time to liven things up.”

Occasionally, his operating room colleagues do give him grief. Then, he said with a grin, “I remind them that there is only one person in the room with a doctorate in music philosophy, so if you don’t like the music, the expertise is on my side.”
