FORBES.com, September 8, 2010  —  A trio of studies published in the American Journal of Hypertension offers new insight into the field of personalized medicine, challenging conventional approaches to treating the hypertension suffered by one-in-three U.S. adults.

The studies show that some drug combinations work better than others among certain populations.  This raises the possibility that measuring blood levels of renin, a hormone secreted by the kidneys, might help optimize the treatment of hypertension in some people.

Surprisingly, the research showed that taking a drug that’s a poor match to renin may not only fail to decrease a patient’s blood pressure, it might even raise it.

“The idea that one size fits all doesn’t make a lot of sense,” Dr. Michael Alderman of New York’s Albert Einstein College of Medicine told the Associated Press (AP).

Although Dr. Alderman supports the renin blood test approach, it is not likely such a test will become routine anytime soon. 

Many physicians remain skeptical because research conducted decades ago did not show a clear benefit to the test, hypertension specialist Dr. Ernesto Schiffrin of McGill University in Canada told AP.

“The reality is that trial and error is to some degree what has to be done because patients are different and some patients develop adverse effects with one agent and others don’t,” Schiffrin added.

However, given the reliability improvements in blood testing, some experts believe it is time for further studies to resolve the matter.

“The time for action is long overdue. We must redirect our efforts away from the strategy of treating hypertension as one condition (the way our colleagues treated fever centuries ago) toward building a treatment algorithm that incorporates the varying pathophysiologies of hypertension,” wrote Dr. Curt Furberg, a public health specialist with Wake Forest University, in an editorial accompanying the new research.

Cases of high blood pressure are on the rise in the U.S. as the population ages and becomes more sedentary and overweight.  The condition is one of the leading causes of strokes, heart attacks and kidney failure.

Nevertheless, only about half of all patients suffering from hypertension have their condition properly under control.  The difficulty of finding the right combination of drug therapies to manage high blood pressure is among the reasons behind that figure.

Virtually everyone with hypertension begins treatment with a diuretic — a time-tested, low-cost class of drugs that reduces fluid in the body.  Other medications that work in different ways are then added as needed to manage the condition.

Most patients ultimately end up taking two or more drugs.

But many doctors are hesitant to prescribe two- and three-drug combinations until they find the right mix of treatments.

Blood pressure is a balance between the amount of fluid in the arteries and how tight or relaxed those arteries are.  The amount of renin in a person’s blood helps doctors determine whether their hypertension is related more to fluid volume or to constricted arteries, said Dr. Alderman.

The three studies examined differences in treatment outcomes given variations in renin blood levels, race and ethnicity. 

In one study of 954 people prescribed a single drug, the researchers found that those with low levels of renin responded best to a diuretic.  However, those with high renin levels responded better to medicines such as ACE inhibitors, which target an artery-narrowing substance triggered by renin, Dr. Alderman said.

Roughly 8 percent of patients had an increase in blood pressure of at least 10 points after beginning medication, Dr. Alderman found.  Those most at risk were participants with low renin levels who were prescribed anti-renin drugs such as ACE inhibitors or beta-blockers.

When doctors observe that particular side effect, “we always assume it’s the patient’s fault” — that they did not take their pills or consumed too much salt, Dr. Alderman told AP.

“It may not be.”

In a separate study, Dr. Stephen Turner of the Mayo Clinic found that blood levels of renin measured after patients began an initial medication helped predict which additional drug was best to add for further treatment.

The scientists also found racial and ethnic variations in response to blood pressure treatments.  For instance, black patients tended to have lower renin levels than whites.  Indeed, doctors have long known that blacks fare better with a diuretic than with a beta-blocker as an initial treatment.

A British study examined two-drug combinations and found that blacks fared worse than whites when a calcium channel blocker was combined with an ACE inhibitor.  However, that combination worked far better for South Asian patients, for reasons that are not yet clear.

Other experts believe that the best approach to treating hypertension is to initially have all patients begin a two-drug combination to target the condition from all directions.

A Canadian study conducted last year appears to support this approach, finding in favor of beginning treatment with a low-dose combination of a diuretic and an ACE inhibitor.

Dr. Turner said that while renin plays a small role in patient variability, he is still searching for underlying genetic variations that might one day offer a better guide to selecting optimal hypertension treatments.

The three studies were published in the American Journal of Hypertension: http://www.nature.com/ajh/index.html

An array of 17 purpose-built oxygen-depleted titanium dioxide memristors built at HP Labs, imaged by an atomic force microscope. The wires are about 50 nm, or 150 atoms, wide. Electric current through the memristors shifts the oxygen vacancies, causing a gradual and persistent change in electrical resistance.

The New York Times, August/September 2010, by John Markoff — Scientists at Rice University and Hewlett-Packard are reporting this week that they can overcome a fundamental barrier to the continued rapid miniaturization of computer memory that has been the basis for the consumer electronics revolution.

In recent years the limits of physics and finance faced by chip makers had loomed so large that experts feared a slowdown in the pace of miniaturization that would act like a brake on the ability to pack ever more power into ever smaller devices like laptops, smartphones and digital cameras.

But the new announcements, along with competing technologies being pursued by companies like IBM and Intel, offer hope that the brake will not be applied any time soon.

In one of the two new developments, Rice researchers are reporting in Nano Letters, a journal of the American Chemical Society, that they have succeeded in building reliable small digital switches — an essential part of computer memory — that could shrink to a significantly smaller scale than is possible using conventional methods.

More important, the advance is based on silicon oxide, one of the basic building blocks of today’s chip industry, thus easing a move toward commercialization. The scientists said that PrivaTran, a Texas startup company, has made experimental chips using the technique that can store and retrieve information.

These chips store only 1,000 bits of data, but if the new technology fulfills the promise its inventors see, single chips that store as much as today’s highest capacity disk drives could be possible in five years. The new method involves filaments as thin as five nanometers in width — thinner than what the industry hopes to achieve by the end of the decade using standard techniques. The initial discovery was made by Jun Yao, a graduate researcher at Rice. Mr. Yao said he stumbled on the switch by accident.

Separately, H.P. is to announce on Tuesday that it will enter into a commercial partnership with a major semiconductor company to produce a related technology that also has the potential of pushing computer data storage to astronomical densities in the next decade. H.P. and the Rice scientists are making what are called memristors, or memory resistors, switches that retain information without a source of power.

“There are a lot of new technologies pawing for attention,” said Richard Doherty, president of the Envisioneering Group, a consumer electronics market research company in Seaford, N.Y. “When you get down to these scales, you’re talking about the ability to store hundreds of movies on a single chip.”

The announcements are significant in part because they indicate that the chip industry may find a way to preserve the validity of Moore’s Law. Formulated in 1965 by Gordon Moore, a co-founder of Intel, the law is an observation that the industry has the ability to roughly double the number of transistors that can be printed on a wafer of silicon every 18 months.

That has been the basis for vast improvements in technological and economic capacities in the past four and a half decades. But industry consensus had shifted in recent years to a widespread belief that the end of physical progress in shrinking the size of modern semiconductors was imminent. Chip makers are now confronted by such severe physical and financial challenges that they are spending $4 billion or more for each new advanced chip-making factory.

I.B.M., Intel and other companies are already pursuing a competing technology called phase-change memory, which uses heat to transform a glassy material from an amorphous state to a crystalline one and back.

Phase-change memory has been the most promising technology for so-called flash chips, which retain information after power is switched off.

The flash memory industry has used a number of approaches to keep up with Moore’s law without having a new technology. But it is as if the industry has been speeding toward a wall, without a way to get over it.

To keep up speed on the way to the wall, the industry has begun building three-dimensional chips by stacking circuits on top of one another to increase densities. It has also found ways to get single transistors to store more information. But these methods would not be enough in the long run.

The new technology being pursued by H.P. and Rice is regarded as a dark horse by industry powerhouses like Intel, I.B.M., Numonyx and Samsung. Researchers at those competing companies said that the phenomenon exploited by the Rice scientists had been seen in the literature as early as the 1960s.

“This is something that I.B.M. studied before and which is still in the research stage,” said Charles Lam, an I.B.M. specialist in semiconductor memories.

H.P. has for several years been making claims that its memristor technology can compete with traditional transistors, but the company will report this week that it is now more confident that its technology can compete commercially in the future.

In contrast, the Rice advance must still be proved. Acknowledging that researchers must overcome skepticism because silicon oxide has until now been known to the industry as an insulator, Jim Tour, a nanomaterials specialist at Rice, said he believed the industry would have to look seriously at the research team’s new approach.

“It’s a hard sell, because at first it’s obvious it won’t work,” he said. “But my hope is that this is so simple they will have to put it in their portfolio to explore.”

More chip news…

A Computer Chip Based on Probability Not Binary (video)

Lyric’s tiny chip is full of possibilities.

SingularityHub.com, September 8, 2010, by Aaron Saenz — Traditional computer processing is based on 1 and 0, yes and no, but Lyric Semiconductor wants us to consider the power of ‘maybe’. The Cambridge, Massachusetts startup recently came out of stealth to announce the development of a new computer chip that calculates using probabilities. Lyric has used $20 million in DARPA and venture funding to rethink the way we process problems, from the basic architecture of its circuits all the way up to its software language. Everything is grey, not black and white. This new approach to computing has led to a new kind of chip that can handle probability-based decisions more quickly, using less space and less energy. Instead of just cramming more gates onto an integrated circuit like other computer chip designers, Lyric may have found a way to make those elements work harder. Check out a brief example of the chip’s power in the video below.

Spam filtering, product suggestions, identity verification…a large portion of modern computer processing power is spent on problems that rely on computers to analyze the probability of a situation. The Lyric approach uses probability natively, allowing for a quicker solution to these problems. In the video below, a program attempts to determine how many users are typing on a keyboard, and in what order. Instead of trying to find the definite solution, it seeks the most likely solution…and ends up with the right answer. Pretty cool.


A big application for Lyric’s new technology will be error correction. 30 nm NAND flash memory will typically have 1 bit wrong per 1,000. As we build smaller and smaller chips, that error rate is likely to increase. Lyric Error Correction (LEC) uses the company’s probability processing to compensate for errors in memory. LEC gets the same results as traditional binary chips but in one-thirtieth the area and with only 10% of the power.

Lyric Error Correction cleans up mistakes in stored memory. LEC allows cheap flash drives (with higher error rates) to be used in small portable systems like phones.

While still built on silicon, Lyric’s probability chip uses a completely new architecture for gates. The chip doesn’t process a long series of open and closed connections as ones and zeros. Instead, its nodes are densely interconnected and variables talk to each other, creating a highly parallel processing method. Instead of Boolean logic (And, Or, Not), the chip relies on Bayesian probability logic. At every step in the process, Lyric had to rethink how computer processing was done. That means their approach has the potential to produce a revolutionary advance in computing.
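
To make the contrast concrete, here is a minimal software sketch of the Bayesian-style updating described above. It is purely illustrative: it does not represent Lyric’s hardware, architecture or any actual API, and the function names and error rates are assumptions chosen for the example (the 1-in-1,000 figure echoes the flash error rate mentioned earlier).

def bayesian_update(prior, p_obs_if_true, p_obs_if_false):
    # Bayes' rule: posterior probability of the hypothesis given one observation
    numerator = prior * p_obs_if_true
    return numerator / (numerator + (1 - prior) * p_obs_if_false)

# Assume a noisy read returns the stored bit correctly 99.9% of the time.
p_correct = 0.999
belief = 0.5  # start undecided: the bit is equally likely to be 0 or 1

for observed in [1, 1, 0, 1]:  # repeated noisy reads of the same stored bit
    if observed == 1:
        belief = bayesian_update(belief, p_correct, 1 - p_correct)
    else:
        belief = bayesian_update(belief, 1 - p_correct, p_correct)

print(f"P(bit == 1) = {belief:.6f}")  # a graded belief, not a hard yes/no answer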

It also means that everything they do is relatively untested. What kind of artifacts does probability processing introduce into computing? Are there strange limits on power consumption, environmental sensitivities, or long term failure concerns? We don’t even know if they’ll be able to scale up production to meet demand. So while Lyric’s potential for more powerful processing has generated a lot of buzz, we need to be cautious in our expectations. It may take a long time before we know if this technology is truly viable.

That being said, I love to consider what probability based processing could mean. It could let us solve problems in a way akin to how the universe views the basic physical interactions between its smallest particles. Perhaps it will have an impact in how we model biological systems. Maybe it will affect how we simulate the brain. We’ll have to wait and see what Lyric can accomplish in the years ahead.

Personally, I’m hoping for an Infinite Improbability Drive.

Moore’s Law

Moore’s law describes a long-term trend in the history of computing hardware. The number of transistors that can be placed inexpensively on an integrated circuit has doubled approximately every two years. The trend has continued for more than half a century and is not expected to stop until 2015 or later.
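
Written as a formula, this is simply the doubling claim above, with N_0 denoting the transistor count at a reference year t_0 and a two-year doubling period:

    N(t) \approx N_0 \times 2^{(t - t_0)/2}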

The capabilities of many digital electronic devices are strongly linked to Moore’s law: processing speed, memory capacity, sensors and even the number and size of pixels in digital cameras. All of these are improving at (roughly) exponential rates as well. This has dramatically increased the usefulness of digital electronics in nearly every segment of the world economy. Moore’s law precisely describes a driving force of technological and social change in the late 20th and early 21st centuries.

The law is named after Intel co-founder Gordon E. Moore, who described the trend in his 1965 paper. The paper noted that the number of components in integrated circuits had doubled every year from the invention of the integrated circuit in 1958 until 1965 and predicted that the trend would continue “for at least ten years”. His prediction has proved to be uncannily accurate, in part because the law is now used in the semiconductor industry to guide long-term planning and to set targets for research and development. This fact would support an alternative view that the “law” unfolds as a self-fulfilling prophecy, where the goal set by the prediction charts the course for realized capability.

The term “Moore’s law” was coined around 1970 by the Caltech professor, VLSI pioneer, and entrepreneur, Carver Mead. Predictions of similar increases in computer power had existed years prior. Alan Turing in a 1950 paper had predicted that by the turn of the millennium, computers would have a billion words of memory. Moore may have heard Douglas Engelbart, a co-inventor of today’s mechanical computer mouse, discuss the projected downscaling of integrated circuit size in a 1960 lecture. A New York Times article published August 31, 2009, credits Engelbart as having made the prediction in 1959.

Moore’s original statement that transistor counts had doubled every year can be found in his publication “Cramming more components onto integrated circuits“, Electronics Magazine 19 April 1965:

The complexity for minimum component costs has increased at a rate of roughly a factor of two per year… Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years. That means by 1975, the number of components per integrated circuit for minimum cost will be 65,000. I believe that such a large circuit can be built on a single wafer.
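
The 65,000 figure is simple arithmetic: ten annual doublings multiply the count by 2^{10} = 1024, and since 65,000 \approx 2^{16}, the implied 1965 starting point is roughly 2^{6} = 64 components:

    N_{1975} \approx 64 \times 2^{10} = 65536 \approx 65000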

Moore slightly altered the formulation of the law over time, which in retrospect bolstered its perceived accuracy. Most notably, in 1975, Moore altered his projection to a doubling every two years. Despite popular misconception, he is adamant that he did not predict a doubling “every 18 months”. However, David House, an Intel colleague, had factored in the increasing performance of transistors to conclude that integrated circuits would double in performance every 18 months.

In April 2005, Intel offered US$10,000 to purchase a copy of the original Electronics Magazine. David Clark, an engineer living in the United Kingdom, was the first to find a copy and offer it to Intel.
Future trends
Computer industry technology “road maps” predict (as of 2001) that Moore’s law will continue for several chip generations. Depending on the doubling time used in the calculations, this could mean up to a hundredfold increase in transistor count per chip within a decade. The semiconductor industry technology roadmap uses a three-year doubling time for microprocessors, leading to a tenfold increase in the next decade. Intel was reported in 2005 as stating that the downsizing of silicon chips with good economics can continue during the next decade, and in 2008 as predicting the trend through 2029.
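
The hundredfold and tenfold figures follow directly from the assumed doubling time T over a ten-year span:

    2^{10/T} \approx 101 \ \ (T = 1.5 \ \mathrm{years}) \qquad \mathrm{versus} \qquad 2^{10/T} \approx 10 \ \ (T = 3 \ \mathrm{years})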

Some of the new directions in research that may allow Moore’s law to continue are:

  • Researchers from IBM and Georgia Tech set a new speed record when they ran a helium-supercooled silicon-germanium transistor at 500 gigahertz (GHz). The transistor operated above 500 GHz at 4.5 K (−451 °F / −268.65 °C), and simulations showed that it could likely run at 1 THz (1,000 GHz). However, this trial tested only a single transistor.
  • In early 2006, IBM researchers announced that they had developed a technique to print circuitry only 29.9 nm wide using deep-ultraviolet (DUV, 193-nanometer) optical lithography. IBM claims that this technique may allow chipmakers to use then-current methods for seven more years while continuing to achieve results forecast by Moore’s law. New methods that can achieve smaller circuits are expected to be substantially more expensive.
  • In April 2008, researchers at HP Labs announced the creation of a working “memristor”: a fourth basic passive circuit element whose existence had previously only been theorized. The memristor’s unique properties allow for the creation of smaller and better-performing electronic devices. This memristor bears some resemblance to resistive memory (CBRAM or RRAM) developed independently and recently by other groups for non-volatile memory applications.
  • In February 2010, researchers at the Tyndall National Institute in Cork, Ireland announced a breakthrough in transistors with the design and fabrication of the world’s first junctionless transistor. The research, led by Professor Jean-Pierre Colinge, was published in Nature Nanotechnology and describes a control gate around a silicon nanowire that can tighten around the wire to the point of closing down the passage of electrons without the use of junctions or doping. The researchers claim that the new junctionless transistors can be produced at the 10-nanometer scale using existing fabrication techniques.

The trend of scaling for NAND flash memory allows doubling of components manufactured in the same wafer area in less than 18 months.

Atomistic simulation result for the formation of an inversion channel (electron density) and attainment of threshold voltage (IV) in a nanowire MOSFET. Note that the threshold voltage for this device lies around 0.45 V. Nanowire MOSFETs lie toward the end of the ITRS roadmap for scaling devices below 10 nm gate lengths.

On 13 April 2005, Gordon Moore stated in an interview that the law cannot be sustained indefinitely: “It can’t continue forever. The nature of exponentials is that you push them out and eventually disaster happens.” He also noted that transistors would eventually reach the limits of miniaturization at atomic levels:

In terms of size [of transistors] you can see that we’re approaching the size of atoms which is a fundamental barrier, but it’ll be two or three generations before we get that far—but that’s as far out as we’ve ever been able to see. We have another 10 to 20 years before we reach a fundamental limit. By then they’ll be able to make bigger chips and have transistor budgets in the billions.

In January 1995, the Digital Alpha 21164 microprocessor had 9.3 million transistors. This 64-bit processor was a technological spearhead at the time, even if the circuit’s market share remained average. Six years later, a state of the art microprocessor contained more than 40 million transistors. It is theorised that with further miniaturisation, by 2015 these processors should contain more than 15 billion transistors, and by 2020 will be in molecular scale production, where each molecule can be individually positioned.

In 2003 Intel predicted the end would come between 2013 and 2018 with 16 nanometer manufacturing processes and 5 nanometer gates, due to quantum tunnelling, although others suggested chips could just get bigger, or become layered. In 2008 it was noted that for the last 30 years it has been predicted that Moore’s law would last at least another decade.

Some see the limits of the law as being far in the distant future. Lawrence Krauss and Glenn D. Starkman announced an ultimate limit of around 600 years in their paper, based on rigorous estimation of total information-processing capacity of any system in the Universe.

Then again, the law has often met obstacles that first appeared insurmountable but were indeed surmounted before long. In that sense, Moore says he now sees his law as more beautiful than he had realized: “Moore’s law is a violation of Murphy’s law. Everything gets better and better.”

Futurists and Moore’s law

Kurzweil’s extension of Moore’s law from integrated circuits to earlier transistors, vacuum tubes, relays and electromechanical computers.

Futurists such as Ray Kurzweil, Bruce Sterling, and Vernor Vinge believe that the exponential improvement described by Moore’s law will ultimately lead to a technological singularity: a period where progress in technology occurs almost instantly.

Although Kurzweil agrees that by 2019 the current strategy of ever-finer photolithography will have run its course, he speculates that this does not mean the end of Moore’s law:

Moore’s law of Integrated Circuits was not the first, but the fifth paradigm to forecast accelerating price-performance ratios. Computing devices have been consistently multiplying in power (per unit of time) from the mechanical calculating devices used in the 1890 U.S. Census, to [Newman‘s] relay-based “[Heath] Robinson” machine that cracked the Lorenz cipher, to the CBS vacuum tube computer that predicted the election of Eisenhower, to the transistor-based machines used in the first space launches, to the integrated-circuit-based personal computer.

Kurzweil speculates that it is likely that some new type of technology (possibly optical or quantum computers) will replace current integrated-circuit technology, and that Moore’s Law will hold true long after 2020.

Lloyd shows how the potential computing capacity of a kilogram of matter equals pi times its energy divided by Planck’s constant. Since the energy is such a large number and Planck’s constant is so small, this equation generates an extremely large number: about 5.0 × 10^50 operations per second.
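
As a rough back-of-the-envelope check of the formula as stated here, taking E = mc^2 \approx 9.0 \times 10^{16} joules for one kilogram of matter and h \approx 6.63 \times 10^{-34} joule-seconds:

    \frac{\pi E}{h} = \frac{\pi \, m c^{2}}{h} \approx \frac{\pi \times 9.0 \times 10^{16}}{6.63 \times 10^{-34}} \approx 4 \times 10^{50} \ \mathrm{operations \ per \ second}

This is the same order of magnitude as the figure quoted above; the exact prefactor in Lloyd’s derivation depends on whether the ordinary or the reduced Planck constant is used.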

Kurzweil believes that the exponential growth of Moore’s law will continue beyond the use of integrated circuits into technologies that will lead to the technological singularity. The Law of Accelerating Returns described by Ray Kurzweil has in many ways altered the public’s perception of Moore’s Law. It is a common (but mistaken) belief that Moore’s Law makes predictions regarding all forms of technology, when it has only actually been demonstrated clearly for semiconductor circuits. However, many people, including Richard Dawkins, have observed that Moore’s law will apply, at least by inference, to any problem that can be attacked by digital computers and is in its essence also a digital problem. Therefore progress in genetics, where the coding is digital (the genetic code of G, A, T and C), may also advance at a Moore’s-law rate. Many futurists still use the term “Moore’s law” in this broader sense to describe ideas like those put forth by Kurzweil, but do not fully understand the difference between linear problems and digital problems.

Moore himself, who never intended his eponymous law to be interpreted so broadly, has quipped:

“Moore’s law has been the name given to everything that changes exponentially. I say, if Gore invented the Internet, I invented the exponential.”

Consequences and limitations

Transistor count versus computing performance

The exponential processor transistor growth predicted by Moore does not always translate into exponentially greater practical CPU performance. For example, the higher transistor density in multi-core CPUs doesn’t greatly increase speed on many consumer applications that are not parallelized. There are cases where a roughly 45% increase in processor transistors has translated to only a roughly 10–20% increase in processing power. Viewed even more broadly, the speed of a system is often limited by factors other than processor speed, such as internal bandwidth and storage speed, and one can judge a system’s overall performance based on factors other than speed, like cost efficiency or electrical efficiency.

Importance of non-CPU bottlenecks

As CPU speeds and memory capacities have increased, other aspects of performance like memory and disk access speeds have failed to keep up. As a result, those access latencies are more and more often a bottleneck in system performance, and high-performance hardware and software have to be designed to reduce their impact.

In processor design, out-of-order execution and on-chip caching and prefetching reduce the impact of memory latency at the cost of using more transistors and increasing processor complexity. In software, operating systems and databases have their own finely tuned caching and prefetching systems to minimize the number of disk seeks, including systems like ReadyBoost that use low-latency flash memory. Some databases can compress indexes and data, reducing the amount of data read from disk at the cost of using CPU time for compression and decompression. The increasing relative cost of disk seeks also makes the high access speeds provided by solid state disks more attractive for some applications.
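
As a toy illustration of the caching idea (this is not any particular operating system’s or database’s implementation; the block size, simulated latency and function names below are invented for the example), an in-memory cache lets repeated requests for the same data skip the slow storage access entirely:

from functools import lru_cache
import time

def read_block_from_disk(block_id: int) -> bytes:
    # Stand-in for a slow storage access; the sleep simulates ~10 ms of seek latency.
    time.sleep(0.01)
    return f"block-{block_id}".encode()

@lru_cache(maxsize=1024)  # keep the 1,024 most recently used blocks in memory
def read_block(block_id: int) -> bytes:
    return read_block_from_disk(block_id)

start = time.perf_counter()
for _ in range(100):
    read_block(42)  # only the first call pays the storage latency
print(f"100 reads of one hot block: {time.perf_counter() - start:.3f} s")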

Parallelism and Moore’s law

Parallel computation has recently become necessary to take full advantage of the gains allowed by Moore’s law. For years, processor makers consistently delivered increases in clock rates and instruction-level parallelism, so that single-threaded code executed faster on newer processors with no modification. Now, to manage CPU power dissipation, processor makers favor multi-core chip designs, and software has to be written in a multi-threaded or multi-process manner to take full advantage of the hardware.
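
Below is a minimal sketch of that shift, assuming a CPU-bound task that can be split into independent pieces (the prime-counting workload and chunk sizes are arbitrary stand-ins, not a benchmark): the same work runs once on a single core and once across all available cores.

from multiprocessing import Pool
import os
import time

def count_primes(limit: int) -> int:
    # Deliberately CPU-bound trial division with no shared state,
    # so the chunks can run in parallel without coordination.
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    chunks = [50_000] * 8  # eight independent pieces of work

    start = time.perf_counter()
    serial = [count_primes(c) for c in chunks]  # single-core baseline
    t_serial = time.perf_counter() - start

    start = time.perf_counter()
    with Pool(processes=os.cpu_count()) as pool:  # one worker per core
        parallel = pool.map(count_primes, chunks)
    t_parallel = time.perf_counter() - start

    assert serial == parallel
    print(f"serial: {t_serial:.2f} s, parallel: {t_parallel:.2f} s")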

Obsolescence

A negative implication of Moore’s Law is obsolescence, that is, as technologies continue to rapidly “improve”, these improvements can be significant enough to rapidly obsolete predecessor technologies. In situations in which security and survivability of hardware and/or data are paramount, or in which resources are limited, rapid obsolescence can pose obstacles to smooth or continued operations.

IBM Chip

The Distant Future
Josephson Junctions

In this close-up of a Josephson junction chip, the junctions themselves lie beneath the four circles in the brown regions. Ultrafast switches, they can be turned on in as little as six trillionths of a second and are made of lead or niobium – both superconductors – separated by a thin layer of insulating oxide. The narrowest lines in this photo are about 0.00001 inches wide. Actual size of the portion shown here: 0.001 x 0.00112 inches

Source: Stan Augarten – The National Museum of American History and the Smithsonian Institution
Computer scientists are obsessed with speed, their obsession motivated by the harsh economics of the computer business as much as by personal and financial ambition. Computer manufacturers are constantly striving to shave billionths of a second off the performance times of their products – long-term efforts that lead some companies to spend tens of millions of dollars. For the faster a computer, the more tasks it can execute in a given period, and the better it can earn its keep.

Of course, most computers are perfectly adequate for the vast majority of their users; it doesn’t matter to most personal computer owners whether their machines need two millionths of a second to multiply two numbers or only half that. But there’s a certain rarefied class of operators – NASA, the military, the National Weather Bureau – for whom no machine is ever fast enough. These are the people who buy supercomputers like the Cray-1, built by Cray Research Inc. of Minnesota, which can carry out more than a hundred million operations a second. Supercomputers are used for such highly complex chores as weather predictions and airplane air-flow analysis.

For all their phenomenal speed, supercomputers still require hours to perform some calculations, and it may be impossible to boost their speed significantly with conventional semiconductor technology. High-speed chips generate excessive amounts of heat, particularly if they’re packed closely together. Scientists at IBM have therefore been experimenting with an exotic class of ICs called Josephson junctions, which are designed to operate in tubs of liquid helium at temperatures only a few degrees above absolute zero.

As envisioned by IBM, a Josephson junction computer would consist of a central core of about fifty to a hundred chips, all packed tightly in a cube about two inches to a side and immersed in liquid helium. The entire apparatus, core and tub, would probably be about the size of a refrigerator. By cooling the circuitry to almost absolute zero (-459.7º F) and reducing the lengths of the connecting wires to a bare minimum, a Josephson junction computer might attain speeds of one billionth of a second per operation, or less, ten times faster than today’s quickest computers. A full-scale Josephson junction device has yet to be made, but IBM expects to have one by the early 1990s, if not sooner.