Health IT’s Billion-Dollar Man

MIT Technology Review, November/December 2009, by David Talbot — By one estimate, only 17 percent of U.S. doctors use electronic records. But the federal government has ambitious plans to create a network in which patient information is shared electronically among medical institutions. As National Coordinator for Health Information Technology, David Blumenthal is writing the rules under which the federal government will spend more than $21 billion in stimulus funds to get the job done. Blumenthal, previously a practicing physician at Massachusetts General Hospital in Boston, spoke with David Talbot, Technology Review’s chief correspondent.

TR: How long will it take to create a national health-information network?

David Blumenthal: The president has said that everyone will have an electronic health record by 2014. That is the goal we are working toward right now. We are trying to make the network available as fast as we can.

TR: Can health IT reduce the skyrocketing U.S. health-care costs?

DB: The Congressional Budget Office projected dollar savings from the [stimulus] legislation at about $12 billion over 10 years. I expect that the actual savings will far exceed that amount.

TR: How do we get around the potential problems with electronic systems – such as overwhelming physicians with data or actually causing medical errors?

DB: Electronic health records and other forms of health IT can certainly be improved, and there are examples of bad implementation and other problems. I still think that on the whole, across the country we’d be better off with universal availability of electronic health records. We’d have fewer errors, fewer missed diagnoses, less duplication of tests, and fewer adverse drug events.

TR: If health-IT systems reduce such errors and lead to fewer needless procedures, why haven’t the insurance companies stampeded to get them installed?

DB: The insurance companies have been able to pass along the costs of waste in our health-care system to their clients.

TR: You are setting the definitions of “meaningful use” – the criteria hospitals and physicians must meet to collect their cash incentives for installing IT. What will be in these definitions?

DB: I can’t speak to the specific criteria at this point. We are in the middle of writing the regulations, and the initial release is anticipated in December.

TR: You’re giving out $564 million for states to form health-information exchanges among medical providers. Why don’t even the most electronically progressive hospitals – including your own Mass General – already share their data?

DB: There has never been a business case for health-information exchange. As a matter of fact, there has been a negative case: if you give away your information, you may lose it. You may lose the patient.

ZDNet.com, November 1, 2009, by Chris Jablonski — Implantable organ and tissue “scaffolds” are currently in the spotlight in regenerative medicine and, according to a BBC report, may within 30 to 50 years allow for the replacement of most body parts that fail with age.

That means future centenarians born today could have a “physical” age of 50 at a calendar age of 100.


A “scaffolding” technique developed at Leeds University allows for transplantable tissues, and eventually organs, that the body can make its own. Once the scaffold has been transplanted, the body takes over and repopulates it with cells without any fear of rejection – the main reason why normal transplants wear out and fail.

Using this technique, a research team at Leeds has managed to make fully functioning heart valves. The process involves taking a healthy donor heart valve – from a human or a suitable animal, such as a pig – and gently stripping away its cells using a cocktail of enzymes and detergents. The inert scaffold that is left can then be transplanted into the patient, writes the BBC. According to Eileen Ingham, a professor at the university’s Institute of Medical and Biological Engineering, trials in animals and on 40 patients in Brazil have shown promising results.

In Israel, another approach to scaffolding is underway at Tel Aviv University’s Department of Biomedical Engineering. There, professor Meital Zilberman has developed an artificial, biologically active scaffold made from soluble fibers, which may help humans replace lost or missing bone.

Her flexible scaffolding connects tissues together as it releases growth-stimulating drugs to the place where new bone or tissue is needed – like the scaffolding that surrounds an existing building when additions to that building are made.

“The bioactive agents that spur bone and tissue to regenerate are available to us. The problem is that no technology has been able to effectively deliver them to the tissue surrounding that missing bone,” says Zilberman.

The invention could be used to restore missing bone in a limb lost in an accident, or repair receded jawbones necessary to secure dental implants, says Zilberman. (Recently, Columbia University researchers used adult stem cells to create a jaw bone.) The scaffold can be shaped so the bone will grow into the proper form. After a period of time, the fibers can be programmed to dissolve, leaving no trace.


Composite drug-releasing fibers used as basic elements of scaffolding for tissue and bone regeneration. (Credit: AFTAU)


“The fibers not only support body parts like bones and arteries. They’re also specially developed to release drugs and proteins in a controlled manner. Our special 3-D matrix can hold together drugs that are particularly vulnerable to breaking down easily. The matrix gives the body shape and form, coaxing it to re-grow and strengthen missing parts,” she says.

According to Zilberman, in vitro results on bone have so far been good, and some preliminary unpublished results from animal models have shown excellent promise for bone regeneration. “It sounds simple, but it’s not. It’s quite difficult to develop a process for scaffold formation for bone growth. It’s a delicate balance to apply only mild conditions that will not destroy the activity of the growth factor molecules,” she says.

With more research, Zilberman says, it could also serve as the basic technology for regenerating other types of human tissues, including muscle, arteries, and skin.

After the Singularity: A Talk with Ray Kurzweil

by Ray Kurzweil 

John Brockman, editor of Edge.org, recently interviewed Ray Kurzweil on the Singularity and its ramifications. According to Ray, “We are entering a new era. I call it ‘the Singularity.’ It’s a merger between human intelligence and machine intelligence that is going to create something bigger than itself. It’s the cutting edge of evolution on our planet. One can make a strong case that it’s actually the cutting edge of the evolution of intelligence in general, because there’s no indication that it’s occurred anywhere else. To me that is what human civilization is all about. It is part of our destiny and part of the destiny of evolution to continue to progress ever faster, and to grow the power of intelligence exponentially. To contemplate stopping that–to think human beings are fine the way they are–is a misplaced fond remembrance of what human beings used to be. What human beings are is a species that has undergone a cultural and technological evolution, and it’s the nature of evolution that it accelerates, and that its powers grow exponentially, and that’s what we’re talking about. The next stage of this will be to amplify our own intellectual powers with the results of our technology.” 

Originally published on Edge.org.

RAY KURZWEIL: My interest in the future really stems from my interest in being an inventor. I’ve had the idea of being an inventor since I was five years old, and I quickly realized that you had to have a good idea of the future if you’re going to succeed as an inventor. It’s a little bit like surfing; you have to catch a wave at the right time. By the time you finally get something done, the world has become a different place than it was when you started. Most inventors fail not because they can’t get something to work, but because all the market’s enabling forces are not in place at the right time.

So I became a student of technology trends, and have developed mathematical models about how technology evolves in different areas like computers, electronics in general, communication storage devices, biological technologies like genetic scanning, reverse engineering of the human brain, miniaturization, the size of technology, and the pace of paradigm shifts. This helped guide me as an entrepreneur and as a technology creator so that I could catch the wave at the right time.

This interest in technology trends took on a life of its own, and I began to project some of them using what I call the law of accelerating returns, which I believe underlies technology evolution to future periods. I did that in a book I wrote in the 1980s, which had a road map of what the 1990s and the early 2000’s would be like, and that worked out quite well. I’ve now refined these mathematical models, and have begun to really examine what the 21st century would be like. It allows me to be inventive with the technologies of the 21st century, because I have a conception of what technology, communications, the size of technology, and our knowledge of the human brain will be like in 2010, 2020, or 2030. If I can come up with scenarios using those technologies, I can be inventive with the technologies of the future. I can’t actually create these technologies yet, but I can write about them.

One thing I’d say is that if anything the future will be more remarkable than any of us can imagine, because although any of us can only apply so much imagination, there’ll be thousands or millions of people using their imaginations to create new capabilities with these future technology powers. I’ve come to a view of the future that really doesn’t stem from a preconceived notion, but really falls out of these models, which I believe are valid both for theoretical reasons and because they also match the empirical data of the 20th century.

One thing that observers don’t fully recognize, and that a lot of otherwise thoughtful people fail to take into consideration adequately, is the fact that the pace of change itself has accelerated. Centuries ago people didn’t think that the world was changing at all. Their grandparents had the same lives that they did, and they expected their grandchildren would do the same, and that expectation was largely fulfilled.

Today it’s an axiom that life is changing and that technology is affecting the nature of society. But what’s not fully understood is that the pace of change is itself accelerating, and the last 20 years are not a good guide to the next 20 years. We’re doubling the paradigm-shift rate, the rate of progress, every decade. Because we’ve been accelerating up to this point, the 20th century was like 25 years of change at today’s rate of change, and the next 20 years alone will match the amount of progress we made in the whole 20th century. In the next 25 years we’ll make four times the progress you saw in the 20th century. And we’ll make 20,000 years of progress in the 21st century, which is almost a thousand times more technical change than we saw in the 20th century.
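The 20,000-year figure can be reproduced with a back-of-the-envelope calculation. The sketch below uses one crediting convention of our own choosing, not one stated in the interview: the rate of progress doubles every decade, and each calendar decade is counted at its end-of-decade rate.

```python
# Back-of-the-envelope check of the "20,000 years of progress" figure.
# Assumed convention (ours): the rate of progress doubles every decade,
# and each calendar decade is credited at its end-of-decade rate.

def century_progress_years():
    total = 0.0
    for decade in range(1, 11):      # the ten decades of the 21st century
        rate = 2.0 ** decade         # multiple of the year-2000 rate
        total += 10 * rate           # 10 calendar years at that rate
    return total

print(century_progress_years())      # 20460.0 "year-2000-equivalent" years
```

Under this convention the century works out to about 20,460 year-2000-equivalent years, in line with the rough 20,000-year figure; other conventions (e.g. integrating the rate continuously) give numbers closer to 15,000, so the figure is best read as order-of-magnitude.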

Specifically, computation is growing exponentially. The one exponential trend that people are aware of is called Moore’s Law. But Moore’s Law itself is just one method for bringing exponential growth to computers. People are aware that we can put twice as many transistors on an integrated circuit every two years. But in fact those transistors also run twice as fast, so both capacity and speed double, which means that the power of computation quadruples every two years – in other words, it doubles roughly every 12 months.
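The arithmetic behind that quadrupling can be made explicit. A minimal sketch, assuming (as the passage does) that transistor density and transistor speed each double every two-year period:

```python
# If density and speed each double every two-year period, compute power
# grows by a factor of 2 * 2 = 4 per period, i.e. it doubles yearly.

def power_multiple(years):
    periods = years / 2.0            # number of two-year doubling periods
    return (2.0 * 2.0) ** periods    # density doubling x speed doubling

print(power_multiple(2))             # 4.0 -- quadruples in two years
print(power_multiple(1))             # 2.0 -- doubles in one year
```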

What’s not fully realized is that Moore’s Law was not the first but the fifth paradigm to bring exponential growth to computers. We had electro-mechanical calculators, relay-based computers, vacuum tubes, and transistors. Every time one paradigm ran out of steam another took over. For a while there were shrinking vacuum tubes, and finally they couldn’t make them any smaller and still keep the vacuum, so a whole different method came along. They weren’t just tiny vacuum tubes, but transistors, which constitute a whole different approach. There’s been a lot of discussion about Moore’s Law running out of steam in about 12 years because by that time the transistors will only be a few atoms in width and we won’t be able to shrink them any more. And that’s true, so that particular paradigm will run out of steam.

We’ll then go to the sixth paradigm, which is massively parallel computing in three dimensions. We live in a 3-dimensional world, and our brains organize in three dimensions, so we might as well compute in three dimensions. The brain processes information using an electrochemical method that’s ten million times slower than electronics. But it makes up for this by being three-dimensional. Every intra-neural connection computes simultaneously, so you have a hundred trillion things going on at the same time. And that’s the direction we’re going to go in. Right now, chips, even though they’re very dense, are flat. Fifteen or twenty years from now computers will be massively parallel and will be based on biologically inspired models, which we will devise largely by understanding how the brain works.

We’re already being significantly influenced by it. It’s generally recognized, or at least accepted by a lot of observers, that we’ll have the hardware to emulate human intelligence within a brief period of time – I’d say about twenty years. A thousand dollars of computation will equal the 20 million billion calculations per second of the human brain. What’s more controversial is whether or not we will have the software. People acknowledge that we’ll have very fast computers that could in theory emulate the human brain, but we don’t really know how the brain works, and we won’t have the software, the methods, or the knowledge to create a human level of intelligence. Without this you just have an extremely fast calculator.

But our knowledge of how the brain works is also growing exponentially. The brain is not of infinite complexity. It’s a very complex entity, and we’re not going to achieve a total understanding through one simple breakthrough, but we’re further along in understanding the principles of operation of the human brain than most people realize. The technology for scanning the human brain is growing exponentially, our ability to actually see the internal connection patterns is growing, and we’re developing more and more detailed mathematical models of biological neurons. We actually have very detailed mathematical models of several dozen regions of the human brain and how they work, and have recreated their methodologies using conventional computation. The results of those re-engineered or re-implemented synthetic models of those brain regions match the human brain very closely.

We’re also literally replacing sections of the brain that are degraded or don’t work any more because of disability or disease. There are neural implants for Parkinson’s disease and well-known cochlear implants for deafness. A new generation of cochlear implants coming out now provides a thousand points of frequency resolution and will allow deaf people to hear music for the first time. The Parkinson’s implant actually replaces the function of the neurons destroyed by that disease. So we’ve shown that it’s feasible to understand regions of the human brain and reimplement those regions in conventional electronic computation that will actually interact with the brain and perform those functions.

If you follow this work and work out the mathematics of it, it’s a conservative scenario to say that within 30 years – possibly much sooner – we will have a complete map of the human brain, we will have complete mathematical models of how each region works, and we will be able to re-implement the methods of the human brain, which are quite different from many of the methods used in contemporary artificial intelligence.

But these are actually similar to methods that I use in my own field – pattern recognition – which is the fundamental capability of the human brain. We can’t think fast enough to logically analyze situations very quickly, so we rely on our powers of pattern recognition. Within 30 years we’ll be able to create non-biological intelligence that’s comparable to human intelligence. Just like a biological system, we’ll have to provide it an education, but here we can bring to bear some of the advantages of machine intelligence: Machines are much faster, and much more accurate. A thousand-dollar computer can remember billions of things accurately – we’re hard-pressed to remember a handful of phone numbers.

Once they learn something, machines can also share their knowledge with other machines. We don’t have quick downloading ports at the level of our intra-neuronal connection patterns and our concentrations of neurotransmitters, so we can’t just download knowledge. I can’t just take my knowledge of French and download it to you, but machines can. So we can educate machines through a process that can be hundreds or thousands of times faster than the comparable process in humans. It can provide a 20-year education to a human-level machine in maybe a few weeks or a few days and then these machines can share their knowledge.

The primary implication of all this will be to enhance our own human intelligence. We’re going to be putting these machines inside our own brains. We’re starting to do that now with people who have severe medical problems and disabilities, but ultimately we’ll all be doing this. Without surgery, we’ll be able to introduce calculating machines into the blood stream that will be able to pass through the capillaries of the brain. These intelligent, blood-cell-sized nanobots will actually be able to go to the brain and interact with biological neurons. The basic feasibility of this has already been demonstrated in animals.

One application of sending billions of nanobots into the brain is full-immersion virtual reality. If you want to be in real reality, the nanobots sit there and do nothing, but if you want to go into virtual reality, the nanobots shut down the signals coming from your real senses, replace them with the signals you would be receiving if you were in the virtual environment, and then your brain feels as if it’s in the virtual environment. And you can go there yourself – or, more interestingly, you can go there with other people – and you can have everything from sexual and sensual encounters to business negotiations, in full-immersion virtual-reality environments that incorporate all of the senses.

People will beam their own flow of sensory experiences and the neurological correlates of their emotions out into the Web, the way people now beam images from web cams in their living rooms and bedrooms. This will enable you to plug in and actually experience what it’s like to be someone else, including their emotional reactions, à la the plot concept of Being John Malkovich. In virtual reality you don’t have to be the same person. You can be someone else, and can project yourself as a different person.

Most importantly, we’ll be able to enhance our biological intelligence with non-biological intelligence through intimate connections. This won’t mean just having one thin pipe between the brain and a non-biological system, but actually having non-biological intelligence in billions of different places in the brain. I don’t know about you, but there are lots of books I’d like to read and Web sites I’d like to go to, and I find my bandwidth limiting. So instead of having a mere hundred trillion connections, we’ll have a hundred trillion times a million. We’ll be able to enhance our cognitive pattern recognition capabilities greatly, think faster, and download knowledge.

If you follow these trends further, you get to a point where change is happening so rapidly that there appears to be a rupture in the fabric of human history. Some people have referred to this as the “Singularity.” There are many different definitions of the Singularity, a term borrowed from physics, which means an actual point of infinite density and energy that’s kind of a rupture in the fabric of space-time.

Here, that concept is applied by analogy to human history, where we see a point where this rate of technological progress will be so rapid that it appears to be a rupture in the fabric of human history. It’s impossible in physics to see beyond a Singularity, which creates an event boundary, and some people have hypothesized that it will be impossible to characterize human life after the Singularity. My question is, what will human life be like after the Singularity, which I predict will occur somewhere right before the middle of the 21st century?

A lot of the concepts we have of the nature of human life – such as longevity – suggest a limited capability as biological, thinking entities. All of these concepts are going to undergo significant change as we basically merge with our technology. It’s taken me a while to get my own mental arms around these issues. In the book I wrote in the 1980s, The Age of Intelligent Machines, I ended with the specter of machines matching human intelligence somewhere between 2020 and 2050, and I basically have not changed my view on that time frame, although I left behind my view that this is a final specter. In the book I wrote ten years later, The Age of Spiritual Machines, I began to consider what life would be like past the point where machines could compete with us. Now I’m trying to consider what that will mean for human society.

One thing that we should keep in mind is that innate biological intelligence is fixed. We have 10^26 calculations per second in the whole human race, and there are ten billion human minds. Fifty years from now, the biological intelligence of humanity will still be at that same order of magnitude. On the other hand, machine intelligence is growing exponentially, and today it’s a million times less than that biological figure. So although it still seems that human intelligence is dominating, which it is, the crossover point is around 2030, and non-biological intelligence will continue its exponential rise.
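Those round numbers are enough to sanity-check the crossover date. A minimal sketch, with one assumption of ours (that total machine computation doubles every year; the passage only says it grows exponentially):

```python
import math

# Machines start a factor of 1e6 behind the ~1e26 calc/sec of biological
# humanity, which is taken as fixed (both figures from the passage).
# Assumption (ours): total machine capacity doubles every year.
gap = 1e6
years_to_cross = math.log2(gap)      # doublings needed to close the gap
print(round(years_to_cross, 1))      # 19.9 years
```

Roughly 20 doublings close a million-fold gap, so from an early-2000s starting point a yearly doubling lands in the early 2020s; a slightly slower doubling time pushes the crossover toward the 2030 date quoted here.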

EDGE: This reminds me of a conversation I once had with John Lilly about dolphins. I asked him, “How do you know they’re more intelligent than we are?” Isn’t knowledge tautological? How can we know more than we do know? Who would know it, except us?

KURZWEIL: That’s actually a very good point, because one response is not to want to be enhanced, not to have nanobots. A lot of people say that they just want to stay a biological person. But what will the Singularity look like to people who want to remain biological? The answer is that they really won’t notice it, except for the fact that machine intelligence will appear to biological humanity to be their transcendent servants. It will appear that these machines are very friendly, are taking care of all of our needs, and are really our transcendent servants. But providing that service of meeting all of the material and emotional needs of biological humanity will comprise a very tiny fraction of the mental output of the non-biological component of our civilization. So there’s a lot that, in fact, biological humanity won’t actually notice.

There are two levels of consideration here. On the economic level, mental output will be the primary criterion. We’re already getting close to the point where the only thing that has value is information. Information has value to the extent that it really reflects knowledge, not just raw data. There are a few products on this table – a clock, a camera, a tape recorder – that are physical objects, but really their value is in the information that went into their design: the design of their chips and the software that’s used to invent and manufacture them. The actual raw materials – a bunch of sand and some metals and so on – are worth a few pennies, but these products have value because of all the knowledge that went into creating them.

And the knowledge component of products and services is asymptoting towards 100 percent. By the time we get to 2030 it will be basically 100 percent. With a combination of nanotechnology and artificial intelligence, we’ll be able to create virtually any physical product and meet all of our material needs. When everything is software and information, it’ll be a matter of just downloading the right software, and we’re already getting pretty close to that.

On a spiritual level, the issue of what is consciousness is another important aspect of this, because we will have entities by 2030 that seem to be conscious, and that will claim to have feelings. We have entities today, like characters in your kids’ video games, that can make that claim, but they are not very convincing. If you run into a character in a video game and it talks about its feelings, you know it’s just a machine simulation; you’re not convinced that it’s a real person there. This is because that entity, which is a software entity, is still a million times simpler than the human brain.

In 2030, that won’t be the case. Say you encounter another person in virtual reality that looks just like a human but there’s actually no biological human behind it – it’s completely an AI projecting a human-like figure in virtual reality, or even a human-like image in real reality using an android robotic technology. These entities will seem human. They won’t be a million times simpler than humans. They’ll be as complex as humans. They’ll have all the subtle cues of being humans. They’ll be able to sit here and be interviewed and be just as convincing as a human, just as complex, just as interesting. And when they claim to have been angry or happy it’ll be just as convincing as when another human makes those claims.

At this point, it becomes a really deeply philosophical issue. Is that just a very clever simulation that’s good enough to trick you, or is it really conscious in the way that we assume other people are? In my view there’s no real way to test that scientifically. There’s no machine you can slide the entity into where a green light goes on and says okay, this entity’s conscious, but no, this one’s not. You could make a machine, but it will have philosophical assumptions built into it. Some philosophers will say that unless it’s squirting impulses through biological neurotransmitters, it’s not conscious, or that unless it’s a biological human with a biological mother and father it’s not conscious. But it becomes a matter of philosophical debate. It’s not scientifically resolvable.

The next big revolution that’s going to affect us right away is biological technology, because we’ve merged biological knowledge with information processing. We are in the early stages of understanding life processes and disease processes by understanding the genome and how the genome expresses itself in protein. And we’re going to find – and this has been apparent all along – that there’s a slippery slope and no clear definition of where life begins. Both sides of the abortion debate have been afraid to get off the edges of that debate: that life starts at conception on the one hand or it starts literally at birth on the other. They don’t want to get off those edges, because they realize it’s just a completely slippery slope from one end to the other.

But we’re going to make it even more slippery. We’ll be able to create stem cells without ever actually going through the fertilized egg. What’s the difference between a skin cell, which has all the genes, and a fertilized egg? The only differences are some proteins in the eggs and some signaling factors that we don’t fully understand, yet that are basically proteins. We will get to the point where we’ll be able to take some protein mix, which is just a bunch of chemicals and clearly not a human being, and add it to a skin cell to create a fertilized egg that we can then immediately differentiate into any cell of the body. When I go like this and brush off thousands of skin cells, I will be destroying thousands of potential people. There’s not going to be any clear boundary.

This is another way of saying also that science and technology are going to find a way around the controversy. In the future, we’ll be able to do therapeutic cloning, which is a very important technology that completely avoids the concept of the fetus. We’ll be able to take skin cells and create, pretty directly without ever going through a fetus, all the cells we need.

We’re not that far away from being able to create new cells. For example, I’m 53 but with my DNA, I’ll be able to create the heart cells of a 25-year-old man, and I can replace my heart with those cells without surgery just by sending them through my blood stream. They’ll take up residence in the heart, so at first I’ll have a heart that’s one percent young cells and 99 percent older ones. But if I keep doing this every day, a year later, my heart is 99 percent young cells. With that kind of therapy we can ultimately replenish all the cell tissues and the organs in the body. This is not something that will happen tomorrow, but these are the kinds of revolutionary processes we’re on the verge of.
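The turnover in that cell-replacement scenario can be checked with a toy calculation. The geometric-decay model below is our own sketch; the passage does not specify the replacement kinetics:

```python
# Toy model: each daily treatment replaces a random 1% of heart cells
# with young ones, so the old fraction decays by a factor of 0.99/day.

def young_fraction(days, daily_replacement=0.01):
    return 1.0 - (1.0 - daily_replacement) ** days

print(round(young_fraction(365), 3))   # 0.974
```

Under this model a year of daily 1 percent replacement leaves about 97 percent young cells, in the same ballpark as the 99 percent figure quoted; replacing slightly more than 1 percent per day, or preferentially replacing old cells, would close the gap.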

If you look at human longevity – which is another one of these exponential trends – you’ll notice that we added a few days every year to human life expectancy in the 18th century. In the 19th century we added a few weeks every year, and we’re now adding over a hundred days a year through all of these developments, which are going to continue to accelerate. Many knowledgeable observers, including myself, feel that within ten years we’ll be adding more than a year every year to life expectancy.

As we get older, human life expectancy will move out at a faster rate than we’re progressing in age, so if we can hang in there, our generation is right on the edge. We have to watch our health the old-fashioned way for a while longer so that we’re not the last generation to die prematurely. But if you look at our kids, by the time they’re 20, 30, 40 years old, these technologies will be so advanced that human life expectancy will be pushed way out.

There is also the more fundamental issue of whether or not ethical debates are going to stop the developments that I’m talking about. It’s all very well to have these mathematical models and these trends, but the question is whether they are going to hit a wall because people, for one reason or another – through war or ethical debates such as the stem-cell controversy – thwart this ongoing exponential development.

I strongly believe that’s not the case. These ethical debates are like stones in a stream. The water runs around them. You haven’t seen any of these biological technologies held up for one week by any of these debates. To some extent, they may have to find some other ways around some of the limitations, but there are so many developments going on. There are dozens of very exciting ideas about how to use genomic information and proteomic information. Although the controversies may attach themselves to one idea here or there, there’s such a river of advances. The concept of technological advance is so deeply ingrained in our society that it’s an enormous imperative. Bill Joy has gotten around – correctly – talking about the dangers, and I agree that the dangers are there, but you can’t stop ongoing development.

The kinds of scenarios I’m talking about 20 or 30 years from now are not being developed because there’s one laboratory that’s sitting there creating a human-level intelligence in a machine. They’re happening because it’s the inevitable end result of thousands of little steps. Each little step is conservative, not radical, and makes perfect sense. Each one is just the next generation of some company’s products. If you take thousands of those little steps – which are getting faster and faster – you end up with some remarkable changes 10, 20, or 30 years from now. You don’t see Sun Microsystems saying the future implication of these technologies is so dangerous that they’re going to stop creating more intelligent networks and more powerful computers. Sun can’t do that. No company can do that because it would be out of business. There’s enormous economic imperative.

There is also a tremendous moral imperative. We still have not millions but billions of people who are suffering from disease and poverty, and we have the opportunity to overcome those problems through these technological advances. You can’t tell the millions of people who are suffering from cancer that we’re really on the verge of great breakthroughs that will save millions of lives from cancer, but we’re canceling all that because the terrorists might use that same knowledge to create a bioengineered pathogen.

This is a true and valid concern, but we’re not going to do that. There’s a tremendous belief in society in the benefits of continued economic and technological advance. Still, it does raise the question of the dangers of these technologies, and we can talk about that as well, because that’s also a valid concern.

Another aspect of all of these changes is that they force us to re-evaluate our concept of what it means to be human. There is a common viewpoint that reacts against the advance of technology and its implications for humanity. The objection goes like this: we’ll have very powerful computers, but we haven’t solved the software problem, and because the software is so incredibly complex, we can’t manage it.

I address this objection by saying that the software required to emulate human intelligence is actually not beyond our current capability. We have to use different techniques – different self-organizing methods – that are biologically inspired. The brain is complicated, but it’s not that complicated. You have to keep in mind that it is characterized by a genome of only about 23 million bytes once compressed. The genome is six billion bits – roughly eight hundred million bytes – and there are massive redundancies; one fairly long sequence called Alu is repeated 300 thousand times. If you apply conventional data compression to the genome, you get about 23 million bytes – a small fraction of the size of Microsoft Word – and that is a level of complexity that we can handle. But we don’t have that information yet.
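The arithmetic behind these figures is easy to check. A back-of-envelope sketch (the 23-million-byte compressed size is the text’s own estimate, taken as a given here):

```python
# Back-of-envelope check of the genome-size arithmetic quoted above.
# Assumption from the text: lossless compression reduces the genome
# to roughly 23 million bytes.

bits = 6_000_000_000                     # "six billion bits"
uncompressed_bytes = bits // 8           # 750 million bytes ("~eight hundred million")
compressed_bytes = 23_000_000            # the figure quoted in the text

ratio = uncompressed_bytes / compressed_bytes
print(f"uncompressed: {uncompressed_bytes:,} bytes")
print(f"compressed:   {compressed_bytes:,} bytes")
print(f"implied compression ratio: ~{ratio:.0f}x")   # ~33x
```

A roughly 33:1 ratio is plausible given the massive repetition (such as the Alu sequence) the text describes.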

You might wonder how something with 23 million bytes can create a human brain that’s a million times more complicated than itself. That’s not hard to understand. The genome specifies a process for wiring a region of the human brain that involves a lot of randomness. Then, when the fetus becomes a baby and interacts with a very complicated world, there’s an evolutionary process within the brain in which a lot of the connections die out, others get reinforced, and the brain self-organizes to represent knowledge about the world. It’s a very clever system, and we don’t understand it yet, but we will, because it’s not a level of complexity beyond what we’re capable of engineering.

In my view there is something special about human beings that’s different from what we see in any of the other animals. By happenstance of evolution we were the first species able to create technology. Actually there were others, but we are the only one that survived in this ecological niche. We combine a rational faculty – the ability to think logically, to create abstractions, to create models of the world in our own minds – with the ability to manipulate the world. We have opposable thumbs so that we can create technology, but technology is not just tools. Other animals have used primitive tools; the difference is a body of knowledge that changes and evolves from generation to generation. The knowledge that the human species has is another one of those exponential trends.

We use one stage of technology to create the next stage, which is why technology accelerates, why it grows in power. Today, for example, a computer designer has these tremendously powerful computer system design tools to create computers, so in a couple of days they can create a very complex system and it can all be worked out very quickly. The first computer designers had to actually draw them all out in pen on paper. Each generation of tools creates the power to create the next generation.

So technology itself is an exponential, evolutionary process that is a continuation of the biological evolution that created humanity in the first place. Biological evolution itself evolved in an exponential manner. Each stage created more powerful tools for the next, so when biological evolution created DNA it had a means of keeping records of its experiments, and evolution could proceed more quickly. Because of this, the Cambrian explosion lasted only a few tens of millions of years, whereas the first stage of creating DNA and primitive cells took billions of years. Finally, biological evolution created a species that could manipulate its environment and had some rational faculties, and the cutting edge of evolution changed from biological evolution into something carried out by one of its own creations, Homo sapiens, represented by technology. In the next epoch this species that ushered in its own evolutionary process – that is, its own cultural and technological evolution, as no other species has – will combine with its own creation and will merge with its technology. At some level that’s already happening: even if most of us don’t yet have the technology inside our bodies and brains, we’re very intimate with it – it’s in our pockets. We’ve certainly expanded the power of the mind of human civilization through the power of its technology.

We are entering a new era. I call it “the Singularity.” It’s a merger between human intelligence and machine intelligence that is going to create something bigger than itself. It’s the cutting edge of evolution on our planet. One can make a strong case that it’s actually the cutting edge of the evolution of intelligence in general, because there’s no indication that it’s occurred anywhere else. To me that is what human civilization is all about. It is part of our destiny and part of the destiny of evolution to continue to progress ever faster, and to grow the power of intelligence exponentially. To contemplate stopping that – to think human beings are fine the way they are – is a misplaced fond remembrance of what human beings used to be. What human beings are is a species that has undergone a cultural and technological evolution, and it’s the nature of evolution that it accelerates, and that its powers grow exponentially, and that’s what we’re talking about. The next stage of this will be to amplify our own intellectual powers with the results of our technology.

What is unique about human beings is our ability to create abstract models and to use these mental models to understand the world and do something about it. These mental models have become more and more sophisticated, and by becoming embedded in technology, they have become very elaborate and very powerful. Now we can actually understand our own minds. This ability to scale up the power of our own civilization is what’s unique about human beings.

Patterns are the fundamental ontological reality, because they are what persists, not anything physical. Take myself, Ray Kurzweil. What is Ray Kurzweil? Is it this stuff here? Well, this stuff changes very quickly. Some of our cells turn over in a matter of days. Even our skeleton, which you think probably lasts forever because we find skeletons that are centuries old, changes over within a year. Many of our neurons change over. But more importantly, the particles making up the cells change over even more quickly, so even if a particular cell is still there the particles are different. So I’m not the same stuff, the same collection of atoms and molecules that I was a year ago.

But what does persist is that pattern. The pattern evolves slowly, but the pattern persists. So we’re kind of like the pattern that water makes in a stream; you put a rock in there and you’ll see a little pattern. The water is changing every few milliseconds; if you come a second later, it’s completely different water molecules, but the pattern persists. Patterns are what have resonance. Ideas are patterns, technology is patterns. Even our basic existence as people is nothing but a pattern. Pattern recognition is the heart of human intelligence. Ninety-nine percent of our intelligence is our ability to recognize patterns.

There’s been a sea change just in the last several years in the public understanding of the acceleration of change and the potential impact of all of these technologies – computer technology, communications, biological technology – on human society. There’s really been tremendous change in popular public perception in the past three years because of the onslaught of stories and news developments that document and support this vision. There are now several stories every day that are significant developments and that show the escalating power of these technologies.

http://www.edge.org/3rd_culture/kurzweil_singularity/kurzweil_singularity_index.html

ZDNet.com, November 1, 2009, by Chris Jablonski  —  Software vulnerabilities that take days or weeks to fix may one day be a thing of the past. A team of researchers has presented new software, called ClearView, that automatically patches errors in deployed software in a matter of minutes.

As Technology Review reports, ClearView works without assistance from humans and without access to a program’s underlying source code. Instead, it monitors the behavior of a binary: the form the program takes in order to execute instructions on a computer’s hardware.

A paper, Automatically Patching Errors in Deployed Software, published by the Association for Computing Machinery, explains how ClearView works in five sequential steps:

  1. Observes normal executions to learn invariants that characterize the application’s normal behavior;
  2. Uses error detectors to distinguish normal executions from erroneous executions;
  3. Identifies violations of learned invariants that occur during erroneous executions;
  4. Generates candidate repair patches that enforce selected invariants by changing the state or the flow of control to make the invariant true; and
  5. Observes the continued execution of patched applications to select the most successful patch.
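The first three steps can be sketched in miniature. ClearView learns invariants through Daikon-style dynamic analysis over a running binary’s state; the toy Python below (the names `learn_invariant` and `check` are illustrative, not ClearView’s actual interface) shows the same idea on plain values:

```python
# Toy illustration of steps 1-3: learn a simple range invariant from
# normal executions, then flag executions that violate it.
# This is a sketch; ClearView observes registers and memory in a binary.

def learn_invariant(observed_values):
    """Step 1: from normal runs, infer the invariant 'lo <= value <= hi'."""
    return min(observed_values), max(observed_values)

def check(invariant, value):
    """Step 3: return False when a learned invariant is violated."""
    lo, hi = invariant
    return lo <= value <= hi

# Normal executions: a buffer length observed during training runs.
normal_lengths = [12, 64, 128, 200, 255]
inv = learn_invariant(normal_lengths)      # -> (12, 255)

print(check(inv, 100))     # within learned bounds: normal execution
print(check(inv, 70000))   # violation, e.g. attacker-supplied oversized input
```

A violated invariant like this one becomes the trigger for generating a candidate patch in step 4.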

In other words, by observing a program’s normal behavior and deriving a set of rules from it, ClearView detects certain types of errors, particularly those caused when an attacker injects malicious input into a program. When something goes amiss, ClearView detects the anomaly and identifies the rules that have been violated. It then generates several potential patches, applied directly to the binary (rather than to the source code), that force the software to follow the violated rules. ClearView analyzes these possibilities to decide which are most likely to work, then installs the top candidates and tests their effectiveness. If additional rules are violated, or if a patch causes the system to crash, ClearView rejects it and tries another. The process is illustrated below:
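That try-evaluate-reject loop (steps 4 and 5) can be sketched as follows. All names here are hypothetical – ClearView evaluates patched binary executions, not Python callables:

```python
# Sketch of candidate-patch selection: discard patches whose runs crash,
# prefer patches whose runs violate the fewest remaining invariants.

def evaluate(candidates, run_patched):
    """run_patched(patch) -> (crashed, violations) for one patched execution."""
    best, best_score = None, None
    for patch in candidates:
        crashed, violations = run_patched(patch)
        if crashed:
            continue                       # damaging patch: reject outright
        if best_score is None or violations < best_score:
            best, best_score = patch, violations
    return best

# Toy harness: patch "B" crashes; "C" leaves one invariant violated.
results = {"A": (False, 0), "B": (True, 0), "C": (False, 1)}
print(evaluate(["B", "C", "A"], results.get))   # "A" wins the evaluation
```

In the real system this evaluation continues in deployment, so a patch that later misbehaves can still be discarded in favor of another candidate.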


With this approach, ClearView can detect errors and apply a fix automatically, without requiring a restart or interrupting execution, making it well suited to correcting errors in software with high-availability requirements.

According to Martin Rinard, a professor of computer science at MIT, ClearView could be used to fix programs without requiring the cooperation of the company that made the software, or to repair programs that are no longer being maintained. He hopes the system could extend the life of older versions of software, created by companies that have gone out of business, in addition to protecting current software, writes TR.

To test the system, the researchers (most from MIT) brought in an independent team to attack a group of computers running Firefox.  The team developed ten code injection exploits and used these exploits to repeatedly attack an application protected by ClearView. ClearView successfully detected and fended off the would-be attacks. For seven of the ten exploits, ClearView automatically generated patches that corrected the error, enabling the application to survive the attacks and continue on to successfully process subsequent inputs.

Finally, the independent team attempted to make ClearView apply an undesirable patch, but ClearView’s patch-evaluation mechanism enabled it to identify and discard both ineffective and damaging patches.

The work was presented earlier this month at the 22nd ACM Symposium on Operating Systems Principles.

For more technical information, consult the paper (PDF) and the presentation slides.