August 2008 – University of Reading scientists say an unlikely source – a blob of rat brain cells – controls a robot, a process that could provide clues into diseases like Alzheimer’s.

Meet Gordon, the world’s first robot controlled exclusively by living brain tissue.

The groundbreaking experiment with the neuron-powered machine explores the vanishing boundary between natural and artificial intelligence. One of the lead researchers said it sheds light on the fundamental building blocks of memory and learning.

The project marries 300,000 rat neurons to a robot that navigates via sonar.

The neurons are now being taught to steer the robot around obstacles and avoid the walls of the small pen in which it is kept. By studying what happens to the neurons as they learn, its creators hope to reveal how memories are laid down.

“The purpose is to figure out how memories are actually stored in a biological brain,” said Kevin Warwick, a professor at the University of Reading and one of the robot’s principal architects.

Observing how the nerve cells form a network as they fire off electrical impulses may also help scientists combat neurodegenerative diseases that attack the brain, such as Alzheimer’s and Parkinson’s.

“If we can understand some of the basics of what is going on in our little model brain, it could have enormous medical spin-offs,” he said.

Gordon looks like the garbage-compacting hero of the blockbuster animation “Wall-E”; his brain is composed of 50,000 to 100,000 active neurons.

Once removed from rat fetuses and disentangled from each other with an enzyme bath, the specialized nerve cells are laid out in a nutrient-rich medium across a five-by-five inch array of 60 electrodes.

This “multi-electrode array” (MEA) serves as the connection between living tissue and machine. The brain sends electrical impulses to drive the wheels of the robot, and receives impulses delivered by sensors reacting to the environment.
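The closed loop this describes – sonar readings stimulating the culture, and the culture’s spike activity steering the wheels – can be sketched in a few lines. Everything below (the function names, the 30 cm threshold, the left-versus-right decoding rule) is an illustrative assumption, not the Reading team’s actual software.

```python
# Hypothetical sketch of the MEA control loop: sonar distance decides whether
# to stimulate the culture, and relative spike counts on the two halves of the
# electrode array are decoded into a wheel command. All parameters invented.

def sonar_to_stimulus(distance_cm: float, threshold_cm: float = 30.0) -> bool:
    """Stimulate the culture only when an obstacle is close."""
    return distance_cm < threshold_cm

def spikes_to_wheels(left_spikes: int, right_spikes: int) -> str:
    """Decode which side of the electrode array fired more strongly."""
    if left_spikes > right_spikes:
        return "turn_right"   # stronger activity on the left steers the robot away
    if right_spikes > left_spikes:
        return "turn_left"
    return "forward"
```

In this sketch, an obstacle 20 cm ahead would trigger stimulation, and a left-heavy burst of spikes would be read as a command to turn right – a stand-in for whatever mapping the real experiment learned.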

The cells are living tissue, and are therefore kept separate from the robot in a temperature-controlled cabinet in a container pitted with electrodes.

Scientists transmit signals to the robot via Bluetooth short-range radio.

Once the robot can learn to steer, researchers plan to disrupt the memories in an effort to recreate the gradual loss of mental faculties seen in diseases such as Alzheimer’s and Parkinson’s.

They want to know how neural tissue is degraded or copes with the disruption.

“One of the fundamental questions that neuroscientists are facing today is how we link the activity of individual neurons to the complex behaviors that we see in whole organisms and whole animals,” said Dr Ben Whalley, a neuroscientist at Reading.

“This project gives us a really useful and unique opportunity to look at something that may exhibit whole behaviors but still remains closely tied to the activity of individual neurons,” he said.

The Reading team is not the first to use living tissue to control robots.

Dr. Steve Potter at the Georgia Institute of Technology pioneered work in 2003 on what he dubbed “hybrots” that connected neural tissue and robots.

August 2008, BBCNews.com – Dr Ben Whalley, from the University of Reading, has carried out tests on the ‘rat-brain-controlled’ robot.

A robot controlled by a blob of rat brain cells could provide insights into diseases such as Alzheimer’s, University of Reading scientists say.

The project marries 300,000 rat neurons to a robot that navigates via sonar.

The neurons are now being taught to steer the robot around obstacles and avoid the walls of the small pen in which it is kept.

By studying what happens to the neurons as they learn, its creators hope to reveal how memories are laid down.

Hybrid machines

The blob of nerves forming the brain of the robot was taken from the neural cortex in a rat foetus and then treated to dissolve the connections between individual neurons.

Sensory input from the sonar on the robot is piped to the blob of cells to help them form new connections that will aid the machine as it navigates around its pen.

As the cells are living tissue, they are kept separate from the robot in a temperature-controlled cabinet in a container pitted with electrodes. Signals are passed to and from the robot via Bluetooth short-range radio.


The brain cells have been taught how to control the robot’s movements so it can steer round obstacles and the next step, say its creators, is to get it to recognise its surroundings.

Once the robot can do this the researchers plan to disrupt the memories in a bid to recreate the gradual loss of mental faculties seen in diseases such as Alzheimer’s and Parkinson’s.

Studies of how neural tissue is degraded or copes with the disruption could give insights into these conditions.

“One of the fundamental questions that neuroscientists are facing today is how we link the activity of individual neurons to the complex behaviours that we see in whole organisms and whole animals,” said Dr Ben Whalley, a neuroscientist at Reading.

“This project gives us a really useful and unique opportunity to look at something that may exhibit whole behaviours but still remains closely tied to the activity of individual neurons,” he said.

The Reading team is not the first to harness living tissue to control robots.

In 2003, Dr Steve Potter at the Georgia Institute of Technology pioneered work on what he dubbed “hybrots” that marry neural tissue and robots.

In earlier work, scientists at Northwestern University Medical Center in the US wired a wheeled robot up to a lamprey in a bid to explore novel ways of controlling prosthetics.


The human brain could become a battlefield in future wars, a new report predicts, with ‘pharmacological land mines’ and drones directed by mind control


August 2008, Guardian.co.uk – Rapid advances in neuroscience could have a dramatic impact on national security and the way in which future wars are fought, US intelligence officials have been told.

In a report commissioned by the Defense Intelligence Agency, leading scientists were asked to examine how a greater understanding of the brain over the next 20 years is likely to drive the development of new medicines and technologies.

They found several areas in which progress could have a profound impact, including behaviour-altering drugs, scanners that can interpret a person’s state of mind and devices capable of boosting senses such as hearing and vision.

On the battlefield, bullets may be replaced with “pharmacological land mines” that release drugs to incapacitate soldiers on contact, while scanners and other electronic devices could be developed to identify suspects from their brain activity and even disrupt their ability to tell lies when questioned, the report says.

“The concept of torture could also be altered by products in this market. It is possible that some day there could be a technique developed to extract information from a prisoner that does not have any lasting side effects,” the report states.

The report highlights one electronic technique, called transcranial direct current stimulation, which involves using electrical pulses to interfere with the firing of neurons in the brain and has been shown to delay a person’s ability to tell a lie.

Drugs could also be used to enhance the performance of military personnel. There is already anecdotal evidence of troops using the narcolepsy drug modafinil and Ritalin, which is prescribed for attention deficit disorder, to boost their performance. Future drugs, developed to boost the cognitive faculties of people with dementia, are likely to be used in a similar way, the report adds.

Greater understanding of the brain’s workings is also expected to usher in new devices that link directly to the brain, either to allow operators to control machinery with their minds, such as flying unmanned reconnaissance drones, or to boost their natural senses.

For example, video from a person’s glasses, or audio recorded from a headset, could be processed by a computer to help search for relevant information. “Experiments indicate that the advantages of these devices are such that human operators will be greatly enhanced for things like photo reconnaissance and so on,” Kit Green, who chaired the report committee, said.

The report warns that while the US and other western nations might now consider themselves at the forefront of neuroscience, that is likely to change as other countries ramp up their computing capabilities. Unless security services can monitor progress internationally, they risk “major, even catastrophic, intelligence failures in the years ahead”, the report warns.

“In the intelligence community, there is an extremely small number of people who understand the science and without that it’s going to be impossible to predict surprises. This is a black hole that needs to be filled with light,” Green told the Guardian.

The technologies could one day have applications in counter-terrorism and crime-fighting. The report says brain imaging will not improve sufficiently in the next 20 years to read people’s intentions from afar and spot criminals before they act, but it might be good enough to help identify people at a checkpoint or counter who are afraid or anxious.

“We’re not going to be reading minds at a distance, but that doesn’t mean we can’t detect gross changes in anxiety or fear, and then subsequently talk to those individuals to see what’s upsetting them,” Green said.

The development of advanced surveillance techniques, such as cameras that can spot fearful expressions on people’s faces, could lead to some inventive ways to fool them, the report adds, such as Botox injections to relax facial muscles.


August 2008 – A biological robot controlled by a blob of rat brain has been created by British scientists.

The wheeled machine is wirelessly linked to a bundle of neurons kept at body temperature in a sterile cabinet.

Signals from the “brain” allow the robot to steer left or right to avoid objects in its path.

Researchers at the University of Reading are now trying to “teach” the robot to become familiar with its surroundings.

They hope the experiment will show how memories manifest themselves in nerve connections as the robot revisits territory it has been to before.

Scientists in other parts of the world are also developing robots with living brains made from cultured cells.

At the Georgia Institute of Technology in Atlanta, US researchers have built a similar mobile machine.

New Scientist magazine reported that the US team was training their robot as if it was an animal.

The British research is led by Professor Kevin Warwick, who has pioneered the merging of biology and robotics by conducting bizarre “cyborg” experiments on himself.

Another aspect of the research is achieving a better understanding of conditions that affect the brain such as Alzheimer’s and Parkinson’s disease, and strokes.

Prof Warwick said: “This new research is tremendously exciting as firstly the biological brain controls its own moving robot body, and secondly it will enable us to investigate how the brain learns and memorises its experiences.

“This research will move our understanding forward of how brains work, and could have a profound effect on many areas of science and medicine.”


Professor Kevin Warwick

What happens when a man is merged with a computer?

This is the question that Professor Kevin Warwick and his team at the Department of Cybernetics, University of Reading intend to answer with ‘Project Cyborg’.

On Monday 24th August 1998, at 4:00pm, Professor Kevin Warwick underwent an operation to surgically implant a silicon chip transponder in his forearm. Dr. George Boulos carried out the operation at Tilehurst Surgery, using local anaesthetic only.

This experiment allowed a computer to monitor Kevin Warwick as he moved through halls and offices of the Department of Cybernetics at the University of Reading, using a unique identifying signal emitted by the implanted chip. He could operate doors, lights, heaters and other computers without lifting a finger.

The chip implant technology has the capability to impact our lives in ways previously thought possible only in sci-fi movies. The implant could carry all sorts of information about a person, from Access and Visa details to your National Insurance number, blood type, medical records etc., with the data being updated where necessary.

The second phase of the experiment, Project Cyborg 2.0, got underway in March 2002. This phase looks at how a new implant could send signals back and forth between Warwick’s nervous system and a computer. If this phase succeeds with no complications, a similar chip will be implanted in his wife, Irena. This will allow the investigation of how movement, thought or emotion signals could be transmitted from one person to the other, possibly via the Internet. The question is how much the brain can process and adapt to unfamiliar information coming in through the nerve branches. Will the brain accept the information? Will it try to stop it, or be able to cope? Professor Kevin Warwick’s answer to these questions is quite simply: “We don’t have an idea – yet, but if this experiment has the possibility to help even one person, it is worth doing just to see what might happen.”


The World’s First Cyborg

The Next Step Towards True Cyborgs?

On the 14th of March 2002 a one-hundred-electrode array was surgically implanted into the median nerve fibres of the left arm of Professor Kevin Warwick. The operation was carried out at the Radcliffe Infirmary, Oxford, by a medical team headed by the neurosurgeons Amjad Shad and Peter Teddy. The procedure, which took a little over two hours, involved inserting a guiding tube into a two-inch incision made above the wrist, inserting the microelectrode array into this tube and firing it into the median nerve fibres below the elbow joint.

A number of experiments have been carried out using the signals detected by the array; most notably, Professor Warwick was able to control an electric wheelchair and an intelligent artificial hand, developed by Dr Peter Kyberd, using this neural interface. In addition to being able to measure the nerve signals transmitted down Professor Warwick’s left arm, the implant was also able to create artificial sensation by stimulating individual electrodes within the array. This was demonstrated with the aid of Kevin’s wife, Irena, and a second, less complex implant connecting to her nervous system.

Another important aspect of the work undertaken as part of this project has been to monitor the effects of the implant on Professor Warwick’s hand functions. This was carried out by Alessio Murgia, a research student at the department, using the Southampton Hand Assessment Procedure (SHAP) test. Testing hand functionality throughout the project allows the performance indicators recorded before, during and after the implant was present in Kevin’s arm to be compared, giving a measure of the risks associated with this and future cyborg experiments.


Kevin Warwick is Professor of Cybernetics at the University of Reading, England, where he carries out research in artificial intelligence, control, robotics and biomedical engineering. He is also Director of the University KTP Centre, which links the University with Small to Medium Enterprises and raises over £2 million each year in research income for the University.

Kevin was born in Coventry, UK and left school to join British Telecom, at the age of 16. At 22 he took his first degree at Aston University, followed by a PhD and a research post at Imperial College, London. He subsequently held positions at Oxford, Newcastle and Warwick universities before being offered the Chair at Reading, at the age of 33.

He has been awarded higher doctorates (DScs) by both Imperial College and the Czech Academy of Sciences, Prague. He was presented with The Future of Health Technology Award from MIT (USA), was made an Honorary Member of the Academy of Sciences, St. Petersburg, and received The IEE Achievement Medal in 2004. In 2000 Kevin presented the Royal Institution Christmas Lectures, entitled “The Rise of The Robots”.

Kevin has carried out a series of pioneering experiments involving the neuro-surgical implantation of a device into the median nerves of his left arm, linking his nervous system directly to a computer in order to assess the latest technology for use with the disabled. He has been successful with the first extra-sensory (ultrasonic) input for a human and with the first purely electronic communication experiment between the nervous systems of two humans. His research has been discussed by the US White House Presidential Council on BioEthics and The European Commission FTP, and has led to him being widely referenced and featured in academic circles as well as appearing as cover stories in several magazines – e.g. Wired (USA), The Week (India).

His work is now used as material in several Advanced Level Physics courses in the UK and in many university courses, including at Harvard, Stanford, MIT and Tokyo. His implants are on display in the Science Museums in London and Naples. As a result, Kevin regularly gives invited keynote presentations at top international conferences around the world.

Kevin’s research involves robotics and he is responsible (with Jim Wyatt) for Cybot, a robot exported around the world as part of a magazine “Real Robots” – this has resulted in royalties totalling over £1M for Reading University. Robots designed and constructed by Kevin’s group (Ian Kelly, Ben Hutt) are on permanent interactive display in the Science Museums in London, Birmingham and Linz.

Kevin is currently working closely with Dr Daniela Cerqui, a social and cultural anthropologist to address the main social, ethical, philosophical and anthropological issues related to his research into robotics and cyborgs.

Kevin regularly makes international presentations for the UK Foreign Office and the British Council, e.g. in 2004/5: India, New Zealand, Singapore, Malaysia, China, Spain, the Czech Republic, the USA and Hong Kong.

His presentations include The 1998 Robert Boyle Memorial Lecture at Oxford University, The 2000 Royal Institution Christmas Lectures, The 2001 Higginson Lecture at Durham University, The 2003 Royal Academy of Engineering/Royal Society of Edinburgh Joint lecture in Edinburgh, The 2003 IEEE (UK) Annual Lecture in London, The 2004 Woolmer Lecture at York University, the Robert Hooke Lecture (Westminster) in 2005, the 2005 Einstein Lecture in Potsdam, Germany and the 2006 IMechE Mechatronics Prestige Lecture in London.

Kevin was a member of the 2001 HEFCE (unit 29) panel on Electrical & Electronic Engineering, is Deputy Chairman for the same panel in the 2007/8 exercise and is a member of the EPSRC Peer College. He has produced over 400 publications on his research, including more than 90 refereed journal articles and 25 books. Kevin received the EPSRC Millennium Award (2000) for his schools robot league project and is the youngest ever Fellow of the City and Guilds of London Institute. Kevin’s research has featured in many TV and film documentaries, e.g. in 2004/5 – Inventions that Changed the World (BBC2), Future Scope (RAI 1) and The Making of I, Robot (Twentieth Century Fox/Channel 5). He has appeared 3 times on Tomorrow’s World, 5 times in Time magazine, twice in Newsweek and was selected by Channel 4 as one of the Top 6 UK Scientists for their 2001 series “Living Science”. In 2002 he was chosen by the IEE as one of the top 10 UK Electrical Engineers. Kevin also appeared as one of 30 “great minds on the future” in the THES/Oxford University book – Predictions – with J.K. Galbraith, Umberto Eco and James Watson.

Kevin’s research is frequently referred to by other authors – recent examples being in books by Robert Winston, Peter Cochrane, Jeremy Clarkson and Susan Greenfield. Kevin’s research has also been selected by National Geographic International for a one-hour documentary, entitled “I, Human”, to be screened in 2006 – this will be broadcast in 143 countries and translated into 23 different languages.


Cyborg 1.0

Kevin Warwick outlines his plan to become one with his computer.

I was born human. But this was an accident of fate – a condition merely of time and place. I believe it’s something we have the power to change. I will tell you why.

In August 1998, a silicon chip was implanted in my arm, allowing a computer to monitor me as I moved through the halls and offices of the Department of Cybernetics at the University of Reading, just west of London, where I’ve been a professor since 1988. My implant communicated via radio waves with a network of antennas throughout the department that in turn transmitted the signals to a computer programmed to respond to my actions. At the main entrance, a voice box operated by the computer said “Hello” when I entered; the computer detected my progress through the building, opening the door to my lab for me as I approached it and switching on the lights. For the nine days the implant was in place, I performed seemingly magical acts simply by walking in a particular direction. The aim of this experiment was to determine whether information could be transmitted to and from an implant. Not only did we succeed, but the trial demonstrated how the principles behind cybernetics could perform in real-life applications.

Eighteen months from now, or possibly sooner, I will conduct a follow-up experiment with a new implant that will send signals back and forth between my nervous system and a computer. I don’t know how I will react to unfamiliar signals transmitted to my brain, since nothing quite like this has ever before been attempted. But if this test succeeds, with no complications, then we’ll go ahead with the placement of a similar implant in my wife, Irena.

My research team is made up of 20 scientists, including two who work directly with me: Professor Brian Andrews, a neural-prosthesis specialist who recently joined our project from the University of Alberta in Canada, and Professor William Harwin, a cybernetics expert and former codirector of the Rehabilitation Robotics Laboratory at the University of Delaware in the US. The others are a mixture of faculty and researchers, divided into three teams charged with developing intelligent networks, robotics and sensors, and biomedical signal processing – i.e., creating software to read the signals the implant receives from my nervous system and to condition that data for retransmission.

We are in discussions with Dr. Ali Jamous, a neurosurgeon at Stoke Mandeville Hospital in nearby Aylesbury, to insert my next implant, although we’re still sorting out the final details. Ordinarily, there might be a problem getting a doctor to consider this type of surgery, but my department has a long-standing research link with the hospital, whose spinal-injuries unit does a lot of advanced work in neurosurgery. We’ve collaborated on a number of projects to help people overcome disabilities through technical aids: an electric platform for children who use wheelchairs, a walking frame for people with spinal injuries, and a self-navigating wheelchair. While Jamous has his own research agenda, we are settling on a middle ground that will satisfy both parties’ scientific goals.

My first implant was inserted by Dr. George Boulos at Tilehurst Surgery in Reading into the upper inside of my left arm, beneath the inner layer of skin and on top of the muscle. The next device will be connected to the nerve fibers in my left arm, positioned about halfway between my elbow and shoulder. (It doesn’t matter which arm carries the implant; I chose my left because I’m right-handed, and I hope I will suffer less manual impairment if any problems arise during the experiment.) Most of the nerves in this part of the body are connected to the hand, and send and receive the electronic impulses that control dexterity, feeling, even emotions. A lot of these signals are traveling here at any given time: This nerve center carries more information than any other part of the anatomy, aside from the spine and the head (in the optic and auditory nerves), and so is large and quite strong. Moreover, very few of the nerves branch off to muscles and other parts of the upper arm – it’s like a freeway with only a few on- and off-ramps, providing a cleaner pathway to the nervous system.

While we ultimately may need to place implants nearer to the brain – into the spinal cord or onto the optic nerve, where there is a more powerful setup for transmitting and receiving specific complex sensory signals – the arm is an ideal halfway point.

This implant, like the first, will be encased in a glass tube. We chose glass because it’s fairly inert and won’t become toxic or block radio signals. There is an outside chance that the glass will break, which could cause serious internal injuries or prove fatal, but our previous experiment showed glass to be pretty rugged, even when it’s frequently jolted or struck.

One end of the glass tube contains the power supply – a copper coil energized by radio waves to produce an electric current. In the other end, three mini printed circuit boards will transmit and receive signals. The implant will connect to my body through a band that wraps around the nerve fibers – it looks like a little vicar’s collar – and is linked by a very thin wire to the glass capsule.

The chips in the implant will receive signals from the collar and send them to a computer instantaneously. For example, when I move a finger, an electronic signal travels from my brain to activate the muscles and tendons that operate my hand. The collar will pick up that signal en route. Nerve impulses will still reach the finger, but we will tap into them just as though we were listening in on a telephone line. The signal from the implant will be analog, so we’ll have to convert it to digital in order to store it in the computer. But then we will be able to manipulate it and send it back to my implant.
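The tap-record-replay scheme described here – sample the analog nerve signal, digitize it for storage, then reconstruct it for playback – can be illustrated with a minimal sketch. The 8-bit resolution and ±1 V range below are assumptions for the example, not the project’s actual hardware parameters.

```python
# Illustrative sketch of analog-to-digital capture and playback of a nerve
# signal. The quantization resolution (256 levels) and the voltage range
# (±1.0 V) are invented for the example.

def digitize(samples, levels=256, v_range=1.0):
    """Quantize analog voltages in [-v_range, +v_range] to integer codes."""
    step = 2 * v_range / (levels - 1)
    return [round((v + v_range) / step) for v in samples]

def reconstruct(codes, levels=256, v_range=1.0):
    """Convert stored codes back to voltages for playback to the implant."""
    step = 2 * v_range / (levels - 1)
    return [c * step - v_range for c in codes]

recorded = digitize([0.0, 0.5, -0.25])   # "listening in on the line"
replayed = reconstruct(recorded)         # sent back to the implant later
```

The round trip is lossy only up to the quantization step, which is the point of the analogy: the stored signal is a close enough copy to be manipulated in the computer and played back.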

No processing will be done inside the implant. Rather, it will only send and receive signals, much like a telephone handset sends and receives sound waves. It’s true that onboard power would increase our options for programming more complex tasks into the implant, but that would require a much larger device. While a 1-inch-long glass tube isn’t obtrusive, I really don’t fancy an object the size of an orange built into my arm.

We’ll tap into my nerve fibers and try a progression of experiments once my new implant is switched on. One of the first will be to record and identify signals associated with motion. When I waggle my left index finger, it will send a corresponding signal via the implant to the computer, where it will be recorded and stored. Next, we can transmit this signal to the implant, hoping to generate an action similar to the original. I will consider the test a fantastic success if we can record a movement, then reproduce it when we send the signals back to the arm.

Pain also provides a distinctly clear electronic signal on the nervous system as it moves from its point of origin to the brain. We intend to find out what happens if that signal is transmitted to the computer and then played back again. Will I feel the same sensation, or something more akin to the phantom pains amputees “feel” in their missing limbs? Our brains associate an ache with a specific point on the body; it will also be interesting to see whether this sensation can be manipulated by slightly modifying the signal in the computer and then trying to send it to another area.

When the new chip is in place, we will tap into my nerve fibers and try out a whole new range of senses.

We will then attempt this exercise with emotional signals. When I’m happy, we’ll record that signal. Then, if my mood changes the next day, we’ll play the happy signal back and see what happens.

I am most curious to find out whether implants could open up a whole new range of senses. For example, we can’t normally process signals like ultraviolet, X rays, or ultrasound. Infrared detects visible heat given off by a warm body, though our eyes can’t see light in this part of the spectrum. But what if we fed infrared signals into the nervous system, bypassing the eyes? Would I be able to learn how to perceive them? Would I feel or even “see” the warmth? Or would my brain simply be unable to cope? We don’t have any idea – yet.

The potential for medical breakthroughs in existing disabilities is phenomenally important. Might it be possible to add an extra route for more senses or to provide alternative pathways for blind or deaf people to “see” or “hear” with ultrasonic and infrared wavelengths? Perhaps a blind person could navigate around objects with ultrasonic radar, much the way bats do. Robots have been programmed to perform this action already, and neuroscientists have not dismissed the idea for humans. But few people have ever had their nervous systems linked to a computer, so the concept of sensing the world around us using more than our natural abilities is still science fiction. I’m hoping to change that.

People have asked me, too, whether it would be possible to get high from drugs, store those signals, and then return them to the nervous system later to reproduce the sensation. To that end, I plan to have a glass or two of wine and record my body’s reaction, captured in exactly the same way I “saved” movement or pain. The following day, I will play back the recorded signals. As my brain tries to make sense of these, it might search for past experiences, trying to put things in terms of what it already knows. Thus, when my brain receives the “drunk” signal, it might believe it is indeed intoxicated. Varying on that theme, perhaps particular electronic patterns can be transmitted to the nervous system to bring about a sensation equivalent to that of drinking bourbon or rum.

If this type of experiment works, I can foresee researchers learning to send antidepressant stimulation or even contraception or vaccines in a similar manner. We have the potential to alter the whole face of medicine, to abandon the concept of feeding people chemical treatments and cures and instead achieve the desired results electronically. Cyberdrugs and cybernarcotics could very well cure cancer, relieve clinical depression, or perhaps even be programmed as a little pick-me-up on a particularly bad day.

We don’t know how much the brain can adapt to unfamiliar information coming in through the nerve branches. Our hunch is that the brain of a young child is pliable, so that it might well be able to take in new sensory information in its own right. In response to the additional input, the nerve fibers linked to an implant might begin to grow thicker and more powerful with the ability to carry more and different kinds of information. A 45-year-old brain like mine is another matter. In the absence of any previous sensory reference, will my brain be able to process signals that don’t correspond precisely to sight, sound, smell, taste, or touch? It will probably deal with something like X-ray stimulation in terms of the signals it thinks most similar. Depending on its best guess, I might feel pain, tension, or excitement. But we want to avoid feeding in too much noise, as that could be distinctly risky. I do worry that certain kinds of raw input could make me crazy. For me, in any case, all these experiments are worth doing just to see what might happen. If the results aren’t encouraging, then – what the hell – at least I tried.

I plan to keep my next implant in place for a minimum of a week, possibly up to two. If the experiments are successful, we would then place implants into two people at the same time. We’d like to send movement and emotion signals from one person to the other, possibly via the Internet. My wife, Irena, has bravely volunteered to go ahead with his-and-hers implants. The way she puts it is that if anyone is going to jack into my limbic system – to know definitively when I’m feeling happy, depressed, angry, or even sexually aroused – she wants it to be her.

Irena and I will investigate the whole range of emotion and sensation. If I move a hand or finger, then send those signals to Irena, will she make the same movement? I think it likely she’ll feel something. Might she feel the same pain as I do? If I sprained my ankle, could I send the signal to Irena to make her feel as though she has injured herself?

We know that different people have varying emotional responses to the same stimulus. If I send a particular signal to her, will she recognize it in the same way? Based on my own reaction to having my emotional impulses replayed on my nervous system, we will have a preliminary idea of what Irena might experience, but we are entering increasingly uncharted territory once we attempt to relay prerecorded signals. What her brain can comprehend in terms of my neural impulses is completely unknown. Yet if Irena’s brain can make out, even roughly, my incoming signals, then I believe her own stored knowledge will be able to decipher the information into a recognizable sensation or emotion.

We would also like to demonstrate how the signals could be sent over the Internet. One of us will travel to New York, and the other will remain in the UK. Then we’ll send real-time movement and emotion signals from person to person across the continents. I am terrified of heights. If I’m staying on the 16th floor of a hotel in the US and I transmit my signals to Irena, how will they affect her? How far could we go in transmitting feelings and desires? I want to find out. What if the other person became sexually aroused? Could we record signals at the height of our arousal, then play these back and relive the experience? (As keen as I am to know the answer here, I have difficulty imagining what the scientific press might make of it.)

Will we evolve into a cyborg community? Linking people via chip implants to superintelligent machines seems a natural progression – creating, in effect, superhumans.

We are not the first group to link computers with the human nervous system via implants. Dr. Ross Davis’ team at the Neural Engineering Clinic in Augusta, Maine, has been trying to use the technology to treat patients whose central nervous systems have been damaged or affected by diseases like multiple sclerosis, and has been able to achieve basic control over, for example, muscle function.

In 1997, a widely publicized project at the University of Tokyo attached some of a cockroach’s motor neurons to a microprocessor. Artificial signals sent to the neurons through electrodes were then used to involuntarily propel the cockroach, despite what it might have chosen to do. Also, in an experiment published last summer by John Chapin at the MCP Hahnemann School of Medicine in Philadelphia and Miguel Nicolelis at Duke University, electrodes were implanted into rats’ brains and used to transmit signals so that the rats merely had to “think” about pressing a lever in order to receive a treat. Researchers were interested to learn that the signals indicating what the rats were about to do appeared in a different part of the brain than the one usually associated with planning.

And I’m amazed by results from a team at Emory University in Atlanta, which to great international interest has implanted a transmitting device into the brain of a stroke patient. After the motor neurons were linked to silicon, the patient was able to move a cursor on a computer monitor just by thinking about it. That means thought signals were directly transmitted to a computer and used to operate it, albeit in a rudimentary way. The Emory team is looking to gradually extend the range of controls carried out.

As for self-experimentation, physicians and scientists have done this throughout history. During the early ’50s, US Air Force colonel John Stapp repeatedly strapped his body to rocket sleds and propelled himself to more than 600 mph before hitting the brakes to stop in less than 2 seconds. The military physician’s study of the human body’s tolerance for crash forces helped improve automobile, airplane, and spacecraft safety. Although Stapp survived his perilous experiments, he suffered eye damage, a hernia, a concussion, and broken bones and permanently impaired his sense of balance.

In 1984, Barry Marshall, a resident at Royal Perth Hospital in Australia, swallowed a culture of ulcer-causing bacteria to show that the organism, and not stress, caused the abdominal ailment. Then there was Werner Forssmann, a German physician so obsessed with learning the intricacies of the human heart that in 1929 he inserted a catheter into an artery in his arm and snaked it all the way to his right auricle. In 1892, another German doctor, Max von Pettenkofer, drank a culture of the bacterium that causes cholera to show that environmental factors must also be present before the germ produces the disease. He was sick for about a week but lived – pure luck, of course, since we now know his hypothesis was erroneous. And Isaac Newton stuck needles into his eyes – for what reason, I’m not entirely sure.

As for me, I am not a foolish scientist putting my life in harm’s way. In fact, my next implant will be the culmination of my professional work: working for British Telecom, studying computer engineering and robotics, and teaching the principles of cybernetics. I have been involved with technology all my life, and now I will be able to take my research one step further.

Admittedly, I’m putting the neurological and medical aspects of the operation in the hands of the surgeon. I realize the chance of infection is higher with my second implant, since it will touch the nerve bundles. And connecting to the nervous system could also lead to permanent nerve damage, resulting in the loss of feelings or movement, or continual pain. But I am putting aside my fears and accepting my less-than-absolute understanding of the technical and psychological ramifications inherent in our attempt. I want to know.

I believe this desire – this urge to explore – is intrinsically human. My entire team is venturing into the unknown with me in order to bring humans and technology together in a way that has never been attempted. The excitement of looking over the horizon into a new world – the world of cyborgs – far outweighs the risks. Just think: Anything a computer link can help operate or interface with could be controllable via implants: airplanes, locomotives, tractors, machinery, cash registers, bank accounts, spreadsheets, word processing, and intelligent homes. In each case, merely by moving a finger, one could cause such systems to operate. It will, of course, require the requisite programs to be set up, just as keyboard entries are now required. But such programming, along with the implant owner learning a few tricks, will be relatively trivial exercises.

Linking up in this way could allow for computer intelligence to be hooked more directly into the brain, allowing humans immediate access to the Internet, enabling phenomenal math capabilities and computer memory. Will you need to learn any math if you can call up a computer merely by your thoughts? Must you remember anything at all when you can access a world Internet memory bank?

I can envision a future when we send signals so that we don’t have to speak. Thought communication will place telephones firmly in the history books. Philosophers point to language in humans as being an important part of our culture and who we are. Certainly, language has had everything to do with human development. But language is merely a tool we use to translate our thoughts. In the future, we won’t need to code thoughts into language – we will uniformly send symbols and ideas and concepts without speaking. We will probably become less open, more able to control our feelings and emotions – which will also become necessary, since others will more easily be able to access what we’re thinking or feeling. We will still fall back on speech in order to communicate with our newborns, however, since it will take a few years before they can safely get implants of their own, but in the future, speech will be what baby talk is today.

Thought-to-thought communication is just one feature of cybernetics that will become vitally important to us as we face the distinct possibility of being superseded by highly intelligent machines. Humans are crazy enough not only to build machines with an overall intelligence greater than our own, but to defer to them and give them power that matters. So how will humans cope, later this century, with machines more intelligent than us? Here, again, I believe cybernetics can help. Linking people via chip implants directly to those machines seems a natural progression, a potential way of harnessing machine intelligence by, essentially, creating superhumans. Otherwise, we’re doomed to a future in which intelligent machines rule and humans become second-class citizens. My project explores a middle ground that gives humans a chance to hang in there a bit longer. Right now, we’re moving toward a world where machines and humans remain distinct, but instead of just handing everything over to them, I offer a more gradual coevolution with computers.

Yet once a human brain is connected as a node to a machine – a networked brain with other human brains similarly connected – what will it mean to be human? Will we evolve into a new cyborg community? I believe humans will become cyborgs and no longer be stand-alone entities. What we think is possible will change in response to what kinds of abilities the implants afford us. Looking at the world and understanding it in many dimensions, not just three, will put a completely different context on how we – whatever “we” are – think.

I base this on my own experience with my first implant, when I actually became emotionally attached to the computer. It took me only a couple of days to feel like my implant was one with my body. Every day in the building where I work, things switched on or opened up for me – it felt as though the computer and I were working in harmony. As a scientist, I observed that the feelings I had were neither expected nor completely explainable – and certainly not quantifiable. It was a bit like being half of a pair of Siamese twins. The computer and I were not one, but neither were we separate. We each had our own distinct but complementary abilities. To be truthful, Irena started to get rather worried – jealous, perhaps – when I tried to explain these sensations.

With the new implant, I expect this feeling of connectedness to be much stronger, particularly when emotional signals are brought into the equation. From a medical point of view, I was pleased when the first implant was taken out, but I was otherwise quite upset – I felt as though a friend had just died. With the new implant I might find it impossible to let go, despite the potential for long-term problems were I to retain it.

These desires – which draw me closer to the implant – could ultimately influence my own values and what it means to me to be human. Morals and ethics are an outgrowth of the way in which humans interact with each other. Cultures may have diverse ethics, but, regardless, individual liberties and human life are always valued over and above machines. What happens when humans merge with machines? Maybe the machines will then become more important to us than another human life. Those who have become cyborgs will be one step ahead of humans. And just as humans have always valued themselves above other forms of life, it’s likely that cyborgs will look down on humans who have yet to “evolve.”

Surprisingly, nobody has reacted to my plans by telling me, “That’s impossible” – I think because no one really knows what will happen. When I tell others about my work, more often they are aghast, not really comprehending what I’m talking about. But no scientists have told me I shouldn’t be playing God or that what I’m doing is unfeasible or too dangerous. Even so, I am certain that after Alexander Graham Bell said, “Mr. Watson, come here, I want you,” the cynics asked, “Why didn’t you just walk to the next room and speak to him?” At the time, it was difficult to see where it all might lead. Of course, I don’t put myself in the same category as people like Bell or Charles Lindbergh or John F. Kennedy – pioneers who were convinced we could do things like land men on the moon. But I’ve been inspired by these visionaries, these risk takers, each of whom spent his lifetime obsessively pursuing his goals.

Since childhood I’ve been captivated by the study of robots and cyborgs. Now I’m in a position where I can actually become one. Each morning, I wake up champing at the bit, eager to set alight the 21st century – to change society in ways that have never been attempted, to change how we communicate, how we treat ourselves medically, how we convey emotion to one another, to change what it means to be human, and to buy a little more time for ourselves in the inevitable evolutionary process that technology has accelerated. In the meantime, I feel like screaming when I have to do paperwork or shop or go to sleep – it’s stopping me from getting on with what I really want to do. The next implant cannot come soon enough.

Kevin Warwick (kw@cyber.rdg.ac.uk) is a professor of cybernetics at the University of Reading in the UK (www.cyber.rdg.ac.uk).


Planet Earth’s First Cyborg

Kevin Warwick, “I, Cyborg”

The cybernetic pioneer who is upgrading the human body – starting with himself

Professor Kevin Warwick, the world’s leading expert in cybernetics, here unveils the story of how he became the world’s first cyborg in a groundbreaking set of scientific experiments.

In the years ahead we will witness machines with an intelligence more powerful than that of humans. This will mean that robots, not humans, make all the important decisions. It will be a robot dominated world with dire consequences for humankind. Is there an alternative way ahead?

Humans have limited capabilities. Humans sense the world in a restricted way, vision being the best of the senses. Humans understand the world in only 3 dimensions and communicate in a very slow, serial fashion called speech. But can this be improved on? Can we use technology to upgrade humans?

The possibility exists to enhance human capabilities: to harness the ever-increasing abilities of machine intelligence, to enable extrasensory input, and to communicate in a much richer way, using thought alone. Kevin Warwick has taken the first steps on this path, using himself as a test subject, receiving, by surgical operation, technological implants connected to his central nervous system.

A cyborg is a cybernetic organism, part human and part machine. In this book Kevin gives a very personal account of his amazing steps towards becoming a cyborg. The story is one of scientific endeavour and devotion that split apart his personal life and the lives of those around him. This astounding and unique story takes in top scientists from around the globe and seriously questions human morals, values and ethics.

Overriding everything, at the expense of a normal life, is Kevin’s all encompassing scientific quest and desire to be a Cyborg.

