Science Weekly: Bugs, bowels and bats


How dangerous are the bacteria lurking in our homes? Is a vaccine against diarrhoea a realistic prospect? Plus: an insight into the sex lives of fruit bats

Charting the brain: Scientists will use both structural and functional brain imaging to create detailed maps of 1,200 human brains. In the top image, areas in yellow and red are structurally connected to the area indicated by the blue spot. In the bottom image, areas in yellow and red are those that are functionally connected to the blue spot. Credit: David Van Essen, Washington University



Scanning 1,200 brains could help researchers chart the organ’s fine structure and better understand neurological disorders

MIT Technology Review, September 29, 2010, by Emily Singer  —  A massive new project to scan the brains of 1,200 volunteers could finally give scientists a picture of the neural architecture of the human brain and help them understand the causes of certain neurological and psychological diseases.

The National Institutes of Health announced $40 million in funding this month for the five-year effort, dubbed the Human Connectome Project. Scientists will use new imaging technologies, some still under development, to create both structural and functional maps of the human brain.

The project is novel in its size; most brain-imaging studies have looked at tens to hundreds of brains. Scanning so many people will shed light on the normal variability within the brain structure of healthy adults, which will in turn provide a basis for examining how neural “wiring” differs in such disorders as autism and schizophrenia.

The researchers also plan to collect genetic and behavioral data, testing participants’ sensory and motor skills, memory, and other cognitive functions, and deposit this information along with the brain scans in a public database (although participants’ personal information will be stripped out). Scientists around the world can then use the database to search for the genetic and environmental factors that influence the structure of the brain.

“We want to learn as much as we can, not only about the typical patterns of brain connectivity, but also about the differences in wiring that make each of us a unique individual,” says David Van Essen, a neuroscientist at Washington University in St. Louis, who is one of the project leaders. “If you’re good at math, and I’m better at certain types of memory, can we identify some of the wiring characteristics that account for those differences?”

The most detailed studies to date of the neural circuits that connect one brain cell to another have focused on animal brains, because scientists can examine the animals’ living tissue–cells and their networks–under a microscope. “We don’t know how our species specifically is wired up,” says Michael Huerta, associate director of the Division of Neuroscience and Basic Behavioral Science at the National Institute of Mental Health, and director of the Connectome project. “There is an entire class of data that is missing from neuroscience that is fundamentally important for how the brain works and how it breaks down in different disorders.” And because researchers will be scanning only identical and fraternal twins and their siblings, the scientists can get a sense of the role that genetics and environment play in shaping brain structure. Structures of the brain that are highly dictated by genes will be more similar in identical twins than in fraternal twins, for example.

Most human brain imaging studies have employed magnetic resonance imaging (MRI) to examine the gross anatomy of the brain or functional MRI to detect which regions are active during specific tasks. But advances in brain imaging technologies in recent years, as well as growing computing power, have made it possible to look at the fine wiring connecting brain regions. “If we want to understand the brain, we need to know what individual areas are doing and how they talk to each other,” says Russell Poldrack, director of the Imaging Research Center at the University of Texas at Austin. “Moving from examining how 120 brain areas operate on their own to determining how those 120 areas interact with each other increases the complexity by an order of magnitude, and the scale you need to address the problem also goes up.”
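Poldrack’s “order of magnitude” is easy to make concrete: measuring 120 areas individually yields 120 quantities, but the interactions live on pairs of areas. A back-of-the-envelope count (our illustration, not a figure from the article):

```latex
\[
  \binom{120}{2} \;=\; \frac{120 \times 119}{2} \;=\; 7{,}140
\]
```

That is roughly sixty times as many pairwise interactions as single-area measurements, before even considering directed or higher-order effects.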

Van Essen and his collaborators plan to scan participants using two relatively recent variations on MRI. Diffusion imaging, which detects the flow of water molecules down insulated neural wires, indirectly measures the location and direction of the fibers that connect one part of the brain to another. Functional connectivity, in contrast, examines whether activity in different parts of the brain fluctuates in synchrony. The regions that are highly correlated are most likely to be connected, either directly or indirectly. Combining both approaches will give scientists a clearer picture. Collaborators at the University of Minnesota and Massachusetts General Hospital are optimizing existing scanners with new magnets and custom analysis programs so that they are better suited to detecting these circuits.
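To make “fluctuates in synchrony” concrete, here is a minimal sketch–ours, not the Connectome project’s actual pipeline–of how functional connectivity is commonly estimated: correlate every region’s activity time series with every other region’s, and treat highly correlated pairs as candidate members of the same circuit. The region count and the synthetic data are illustrative assumptions.

```python
# Toy functional-connectivity estimate: Pearson correlation between
# regional activity time series (synthetic data, illustrative sizes).
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_timepoints = 120, 300
bold = rng.standard_normal((n_regions, n_timepoints))
bold[1] += 0.8 * bold[0]          # force regions 0 and 1 to covary

# Entry (i, j) is the correlation of region i's and region j's activity.
fc = np.corrcoef(bold)

# The most synchronized off-diagonal pair is the strongest candidate
# for a direct or indirect anatomical connection.
off_diag = np.abs(fc - np.eye(n_regions))
i, j = np.unravel_index(off_diag.argmax(), off_diag.shape)
print(f"most correlated pair: regions {i} and {j}, r = {fc[i, j]:.2f}")
```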

“This will be a landmark study,” says Robert Williams, a neuroscientist at the University of Tennessee, in Memphis. “I think it will have the same kind of impact on neuroscience that the Human Genome Project had on human genetics, providing a strong foundation for other work.”

Credit: Technology Review


The need for operating systems to help brains and machines work together

MIT Technology Review, September 29, 2010, by Edward Boyden, Brian Allen, Doug Fritz

The last few decades have seen a surge of invention of technologies that enable the observation or perturbation of information in the brain. Functional MRI, which measures blood flow changes associated with brain activity, is being explored for purposes as diverse as lie detection, prediction of human decision making, and assessment of language recovery after stroke. Implanted electrical stimulators, which enable control of neural circuit activity, are borne by hundreds of thousands of people to treat conditions such as deafness, Parkinson’s disease, and obsessive-compulsive disorder. And new methods, such as the use of light to activate or silence specific neurons in the brain, are being widely utilized by researchers to reveal insights into how to control neural circuits to achieve therapeutically useful changes in brain dynamics. We are entering a neurotechnology renaissance, in which the toolbox for understanding the brain and engineering its functions is expanding in both scope and power at an unprecedented rate.

This toolbox has grown to the point where the strategic utilization of multiple neurotechnologies in conjunction with one another, as a system, may yield fundamental new capabilities, both scientific and clinical, beyond what they can offer alone. For example, consider a system that reads out activity from a brain circuit, computes a strategy for controlling the circuit so it enters a desired state or performs a specific computation, and then delivers information into the brain to achieve this control strategy. Such a system would enable brain computations to be guided by predefined goals set by the patient or clinician, or adaptively steered in response to the circumstances of the patient’s environment or the instantaneous state of the patient’s brain.

Some examples of this kind of “brain coprocessor” technology are under active development, such as systems that perturb the epileptic brain when a seizure is electrically observed, and prosthetics for amputees that record nerves to control artificial limbs and stimulate nerves to provide sensory feedback. Looking down the line, such system architectures might be capable of very advanced functions–providing just-in-time information to the brain of a patient with dementia to augment cognition, or sculpting the risk-taking profile of an addiction patient in the presence of stimuli that prompt cravings.
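The epilepsy example already contains the whole read-compute-perturb loop in miniature. Below is a minimal sketch of that loop; the detector (line length over a sliding window) and the threshold are invented stand-ins for a real seizure-detection algorithm, not a clinical method.

```python
# Closed-loop sketch: monitor a neural signal, detect an abnormal state,
# trigger a perturbation. Detector and threshold are illustrative only.
import numpy as np

rng = np.random.default_rng(1)

def line_length(window):
    """Sum of absolute sample-to-sample changes; it rises sharply
    during large rhythmic discharges."""
    return np.abs(np.diff(window)).sum()

signal = 0.1 * rng.standard_normal(2000)                      # baseline activity
signal[1200:1400] += np.sin(np.linspace(0, 60 * np.pi, 200))  # mock seizure

WINDOW, THRESHOLD = 100, 25.0
for t in range(WINDOW, len(signal)):
    if line_length(signal[t - WINDOW:t]) > THRESHOLD:
        print(f"abnormal activity at sample {t}: deliver stimulation")
        break
```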

Given the ever-increasing number of brain readout and control technologies available, a generalized brain coprocessor architecture could be enabled by defining common interfaces governing how component technologies talk to one another, as well as an “operating system” that defines how the overall system works as a unified whole–analogous to the way personal computers govern the interaction of their component hard drives, memories, processors, and displays. Such a brain coprocessor platform could facilitate innovation by enabling neuroengineers to focus on neural prosthetics at an algorithmic level, much as a computer programmer can work on a computer at a conceptual level without having to plan the fate of every individual bit. In addition, if new technologies come along, e.g., a new kind of neural recording technology, they could be incorporated into a system, and in principle rapidly coupled to existing computation and perturbation methods, without requiring the heavy readaptation of those other components.
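One way to read the “operating system” analogy is as a plea for narrow, stable interfaces: any recorder, any computational module, and any stimulator should be swappable so long as each honors its contract. A sketch of what such contracts might look like follows; the class names and signatures are our invention, since no such standard exists.

```python
# Hypothetical coprocessor interfaces: components are interchangeable
# as long as they honor these contracts. Names are invented for
# illustration; there is no real standard here.
from abc import ABC, abstractmethod

class Recorder(ABC):
    @abstractmethod
    def read(self) -> list[float]:
        """Return the latest samples of neural activity."""

class Decoder(ABC):
    @abstractmethod
    def decide(self, samples: list[float]) -> float:
        """Map recorded activity to a stimulation command."""

class Stimulator(ABC):
    @abstractmethod
    def write(self, command: float) -> None:
        """Deliver the commanded perturbation."""

def coprocessor_step(rec: Recorder, dec: Decoder, stim: Stimulator) -> None:
    # The "operating system" layer: one turn of the read-compute-perturb
    # loop, indifferent to which concrete technologies are plugged in.
    stim.write(dec.decide(rec.read()))
```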

Developing such brain coprocessor architectures would take some work–in particular, it would require technologies standardized enough, or perhaps open enough, to be interoperable in a variety of combinations. Nevertheless, much could be learned from developing relatively simple prototype systems. For example, recording technologies by themselves can report brain activity, but cannot fully attest to the causal contribution that the observed brain activity makes to a specific behavioral or clinical outcome; control technologies can input information into neural targets, but by themselves their outcomes might be difficult to interpret due to endogenous neural information and unobserved neural processing. These scientific issues can be disambiguated by rudimentary brain coprocessors, built with readily available off-the-shelf components, that use recording technologies to assess how a given neural circuit perturbation alters brain dynamics. Such explorations may begin to reveal principles governing how best to control a circuit–revealing the neural targets and control strategies that most efficaciously lead to a goal brain state or behavioral effect, and thus pointing the way to new therapeutic strategies. Miniature, implantable brain coprocessors might be able to support new kinds of personalized medicine, for example continuously adapting a neural control strategy to the goals, state, environment, and history of an individual patient–important powers, given the dynamic nature of many brain disorders.

In the future, the computational module of a brain coprocessor may be powerful enough to assist in high-level human cognition or complex decision making. Of course, the augmentation of human intelligence has been one of the key goals of computer engineers for well over half a century. Indeed, if we relax the definition of brain coprocessor just a bit, so as not to require direct physical access to the brain, many consumer technologies being developed today are converging upon brain coprocessor-like architectures. A large number of new technologies are attempting to discover information useful to a user and to deliver this information to the user in real time. Also, these discovery and delivery processes are increasingly shaped by the environment (e.g., location) and history (e.g., social interactions, searches) of the user. Thus we are seeing a departure from the classical view (as initially anticipated by early thinkers about human-machine symbiosis such as J. C. R. Licklider) in which computers receive goals from humans, perform defined computations, and then provide the results back to humans.

Of course, giving machines the authority to serve as proactive human coprocessors, and allowing them to capture our attention with their computed priorities, has to be considered carefully, as anyone who has lost hours due to interruption by a slew of social-network updates or search-engine alerts can attest. How can we give the human brain access to increasingly proactive coprocessing technologies without losing sight of our overarching goals? One idea is to develop and deploy metrics that allow us to evaluate the IQ of a human plus a coprocessor, working together–evaluating the performance of collaborating natural and artificial intelligences in a broad battery of problem-solving contexts. After all, humans with Internet-based brain coprocessors (e.g., laptops running Web browsers) may be more distractible if the goals include long, focused writing tasks, but they may be better at synthesizing data broadly from disparate sources; a given brain coprocessor configuration may be good for some problems but bad for others. Thinking of emerging computational technologies as brain coprocessors forces us to think about them in terms of the impacts they have on the brain, positive and negative, and importantly provides a framework for thoughtfully engineering their direct, as well as their emergent, effects.

Ed Boyden is Assistant Professor of Biological Engineering and Brain and Cognitive Sciences at the Media Lab, whose Synthetic Neurobiology group works on neurotechnologies for systematic analysis and control of neural circuits.

Doug Fritz is a Media Lab PhD student in the Fluid Interfaces group, working on extending human capability through just-in-time processing that augments our interface to the world.

Brian Allen is a Media Lab PhD student in the Synthetic Neurobiology group, working to develop new approaches to understanding how the brain gives rise to emotion.

Graphic: MIT Technology Review

The importance of engineering motivation into intelligence

MIT Technology Review, by Edward Boyden  —  Some futurists such as Ray Kurzweil have hypothesized that we will someday soon pass through a singularity–that is, a time period of rapid technological change beyond which we cannot envision the future of society. Most visions of this singularity focus on the creation of machines intelligent enough to devise machines even more intelligent than themselves, and so forth recursively, thus launching a positive feedback loop of intelligence amplification. It’s an intriguing thought. (One of the first things I wanted to do when I got to MIT as an undergraduate was to build a robot scientist that could make discoveries faster and better than anyone else.) Even the CTO of Intel, Justin Rattner, has publicly speculated recently that we’re well on our way to this singularity, and conferences like the Singularity Summit (at which I’ll be speaking in October) are exploring how such transformations might take place.

As a brain engineer, however, I think that focusing solely on intelligence augmentation as the driver of the future is leaving out a critical part of the analysis–namely, the changes in motivation that might arise as intelligence amplifies. Call it the need for “machine leadership skills” or “machine philosophy”–without it, such a feedback loop might quickly sputter out.

We all know that intelligence, as commonly defined, isn’t enough to impact the world all by itself. The ability to pursue a goal doggedly against obstacles, ignoring the grimness of reality (sometimes even to the point of delusion–i.e., against intelligence), is also important. Most science-fiction stories prefer their artificial intelligences to be extremely motivated to do things–for example, enslaving or wiping out humans, if The Matrix and Terminator II have anything to say on the topic. But I find just as plausible the robot Marvin, the superintelligent machine from Douglas Adams’ The Hitchhiker’s Guide to the Galaxy, who used his enormous intelligence chiefly to sit around and complain, in the absence of any big goal.

Indeed, a really advanced intelligence, improperly motivated, might realize the impermanence of all things, calculate that the sun will burn out in a few billion years, and decide to play video games for the remainder of its existence, concluding that inventing an even smarter machine is pointless. (A corollary of this thinking might explain why we haven’t found extraterrestrial life yet: intelligences on the cusp of achieving interstellar travel might be prone to thinking that with the galaxies boiling away in just 10^19 years, it might be better just to stay home and watch TV.) Thus, if one is trying to build an intelligent machine capable of devising more intelligent machines, it is important to find a way to build in not only motivation, but motivation amplification–the continued desire to build in self-sustaining motivation–as intelligence amplifies. If such motivation is to be possessed by future generations of intelligence–meta-motivation, as it were–then it’s important to discover these principles now.

There’s a second issue. An intelligent being may be able to envision many more possibilities than a less intelligent one, but that may not always lead to more effective action, especially if some possibilities distract the intelligence from the original goals (e.g., the goal of building a more intelligent intelligence). The inherent uncertainty of the universe may also overwhelm, or render irrelevant, the decision-making process of this intelligence. Indeed, for a very high-dimensional space of possibilities (with the axes representing different parameters of the action to be taken), it might be very hard to evaluate which path is the best. The mind can make plans in parallel, but actions are ultimately unitary, and given finite accessible resources, effective actions will often be sparse.

The last two paragraphs apply not only to AI and ET, but also describe features of the human mind that affect decision making in many of us at times–lack of motivation and drive, and paralysis of decision making in the face of too many possible choices. But it gets worse: we know that a motivation can be hijacked by options that simulate the satisfaction that the motivation is aimed toward. Substance addictions plague tens of millions of people in the United States alone, and addictions to more subtle things, including certain kinds of information (such as e-mail), are prominent too. And few arts are more challenging than passing on motivation to the next generation, for the pursuit of a big idea. Intelligences that invent more and more interesting and absorbing technologies, that can better grab and hold their attention, while reducing impact on the world, might enter the opposite of a singularity.

What is the opposite of a singularity? The singularity depends on a mathematical recursion: invent a superintelligence, and then it will invent an even more powerful superintelligence. But as any mathematics student knows, there are other outcomes of an iterated process, such as a fixed point. A fixed point is a point that, when a function is applied, gives you the same point again. Applying such a function to points near the fixed point will often send them toward the fixed point.
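In standard notation (a textbook statement, not the author’s own formula):

```latex
% A fixed point x* of f satisfies f(x*) = x*; it attracts nearby points
% when |f'(x*)| < 1. Classic example: iterating x_{n+1} = cos(x_n) from
% any starting value converges to x* ~ 0.739, the solution of cos(x) = x.
\[
  f(x^{\ast}) = x^{\ast}, \qquad
  |f'(x^{\ast})| < 1 \;\Longrightarrow\;
  \lim_{n \to \infty} f^{\,n}(x_0) = x^{\ast}
  \quad \text{for } x_0 \text{ sufficiently near } x^{\ast}.
\]
```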

A “societal fixed point” might therefore be defined as a state that self-reinforces, remaining in the status quo–which could in principle be peaceful and self-sustaining, but could also be extremely boring–say, involving lots of people plugged into the Internet watching videos forever. Thus, we as humans might want, sometime soon, to start laying out design rules for technologies so that they will motivate us to some high goal or end–or at least away from dead-end societal fixed points. This process will involve thinking about how technology could help confront an old question of philosophy–namely, “What should I do, given all these possible paths?” Perhaps it is time for an empirical answer to this question, derived from the properties of our brains and the universe we live in. 

One neuron, two innovations: A mouse neuron expressing a natural opsin for controlling it, and a natural fluorescent protein for seeing it.
Credit: Brian Chow, Xue Han and Ed Boyden/MIT




Defining an Algorithm for Inventing from Nature


MIT Technology Review, by Edward Boyden and Brian Y. Chow  —  “Time after time we have rushed back to nature’s cupboard for cures to illnesses,” noted the United Nations in declaring 2010 the International Year of Biodiversity. Billions of years of evolution have equipped natural organisms with an incredible diversity of genetically encoded wealth, which, given our biological nature as humans, presents great potential when it comes to understanding our physiology and advancing our medicine. Natural products such as penicillin and aspirin are used daily to treat disease, yeast and corn yield biofuels, and viruses can deliver therapeutic genes into the body. Some of the most powerful tools for understanding biology, such as the polymerase chain reaction (PCR), which enables DNA to be amplified and analyzed starting from tiny samples, or the green fluorescent protein (GFP), which glows green and thus enables proteins and processes to be visualized in living cells, are bioengineering applications of genes that occur in specialized organisms in specific ecological niches. But how exactly do these tools make it from the wild to benchtop or bedside?

Many bioengineering applications of natural products take place long after the basic science discovery of the product itself. For example, Osamu Shimomura, who first isolated GFP from jellyfish in the 1960s, and who won a share of the 2008 Nobel Prize in Chemistry, once explained: “I don’t do my research for application, or any benefit. I just do my research to understand why jellyfish luminesce.” Around 30 years later, Douglas Prasher, Martin Chalfie, and Roger Tsien and their colleagues isolated the gene for GFP, expressed it, and began altering the gene, enabling countless new kinds of study. Bioengineering can emerge from the conscious exploration of nature, although sometimes with long latency. Every gene product is a potential tool for perturbing or observing a biological process, as long as bioengineers proactively imagine and explore the significance of each finding in order to convert natural products into tools.

Conversely, many bioengineering needs are probably satisfied, at least in part, by a process found somewhere in nature–whether it’s making magnetic nanoparticles, or sensing heat, or synthesizing structural polymers, or implementing complex computations. The question in basic science often boils down to how generally important a process is across ecological diversity, but a bioengineer only needs one example of something to begin copying, utilizing, and modifying it.

If we can build more direct connections between bioengineering and the fields of ecology and basic organismal sciences–converging at a place you might call “econeering”–we could together meet urgent bioengineering needs more quickly, and direct resources toward basic science discovery. Scientists could deploy these basic science discoveries more rapidly for human bioengineering benefit.

Recently we’ve begun to examine some of the emerging principles of econeering, as we and others pioneer a new area–the use of natural reagents to mediate control of biological processes using light, sometimes called “optogenetics.”

As an example: Opsins are light-sensitive proteins that can, among other things, naturally alter the voltage of cells when they’re illuminated with light. They’re almost like tiny, genetically encoded solar cells. Many opsins are found in organisms that live in extreme environments, like salty ponds. The opsins help these organisms sense light and convert it into biologically useful forms of energy, an evolutionarily early sort of photosynthesis.

Plant biologists, bacteriologists, protein biochemists, and other scientists have widely studied opsins at the basic science level since the 1970s. Their goal has been to find out how these compact light-powered machines work. It was clear to one of us (Boyden) around a decade ago that opsins could, if genetically expressed in cells that signal via electricity (such as neurons or heart cells), be used to alter the electrical activity of those cells in response to pulses of light.
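The electrical logic of that idea is simple enough to caricature in a few lines: a light-gated current depolarizes a neuron, so a pulse of light evokes spikes. Below is a toy leaky integrate-and-fire model of our own devising; every constant is an arbitrary illustration value, not a measured opsin property.

```python
# Toy model: a light-gated depolarizing current drives a leaky
# integrate-and-fire neuron, so light pulses evoke spikes.
# All constants are arbitrary illustration values.
import numpy as np

dt, t_end = 0.1, 100.0                             # ms
v_rest, v_thresh, v_reset = -70.0, -50.0, -70.0    # mV
tau_m = 10.0                                       # membrane time constant, ms
i_photo = 4.0                                      # drive while light is on

times = np.arange(0.0, t_end, dt)
light_on = (times % 40) < 10                       # 10 ms pulse every 40 ms

v, spike_times = v_rest, []
for t, lit in zip(times, light_on):
    v += dt / tau_m * (v_rest - v) + dt * (i_photo if lit else 0.0)
    if v >= v_thresh:                              # threshold crossed: spike
        spike_times.append(round(t, 1))
        v = v_reset

print("spike times (ms):", spike_times)            # one spike per light pulse
```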

Such tools could thus be a huge benefit to neuroscience. They could enable scientists to assess the causal role of a specific cell type or neural activity pattern in a behavior or pathology, and make it easier to study how other excitable cells, such as heart, immune, and muscle cells, play roles in organ and organism function. Furthermore, given the emerging importance of neuromodulation therapy tools, such as deep brain stimulation (DBS), opsins could enable novel therapies for correcting aberrant activity in the nervous system.

What might be called the “example phase” of this econeering field began about 10 years ago, when several papers suggested that these molecules might be used safely and efficaciously in mammalian cells. For example, foundational papers in 1999 (by Okuno and colleagues) and 2003 (by Nagel and colleagues) revealed and characterized opsins from archaebacteria and algae with properties appropriate for expression and operation in electrically excitable mammalian cells. Even within these papers, basic science examples began to lead directly to bioengineering insights, demonstrating in the case of the Nagel paper that an opsin could be expressed and successfully operate in a mammalian cell line. In 2005 and 2007, we and our colleagues, in a collaboration between basic scientists and bioengineers, showed that these molecules, when genetically expressed in neurons, could be used to mediate light-driven activation of neurons and light-driven quieting of neurons. In the few years since, these tools have found use in activities ranging from accelerating drug screening, to investigating how neural circuits implement sensation, movement, cognition, and emotion, to analyzing the pathological circuitry of, and development of novel therapies for, neural disorders.

Now this econeering quest is entering what could be called the “classification phase,” as we acquire enough data to predict the ecological resources that will yield tools optimal for specific bioengineering goals. For example, in a paper from our research group published in Nature on January 7, 2010, we screened natural opsins from species from every kingdom of living organism except for animals. With enough examples in hand, distinct classes of opsins emerged, with different functional properties.

We found that opsins from species of fungi were more easily driven by blue light than opsins from species of archaebacteria, which were more easily driven by yellow or red light. The two classes, together, enable perturbation of two sets of neurons by two different colors of light. This finding not only enables very powerful perturbation of two intermeshed neural populations separately–important for determining how they work together–but also opens up the possibility of altering activity in two different cell types, creating new clinical possibilities for correcting aberrant brain activity. Building on data from, and conversations with, many basic scientists, we then began mutating these genes to explore the classes more thoroughly, creating artificial opsins to help us identify the boundary between the classes. Understanding these boundaries not only gave us clarity about the space of bioengineering possibility, but told us where to look further in nature if we wanted to augment a specific bioengineering property.
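To illustrate what the “classification phase” amounts to in practice, here is a trivial sketch of sorting candidate opsins into the two classes by the wavelength that drives them best. The entries and the 520 nm cutoff are hypothetical placeholders, not data from the Nature paper.

```python
# Sketch of the "classification phase": bin candidate opsins by their
# peak activation wavelength. Entries and cutoff are hypothetical.
HYPOTHETICAL_PEAK_NM = {
    "fungal_opsin_A": 470,
    "fungal_opsin_B": 485,
    "archaeal_opsin_A": 560,
    "archaeal_opsin_B": 590,
}

def classify(peak_nm: float) -> str:
    return "blue-driven" if peak_nm < 520 else "yellow/red-driven"

for name, peak in HYPOTHETICAL_PEAK_NM.items():
    print(f"{name}: {peak} nm -> {classify(peak)}")
```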

In the current model of econeering, the “example phase” and the “classification phase” both provide opportunities for productive interactions between bioengineers and ecologists or organismal scientists. During the example phase described above, both basic scientists and bioengineers tested out candidate reagents to see what was useful, and later many groups initiated hunts for new examples. During the classification phase, more systematic synthetic biology and genomic strategies enabled more thorough assessment of the properties of classes of reagents.

Interestingly, something similar has been happening recently with GFP, as classes of fluorescent protein emerge with distinct properties: for a while, it’s been known that mutating the original jellyfish GFP can yield blue and yellow fluorescent proteins, but not red ones. A decade ago, an example of a red fluorescent protein from coral was revealed–now this example has yielded, through bioengineering, a new class of fluorescent molecules with colors such as tomato and plum. So it is possible that the cycle described here–find an example, define a class, repeat–might represent a generally useful econeering process, one of luck optimization intermeshed with scientific and engineering skill.

Did the opsin community do “better” than the fluorescent protein community, in speeding up the conversion of basic science insight into bioengineering application? Well, one of the opsins that we screened in this month’s paper was first characterized in the early 1970s, and it was better at changing the voltage of a mammalian cell than perhaps half of the other opsins we screened. So one could argue that a decent candidate reagent had hidden in plain sight for almost 40 years!

Although these two specific fields have benefited from basic scientists and bioengineers working together, a more general way to speed up the process of econeering would be to have working summits to bring together ecology-minded and organismal scientists and bioengineers at a much larger scale, to explore what natural resources could be more deeply investigated, or what bioengineering needs could be probed further. Then interfaces, both monetary and intellectual, could facilitate the active flow of insights and reagents between these fields. The next step could involve teaching people in each field the skills of their counterparts: how many bioengineers would relish the ability to hunt down and characterize species in the ocean or desert? How many organismal biologists and ecologists would benefit from trying out applications in specific areas of medical need?

To fulfill the vision of econeering, we should devise technologies for assessing the functions of biological subsystems fully and quickly, perhaps even enabling rapid basic science and bioengineering assessments to be done in one fell swoop. Devices for point-of-discovery phenotyping that allow for gene or gene pathway cloning, heterologous expression, and functional screening–and maybe even downstream methodologies such as in-the-field directed evolution–would allow the rapid assessment of the physiology of the products of genes or interacting sets of gene products. (Note well: the gene sequence is important, but only the beginning; gene sequences are not sufficient by themselves to fully understand the function of a gene product in a complex natural or bioengineering context.)

Bioinformatic visualization tools could be useful: can we scan ecology with a bioengineering lens, revealing areas of evolutionary space that haven’t been investigated (at either the example or class level)? What are the areas of bioengineering need where examples from nature might be useful in inspiring solutions?

Ideally, an econeering toolbox will emerge that will let us confront some of our greatest unmet needs–not just brain disorders, but needs in complex spaces such as energy, antibiotic resistance, desalination, and climate. If we can better understand, invent from, and improve the preservation of our natural resources, we’ll be poised to equip ourselves with a billion years of natural bioengineering. This will give us a great advantage in tackling the big problems of our time–and help future generations tackle theirs.

Graphic: MIT Technology Review

SecondHandSmoke, by Wesley J. Smith  —  Some of our discussions here at SHS about human exceptionalism have considered the prospect for Artificial Intelligence (AI), and engaged the advocacy by some that such intelligent computers or robots–meaning those that had attained true consciousness–be declared persons and accorded what today are called human rights. I have expressed profound doubt that any machine would ever be actually intelligent in this sense. This position finds articulate support in an article by Professor David Gelernter in Technology Review called “Artificial Intelligence Is Lost in the Woods.” It’s a very long article, too long to consider fully here, but well worth the read.

Gelernter believes that conscious software is a “near impossibility”–in other words, that scientists won’t ever create true AI, because consciousness involves more than just rational thought: it also involves emotions, sensations, etc., which a machine could almost surely never truly experience. However, he believes that what he calls “unconscious” artificial intelligence–what might be described as capable of two-dimensional as opposed to three-dimensional responses–might be doable. He writes:

Unfortunately, AI, cognitive science, and philosophy of mind are nowhere near knowing how to build one. They are missing the most important fact about thought: the “cognitive continuum” that connects the seemingly unconnected puzzle pieces of thinking (for example analytical thought, common sense, analogical thought, free association, creativity, hallucination). The cognitive continuum explains how all these reflect different values of one quantity or parameter that I will call “mental focus” or “concentration”–which changes over the course of a day and a lifetime.

Without this cognitive continuum, AI has no comprehensive view of thought: it tends to ignore some thought modes (such as free association and dreaming), is uncertain how to integrate emotion and thought, and has made strikingly little progress in understanding analogies–which seem to underlie creativity.

Gelernter explains the difference between conscious thinking and unconscious machine thought:

In conscious thinking, you experience your thoughts. Often they are accompanied by emotions or by imagined or remembered images or other sensations. A machine with a conscious (simulated) mind can feel wonderful on the first fine day of spring and grow depressed as winter sets in. A machine that is capable only of unconscious intelligence “reads” its thoughts as if they were on cue cards. One card might say, “There’s a beautiful rose in front of you; it smells sweet.” If someone then asks this machine, “Seen any good roses lately?” it can answer, “Yes, there’s a fine specimen right in front of me.” But it has no sensation of beauty or color or fragrance. It has no experiences to back up the currency of its words. It has no inner mental life and therefore no “I,” no sense of self.

As a consequence, any computer or robot would not actually be conscious; no matter how dazzling its responses, it would remain a mere machine. Such a machine would thus not present us with the problem of according it human-equivalent moral status, the prospect of which some enjoy raising in discussions of human exceptionalism and personhood theory. He also points out the folly of attempting to create a truly conscious machine, believing that even if it could be accomplished, doing so would be cruel–and that in any event, “No such mind could even grasp the word ‘itch.’”

An unconscious machine intelligence could be a useful tool in teaching humans about the workings of the brain. But it would be just that, an inanimate object, a machine, a very valuable piece of property–nothing more.

Perhaps it is time to put the AI argument against human exceptionalism to bed and focus on ensuring that human rights apply to all of us–not just those who are able to hurdle subjective barriers to full inclusion in the moral community.

Is Artificial Intelligence Lost in the Woods?

By David Gelernter

Artificial intelligence has been obsessed with several questions from the start: Can we build a mind out of software? If not, why not? If so, what kind of mind are we talking about? A conscious mind? Or an unconscious intelligence that seems to think but experiences nothing and has no inner mental life? These questions are central to our view of computers and how far they can go, of computation and its ultimate meaning–and of the mind and how it works.
They are deep questions with practical implications. AI researchers have long maintained that the mind provides good guidance as we approach subtle, tricky, or deep computing problems. Software today can cope with only a smattering of the information-processing problems that our minds handle routinely–when we recognize faces or pick elements out of large groups based on visual cues, use common sense, understand the nuances of natural language, or recognize what makes a musical cadence final or a joke funny or one movie better than another. AI offers to figure out how thought works and to make that knowledge available to software designers.
It even offers to deepen our understanding of the mind itself. Questions about software and the mind are central to cognitive science and philosophy. Few problems are more far-reaching or have more implications for our fundamental view of ourselves.
The current debate centers on what I’ll call a “simulated conscious mind” versus a “simulated unconscious intelligence.” We hope to learn whether computers make it possible to achieve one, both, or neither.
I believe it is hugely unlikely, though not impossible, that a conscious mind will ever be built out of software. Even if it could be, the result (I will argue) would be fairly useless in itself. But an unconscious simulated intelligence certainly could be built out of software–and might be useful. Unfortunately, AI, cognitive science, and philosophy of mind are nowhere near knowing how to build one. They are missing the most important fact about thought: the “cognitive continuum” that connects the seemingly unconnected puzzle pieces of thinking (for example analytical thought, common sense, analogical thought, free association, creativity, hallucination). The cognitive continuum explains how all these reflect different values of one quantity or parameter that I will call “mental focus” or “concentration”–which changes over the course of a day and a lifetime.
Without this cognitive continuum, AI has no comprehensive view of thought: it tends to ignore some thought modes (such as free association and dreaming), is uncertain how to integrate emotion and thought, and has made strikingly little progress in understanding analogies–which seem to underlie creativity.
My case for the near-impossibility of conscious software minds resembles what others have said. But these are minority views. Most AI researchers and philosophers believe that conscious software minds are just around the corner. To use the standard term, most are “cognitivists.” Only a few are “anticognitivists.” I am one. In fact, I believe that the cognitivists are even wronger than their opponents usually say.
But my goal is not to suggest that AI is a failure. It has merely developed a temporary blind spot. My fellow anticognitivists have knocked down cognitivism but have done little to replace it with new ideas. They’ve shown us what we can’t achieve (conscious software intelligence) but not how we can create something less dramatic but nonetheless highly valuable: unconscious software intelligence. Once AI has refocused its efforts on the mechanisms (or algorithms) of thought, it is bound to move forward again.
Until then, AI is lost in the woods.

What Is Consciousness?
In conscious thinking, you experience your thoughts. Often they are accompanied by emotions or by imagined or remembered images or other sensations. A machine with a conscious (simulated) mind can feel wonderful on the first fine day of spring and grow depressed as winter sets in. A machine that is capable only of unconscious intelligence “reads” its thoughts as if they were on cue cards. One card might say, “There’s a beautiful rose in front of you; it smells sweet.” If someone then asks this machine, “Seen any good roses lately?” it can answer, “Yes, there’s a fine specimen right in front of me.” But it has no sensation of beauty or color or fragrance. It has no experiences to back up the currency of its words. It has no inner mental life and therefore no “I,” no sense of self.
But if an artificial mind can perform intellectually just like a human, does consciousness matter? Is there any practical, perceptible advantage to simulating a conscious mind?
An unconscious entity feels nothing, by definition. Suppose we ask such an entity some questions, and its software returns correct answers.
“Ever felt friendship?” The machine says, “No.”
“Love?” “No.” “Hatred?” “No.” “Bliss?” “No.”
“Ever felt hungry or thirsty?” “Itchy, sweaty, tickled, excited, conscience-stricken?”
“Ever mourned?” “Ever rejoiced?”
No, no, no, no.
In theory, a conscious software mind might answer “yes” to all these questions; it would be conscious in the same sense you are (although its access to experience might be very different, and strictly limited).
So what’s the difference between a conscious and an unconscious software intelligence? The potential human presence that might exist in the simulated conscious mind but could never exist in the unconscious one.
You could never communicate with an unconscious intelligence as you do with a human–or trust or rely on it. You would have no grounds for treating it as a being toward which you have moral duties rather than as a tool to be used as you like.
But would a simulated human presence have practical value? Try asking lonely people–and all the young, old, sick, hurt, and unhappy people who get far less attention than they need. A made-to-order human presence, even though artificial, might be a godsend.
AI (I believe) won’t ever produce one. But it can still lead the way to great advances in computing. An unconscious intelligence might be powerful. Alan Turing, the great English mathematician who founded AI, seemed to believe (sometimes) that consciousness was not central to thought, simulated or otherwise.
He discussed consciousness in the celebrated 1950 paper in which he proposed what is now called the “Turing test.” The test is meant to determine whether a computer is “intelligent,” or “can think”–terms Turing used interchangeably. If a human “interrogator” types questions, on any topic whatever, that are sent to a computer in a back room, and the computer sends back answers that are indistinguishable from a human being’s, then we have achieved AI, and our computer is “intelligent”: it “can think.”
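Reduced to a protocol, the test is just a blind question-and-answer session judged on the transcript alone. Here is a skeletal sketch of that protocol; the respondent is a deliberately dumb placeholder, not a serious candidate program.

```python
# Skeleton of a Turing-test session: the interrogator sees only the
# transcript and must judge whether the hidden respondent is human.
def respondent(question: str) -> str:
    # A real candidate program would go here; this stub returns one
    # canned reply no matter what is asked.
    return "That's an interesting question."

def session(questions: list[str]) -> list[tuple[str, str]]:
    """Run one blind exchange and return the transcript for judging."""
    return [(q, respondent(q)) for q in questions]

for q, a in session(["Ever felt hungry?", "What makes a joke funny?"]):
    print(f"Q: {q}\nA: {a}")
```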
Does artificial intelligence require (or imply the existence of) artificial consciousness? Turing was cagey on these questions. But he did write,

I do not wish to give the impression that I think there is no mystery about consciousness. There is, for instance, something of a paradox connected with any attempt to localise it. But I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper.

That is, can we build intelligent (or thinking) computers, and how can we tell if we have succeeded? Turing seemed to assert that we can leave consciousness aside for the moment while we attack simulated thought.
But AI has grown more ambitious since then. Today, a substantial number of researchers believe one day we will build conscious software minds. This group includes such prominent thinkers as the inventor and computer scientist Ray Kurzweil. In the fall of 2006, Kurzweil and I argued the point at MIT, in a debate sponsored by the John Templeton Foundation. This piece builds, in part, on the case I made there.

A Digital Mind

The goal of cognitivist thinkers is to build an artificial mind out of software running on a digital computer.
Why does AI focus on digital computers exclusively, ignoring other technologies? For one thing, computers seemed from the first like “artificial brains,” and the first AI programs of the 1950s–the “Logic Theorist,” the “Geometry Theorem-Proving Machine”–seemed at their best to be thinking. Also, computers are the characteristic technology of the age. It is only natural to ask how far we can push them.
Then there’s a more fundamental reason why AI cares specifically about digital computers: computation underlies today’s most widely accepted view of mind. (The leading technology of the day is often pressed into service as a source of ideas.)
The ideas of the philosopher Jerry Fodor make him neither strictly cognitivist nor anticognitivist. In The Mind Doesn’t Work That Way (2000), he discusses what he calls the “New Synthesis”–a broadly accepted view of the mind that places AI and cognitivism against a biological and Darwinian backdrop. “The key idea of New Synthesis psychology,” writes Fodor, “is that cognitive processes are computational. … A computation, according to this understanding, is a formal operation on syntactically structured representations.” That is, thought processes depend on the form, not the meaning, of the items they work on.
In other words, the mind is like a factory machine in a 1940s cartoon, which might grab a metal plate and drill two holes in it, flip it over and drill three more, flip it sideways and glue on a label, spin it around five times, and shoot it onto a stack. The machine doesn’t “know” what it’s doing. Neither does the mind.
Likewise computers. A computer can add numbers but has no idea what “add” means, what a “number” is, or what “arithmetic” is for. Its actions are based on shapes, not meanings. According to the New Synthesis, writes Fodor, “the mind is a computer.”
But if so, then a computer can be a mind, can be a conscious mind–if we supply the right software. Here’s where the trouble starts. Consciousness is necessarily subjective: you alone are aware of the sights, sounds, feels, smells, and tastes that flash past “inside your head.” This subjectivity of mind has an important consequence: there is no objective way to tell whether some entity is conscious. We can only guess, not test.
Granted, we know our fellow humans are conscious; but how? Not by testing them! You know the person next to you is conscious because he is human. You’re human, and you’re conscious–which moreover seems fundamental to your humanness. Since your neighbor is also human, he must be conscious too.
So how will we know whether a computer running fancy AI software is conscious? Only by trying to imagine what it’s like to be that computer; we must try to see inside its head.
Which is clearly impossible. For one thing, it doesn’t have a head. But a thought experiment may give us a useful way to address the problem. The “Chinese Room” argument, proposed in 1980 by John Searle, a philosophy professor at the University of California, Berkeley, is intended to show that no computer running software could possibly manifest understanding or be conscious. It has been controversial since it first appeared. I believe that Searle’s argument is absolutely right–though more elaborate and oblique than necessary.
Searle asks us to imagine a program that can pass a Chinese Turing test–and is accordingly fluent in Chinese. Now, someone who knows English but no Chinese, such as Searle himself, is shut up in a room. He takes the Chinese-understanding software with him; he can execute it by hand, if he likes.
Imagine “conversing” with this room by sliding questions under the door; the room returns written answers. It seems equally fluent in English and Chinese. But actually, there is no understanding of Chinese inside the room. Searle handles English questions by relying on his knowledge of English, but to deal with Chinese, he executes an elaborate set of simple instructions mechanically. We conclude that to behave as if you understand Chinese doesn’t mean you do.
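The room’s mechanics can be caricatured in a few lines: a rulebook pairs input symbol strings with output symbol strings, and the executor matches shapes with no grasp of meaning. The entries below are invented placeholders for Searle’s “elaborate set of simple instructions.”

```python
# The Chinese Room, mechanized: pure form-matching on symbol strings.
# Nothing in this program understands Chinese; the rulebook entries
# are invented placeholders.
RULEBOOK = {
    "你好吗?": "我很好，谢谢。",         # "How are you?" -> "Fine, thanks."
    "今天天气怎么样?": "天气很好。",     # "How's the weather?" -> "Very nice."
}

def room(question: str) -> str:
    # The executor compares character shapes and copies out the paired
    # reply, exactly as Searle does by hand inside the room.
    return RULEBOOK.get(question, "请再说一遍。")   # "Please say that again."

print(room("你好吗?"))
```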
But we don’t need complex thought experiments to conclude that a conscious computer is ridiculously unlikely. We just need to tackle this question: What is it like to be a computer running a complex AI program?
Well, what does a computer do? It executes “machine instructions”–low-level operations like arithmetic (add two numbers), comparisons (which number is larger?), “branches” (if an addition yields zero, continue at instruction 200), data movement (transfer a number from one place to another in memory), and so on. Everything computers accomplish is built out of these primitive instructions.
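A fetch-execute loop really is this sparse. The toy interpreter below covers the instruction types just named–arithmetic, comparison-driven branching, data movement–over an instruction set invented for illustration.

```python
# Miniature fetch-execute loop over an invented instruction set.
def run(program, memory):
    pc = 0                                    # program counter
    while pc < len(program):
        op, *args = program[pc]
        if op == "add":                       # arithmetic
            a, b, dst = args
            memory[dst] = memory[a] + memory[b]
        elif op == "mov":                     # data movement
            src, dst = args
            memory[dst] = memory[src]
        elif op == "jz":                      # branch if a cell holds zero
            cell, target = args
            if memory[cell] == 0:
                pc = target
                continue
        elif op == "halt":
            break
        pc += 1
    return memory

# Sum memory[0] down to zero into memory[2]. The machine loops, compares,
# and moves numbers -- and never "knows" it is summing anything.
mem = run(
    [("jz", 0, 4),          # if counter is zero, jump to halt
     ("add", 0, 2, 2),      # accumulate counter into cell 2
     ("add", 0, 3, 0),      # decrement counter (cell 3 holds -1)
     ("jz", 4, 0),          # cell 4 is always zero: unconditional jump
     ("halt",)],
    {0: 3, 2: 0, 3: -1, 4: 0},
)
print(mem[2])               # -> 6
```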
So what is it like to be a computer running a complex AI program? Exactly like being a computer running any other kind of program.
Computers don’t know or care what instructions they are executing. They deal with outward forms, not meanings. Switching applications changes the output, but those changes have meaning only to humans. Consciousness, however, doesn’t depend on how anyone else interprets your actions; it depends on what you yourself are aware of. And the computer is merely a machine doing what it’s supposed to do–like a clock ticking, an electric motor spinning, an oven baking. The oven doesn’t care what it’s baking, or the computer what it’s computing.
The computer’s routine never varies: grab an instruction from memory and execute it; repeat until something makes you stop.
Of course, we can’t know literally what it’s like to be a computer executing a long sequence of instructions. But we know what it’s like to be a human doing the same. Imagine holding a deck of cards. You sort the deck; then you shuffle it and sort it again. Repeat the procedure, ad infinitum. You are doing comparisons (which card comes first?), data movement (slip one card in front of another), and so on. To know what it’s like to be a computer running a sophisticated AI application, sit down and sort cards all afternoon. That’s what it’s like.
If you sort cards long enough and fast enough, will a brand-new conscious mind (somehow) be created? This is, in effect, what cognitivists believe. They say that when a computer executes the right combination of primitive instructions in the right way, a new conscious mind will emerge. So when a person executes the right combination of primitive instructions in the right way, a new conscious mind should (also) emerge; there’s no operation a computer can do that a person can’t.
Of course, humans are radically slower than computers. Cognitivists argue that sure, you know what executing low-level instructions slowly is like; but only when you do them very fast is it possible to create a new conscious mind. Sometimes, a radical change in execution speed does change the qualitative outcome. (When you look at a movie frame by frame, no illusion of motion results. View the frames in rapid succession, and the outcome is different.) Yet it seems arbitrary to the point of absurdity to insist that doing many primitive operations very fast could produce consciousness. Why should it? Why would it? How could it? What makes such a prediction even remotely plausible?
But even if researchers could make a conscious mind out of software, it wouldn’t do them much good.
Suppose you could build a conscious software mind. Some cognitivists believe that such a mind, all by itself, is AI’s goal. Indeed, this is the message of the Turing test. A computer can pass Turing’s test without ever mingling with human beings.
But such a mind could communicate with human beings only in a drastically superficial way.
It would be capable of feeling emotion in principle. But we feel emotions with our whole bodies, not just our minds; and it has no body. (Of course, we could say, then build it a humanlike body! But that is a large assignment and poses bioengineering problems far beyond and outside AI. Or we could build our new mind a body unlike a human one. But in that case we couldn’t expect its emotions to be like ours, or to establish a common ground for communication.)
Consider the low-energy listlessness that accompanies melancholy, the overflowing jump-for-joy sensation that goes with elation, the pounding heart associated with anxiety or fear, the relaxed calm when we are happy, the obvious physical manifestations of excitement–and other examples, from rage to panic to pity to hunger, thirst, tiredness, and other conditions that are equally emotions and bodily states. In all these cases, your mind and body form an integrated whole. No mind that lacked a body like yours could experience these emotions the way you do.

No such mind could even grasp the word “itch.”

In fact, even if we achieved the bioengineering marvel of a synthetic human body, our problems wouldn’t be over. Unless this body experienced infancy, childhood, and adolescence, as humans do–unless it could grow up, as a member of human society–how could it understand what it means to “feel like a kid in a candy shop” or to “wish I were 16 again”? How could it grasp the human condition in its most basic sense?
A mind-in-a-box, with no body of any sort, could triumphantly pass the Turing test–which is one index of the test’s superficiality. Communication with such a contrivance would be more like a parody of conversation than the real thing. (Even in random Internet chatter, all parties know what it’s like to itch, and scratch, and eat, and be a child.) Imagine talking to someone who happens to be as articulate as an adult but has less experience than a six-week-old infant. Such a “conscious mind” has no advantage, in itself, over a mere unconscious intelligence.
But there’s a solution to these problems. Suppose we set aside the gigantic chore of building a synthetic human body and make do with a mind-in-a-box or a mind-in-an-anthropoid-robot, equipped with video cameras and other sensors–a rough approximation of a human body. Now we choose some person (say, Joe, age 35) and simply copy all his memories and transfer them into our software mind. Problem solved. (Of course, we don’t know how to do this; not only do we need a complete transcription of Joe’s memories, we need to translate them from the neural form they take in Joe’s brain to the software form that our software mind understands. These are hard, unsolved problems. But no doubt we will solve them someday.)
Nonetheless: understand the enormous ethical burden we have now assumed. Our software mind is conscious (by assumption) just as a human being is; it can feel pleasure and pain, happiness and sadness, ecstasy and misery. Once we’ve transferred Joe’s memories into this artificial yet conscious being, it can remember what it was like to have a human body–to feel spring rain, stroke someone’s face, drink when it was thirsty, rest when its muscles were tired, and so forth. (Bodies are good for many purposes.) But our software mind has lost its body–or had it replaced by an elaborate prosthesis. What experience could be more shattering? What loss could be harder to bear? (Some losses, granted, but not many.) What gives us the right to inflict such cruel mental pain on a conscious being?
In fact, what gives us the right to create such a being and treat it like a tool to begin with? Wherever you stand on the religious or ethical spectrum, you had better be prepared to tread carefully once you have created consciousness in the laboratory.

The Cognitivists’ Best Argument
But not so fast! say the cognitivists. Perhaps it seems arbitrary and absurd to assert that a conscious mind can be created if certain simple instructions are executed very fast; yet doesn’t it also seem arbitrary and absurd to claim that you can produce a conscious mind by gathering together lots of neurons?
The cognitivist response to my simple thought experiment (“Imagine you’re a computer”) might run like this, to judge from a recent book by a leading cognitivist philosopher, Daniel C. Dennett. Your mind is conscious; yet it’s built out of huge numbers of tiny unconscious elements. There are no raw materials for creating consciousness except unconscious ones.
Now, compare a neuron and a yeast cell. “A hundred kilos of yeast does not wonder about Braque,” writes Dennett, “… but you do, and you are made of parts that are fundamentally the same sort of thing as those yeast cells, only with different tasks to perform.” Many neurons add up to a brain, but many yeast cells don’t, because neurons and yeast cells have different tasks to perform. They are programmed differently.
In short: if we gather huge numbers of unconscious elements together in the right way and give them the right tasks to perform, then at some point, something happens, and consciousness emerges. That’s how your brain works. Note that neurons work as the raw material, but yeast cells don’t, because neurons have the right tasks to perform. So why can’t we do the same thing using software elements as raw materials–so long as we give them the right tasks to perform? Why shouldn’t something happen, and yield a conscious mind built out of software?
Here is the problem. Neurons and yeast cells don’t merely have “different tasks to perform.” They perform differently because they are chemically different.
One water molecule isn’t wet; two aren’t; three aren’t; 100 aren’t; but at some point we cross a threshold, something happens, and the result is a drop of water. But this trick only works because of the chemistry and physics of water molecules! It won’t work with just any kind of molecule. Nor can you take just any kind of molecule, give it the right “tasks to perform” and make it a fit raw material for producing water.
The fact is that the conscious mind emerges when we’ve collected many neurons together, not many doughnuts or low-level computer instructions. Why should the trick work when I substitute simple computer instructions for neurons? Of course, it might work. But there isn’t any reason to believe it would.
My fellow anticognitivist John Searle made essentially this argument in a paper that referred to the “causal properties” of the brain. His opponents mocked it as reactionary stuff. They asserted that since Searle is unable to say just how these “causal properties” work, his argument is null and void. Which is nonsense again. I don’t need to know anything at all about water molecules to realize that large groups of them yield water, whereas large groups of krypton atoms don’t.

Why the Cognitive Spectrum Is More Exciting than Consciousness
To say that building a useful conscious mind is highly unlikely is not to say that AI has nothing worth doing. Consciousness has been a “mystery” (as Turing called it) for thousands of years, but the mind holds other mysteries, too. Creativity is one of the most important; it’s a brick wall that psychology and philosophy have been banging their heads against for a long time. Why should two people who seem roughly equal in competence and intelligence differ dramatically in creativity? It’s widely agreed that discovering new analogies is the root (or one root) of creativity. But how are new analogies discovered? We don’t know. In his 1983 classic The Modularity of Mind, Jerry Fodor wrote, “It is striking that, while everybody thinks analogical reasoning is an important ingredient in all sorts of cognitive achievements that we prize, nobody knows anything about how it works.”
Furthermore, to speak of the mystery of consciousness makes consciousness sound like an all-or-nothing proposition. But how do we explain the different kinds of consciousness we experience? “Ordinary” consciousness is different from your “drifting” state when you are about to fall asleep and you register external events only vaguely. Both are different from hallucination as induced by drugs, mental illness–or life. We hallucinate every day, when we fall asleep and dream.
And how do we explain the difference between a child’s consciousness and an adult’s? Or the differences between child-style and adult-style thinking? Dream thought is different from drifting or free-associating pre-sleep thought, which is different from “ordinary” thought. We know that children tend to think more concretely than adults. Studies have also suggested that children are better at inventing metaphors. And the keenest of all observers of human thought, the English Romantic poets, suggest that dreaming and waking consciousness are less sharply distinguished for children than for adults. Of his childhood, Wordsworth writes (in one of the most famous short poems in English), “There was a time when meadow, grove, and stream, / The earth, and every common sight, / To me did seem / Apparelled in celestial light, / The glory and the freshness of a dream.”
Today’s cognitive science and philosophy can’t explain any of these mysteries.
The philosophy and science of mind has other striking blind spots, too. AI researchers have been working for years on common sense. Nonetheless, as Fodor writes in The Mind Doesn’t Work That Way, “the failure of artificial intelligence to produce successful simulations of routine commonsense cognitive competences is notorious, not to say scandalous.” But the scandal is wider than Fodor reports. AI has been working in recent years on emotion, too, but has yet to understand its integral role in thought.
In short, there are many mysteries to explain–and many “cognitive competences” to understand. AI–and software in general–can profit from progress on these problems even if it can’t build a conscious computer.
These observations lead me to believe that the “cognitive continuum” (or, equally, the consciousness continuum) is the most important and exciting research topic in cognitive science and philosophy today.
What is the “cognitive continuum”? And why care about it? Before I address these questions, let me note that the cognitive continuum is not even a scientific theory. It is a “prescientific theory”–like “the earth is round.”
Anyone might have surmised that the earth is round, on the basis of everyday observations–especially the way distant ships sink gradually below (or rise above) the horizon. No special tools or training were required. That the earth is round leaves many basic phenomena unexplained: the tides, the seasons, climate, and so on. But unless we know that the earth is round, it’s hard to make progress on any of these problems.
The cognitive continuum is the same kind of theory. I don’t claim that it’s a millionth as important as the earth’s being round. But for me as a student of human thought, it’s at least as exciting.
What is this “continuum”? It’s a spectrum (the “cognitive spectrum”) with infinitely many intermediate points between two endpoints.
When you think, the mind assembles thought trains–sequences of distinct thoughts or memories. (Sometimes one blends into the next, and sometimes our minds go blank. But usually we can describe the train that has just passed.) Sometimes our thought trains are assembled–so it seems–under our conscious, deliberate control. Other times our thoughts wander, and the trains seem to assemble themselves. If we start with these observations and add a few simple facts about “cognitive behavior,” a comprehensive picture of thought emerges almost by itself.
Obviously, you must be alert to think analytically. To solve a set of mathematical equations or follow a proof, you need to focus your attention. Your concentration declines as you grow tired over the day.
And your mind is in a strange state just before you fall asleep: a free-associative state in which one thought does not follow logically from another but merely “suggests” the next. In this state, you cannot focus: if you decide to think about one thing, you soon find yourself thinking about something else (which was “suggested” by the first thing), and then something else, and so on. In fact, cognitive psychologists have discovered that we start to dream before we fall asleep. So the mental state right before sleep is the state of dreaming.
Since we start the day in one state (focused) and finish in another (free-associating, unfocused), the two must be connected. Over the day, focus declines–perhaps steadily, perhaps in a series of oscillations.
Which suggests that there is a continuum of mental states between highest focus and lowest. Your “focus level” is a large factor in determining your mode of thought (or of consciousness) at any moment. This spectrum must stretch from highest-focus thought (best for reasoning or analysis) downward into modes based more on experience or common sense than on abstract reasoning; down further to the relaxed, drifting thought that might accompany gazing out a window; down further to the uncontrolled free association that leads to dreaming and sleep–where the spectrum bottoms out.
Low focus means that your tendency (not necessarily your ability) to free-associate increases. A wide-awake person can free-associate if he tries; an exhausted person has to try hard not to free-associate. At the high end, you concentrate unless you try not to. At the low end, you free-associate unless you try not to.
Notice that the role of associative recollection–in which one thought or memory causes you to recall another–increases as you move down-spectrum. Reasoning works (theoretically) from first principles. But common sense depends on your recalling a familiar idea or technique, or a previous experience. When your mind drifts as you look out a window, one recollection leads to another, and to a third, and onward–but eventually you return to the task at hand. Once you reach the edge of sleep, though, free association goes unchecked. And when you dream, one character or scene transforms itself into another smoothly and illogically–just as one memory transforms itself into another in free association. Dreaming is free association “from the inside.”
At the high-focus end, you assemble your thought train as if you were assembling a comic strip or a storyboard. You can step back and “see” many thoughts at once. (To think analytically, you must have your premises, goal, and subgoals in mind.) At the high-focus end, you manipulate your thoughts as if they were objects; you control the train.
At the bottom, it’s just the opposite. You don’t control your thoughts. You say, “my mind is wandering,” as if you and your mind were separate, as if your thoughts were roaming around by themselves.
If at high focus you manipulate your thoughts “from the outside,” at low focus you step into each thought as if you were entering a room; you inhabit it. That’s what hallucination means. The opposite of high focus, where you control your thoughts, is hallucination–where your thoughts control you. They control your perceived environment and experiences; you “inhabit” each in turn. (We sometimes speak of “surrendering” to sleep; surrendering to your thoughts is the opposite of controlling them.)
At the high-focus end, your “I” is separate from your thought train, observing it critically and controlling it. At the low end, your “I” blends into it (or climbs aboard).
The cognitive continuum is, arguably, the single most important fact about thought. If we accept its existence, we can explain and can model (say, in software) the dynamics of thought. Thought styles change throughout the day as our focus level changes. (Focus levels depend, in turn, partly on personality and intelligence: some people are capable of higher focus; some are more comfortable in higher-focus states.)
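To make “model (say, in software)” concrete, here is one toy sketch, in Python, of a focus-driven thought train. Everything in it is an invented placeholder of mine–the SUGGESTS table, the thought_train function, the numbers–not a claim about how real minds assemble their trains; it only illustrates how a single focus parameter could move one mechanism along the spectrum.

```python
import random

# Toy associative memory: each thought "suggests" a few others.
# The entries are arbitrary; any linked table would do.
SUGGESTS = {
    "equation":   ["proof", "homework"],
    "proof":      ["equation", "blackboard"],
    "homework":   ["school", "blackboard"],
    "blackboard": ["school", "chalk dust"],
    "chalk dust": ["school", "rain"],
    "school":     ["summer", "recess"],
    "recess":     ["summer", "school"],
    "summer":     ["rain", "meadow"],
    "rain":       ["meadow", "sleep"],
    "meadow":     ["summer", "sleep"],
    "sleep":      ["sleep"],
}

def thought_train(start, focus, goal="proof", length=8):
    """Assemble a toy thought train. Focus near 1.0 keeps the train on
    task; focus near 0.0 lets each thought drift to whatever the
    previous one happens to 'suggest' (free association)."""
    train = [start]
    for _ in range(length - 1):
        linked = SUGGESTS.get(train[-1], [train[-1]])
        if random.random() < focus and goal in linked:
            train.append(goal)                    # deliberate, goal-directed step
        else:
            train.append(random.choice(linked))   # associative drift
    return train

print(thought_train("equation", focus=0.9))  # tends to circle the task
print(thought_train("equation", focus=0.1))  # tends to wander toward sleep
```

The design point is the one the spectrum suggests: nothing switches between an “analytic module” and a “dreaming module”; a single mechanism slides as one number changes.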
It also seems logical to surmise that cognitive maturing increases the focus level you are able to reach and sustain–and therefore increases your ability and tendency to think abstractly.
Even more important: if we accept the existence of the spectrum, an explanation and model of analogy discovery–thus, of creativity–falls into our laps.
As you move down-spectrum, where you inhabit (not observe) your thoughts, you feel them. In other words, as you move down-spectrum, emotions emerge. Dreaming, at the bottom, is emotional.
Emotions are a powerful coding or compression device. A bar code can encapsulate or encode much information. An emotion is a “mental bar code” that encapsulates a memory. But the function E(m)–the “emotion” function that takes a memory m and yields the emotion you in particular feel when you think about m–does not generate unique values. Two different-seeming memories can produce the same emotion.
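A toy rendering may help fix the idea. In the Python sketch below, E is a bare lookup table over a handful of invented memories, and the three-part code (valence, arousal, awe) is my assumption, not the essay’s; the one property being illustrated is that E is many-to-one.

```python
# Toy "emotion function" E(m): memory in, emotional bar code out.
# The memories, the components (valence, arousal, awe), and all the
# values are invented for illustration.

def E(memory):
    """Deliberately lossy encoding: distinct memories can share a code."""
    codes = {
        "a summer's day":      (0.9, 0.3, 0.6),
        "the beloved's face":  (0.9, 0.3, 0.6),  # same code, different memory
        "opening the mailbox": (0.5, 0.8, 0.1),
        "seeing a rhinoceros": (0.2, 0.8, 0.7),
    }
    return codes[memory]

# E does not generate unique values: two different-seeming memories,
# one emotion.
assert E("a summer's day") == E("the beloved's face")
```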
How do we invent analogies? What made Shakespeare write, “Shall I compare thee to a summer’s day?” Shakespeare’s lady didn’t look like a summer’s day. (And what does a “summer’s day” look like?)
An analogy is a two-element thought train–“a summer’s day” followed by the memory of some person. Why should the mind conjure up these two elements in succession? What links them?
Answer: in some cases (perhaps in many), their “emotional bar codes” match–or are sufficiently similar that one recalls the other. The lady and the summer’s day made the poet feel the same sort of way.
We experience more emotions than we can name. “Mildly happy,” “happy,” “ebullient,” “elated”; our choice of English words is narrow. But how do you feel when you are about to open your mailbox, expecting a letter that will probably bring good news but might be crushing? When you see a rhinoceros? These emotions have no names. But each “represents” or “encodes” some collection of circumstances. Two experiences that seem to have nothing in common might awaken–in you only–the same emotion. And you might see, accordingly, an analogy that no one else ever saw.
The cognitive spectrum suggests that analogies are created by shared emotion–the linking of two thoughts with shared or similar emotional content.
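Under that hypothesis, a crude analogy-finder is easy to sketch: pair off memories whose emotion codes nearly match. The codes, the distance measure, and the threshold below are again placeholders of mine, not a proposal about the real E.

```python
import math

# Invented emotion codes for three memories (same toy format as above).
codes = {
    "a summer's day": (0.90, 0.3, 0.6),
    "the beloved":    (0.85, 0.3, 0.6),
    "a tax audit":    (0.10, 0.9, 0.0),
}

def analogies_for(memory, threshold=0.2):
    """Return memories whose codes lie within `threshold` of this one's:
    candidate analogies under the shared-emotion hypothesis."""
    target = codes[memory]
    return [m for m, code in codes.items()
            if m != memory and math.dist(code, target) <= threshold]

print(analogies_for("a summer's day"))  # -> ['the beloved']
```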
To build a simulated unconscious mind, we don’t need a computer with real emotions; simulated emotions will do. Achieving them will be hard. So will representing memories (with all their complex “multimedia” data).
But if we take the route Turing hinted at back in 1950, if we forget about consciousness and concentrate on the process of thought, there’s every reason to believe that we can get AI back on track–and that AI can produce powerful software and show us important things about the human mind.