One of the features of consciousness, according to the integrated information theory proposed by the neuroscientists Christof Koch and Giulio Tononi,[i] is that it is highly informative. When you perceive something, your brain rules out a vast number of other possible images. Even after awakening in a dark room, you see only a dark room and not some other image, such as the darkness of a jungle. Your brain chooses the image – among many others – that best fits your past experience and recent perceptions. Secondly, conscious information is integrated. When you are conscious of something, you are also aware of several other aspects that form part of the whole. When you see another person’s face, you are aware at the same time that she is wearing glasses or that she is crying. Koch and Tononi contend that “whatever scene enters consciousness remains whole and complete; it cannot be subdivided into independent and unrelated components that can be experienced on their own.” To be conscious, therefore, is to have access to a large store of integrated information and the ability to retrieve that information and decide between alternatives. For a system to be conscious, it must be made up of parts that are specialized and well integrated – “parts that can do more together than alone.”
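
(IIT’s actual measure of integration, Φ, is considerably more elaborate, but the underlying Shannon intuition – that information is the exclusion of alternatives – can be made concrete in a few lines. The Python sketch below is purely illustrative; the function name and the trillion-scene count are invented for the example.)

```python
import math

def shannon_information(num_alternatives: int) -> float:
    """Bits of information conveyed by singling out one of
    `num_alternatives` equally likely possibilities."""
    return math.log2(num_alternatives)

# A sensor that only distinguishes "light" from "dark"
# conveys one bit per discrimination.
print(shannon_information(2))        # 1.0

# Singling out one conscious scene from, say, a trillion
# discriminable scenes conveys vastly more.
print(shannon_information(10**12))   # ~39.86 bits
```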

A reason why a machine – a digital camera, for example – may fail a test for consciousness is that its parts are largely independent and not sufficiently integrated. A seemingly simple test that Koch and Tononi propose for deducing whether a machine is sentient is to ask it to work out what is wrong with a picture. You could, for instance, insert objects into several images in such a way that all but one of the composites make no sense. A keyboard placed in front of a computer, for example, is the right, logical choice; a potted plant is not. This task will, however, defeat most “intelligent” machines today, because, as the authors suggest, “solving that simple problem requires having a lot of contextual knowledge, vastly more than can be supplied with the algorithms that advanced computers depend on…”
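
Why does a lookup-driven machine fail such a test? A deliberately naive sketch (hypothetical Python; the object pairs and function name are invented) makes the brittleness visible: the program can judge only the pairings its author has explicitly listed, so any scene outside its table defeats it.

```python
# A deliberately naive incongruity checker. The pair list is
# hand-coded, so the program has no way to judge any scene
# that its author did not anticipate.
PLAUSIBLE_PAIRS = {
    ("keyboard", "computer"),
    ("mouse", "computer"),
    ("monitor", "desk"),
}

def scene_makes_sense(obj: str, context: str) -> bool:
    return (obj, context) in PLAUSIBLE_PAIRS

print(scene_makes_sense("keyboard", "computer"))      # True
print(scene_makes_sense("potted plant", "computer"))  # False
print(scene_makes_sense("coffee mug", "computer"))    # False, wrongly,
# because the table simply never mentions coffee mugs.
```

No finite hand-coded table can anticipate every sensible scene, which is precisely Koch and Tononi’s point about contextual knowledge.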

Decision making by machines seems restricted – thanks to the digital architecture that underlies their intelligence – to the narrow strait between the Scylla and Charybdis of Boolean choices. (Based on available algorithms, a potted plant in front of a computer screen is completely illogical; there are no two ways about it.) They are unable to cope with situations that require not merely choosing between two alternatives but identifying a third, more realistic – even if seemingly irrational – option. This may be because cognition rests not only on a repertoire of algorithms but also on the ability to access and analyze immense amounts of contextual data accumulated from past experience. Fritjof Capra illustrates the limits of artificial intelligence with a simple test proposed by Terry Winograd, a Stanford University professor and an expert on human-computer interaction.[ii] In this test, a computer is asked to interpret a simple statement like “Tommy had just been given a new set of blocks. He was opening the box when he saw Jimmy coming in.” Winograd explains that a computer would have no clue as to what is in the box, but we assume immediately that it contains Tommy’s new blocks. We do so because we know that gifts often come in boxes and that opening the box is the proper thing to do. Most importantly, we assume that the two sentences are connected, whereas the computer sees no reason to connect the box with the blocks. In other words, our interpretation of this simple text rests on several common-sense assumptions and expectations that are unavailable to the computer.
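
Winograd’s example can be restated in the same spirit (a hypothetical Python sketch; the fact strings and the helper infer_box_contents are invented): the inference that the box contains the blocks goes through only if the relevant common-sense facts have been typed in by hand, and it is precisely this open-ended stock of facts that the computer lacks.

```python
# Common-sense facts a human brings to the text, written out
# explicitly. A program "knows" them only if someone types them in.
COMMONSENSE = {
    "gifts often come in boxes",
    "a new gift is typically opened soon after it is received",
}

def infer_box_contents(text_facts: set[str], commonsense: set[str]) -> str:
    if {"Tommy received new blocks", "Tommy is opening a box"} <= text_facts:
        if "gifts often come in boxes" in commonsense:
            return "the box probably contains the blocks"
    return "no connection between the box and the blocks"

facts = {"Tommy received new blocks", "Tommy is opening a box"}
print(infer_box_contents(facts, COMMONSENSE))  # the box probably contains...
print(infer_box_contents(facts, set()))        # no connection between...
```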

We humans surpass machines not only in choosing with apparent ease between black-and-white, this-or-that conclusions (“Placing a potted plant in front of an iMac makes sense: yes or no?”) but also in being able to opt for something in between. For instance, another acceptable response to this question could be that there is actually nothing wrong or illogical about placing a potted plant in front of a computer: perhaps it is an abstract expression of an artist’s view of the place of nature in a digital age. We are even able to indulge in intentional indecisiveness when it suits us. (As a former Prime Minister of India put it – in an attempt to rationalize his own alleged inability to take decisions – “any decision for which the time is not ripe, needs a clear-cut decision of not taking, either express or silent.”)[iii]
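
The gap between a forced Boolean verdict and the graded, in-between judgment described above can be caricatured in code (again a hypothetical Python sketch; the Verdict enum and the example pairs are invented):

```python
from enum import Enum

class Verdict(Enum):
    MAKES_SENSE = "yes"
    NONSENSE = "no"
    DEFENSIBLE = "it depends"  # the third option a Boolean test forbids

def boolean_judgment(obj: str, context: str) -> bool:
    # Forced binary: anything unusual collapses to False.
    return (obj, context) in {("keyboard", "computer")}

def human_like_judgment(obj: str, context: str) -> Verdict:
    if (obj, context) in {("keyboard", "computer")}:
        return Verdict.MAKES_SENSE
    if (obj, context) == ("potted plant", "computer"):
        # Unusual, but defensible, say, as an artistic statement.
        return Verdict.DEFENSIBLE
    return Verdict.NONSENSE

print(boolean_judgment("potted plant", "computer"))     # False
print(human_like_judgment("potted plant", "computer"))  # Verdict.DEFENSIBLE
```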

Some of us can even extend this deliberate nebulousness and vacillation into an art form, somewhat in the manner of Yeswell Sortov, a fictional character Michael Frayn discusses in his book “The Human Touch: Our Part in the Creation of a Universe.”[iv] Sortov, “the philosopher of vagueness, imprecision, and ambiguity,” believed that the world around us was a seamless flux in which everything was tied up indivisibly with everything else. There was hence no point in making a clear distinction between this and that. Sortov went beyond Heraclitus (for whom one could not step into the same river twice) in believing that even stepping into a river once was an unrealizable goal. It is not surprising that Sortov took a particular liking to quantum physics. As Frayn notes, Sortov “developed an early prototype of multiple universe theory; but held that all the different universes were actually present in front of our eyes, piled on top of one another in inextricable confusion.”

What is even more astonishing about our ability to choose (or not) is that we manage these feats using what the physicist Richard Feynman called “last week’s potatoes.” Feynman made this observation while comparing certain scientific insights to a religious experience. We achieve these intuitions using atoms that were not in our cells two weeks ago, since the matter of our cells is continually replaced over time. As Feynman noted, “So what is this mind, what are these atoms with consciousness? Last week’s potatoes! That is what now can remember what was going on in my mind a year ago – a mind which has long ago been replaced.”

[i] Christof Koch and Giulio Tononi, “A Test for Consciousness,” Scientific American, June 2011.
[ii] Fritjof Capra, “The Web of Life: A New Synthesis of Mind and Matter,” Flamingo, London, 1997, p. 269.
[iii] Christopher Kremmer, “Inhaling the Mahatma,” Harper Perennial, Australia, 2008, pp. 88-89.
[iv] Michael Frayn, “The Human Touch: Our Part in the Creation of a Universe,” p. 170.

By Venkat Ramanan
vidyarthifoundation@gmail.com