Why the "Dreams" of Google's Neural Nets Are the Closest Thing to Reality

Last week's post on Google's Research Blog has made front-page news with a series of stunning, hallucinatory images produced by artificial neural networks. Were these images our first glimpse into an artificial mind? Were neural networks "dreaming of electric sheep" - as one commentator suggested, alluding to Philip K. Dick's famous novel?

Neural networks have a long history. In 1943 two pioneers of cybernetics, the neurophysiologist Warren S. McCulloch and the logician Walter Pitts, demonstrated that networks of idealised neurons could, in principle, compute anything a Turing machine can. In other words, they showed that the human brain could be simulated by a computer. Since then one of the most promising approaches to Artificial Intelligence has been the design and development of "artificial neural networks" that try to mimic how the brain functions. We know from brain science that the biological neurons in our brain are organised in hierarchies, as well as in groups of various specialisations. When information is fed to our brain via the senses, our neurons begin to process it by first trying to identify its essential characteristics, then gradually building a hypothesis that classifies the information and thus makes sense of it. Recognition, classification and identification happen in stages, as "lower" neurons pass their outputs to "higher" neurons for further processing, in the brain's hierarchical way of doing things. Memory plays a very important part in this process: it guides our neurons to respond quickly to new sensory inputs by taking shortcuts. Memory is our built-in information hacker.
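To make this layered picture concrete, here is a minimal sketch in Python of artificial neurons arranged in a hierarchy, with each layer passing its output up to the next. It is purely illustrative: the weights are random, and none of it comes from Google's post.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights):
    """One bank of artificial neurons: weigh the evidence, then fire (ReLU)."""
    return np.maximum(0.0, weights @ inputs)

# "Sensory" input: eight raw measurements.
signal = rng.normal(size=8)

# Three stacked layers, each feeding the next ("lower" to "higher" neurons).
w1 = rng.normal(size=(6, 8))   # low-level feature detectors
w2 = rng.normal(size=(4, 6))   # mid-level combinations
w3 = rng.normal(size=(2, 4))   # a high-level "hypothesis" about the input

h1 = layer(signal, w1)
h2 = layer(h1, w2)
hypothesis = layer(h2, w3)
print(hypothesis)              # the network's final, most abstract response
```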

This information-processing idea from biological evolution has been transferred to computers, and it has advanced significantly since the early forties - to the point of last week's publication of how artificial neural networks "see the world". What Google's researchers did was to feed the output of their neural networks back into the system as input, and observe what happened. But they did more than that: they also fed their neural networks "white noise", i.e. "nothing". And then a very interesting thing happened. The neural networks seemed to "imagine", or "dream". They set about interpreting nothingness as an input. This is analogous to what happens in our brains during sleep. With our senses turned off, our brain has no external informational material to process. And yet our neurons do not cease to work. During so-called REM sleep, information about nothing is processed in our brain, and the outputs are dreams. Could Google's neural networks be dreaming too?
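Google's post describes the researchers' actual technique; the toy sketch below, in Python, only illustrates the general feedback idea of starting from white noise and repeatedly nudging the input towards whatever the network responds to. The tiny random "network", the update rule and all the numbers are my own illustrative assumptions, not Google's code.

```python
import numpy as np

rng = np.random.default_rng(42)

# A stand-in "network": 16 artificial neurons looking at a 64-value input
# (think of a tiny 8x8 greyscale image, flattened).
W = rng.normal(size=(16, 64))

def activations(x):
    """What the network 'sees' in x: the ReLU responses of its neurons."""
    return np.maximum(0.0, W @ x)

# Start from "nothing": faint white noise.
x = rng.normal(scale=0.01, size=64)
print("before:", activations(x).sum())

# Feedback loop: repeatedly push the input in the direction that makes the
# neurons respond more strongly, amplifying whatever they faintly detect.
for step in range(200):
    firing = (W @ x) > 0                         # which neurons are active
    gradient = W.T @ firing.astype(float)        # how to excite them further
    x = np.clip(x + 0.01 * gradient, -1.0, 1.0)  # nudge the "image"

print("after:", activations(x).sum())            # a far stronger response
```

Run long enough, the noise turns into whatever patterns the network happens to be wired to detect - a toy analogue of the published hallucinations.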

If artificial neural networks are a technological metaphor for biological neural networks, then last week's images are indeed the technological metaphor for dreams. But to what degree does the metaphor match "reality"? This is a very interesting question, because neurobiology and cognitive psychology tell us that our brains "construct" reality. We are told that what we see, hear and feel is not "really" what is out there, but what our neurons construct. This construction is the result of evolution. Therefore "reality" can only be of the perceived kind, serving our species' survival ends and nothing more. The distance between "perceived" reality and "real" reality has puzzled philosophers and scientists since Plato. It is fodder for heated debates in today's consciousness studies, philosophy of mind, and neuroscience. But perhaps last week's images can offer a new, experimental perspective in the debate.

Artificial neural networks are not programmed like conventional computers, but they do operate on the basis of mathematical equations that weigh evidence and perform calculations. The mathematical basis of artificial neurons is not merely practical. It takes the neuron-Turing machine equivalence suggested by McCulloch and Pitts to its logical conclusion, i.e. that the nature of perception is mathematical.
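A single McCulloch-Pitts style unit can be written in a few lines of Python - a textbook illustration, not anything from the 1943 paper itself: it weighs its inputs, compares the total to a threshold, and fires or stays silent. Wired together, such units implement logic gates, which is the heart of the neuron-Turing machine argument.

```python
def mp_neuron(inputs, weights, threshold):
    """A McCulloch-Pitts style unit: weigh the evidence, fire if it suffices."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# With suitable weights and thresholds, single units act as logic gates:
AND = lambda a, b: mp_neuron([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mp_neuron([a, b], [1, 1], threshold=1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, " AND:", AND(a, b), " OR:", OR(a, b))
```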

But if we accept this ontological conclusion, then we must also accept that, given that mathematics is bereft of subjectivity, there must be a fundamental element of objectivity in our subjective perceptions too. In other words, our brain may "construct" reality, but it constructs it mathematically - just like Google's artificial networks. By continuously feeding the outputs of neural processing back in as inputs to new processing, mother Nature, as well as Google, uses "feedback loops" - or "reflexivity" - to abstract mathematical results into "meaning". The artificial hallucinations published last week are perhaps the first depictions of what reality looks like to a machine that simulates our brains. The fact that we found them interesting and familiar, even artistic, is perhaps the most exciting and fascinating thing to have come out of research in Artificial Intelligence so far: the first experimental confirmation that our minds are not the only ones that see the world as we do. That "reality" is probably very close to what we actually perceive. Somewhat ironically, Artificial Intelligence is revealing to us that we do not live in a Matrix world.
