EE Times-Asia

Many-core ICs tripping 'the singularity'?

Posted: 30 Mar 2012

Keywords: multicore processors, artificial intelligence, FPGA designs

Will the rapidly increasing processing power enabled by many-core processors bring about machines with super-human intelligence, an event sometimes referred to as the singularity?

This was the question posed by analyst Jon Peddie to a panel of leading minds in multicore processor theory and design assembled at the Multicore DevCon.

Peddie set up the debate by referencing Vernor Vinge, a science fiction writer who had predicted that computing power would be equivalent to human processing power by about 2023. One particular aspect of the singularity concept is that once machines, either singly or collectively, exceed human intelligence, there may be an explosion of machine-learning advancement that humans, by definition, could not fathom, making the singularity a kind of event horizon.

Another extrapolation of computing progress put 2045 as the year in which it might be possible to buy a machine with the processing power of a human brain for $2,000.

Pradeep Dubey of Intel Corp.'s parallel computing labs illustrated the progress by noting that a petaflops supercomputer can already simulate a cat's brain. A human brain has 20 to 30 times more neurons and 1,000 times more synapses, so a complete simulation of the human brain is only 5 or 6 years away, Dubey said. "Exaflops could simulate a human brain," he noted.
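Dubey's back-of-the-envelope scaling can be sketched as a quick calculation; the ratios below are the figures quoted in the panel, and the petaflops-per-cat-brain figure is the stated starting point:

```python
# Scaling Dubey's estimate: a petaflops machine (1e15 FLOPS) can
# simulate a cat's brain, and a human brain has roughly 1,000x
# more synapses, so scale the compute requirement by that ratio.

cat_brain_flops = 1e15   # ~1 petaflops, per the panel
synapse_ratio = 1000     # human vs. cat synapse count, per the panel

human_brain_flops = cat_brain_flops * synapse_ratio
print(f"{human_brain_flops:.0e} FLOPS")  # prints 1e+18 FLOPS, i.e. 1 exaflops
```

Taking synapse count rather than neuron count as the cost driver is the assumption here: simulation cost is dominated by the number of connections, which is why the 1,000x synapse ratio, not the 20-30x neuron ratio, sets the exaflops target.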

According to Dubey, there are currently three approaches: simulate the process with a neuron- and synapse-level model; ignore brain architecture and treat the problem as a data and statistics problem; or build hardware that mimics neurons and synapses.

However, simulating the brain is not the same as thinking or generating the emotional intelligence we see in human beings, said Ian Oliver, director of Codescape development tools at processor licensor Imagination Technologies Group plc. "We probably have the wrong memory model. The human brain is non-deterministic. It operates on the edge of chaos," he said.

Oliver pointed out that using genetic algorithms to derive FPGA designs through evolution has produced much more brain-like architectures, but these were not readily usable in the real world; as such, computer and human intelligence appear to be distinct.
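The evolutionary approach Oliver describes can be illustrated with a minimal genetic-algorithm sketch. This is not the actual FPGA work: a real experiment would score each candidate configuration bitstream on live hardware, whereas here fitness is a toy stand-in (the number of bits matching a hypothetical target configuration), and all names and parameters are illustrative assumptions.

```python
import random

random.seed(0)
BITS = 64  # hypothetical configuration-bitstream length
TARGET = [random.randint(0, 1) for _ in range(BITS)]

def fitness(genome):
    # Toy fitness: count bits matching the hypothetical "working" design.
    # On real hardware this would be a measurement of circuit behavior.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.02):
    # Flip each bit with small probability.
    return [1 - b if random.random() < rate else b for b in genome]

def crossover(a, b):
    # Single-point crossover of two parent bitstreams.
    cut = random.randrange(1, BITS)
    return a[:cut] + b[cut:]

# Evolve a population: keep the fittest designs, breed the rest.
pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(50)]
for gen in range(200):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == BITS:
        break
    parents = pop[:10]
    pop = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(40)
    ]

best = max(pop, key=fitness)
print("generation:", gen, "fitness:", fitness(best))
```

The point Oliver makes follows from the method: evolution optimizes only the measured fitness, so the resulting designs can exploit quirks of the test environment and, like evolved FPGA circuits, work without being humanly comprehensible or portable.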

Mike Rayfield, vice president of the mobile business unit at Nvidia Corp., argued that the number of processor cores is a red herring. But Intel's Dubey countered that more cores are better, saying that massive data engines can capture correlations if not causality. He pointed out that machines can already do some things far better than humans, which is the reason they exist. "We can build planes but we can't build birds," he said.

Could computers already be smarter than humans?
Imagination's Oliver added that there are many examples of linkages between the human brain and the body that help drive behavior. "Can you have intelligence without a body?" he asked, adding that if we wish to see the advent of the singularity, perhaps we should look for it in robots.

Audience members joined the debate, arguing on the one hand that megaflops are not what is needed to approach human intelligence, and on the other that hardware is the easy part and the missing element is software.

Another member of the audience asked what the application for such levels of performance would be, apart from creating an automaton. Jem Davies, ARM Ltd's vice president of technology, responded that in specific domains you want computers to do things that humans cannot. Laser eye surgery is now done by machine, he said, because it is more precise and capable than a human.

This led the panel on to a discussion of the Turing test and whether supercomputers had yet been able to pass it. The test, proposed by Alan Turing, holds that if a human judge, deprived of visual and other cues, cannot tell the difference between talking to another human being and talking to a machine, then the machine is effectively intelligent.

Imagination's Oliver argued that the definition of artificial intelligence seems to change so that it encompasses only those things computers are not yet capable of, something akin to an Arthur C. Clarke definition of magic. As soon as computers do become capable of a function, for example speech recognition, that task gets reclassified as not being part of intelligence.

From the floor it was asked whether the known inefficiencies of multicore arrays for many tasks were a limitation that would prevent the advent of the singularity. Oliver said there is no doubt that parallel processing is the best way to simulate or recreate brain-like thinking. Computers just happen to be good at only a few tasks, such as high-speed numerical processing, that humans are not so good at.

ARM's Davies admitted that general-purpose GPU-type processing tends to favor particular classes of problem, such as computational image processing, "but we should not be limited by our imagination," he said. He took a build-it-and-they-will-come position. "I don't need to know what the killer application is going to be. Human ingenuity will find a way to use the technology." Intel's Dubey also argued in favor of the hardware approaches we have today. "It is not a system problem. It's a programming model problem."

- Peter Clarke
EE Times


