Monday, November 12, 2007

Artificial brain falls for optical illusions

New Scientist reports: [edited]

A computer program that emulates the human brain falls for the same optical illusions humans do. It suggests the illusions are a by-product of the way babies learn to filter their complex surroundings. Researchers say this means future robots must be susceptible to the same tricks as humans if they are to see as well as we do.

For some time, scientists have believed one class of optical illusions results from the way the brain tries to disentangle the colour of an object from the way it is lit. An object may appear brighter or darker either because of the shade of its colour, or because it is in bright light or shadow.

The brain learns how to tackle this through trial and error when we are babies, the theory goes. Mostly it gets it right, but occasionally a scene contradicts our previous experiences. The brain gets it wrong and we perceive an object lighter or darker than it really is – creating an illusion.

Until now there has been no way of knowing whether this theory is correct. Beau Lotto and David Corney at University College London, UK, think they have finally found a way to test it. They created a program that learns to predict the lightness of an image based on its past experience – just like a baby. And just like a human, it falls prey to optical illusions.

They trained it using 10,000 greyscale images of fallen leaves that animals might face in nature. It had to predict the true shade of the centre pixel of the images, and change its technique depending on whether its answer was right or wrong.

The researchers then tested the program on lightness illusions that would fool humans. First, it was shown images of a light object on a darker background, and vice versa. Just like humans, the software predicted the objects to be respectively lighter and darker than they really were. It also exhibited more subtle similarities – overestimating lighter shades more than darker shades.
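The trial-and-error setup described above (predict the true shade of the centre pixel, then correct the model when it is wrong) can be sketched as a simple learner. The sketch below is an illustration, not Lotto and Corney's actual model: it substitutes synthetic patches (random "surface" shades rendered under a random illumination level) for the 10,000 leaf images, and a plain linear predictor trained by stochastic gradient descent for their learned network.

```python
import numpy as np

rng = np.random.default_rng(0)
SIZE = 5  # patch width/height; the centre pixel is the prediction target

def make_patch():
    """Hypothetical stand-in for the leaf images: random surface shades
    rendered under a single unknown illumination level."""
    surface = rng.uniform(0.0, 1.0, (SIZE, SIZE))  # true reflectances
    light = rng.uniform(0.5, 1.5)                  # unknown lighting
    observed = np.clip(surface * light, 0.0, 1.0)  # what the learner sees
    return observed.ravel(), surface[SIZE // 2, SIZE // 2]

# Trial-and-error learning: guess the centre shade, compare with the
# truth, and nudge the weights (stochastic gradient descent on a
# linear predictor).
w = np.zeros(SIZE * SIZE)
b = 0.0
lr = 0.05
for _ in range(10_000):  # mirrors the 10,000 training images
    x, target = make_patch()
    err = (w @ x + b) - target
    w -= lr * err * x
    b -= lr * err

# Probe the trained predictor with an identical grey centre on a dark
# and on a light surround.
grey = 0.5
dark = np.full((SIZE, SIZE), 0.2)
dark[SIZE // 2, SIZE // 2] = grey
light = np.full((SIZE, SIZE), 0.8)
light[SIZE // 2, SIZE // 2] = grey
print(w @ dark.ravel() + b)   # grey judged lighter on the dark surround...
print(w @ light.ravel() + b)  # ...and darker on the light surround
```

Because the only way this learner can explain a bright surround is as bright lighting, it discounts the centre pixel accordingly, so the identical grey comes out lighter on the dark surround – the same direction as the human illusion in the test above. A richer model than this linear sketch would be needed for White's Illusion.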

Next, the researchers tried White's Illusion. Again like a human, the program saw areas of grey as darker when placed on a black stripe, and lighter when placed on a white stripe.

Previous computer models tried to directly copy the brain's structure. They could fall for either of the two illusions, but unlike a human, not both at once.

Lotto's program was instead designed simply to judge shades through learning, without being modelled on the brain's structure. He says this suggests our ability to see illusions really is a direct consequence of learning to filter useful information from our environment. "We didn't evolve to see things accurately, but to see things that would be useful," Lotto points out.

That has implications for robot vision. Most creators of machine vision try to copy human vision because it is so well suited to a variety of environments. The new findings suggest that if we want to exploit its advantages, we also have to suffer its failings. It will be impossible to create a perfect, superhuman robot that never makes mistakes.

Thomas Serre, a vision expert at the Massachusetts Institute of Technology, Cambridge, US, is impressed with the team's results. "It's a very neat and elegant way of showing that [learning experiences] alone can explain illusions," he says.


Skep said...

Wow, we will never create a perfect, superhuman robot... how much do we believe that? The difference is that the trial and error we see in robots can be passed on through generations, through data copying, something humans can't do. I reckon it'll be possible, just not any time this decade.

brett jordan said...

hi skep...

i take your point about robots being able to 'improve themselves' in a way that humans can't/shouldn't (eugenics anyone?), however humans do have the ability to pass on information from generation to generation...

and i think what the article is saying is that the scientists are questioning what type of sight is 'optimal' rather than 'perfect'... my digital camera has better 'eyesight' than me, but its survival abilities are poor...

human eyesight is a complex mixture of optics/software/bio-feedback... and its major function is not to allow us to mimic things, but to interpret, sort and prioritise the items that are the most important/valuable/dangerous