Wednesday, November 27, 2019

AI Speaker debates itself at Cambridge Union

Cambridge Independent reports: [edited]

On 21 November, the Cambridge Union Society hosted what turned out to be the most popular debate of term: “This house believes artificial intelligence will bring more harm than good.”

IBM Research’s Project Debater, the first artificial intelligence platform that can debate humans on complex topics, was the leading ‘speaker’ on both the proposition and opposition.

The night opened with a brief introduction from the principal investigator of Project Debater, Noam Slonim. He explained that the AI system uses a variety of techniques to anticipate the opposition’s choice of evidence: “The AI is not perfect but it’s going in the right direction.”

Project Debater launched into its proposition speech in a soothingly monotonous voice. The audience was captivated by its ability to seamlessly weave together a series of arguments from the 511 responses submitted by members of the Union and others.

It was a little unnerving to hear that “AI will not be able to make morally correct decisions, which can lead to disasters. It can only make decisions that it has been programmed to solve, whereas humans can be programmed for all scenarios” from the Debater itself.

It also urged the floor to vote for the motion by raising issues of employment, disconnected societies, and abuse of control.

The AI then proceeded to argue against itself as the first ‘speaker’ of the opposition, claiming that “Artificial Intelligence is the technology of the future designed by humans”.

It continued to assert that AI can eliminate human errors in mundane and repetitive tasks, giving the example of autonomous vehicles.

Following this spectacle of self-sparring, Sharmila Parmanand, second proposition speaker and PhD candidate in Gender Studies at the University of Cambridge, spearheaded the human debate against artificial intelligence.

She warned against labour displacement and entrenching biases, and raised the critical observation that the context in which AI is being developed – a world plagued by “already-existing power hierarchies” and an “inherently weak regulatory environment” – requires careful consideration.

By contrast, Sylvie Delacroix, professor in Law and Ethics at the University of Birmingham, argued that the rise of AI has sent us spiralling into an unnecessary degree of paranoia. She compared this fear to people being scared of electricity or cars, which could and can still be used to kill people: “We should see AI as a special tool because of the sheer speed at which it is transforming us”.

She acknowledged that artificial intelligence might be at risk of being manipulated, but also emphasised that as long “as wide a variety of people can select this data,” it can be extremely “beneficial”.

Neil Lawrence, DeepMind professor of machine learning at the University of Cambridge, concluded the proposition debate with the foreboding thought that “over the next 10 years, we will be on a perilous journey that will undermine our very selves”.

He drew particular attention to the dangers of big data: the “new route to manipulating statistics as presented to us”. Lawrence reiterated the importance of precautionary measures: “We should believe that AI should do us harm, because it is the best way to prevent us from doing that harm”.

The debate ended with a final speech from Harish Natarajan, head of economic risk analysis at AKE International in London, who raised the perceptive point that criticism of bias in artificial intelligence is made redundant by the fact that “cognitive biases exist on all sides: there is plenty of bias in human interaction”.

He reassured the sceptics that the “benefits of the democratisation of AI will be huge” in a “world that needs multiple layers of improvement”.

And with that, the noes beat the ayes on this occasion.

Image: Tomi Baikie