Scientists reveal Men in Black style 'memory blanking' for AI that could prevent robots from taking over the world

  • The new system acts like a 'forgetting mechanism' for artificial intelligence
  • This will allow researchers to delete 'bits' of the AI's memory when necessary
  • It alters the machine's learning and reward system so processes aren't affected by interruptions

Researchers have developed a way to prevent artificial intelligence from thwarting human control.

Artificially intelligent machines can learn from the outcomes of their actions to improve their own abilities – but, experts say this characteristic could lead to AI that ‘can’t be stopped.’

Now, a team of AI researchers has developed a system they’ve likened to the memory-erasing ‘neuralyzer’ from the Men in Black films, to essentially interrupt the AI by deleting parts of its memory, without disrupting the way it learns.

Researchers developed a system they’ve dubbed ‘safe interruptibility.’ It essentially works like the memory-erasing neuralyzer from the Men in Black films (pictured), to interrupt the AI by deleting parts of its memory

HOW IT WORKS

The researchers developed a system they’ve dubbed ‘safe interruptibility.’

This allows humans to interrupt the AI when necessary.

And, this can be done without affecting the learning process itself.

It essentially works like the memory-erasing neuralyzer from the Men in Black films, to interrupt the AI by deleting parts of its memory.

The method alters the machine’s learning and reward system, leaving the processes unaffected by interruptions, the researchers say.

‘We worked on existing algorithms and showed that safe interruptibility can work no matter how complicated the AI system is, the number of robots involved, or the type of interruption,’ Maurer said.

‘We could use it with the Terminator and still have the same results.’

‘AI will always seek to avoid human intervention and create a situation where it can’t be stopped,’ says co-author Rachid Guerraoui, a professor at EPFL's Distributed Programming Laboratory.

According to the researchers, humans must develop ways to prevent AI from circumventing their commands.

‘The challenge isn’t to stop the robot, but rather to program it so that the interruption doesn’t change its learning process – and doesn’t induce it to optimize its behaviour in such a way as to avoid being stopped,’ said Guerraoui.

Previous efforts have attempted to address the problem on a smaller scale, focusing on one robot.

But, with self-driving cars and other connected devices on the rise, the researchers say the problem is far more complex.

For self-driving cars, for example, humans may be able to take over – but, this interruption could affect the behaviour of the car and others that learn from it.

If a human driver brakes often, another self-driving car following it may adjust its own behaviour and become ‘confused’ as to when it should brake.

‘That makes things a lot more complicated, because the machines start learning from each other – especially in the case of interruptions,’ said Alexandre Maurer, one of the study’s authors.

‘They learn not only from how they are interrupted individually, but also from how the others are interrupted.’

To work around this problem, the researchers developed a system they’ve dubbed ‘safe interruptibility.’

Researchers have developed a way to prevent artificial intelligence from thwarting human control. ‘We could use it with the Terminator and still have the same results,’ the researcher said. A still from Terminator Genisys is pictured

This allows humans to interrupt the AI when necessary.

And, this can be done without affecting the learning process itself.

‘Simply put, we add “forgetting” mechanisms to the learning algorithms that essentially delete bits of a machine’s memory,’ said study co-author El Mahdi El Mhamdi.

‘It’s kind of like the flash device in Men in Black.’

The method alters the machine’s learning and reward system, leaving the processes unaffected by interruptions, the researchers say.

‘We worked on existing algorithms and showed that safe interruptibility can work no matter how complicated the AI system is, the number of robots involved, or the type of interruption,’ Maurer said.
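The study itself does not publish code, but the principle can be loosely sketched in a toy example: a simple Q-learning agent that ‘forgets’ any step on which a human intervened, discarding it from its updates so its learned values – and therefore its behaviour – are never shaped by being stopped. The class name, parameters and the ‘interrupted’ flag below are illustrative assumptions, not the researchers’ actual implementation.

    import random

    # Toy sketch (not the authors' code): a tabular Q-learning agent whose
    # update step skips any experience on which a human interruption occurred,
    # so the values it learns are never biased by being stopped.
    class InterruptibleQLearner:
        def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.9, epsilon=0.1):
            self.q = [[0.0] * n_actions for _ in range(n_states)]
            self.alpha = alpha      # learning rate
            self.gamma = gamma      # discount factor
            self.epsilon = epsilon  # exploration rate
            self.n_actions = n_actions

        def act(self, state):
            # Epsilon-greedy: mostly pick the best-known action, sometimes explore.
            if random.random() < self.epsilon:
                return random.randrange(self.n_actions)
            return max(range(self.n_actions), key=lambda a: self.q[state][a])

        def update(self, state, action, reward, next_state, interrupted):
            # 'Forgetting' mechanism: if this step was interrupted by a human,
            # drop it entirely instead of learning from it, so the agent is not
            # nudged to avoid (or seek out) interruptions.
            if interrupted:
                return
            best_next = max(self.q[next_state])
            target = reward + self.gamma * best_next
            self.q[state][action] += self.alpha * (target - self.q[state][action])

The only point of the sketch is that the interruption never enters the reward calculation – the property the researchers describe as leaving the learning process unaffected.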

According to the researchers, humans must develop ways to prevent AI from circumventing their commands. For self-driving cars, for example, humans may be able to take over – but, this interruption could affect the behaviour of the car and others that learn from it. Stock image

‘We could use it with the Terminator and still have the same results.’

So far, the team says the system works well when the ‘consequences of making mistakes are minor.’

‘It could be used for simulations of AI shuttle buses, for example, by running an algorithm that awards and subtracts points as the system learns,’ El Mhamdi said.

According to the team, this could make for safer autonomous vehicles and drones.

‘That’s the kind of simulation that’s being done at Tesla, for example,’ El Mhamdi said.

‘Once the system has undergone enough of this learning, we could install the pre-trained algorithm in a self-driving car with a low exploration rate, as this would allow for more widespread use.’
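As a rough illustration only – not Tesla’s or the researchers’ actual pipeline – the ‘low exploration rate’ idea amounts to training a fairly exploratory agent in simulation and then copying its learned values into a deployed agent that almost always sticks to what it has learned. The numbers below are arbitrary, and the agent class is the hypothetical one from the earlier sketch.

    # Hypothetical continuation of the earlier sketch: train in simulation with
    # plenty of exploration, then install the pre-trained values in a deployed
    # agent that explores very little.
    trained = InterruptibleQLearner(n_states=25, n_actions=4, epsilon=0.3)
    # ... a simulated training loop over the shuttle-bus environment would run here ...

    deployed = InterruptibleQLearner(n_states=25, n_actions=4, epsilon=0.01)
    deployed.q = [row[:] for row in trained.q]  # install the pre-trained Q-values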
