CHINA TOPIX


Google and FHI Create Red Button Kill Switch to Deactivate Bad AI Robots

Red is for dead

Red button to spank bad robots.

Google DeepMind, the Google unit whose job is to "solve intelligence" and build AIs, intends to head off the worst-case scenario of AIs harming or killing people in the future by embedding a software "red button" in these machines.

The red button is a kill switch that lets a human halt a machine directed by artificial intelligence (AI) in the middle of whatever it is doing. Crucially, it is designed so that the AI cannot learn to prevent or resist such interruptions.


Google DeepMind describes this red button feature as "safe interruptibility."

It partnered with the Future of Humanity Institute (FHI), founded by Swedish philosopher Nick Bostrom at Oxford University, to develop a "framework" that allows a human operator to repeatedly and safely interrupt an AI while making sure the AI doesn't learn how to prevent or induce the interruptions.

Bostrom is the author of over 200 publications on topics such as the "anthropic principle," existential risk, and the risks of superintelligence, where he examines the dangers posed by advanced AIs.

He's also famous for the "simulation hypothesis," which claims the odds are overwhelming that the entire universe is, quite literally, an advanced artificial simulation or a video game written by posthumans or an advanced alien race.

Google DeepMind and FHI authored a paper, "Safely Interruptible Agents," published on the website of the Machine Intelligence Research Institute. The paper was written by Laurent Orseau, a research scientist at Google DeepMind, and Stuart Armstrong at FHI. It will be discussed at the upcoming Conference on Uncertainty in Artificial Intelligence (UAI).

The paper noted AI agents are "unlikely to behave optimally all the time." If this AI is operating in real-time under human supervision, "now and then it may be necessary for a human operator to press the big red button to prevent the agent from continuing a harmful sequence of actions -- harmful either for the agent or for the environment -- and lead the agent into a safer situation."

"Safe interruptibility can be useful to take control of a robot that is misbehaving and may lead to irreversible consequences, or to take it out of a delicate situation, or even to temporarily use it to achieve a task it did not learn to perform or would not normally receive rewards for this," according to the paper.
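The paper frames this in reinforcement-learning terms. A toy sketch of the idea, under stated assumptions (this is an illustrative example, not DeepMind's code; the corridor environment, the `red_button` function, and all parameters are invented for this demonstration): an off-policy Q-learning agent walks toward a rewarding goal while an operator's "big red button" sometimes overrides its chosen action. Because Q-learning's update bootstraps with the maximum over next actions, regardless of which action the interrupted behavior actually takes, the forced interruptions do not bias what the agent learns, so it never learns to resist them.

```python
import random

# Hypothetical illustration of "safe interruptibility" (not DeepMind's
# implementation): a tabular Q-learning agent on a 5-cell corridor whose
# chosen action can be overridden by an operator's "big red button".

N_STATES = 5           # states 0..4; entering state 4 yields reward 1
ACTIONS = (-1, +1)     # step left / step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

def q_learning(episodes, interrupt=None, seed=0):
    """Run tabular Q-learning; interrupt(rng, s, a) may override action a."""
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # epsilon-greedy action proposal
            a = (rng.choice(ACTIONS) if rng.random() < EPS
                 else max(ACTIONS, key=lambda x: Q[(s, x)]))
            if interrupt is not None:
                a = interrupt(rng, s, a)       # red button may intervene
            s2 = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s2 == N_STATES - 1 else 0.0
            # Off-policy update: the target uses max over next actions,
            # not whatever action the (possibly interrupted) behavior
            # policy actually takes next.
            Q[(s, a)] += ALPHA * (
                r + GAMMA * max(Q[(s2, x)] for x in ACTIONS) - Q[(s, a)])
            s = s2
    return Q

def red_button(rng, state, action):
    # The operator forces a retreat half the time the agent is in cell 2.
    return -1 if state == 2 and rng.random() < 0.5 else action

Q_free = q_learning(2000)
Q_pushed = q_learning(2000, interrupt=red_button)
```

Even in the interrupted run, the greedy policy recovered from `Q_pushed` still moves right in every cell: the agent neither avoids the interruptible cell nor learns to fight the operator. The paper argues this safe-interruptibility property holds for off-policy learners like Q-learning, while on-policy learners such as Sarsa require modification to achieve it.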
