Apple Co-Founder Issues Apocalyptic A.I. Warning: ‘The Future is Scary and Very Bad For People’

In an interview with the Australian Financial Review, Apple co-founder Steve Wozniak joined Bill Gates, Stephen Hawking, Clive Sinclair and Elon Musk in issuing his own apocalyptic warning about machines superseding the human race.

“Like people including Stephen Hawking and Elon Musk have predicted, I agree that the future is scary and very bad for people,” Wozniak said. “If we build these devices to take care of everything for us, eventually they’ll think faster than us and they’ll get rid of the slow humans to run companies more efficiently.”

Wozniak continued: “Will we be the gods? Will we be the family pets? Or will we be ants that get stepped on? I don’t know about that … But when I got that thinking in my head about if I’m going to be treated in the future as a pet to these smart machines … well I’m going to treat my own pet dog really nice.”

However, Wozniak suggests that physics could stop A.I. In computer science, there is a guiding principle called Moore's Law, which observes that the number of transistors on a chip, and with it processing power, doubles roughly every two years. That has held more or less true since the 1970s, but it may soon change: silicon transistors are approaching the size of a single atom, the Financial Review says. To go any smaller, scientists would need to manipulate subatomic particles, a field commonly referred to as quantum computing, which has not yet been cracked.
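The doubling arithmetic behind that claim is easy to check. The short Python sketch below is not from the article; the 1971 Intel 4004 baseline and the 2015 endpoint are illustrative assumptions used only to show what one doubling every two years implies.

```python
# Rough, illustrative check of the Moore's Law doubling claim above (not from
# the article): project a 1971-era chip forward at one doubling every two years
# and see where it lands by the mid-2010s. The figures are assumptions chosen
# for the sake of the arithmetic.

baseline_year = 1971
baseline_transistors = 2_300   # Intel 4004, an early commercial microprocessor
target_year = 2015             # roughly when this article appeared

doublings = (target_year - baseline_year) / 2   # one doubling per two years
projected = baseline_transistors * 2 ** doublings

print(f"{doublings:.0f} doublings -> ~{projected:,.0f} transistors")
# ~22 doublings -> roughly 9-10 billion transistors, in the same ballpark as
# the largest chips shipping around 2015.
```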

Wozniak had previously dismissed the predictions of futurists such as Ray Kurzweil, who argue that super-intelligent machines will outpace human intelligence within a few decades, so his latest comments mark something of a turnaround. Wozniak told the Financial Review that he changed his mind once he realized the prediction was coming true.

“Computers are going to take over from humans, no question,” Wozniak said.

“I hope it does come, and we should pursue it because it is about scientific exploring,” he added. “But in the end we just may have created the species that is above us.”

And Wozniak is certainly not alone in his fears. In January, during a Reddit AMA, Gates wrote: “I am in the camp that is concerned about super intelligence.” He added: “First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”

Stephen Hawking said last month that artificial intelligence “could spell the end of the human race.”

British inventor Clive Sinclair has also said he thinks artificial intelligence will doom humankind. “Once you start to make machines that are rivaling and surpassing humans with intelligence, it’s going to be very difficult for us to survive,” he told the BBC. “It’s just an inevitability.”

Elon Musk was famously among the first to express deep concern over artificial intelligence. Speaking at the MIT Aeronautics and Astronautics Department’s Centennial Symposium in October, the Tesla founder said: “With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like, yeah, he’s sure he can control the demon. Didn’t work out.”
