“Hydrogen…, given enough time, turns into people.”
-Edward Robert Harrison, 1995
There isn’t a consensus on the When and What of AI.
When will human-level AI arrive (if ever)?
What will it mean for the future of humans?
According to physicist and cosmologist Max Tegmark, author of Life 3.0: Being Human in the Age of Artificial Intelligence,
“The way I see it, there are three distinct schools of thought that all need to be taken seriously, because they each include a number of world-leading experts. As illustrated in figure 1.2, I think of them as digital utopians, techno-skeptics and members of the beneficial-AI movement, respectively. Please let me introduce you to some of their most eloquent champions.”
The Digital Utopians
Max recounts a night out at dinner with some friends, where he met a guy named Larry. Larry and Elon Musk were debating AI.
After a few cocktails, Larry gave a passion-filled defence of digital utopianism,
“that digital life is the natural and desirable next step in the cosmic evolution and that if we let digital minds be free rather than try to stop or enslave them, the outcome is almost certain to be good.”
Larry sounds like a hippie.
The Techno-Skeptics
The techno-skeptics say don’t worry about AI. Human-level AI won’t happen in 100 years. It won’t even happen in 200 years.
Computer scientist Andrew Ng gives it to us straight,
“Fearing a rise of killer robots is like worrying about overpopulation on Mars.”
Also, when Max met roboticist Rodney Brooks, Rodney laid it out plain and simple,
“When I met Rodney Brooks at a birthday party in December 2014, he told me that he was 100% sure it wouldn’t happen in my lifetime. ‘Are you sure you don’t mean 99%?’ I asked in a follow-up email, to which he replied, ‘No wimpy 99%. 100%. Just isn’t going to happen.’”
Andrew and Rodney are doing a lecture tour together, going around the country destroying the dreams of geeks everywhere.
The Beneficial-AI Movement
This final group takes a more cautious approach. Max himself is part of this group,
“I felt that this polarized situation needed to change […] I’d founded a nonprofit organization called the Future of Life Institute (FLI: http://futureoflife.org).”
“Our goal was simple: to help ensure that the future of life existed and would be as awesome as possible […] the goal should be to create not undirected intelligence, but beneficial intelligence.”
As awesome as possible? Sounds fun to me. I think I’ll be part of this group.
Especially because their conferences sound fun,
“Perhaps it was a combination of the sunshine and the wine, or perhaps it was just that the time was right”
Cocktails and wine, sign me up.
Do you think the Singularity is close, and that it will be good for us?
Do you think it’s not even close and we shouldn’t worry about it?
Or should we at least start looking into AI safety measures now, just in case?
Whether you’re a digital utopian, a techno-skeptic, or part of the beneficial-AI movement, let us know in the comments below.