If you hadn't heard, Elon Musk is worried about the machines. Though that may seem a quixotic stance for the head of multiple tech companies to take, it seems that his proximity to the bleeding edge of technological development has given him the heebie-jeebies when it comes to artificial intelligence. He's shared his fears of AI running amok before, likening it to "summoning the demon," and Musk doubled down on his stance at a meeting of the National Governors Association this weekend, telling state leaders that AI poses an existential threat to humanity. Amid a discussion of driverless vehicles and space exploration, Musk called for greater government regulation surrounding artificial intelligence research and implementation, stating:
“Until people see robots going down the street killing people, they don’t know how to react because it seems so ethereal. AI is a rare case where I think we need to be proactive in regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it’s too late,” Musk said, according to MIT Technology Review.
It's far from delusional to voice such concerns, given that AI could one day reach the point where it becomes capable of improving upon itself, sparking a feedback loop of progress that takes it far beyond human capabilities. When we'll actually reach that point is anyone's guess, and we're not at all close at the moment, as today's footage of a security robot wandering blindly into a fountain makes clear. While computers may be snapping up video game records and mastering poker, they cannot approximate anything like general intelligence — the broad reasoning skills that allow us to accomplish many variable tasks. This is why AI that excels at a single task, like playing chess, fails miserably when asked to do something as simple as describing a chair. To get some perspective on Musk's comments, Discover reached out to computer scientists and futurists working on the very kind of AI that the tech CEO warns about.