If you hadn't heard, Elon Musk is worried about the machines. Though that may seem a quixotic stance for the head of multiple tech companies to take, it seems that his proximity to the bleeding edge of technological development has given him the heebie-jeebies when it comes to artificial intelligence. He's shared his fears of AI running amok before, likening it to "summoning the demon," and Musk doubled down on his stance at a meeting of the National Governors Association this weekend, telling state leaders that AI poses an existential threat to humanity. Amid a discussion of driverless vehicles and space exploration, Musk called for greater government regulations surrounding artificial intelligence research and implementation, stating:
“Until people see robots going down the street killing people, they don’t know how to react because it seems so ethereal. AI is a rare case where I think we need to be proactive in regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it’s too late,” he said, according to the MIT Technology Review.
It's far from delusional to voice such concerns, given that AI could one day reach the point where it becomes capable of improving upon itself, sparking a feedback loop of progress that takes it far beyond human capabilities. When we'll actually reach that point is anyone's guess, and we're not at all close at the moment, as today's footage of a security robot wandering blindly into a fountain makes clear. While computers may be snapping up video game records and mastering poker, they cannot approximate anything like general intelligence — the broad reasoning skills that allow us to accomplish many variable tasks. This is why AI that excels at a single task, like playing chess, fails miserably when asked to do something as simple as describe a chair. To get some perspective on Musk's comments, Discover reached out to computer scientists and futurists working on the very kind of AI that the tech CEO warns about.
Oren Etzioni
University of Washington computer science professor and CEO of the Allen Institute for Artificial Intelligence
Elon Musk’s obsession with AI as an existential threat for humanity is a distraction from the real concern about AI’s impact on jobs and weapons systems. What the public needs is good information about the actual consequences of AI, both positive and negative. We have to distinguish between science and science fiction. In fictional accounts, AI is often cast as the “bad guy,” scheming to take over the world, but in reality AI is a tool, a technology, and one that has the potential to save many lives by improving transportation, medicine, and more. Instead of creating a new regulatory body, we need to better educate and inform people on what AI can and cannot do. We need research on how to build “AI guardians”: AI systems that monitor and analyze other AI systems to help ensure they obey our laws and values. The world needs AI for its benefits; AI needs regulation like the Pacific Ocean needs global warming.
Toby Walsh
Professor of artificial intelligence at UNSW Sydney and author of "It's Alive!: Artificial Intelligence from the Logic Piano to Killer Robots"
Elon Musk's remarks are alarmist. I recently surveyed 300 leading AI researchers, and the majority of them think it will take at least 50 more years to get to machines as smart as humans. So this is not a problem that needs immediate attention.
And I'm not too worried about what happens when we get to super-intelligence, as there's a healthy research community working on ensuring that these machines won't pose an existential threat to humanity. I expect they'll have worked out precisely what safeguards are needed by then.
But Elon is right about one thing: we do need government to start regulating AI now. However, it is the stupid AI we have today that we need to start regulating. The biased algorithms. The arms race to develop "killer robots," where stupid AI will be given the ability to make life-or-death decisions. The threat to our privacy as the tech companies get hold of all our personal and medical data. And the distortion of political debate that the internet is enabling.
The tech companies realize they have a problem, and they have made some efforts to avoid government regulation by beginning to self-regulate. But there are serious questions about whether they can be left to do this themselves. We are witnessing an AI race between the big tech giants, which are investing billions of dollars in this winner-takes-all contest. Many other industries have seen government step in to prevent monopolies from behaving poorly. I've said this in a talk recently, but I'll repeat it: if some of the giants like Google and Facebook aren't broken up in twenty years' time, I'll be immensely worried for the future of our society.
Fei-Fei Li
Director of the Stanford Artificial Intelligence Lab
There are no independent machine values; machine values are human values. If humanity is truly worried about the future impact of a technology, be it AI or energy or anything else, let's have all walks of life and all voices represented in developing and applying this technology. Every technologist has a role in making benevolent technology that betters our society, whether at Stanford, Google, or Tesla. As an AI educator and technologist, my foremost hope is to see much more inclusion and diversity in both the development of AI and the dissemination of AI voices and opinions.
Raja Chatila
Chair of the IEEE Global AI Ethics Initiative
Artificial intelligence is already everywhere. Its ramifications rival those of the Internet, and in fact reinforce them. AI is being embedded in almost every algorithm and system we build now and will build in the future. We have an essential opportunity to prioritize ethical and responsible design for AI today. The greater immediate risk to AI and society, however, is the prioritization of exponential economic growth while ignoring environmental and societal issues.
Whether or not Musk's warnings of existential threats from artificial super-intelligence merit immediate attention, we already risk large-scale negative and unintended consequences, because we are placing exponential growth and shareholder value above metrics of societal flourishing as the indicators of success for these amazing technologies.
To address these issues, every stakeholder creating AI must address transparency, accountability, and traceability in their work. They must ensure safe and trusted access to, and exchange of, user data, as encouraged by the EU's General Data Protection Regulation (GDPR). And they must prioritize human-rights-centric well-being metrics, such as the UN Sustainable Development Goals, as predetermined global measures of success that can provably increase human prosperity.
The IEEE Global AI Ethics Initiative created Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems to pragmatically help any stakeholder creating these technologies proactively deal with the general types of ethical issues Musk's concerns bring up. The group of over 250 global AI and ethics experts was also the inspiration behind the series of IEEE P7000 standards (Model Process for Addressing Ethical Concerns During System Design) currently in progress, designed to create solutions to these issues through a global consensus-building process.
My biggest concern about AI is that we design and proliferate the technology without prioritizing ethical and responsible design, rushing to increase economic growth at a time when we so desperately need to focus on environmental and societal sustainability to avoid the existential risks we've already created without the help of AI. Humanity doesn't need to fear AI, as long as we act now to prioritize its ethical and responsible design.
Martin Ford
Author of "Rise of the Robots: Technology and the Threat of a Jobless Future"
Elon Musk's concerns about AI posing an existential threat to humanity are legitimate and should not be dismissed, but they concern developments that almost certainly lie in the relatively far future, probably at least 30 to 50 years from now, and perhaps much longer.
Calls to immediately regulate or restrict AI development are misplaced for a number of reasons, perhaps most importantly because the U.S. is currently engaged in active competition with other countries, especially China. We cannot afford to fall behind in this critical race.
Additionally, worries about truly advanced AI "taking over" distract us from the much more immediate issues associated with progress in specialized artificial intelligence. These include the possibility of massive economic and social disruption as millions of jobs are eliminated, potential threats to privacy, the deployment of artificial intelligence in cybercrime and cyberwarfare, and the advent of truly autonomous military and security robots. None of these nearer-term developments depends on the advanced super-intelligence that Musk worries about; they are a simple extrapolation of technology that already exists. Our immediate focus should be on addressing these far less speculative risks, which are highly likely to have a dramatic impact within the next two decades.