AI platforms deliver unparalleled efficiency, but do the risks outweigh the benefits?
Evidence of the advance of artificial intelligence (AI) is everywhere these days — popping up in industries as disparate as fast food and law enforcement. While this sudden ubiquity can make it challenging to separate the gimmicks from the game-changers, the technology’s progression marks one of the most exciting eras in human innovation, and it stands to become a transformative force across a number of sectors. In healthcare, for example, AI has already been successful not only in improving and accelerating diagnoses, but also in cutting R&D costs for biopharmaceutical companies and increasing the success rate of drug trials.
But alongside every new and powerful technology comes the ever-present concern that automation, AI-based or otherwise, will cost people their jobs. These fears become even more extreme as visions of ruthless machines and an enslaved human race continue to play out in popular culture — from The Matrix to RoboCop to Westworld.
The truth is, we don’t yet know exactly where AI will lead us — but that uncertainty is not necessarily cause for doomsday prophecies. The benefits of a complementary approach to AI tools — one that strikes a balance between technology and humanity — are simply too significant to overlook. As it currently stands, however, pursuit of this balance hardly represents a consensus position among the high-profile names operating in or at the margins of the burgeoning AI industry.
Clash of the Titans
Speaking at the launch of the Leverhulme Centre for the Future of Intelligence last year, renowned theoretical physicist Stephen Hawking observed, “Success in creating AI would be the biggest event in the history of our civilization. But it could also be the last.” Hawking has long advocated for cautious AI development, warning that in the worst-case scenario, AI “would take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded. A super intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.”
In his address to the National Governors Association’s 2017 Summer Meeting, SpaceX CEO Elon Musk voiced similar concerns about the potentially overwhelming power of a rogue AI. “I have exposure to the most cutting-edge AI,” Musk confessed, “and I think people should be really concerned by it.” Consequently, Musk explained, “AI is a rare case where I think we need to be proactive in regulation rather than be reactive.”
Musk later took to Twitter to reiterate the need for concern around the development of AI, arguing that the technology presents a greater risk to humanity than North Korea. The tweet coincided with the testing of an AI designed by OpenAI (a Musk-backed non-profit) that successfully trounced professional human players of Dota 2 — an online multiplayer battle game.
According to The Guardian, the AI “displayed the ability to predict where human players would deploy forces and improvise on the spot, in a game where sheer speed of operation does not correlate with victory, meaning the AI was simply better, not just faster than the best human players.”
If you're not concerned about AI safety, you should be. Vastly more risk than North Korea. pic.twitter.com/2z0tiid0lc
— Elon Musk (@elonmusk) August 12, 2017
Not everyone agrees with Hawking and Musk, however, and some are growing concerned with what they see as anti-AI alarmism. During a Facebook Live stream in July, Facebook CEO Mark Zuckerberg fielded a question about Musk’s comments and the potential of AI to change the world for the better. In response, he cited enhanced healthcare diagnostics and more innovative avenues for drug discovery as just two areas where AI is already delivering on that potential.
“I am optimistic,” he said in the video. “And I think people who are naysayers and try to drum up these doomsday scenarios — I just, I don’t understand it. It’s really negative and in some ways I actually think it is pretty irresponsible.”
For Zuckerberg, AI holds tremendous potential; he believes that, if wielded responsibly, it can help humanity overcome some of its toughest challenges. “Whenever I hear people saying AI is going to hurt people in the future,” he continued, “I think yeah, you know, technology can generally always be used for good and bad, and you need to be careful about how you build it and you need to be careful about what you build and how it is going to be used. But people who are arguing for slowing down the process of building AI, I just find that really questionable. I have a hard time wrapping my head around that.”
Finding the Right Way Forward
While it may at times become contentious, this ongoing public debate among leading scientists, businesspeople, and government officials is a critical step in delineating the future of AI. In truth, Hawking’s and Musk’s fears may well be legitimate. True artificial general intelligence — or “strong AI” — remains entirely prospective, and there is no way for anyone to predict every eventuality with absolute certainty. It is possible that a fully sentient AI would destroy humanity as we know it — either by rendering us functionally obsolete or, as Hawking implies, by tossing us aside in pursuit of its own goals — but for now, this is only a fear, not fate.
What we do know is that “narrow AI” — AI that, like Facebook’s News Feed, uses machine learning algorithms to accomplish a single, narrowly defined task — has already begun to improve everything from software development and education to finance and insurance. Ensuring that this generally positive and productive relationship with AI continues is first and foremost a matter of, as Zuckerberg said, being careful about what we build and how we use it.
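To make the distinction concrete, here is a minimal sketch of what a “narrow AI” task looks like in code: a model trained to do exactly one thing. The example is purely illustrative (Python, using the scikit-learn library and its bundled iris dataset) and has no connection to Facebook’s actual systems.

```python
# Illustrative "narrow AI": a classifier trained for one tightly scoped
# task (identifying iris species from flower measurements) and nothing else.
# Requires scikit-learn; dataset and model choice are for illustration only.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")

# The model can be excellent at its one task, but it has no competence
# outside it: it cannot rank news stories, play Dota 2, or pursue goals
# of its own. That gap is what separates today's narrow AI from the
# hypothetical general intelligence at the center of this debate.
```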
In this regard, the emergence of organizations like the Leverhulme Centre and the Partnership on AI to Benefit People and Society is an encouraging development. The Partnership’s mission is “to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society.” As long as companies are willing to proceed according to these principles of openness and collaboration, we have little to fear.
The debate around the dangers — or lack thereof — inherent to AI is an important one to have, and we should continually revisit these questions as AI technology matures. At this point, it seems most appropriate to approach AI with cautious optimism. Not only does AI-based technology offer efficiencies that were long assumed impossible, but it also has the potential to transform — and improve — the way we diagnose and treat disease and the way we develop and introduce new drugs and therapies, and to greatly reduce the inefficiencies that have long plagued the American healthcare system. Still, when the unknown is at play — as it undoubtedly is in the still-young AI industry — it never hurts to carefully measure one’s steps before taking them.