By Nathan Safir
EDITOR’S NOTE: This article was originally published in the Fall 2018 Magazine.
Shortly before his death, Stephen Hawking said, “Artificial intelligence will be either the best thing or the worst thing to happen to humanity.” Hawking was not alone in his assessment of the end-all-be-all implications of artificial intelligence (AI); almost every notable thinker in technology, including Bill Gates and Elon Musk, has also addressed the staggering stakes of AI development. It is a frightening prospect to consider that AI could bring about the end of the human race within many of our lifetimes.
Before further discussion, it is important to draw a distinction between artificial narrow intelligence (ANI) and artificial general intelligence (AGI). Simply put, ANI refers to the use of AI techniques to help computers perform narrow, well-defined tasks. AGI describes the level of AI a machine must develop to perform any task a human could. Many computer scientists theorize that once machines reach the intelligence level of a human, their capabilities will grow exponentially, since they could improve themselves in the same way a human programmer could, only better. It is this recursive ability of machines to self-improve that is truly terrifying. People understand what a human with an IQ of 100, or even a genius IQ of 150, can accomplish. But how can one even begin to conceptualize an entity with an IQ of five hundred, five thousand, or even one million? Society must discuss the implications of super-intelligent machines before technology inevitably shows us.
In spite of legitimate concerns over the potential consequences of recklessly developed AGI, the United States government has done remarkably little to regulate artificial intelligence. Congress has attempted to address this problem by proposing the Fundamentally Understanding the Usability and Realistic Evolution (FUTURE) of Artificial Intelligence Act of 2017, the first bill in U.S. history to attempt to legislate on artificial intelligence. While well-earned skepticism exists regarding whether this bill will ever lead to actual policy, the FUTURE Act of 2017 has many promising aspects. First, the FUTURE Act intentionally defines AI in very broad terms, which would maximize the scope of the law. The FUTURE Act would also form the Federal Advisory Committee on the Development and Implementation of Artificial Intelligence to help inform the federal government’s response to the AI sector.
However, the FUTURE Act has clear limitations, too. For one thing, the Federal Advisory Committee it proposes is limited to informing and advising legislators on issues related to AI. In addition, the bill remains in its first stage and must still pass both houses of Congress and receive presidential approval before becoming law. So unfortunately, the only real progress that has occurred over the last decade or two is a mere chance at a chance of actual regulation. And outside the realm of cold, hard legislation, the rhetorical presence of AGI within politics remains remarkably low. In spite of the most influential innovators of our time espousing the need for checks and balances on AGI development, have you ever seen AGI, or even AI in any form, among the talking points of a national politician’s campaign? Effective AGI legislation remains nothing more than a pipe dream, and the federal government’s apathy over this issue becomes particularly concerning when one looks at the agents currently responsible for AGI’s development.
The entities currently responsible for the bulk of AI development are large tech companies like Facebook, Amazon, and Twitter. Tech giants continue to acquire smaller AI firms to affirm their spot at the head of the pack. But these same companies have also found themselves in the news in the past few months for all of the wrong reasons. Facebook has faced questioning in several congressional hearings over its mismanagement of user data and the exploitation of its platform by Russian operatives seeking to influence the 2016 election. Twitter has recently taken heat for declining to ban many alt-right and neo-Nazi figures from its platform. And Amazon has recently been criticized for the low wages it pays its workers, juxtaposed with the staggering wealth of its CEO, Jeff Bezos.
In multiple ways, these three companies spearheading the AGI revolution have all proven they are more interested in generating profit than in serving the common good. But this attitude is not unique to a few specific companies. Even setting aside the actions of any one technology company, it is intuitive that the hyper-competitive environment of Silicon Valley is not conducive to developing technology safely and cautiously. Independent of political ideology, the obvious ethical problems with developing AGI in an unregulated market require, at the very least, an increased commitment by the government to regulate the ways AI is used before it outsmarts us.
Citations:
https://www.congress.gov/bill/115th-congress/house-bill/4625/text
https://www.cbsnews.com/news/facebook-hearings-6-surprises-from-mark-zuckerberg/
https://www.wsj.com/articles/inside-twitters-long-slow-struggle-to-police-bad-actors-1535972402