The Societal Impact of AI

by Michael Szul, Bill Ahern, Avantika Mehra


Guest contributor Avantika Mehra collaborated with us on this piece.

Many of the "ethics of AI" discussions exist in a vacuum. What I mean is that we often take something that AI does (or that we think AI will do in the future) and generalize it, attempting to place it in an almost utilitarian framework. We jump straight to the academic and analyze ethical issues as Platonic ideals. We treat the lives affected as parameters in the equation instead of as an extra dimension of the problem. When we ask what AI means for the ethics of humankind, we look at the impact on the person, but what about the impact on society as a whole, in the context of our various cultural bubbles?

"Machine Learning has become alchemy." —Ali Rahimi

Ali Rahimi has argued that machine learning today is much like alchemy: our deep neural networks are poorly understood, and the theory behind them is misleading or underdeveloped. Yet while the media consistently emphasizes the fears and uses of artificial general intelligence (AGI), something that is years, if not decades, away, our deep learning models are already finding their way into healthcare, criminal justice, and even credit scoring.

A recent article in The Ethical Machine argues that these AGI-related conundrums, while intriguing conversation-starters, are mere distractions from the more pertinent, immediate concerns at hand—namely, those surrounding the development and implementation of artificial narrow intelligence (ANI).

ANI refers to AI trained on specific datasets to perform a specific set of tasks, and its use cases are endless: social media feeds (e.g., Facebook, Instagram), suggested actions in smartphone applications (e.g., Skype, LinkedIn), music streaming services (e.g., Spotify), online shopping (e.g., Amazon)… virtually every digital service you interact with has some form of ANI integrated. So instead of hypothesizing about and disputing the problems humanity will face at some future point once AGI is integrated into society, perhaps we should question the ethical implications of the existing technologies that power our daily lives in an integrated, diverse society.

When technology is applied to society, we inevitably reach a compromise between social good and individual privacy, but the obsession with hypotheticals about a Jarvis-enabled society drives the conversation away from the personal transgressions of machine learning today and toward dreams of the future. Instead of worrying about HAL, what about the small design decisions that create rolling, growing waves in the fabric of society? What's missing in an algorithm designed by five white dudes from Connecticut that could be life-altering for Muslim immigrant women in Chicago? And what happens when we can't trace those AI decisions back to the source?

We are designing systems for scale based on data accumulated mostly within a single societal context, potentially amplifying the differences between cultures. AI also surfaces the individualist-versus-collectivist divide in culture, law, and society at large, and with it the need to discuss how AI should prioritize when the public good and the good of the individual come into conflict.
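
To make that first concern concrete, here is a minimal, hypothetical sketch (our own toy example, not code from any system mentioned above): a classifier trained on synthetic data drawn 95% from one "group" serves that group well and an underrepresented group poorly, even though nothing in the code is explicitly biased.

```python
# Toy illustration of how training data drawn mostly from one population
# can skew a model against another. Assumes NumPy and scikit-learn are
# installed; the "groups" and numbers here are entirely synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Generate a synthetic group whose feature/label relationship
    differs from the majority's by `shift`."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# 95% of the training data comes from group A, 5% from group B.
Xa, ya = make_group(950, shift=0.0)
Xb, yb = make_group(50, shift=2.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

# Evaluate on balanced held-out samples from each group: the model
# scores well on group A and near chance on group B.
for name, shift in [("group A", 0.0), ("group B", 2.0)]:
    Xt, yt = make_group(1000, shift)
    print(name, "accuracy:", model.score(Xt, yt))
```

The numbers themselves don't matter; the point is that scale plus a skewed sample quietly bakes one context's assumptions into everyone's predictions.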

But while those in research, technology, and philosophy have questions that need answers, the news media's hype cycle (combined with wariness over the limits of backpropagation) could breed disillusionment and, ultimately, another AI winter. It was precisely this kind of over-hype by companies and the media during the MIT and Lisp machine era of the 1970s that led to unreasonable public expectations of AI. When those expectations were not met, people decided AI had "failed," funding dried up, and AI innovation, interest, and research went into steep decline. With the challenges facing today's society, namely climate change and agricultural needs, we can ill afford another misstep.