Artificial intelligence (AI) has long been confined to the imagination of science fiction or to the nightmares of popular culture. The concept of a fully formed AI, often incorrectly conflated with robotics, existed as a helpful science officer on a Starfleet vessel, a murderous computer on board a long range mission to Jupiter, a plucky housemaid, or the harbinger of nuclear annihilation.
However, over the past few decades, AI has become less fictional and increasingly practical. From the AI Watson winning on the game show Jeopardy!, to the digital assistant on your smartphone, to pioneering work being done on self-driving cars, artificial intelligence is breaking out from the laboratory, coming into increasing contact (and use) with its human masters.
Last Thursday at Chicago Ideas Week, the implications of this potentially tectonic shift were discussed at a panel appropriately titled: The Implications of Artificial Intelligence. Featuring Ayanna Howard (Professor, School of Electrical and Computer Engineering, Georgia Institute of Technology), Ben Johnson (host of NPR’s “Marketplace Tech” and “Codebreaker”), Alexander Reben (artist & roboticist), and Adam Waytz (Professor, Kellogg School of Management at Northwestern University), this broad spectrum of voices attempted to reckon with what the emergence of increasingly sophisticated AI means for us and, by implication, for the intelligences themselves.
This spread of differing voices drove home just how wide-ranging, and potentially long-unfolding, the reach of emergent AI will be. The variety of perspectives lent a greater nuance than the usual binary of technophobes versus tech evangelists. Reben touched on questions of art, its reproduction, and the role of artificial intelligence in many of our devices, while Johnson and Waytz explored the most imminent issues of what will happen to our jobs when smart machines take over (How will our economy work? Who benefits? Who loses out?), with Howard contributing an inside perspective as a professor working with these new technologies.
Overall, the panel was an engaging thought experiment in what might happen to us when AI gains increasing sentience. And like any good piece of sci-fi (which, though set far, far away, is more about the present than the future), it also explored deeper moral and philosophical issues, though their full range would occupy several dozen textbooks, if not more.
What rights might an AI have in the future? Is it right to create true artificial intelligence, given it might be a distinct entity? And what exactly is intelligence? Should AIs create other AIs, much like parents conceiving a child? Do we even have a right to birth an artificial entity of our design? These headier questions, beyond the here-and-now, were the primary focus. The panel could have devoted its entire hour-plus session to any one of them.
Though each panelist was well informed and spoke fluently from their respective specialties, ultimately the panel struggled with just how much we don’t know about what will happen. This was a panel about predictions and implications, and those are as consistent and knowable as Chicago’s weather. Each step forward in the development of these technologies creates new, confounding realities.
The key takeaway was that AI, as it exists now and continues to be developed, is a tool. Aside from a few rare examples in the wider animal kingdom, humans make the tools on this planet. All manner of biases and assumptions (conscious and unconscious) are baked into what’s being developed, by whom, and for what. Given how progress has birthed both the wonders of penicillin and the horrors of the atomic bomb, it’s important to think forward and ask the more soul-searching sort of questions at the heart of this panel.
Who are we? What will the development of AI say about us? Looking ahead at the growing beauty and capabilities of artificial intelligence requires that we take a far harder look at ourselves.