
[Image: a car with wave illustrations around it]

We’re asking the wrong questions about AI

Tags
  • Technology & Science
  • Arts and Humanities
  • Kenneth P. Dietrich School of Arts and Sciences

It’s hard to get a handle on what’s happening in artificial intelligence right now. You might read that a tech company created a chatbot so smart it’s indistinguishable from a human, or that an AI “ethics advisor” can help you make decisions. Some prognosticators will even tell you that we’re headed for an AI uprising.

Claims like these lack something crucial, according to Colin Allen, a distinguished professor of history and philosophy of science in the Kenneth P. Dietrich School of Arts and Sciences.

“I think there’s a lot of credulousness and not enough skepticism. History is being repeated by those who don’t know it,” he said.

Allen has spent over a decade working on questions around AI ethics and leads a project that aims to embed the idea of “wisdom” into how AI is used. It’s a new framework for understanding artificial intelligence programs, which have a 60-year record of fooling users with their supposed humanity. Even today, Allen said, AI retains some of the same limitations it displayed in the 1960s.

“Wisdom is the interaction between what you know and what you don’t know, in the sense of being aware of the limits of that knowledge,” he said. “AI as we know it has no idea what it knows, or why it’s spewing out what it’s spewing out. It has no capacity to detect inconsistencies.”

Instead, that higher-level thinking is the job of those who create AI – and Allen has over time shifted his focus toward broader shortcomings in the way that people make and use the technology. For instance, even an AI product that gets a passing grade on bias and other important safety criteria can cause harm, said Brett Karlan, who until September was a postdoctoral researcher in Allen’s lab and is now at Stanford University’s McCoy Family Center for Ethics and Society.

“If you produce that technology, what if it gets used by a reactionary government to further control its people?” Karlan said. “When you ignore the broader social and political systems that involve both humans and machines, you can miss the ethical forest for the trees.”

Funding allowed Allen and his team to build their ideas about wisdom into a full-fledged initiative. And in June, the pair published a paper in the Journal of Experimental and Theoretical Artificial Intelligence laying out the case for prioritizing wisdom in the AI pipeline – from a program’s conception to its design and even how the end product is advertised.

That last step is a particular concern for the duo. As part of the project, Karlan has read through the material on technology companies’ websites, seeing how they advertise their services to other companies that might use the technology.

“The marketing material essentially becomes a handbook for how to use the materials that these large technology companies are putting out,” he said. “And that’s a real problem when it’s trying to sell you on the idea that the technologies themselves are safe and ethical.”

Money machine

These marketing materials are emblematic, Allen said, of a major barrier to a wise AI industry: Ethical marketing isn’t necessarily lucrative. A prime example is self-driving cars.

“Tesla’s been doing this dance of playing up the capabilities of the car while trying to convince drivers that they shouldn’t really take their hands off the wheel,” he explained. “They want people to believe this technology is safe – and the commercial imperative for them, of course, is to keep pushing how safe it is – but somehow they’ve got to convince drivers that it’s not that safe.”

For an upcoming paper, the duo has been developing ways to tackle this problem. One method might be to release an AI product in a limited pre-launch designed to test its limitations. Product teams could hire psychologists to figure out how users might use and misuse artificial intelligence. For self-driving cars, Karlan even envisions a flight-simulator-like program where drivers can experience the ways a product might fail.
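
As a purely illustrative sketch of that flight-simulator idea (not a description of anything Allen or Karlan has built), a toy program might replay hypothetical failure scenarios so a driver can rehearse taking back control. Every scenario name and failure message below is invented.

```python
# A toy sketch of a flight-simulator-like drill for driver-assist failures.
# All scenarios and messages are invented purely for illustration.
import random

# Hypothetical conditions paired with the failure a driver would experience.
SCENARIOS = [
    ("heavy rain", "lane markings lost; steering assist disengages"),
    ("low sun glare", "camera washout; obstacle detection is delayed"),
    ("faded lane paint", "vehicle begins drifting toward the lane edge"),
]

def run_drill(n_events: int = 3, seed: int = 42) -> None:
    """Replay a few randomly chosen failure scenarios for the trainee driver."""
    rng = random.Random(seed)
    for i in range(1, n_events + 1):
        condition, failure = rng.choice(SCENARIOS)
        print(f"Event {i} ({condition}): {failure}")
        print("  -> Take manual control now.")

if __name__ == "__main__":
    run_drill()
```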

These are expensive and labor-intensive solutions that companies may never arrive at if left to their own devices – instead, according to Karlan, they may require new regulations or self-policing by industry groups. “The way these technologies can be safe and ethical is in the context of a broad system with a lot of checks and balances,” he said.

And lurking behind decisions about how to make and use AI is a more basic question: Is AI even the right fit for the problem at hand? The complex programs that underlie AI have some well-known pitfalls, including that it’s often difficult or impossible to know why a program arrived at a particular conclusion. In high-stakes decisions where bias could creep in – for instance, when choosing who gets a loan and who doesn’t – companies could instead look to simpler and more transparent statistical tools.

“It’s really not obvious that we’re ever going to really understand what is going on in these very large neural networks,” Karlan said. “But there are a lot of solutions that don’t look like that and where you can, in fact, know what is exactly going on.”
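
To make that contrast concrete, here is a minimal sketch (not drawn from the researchers’ paper) of one such transparent tool: a logistic regression whose learned coefficients can be read and audited directly. The loan features and data are synthetic, invented purely for illustration.

```python
# A minimal sketch of a "simpler and more transparent statistical tool":
# a logistic regression whose weights can be inspected directly, unlike
# the opaque internals of a large neural network. All data is synthetic
# and the feature names are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)

# Hypothetical applicant features: income (in $10k units) and debt-to-income ratio.
X = rng.normal(loc=[5.0, 0.3], scale=[2.0, 0.1], size=(200, 2))
# A noisy toy approval rule, just to give the model something to fit.
y = (X[:, 0] - 10 * X[:, 1] + rng.normal(0.0, 1.0, size=200) > 1.5).astype(int)

model = LogisticRegression().fit(X, y)

# Transparency in action: each coefficient states how one feature shifts
# the log-odds of approval, so a reviewer can audit the rule directly.
for name, coef in zip(["income", "debt_ratio"], model.coef_[0]):
    print(f"{name:>10}: {coef:+.2f} change in log-odds per unit")
print(f"{'intercept':>10}: {model.intercept_[0]:+.2f}")
```

Whether such a simple model is adequate depends on the problem, and that is exactly the question Allen and Karlan urge AI builders to ask first.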

In the AI game, in other words, sometimes the wisest move is not to play.

Photo via Getty Images