Thursday, July 12, 2018

The trouble with scientific faith, in this case, in AI

This post was originally posted to the Global Open Access List (GOAL) on July 12, 2018 with the following title:  Why translating all scholarly knowledge for non-specialists using AI is complicated.
To view the full conversation, go to the GOAL archives for July 2018. 
On July 10, Jason Priem wrote about the AI-powered systems "that help explain and contextualize articles, providing concept maps, automated plain-language translations"... that are part of his project's plan to develop a scholarly search engine aimed at a nonspecialist audience. The full post is available in the GOAL archives.

We share the goal of making all of the world's knowledge available to everyone without restriction, and I agree that reducing the conceptual barrier for the reader is a laudable goal. However, I think it is important not to underestimate the size of this challenge and the potential for serious problems to arise. Two factors to consider: the current state of AI, and the conceptual challenges of assessing the validity of automated plain-language translations of scholarly works.

Current state of AI - a few recent examples:

Vincent, J. (2016). Twitter taught Microsoft's AI chatbot to be a racist asshole in less than a day. The Verge.

Wong, J. (2018). Amazon working to fix Alexa after users report bursts of 'creepy' laughter. The Guardian.

Meyer, M. (2018). Google should have thought about Duplex's ethical issues before showing it off. Fortune.

Quote from Meyer: 
As prominent sociologist Zeynep Tufekci put it: “Google Assistant making calls pretending to be human not only without disclosing that it’s a bot, but adding ‘ummm’ and ‘aaah’ to deceive the human on the other end with the room cheering it… horrifying. Silicon Valley is ethically lost, rudderless and has not learned a thing.”

These early instances of AI applications involve the automation of relatively simple, repetitive tasks. According to Amazon, "Echo and other Alexa devices let you instantly connect to Alexa to play music, control your smart home, get information, news, weather, and more using just your voice". This is voice-to-text software that lets users speak to their computers instead of typing. Google's Duplex demonstration is a robot dialing a restaurant to make a dinner reservation.

Translating scholarly knowledge into plain language so that everyone can understand it is far more complicated, with the degree of complexity depending on the area of research. Some research in education or public policy might be relatively easy to translate. In other areas, articles are written for an expert audience that is assumed to have spent decades acquiring basic knowledge of a discipline. It is not clear to me that it is even possible to explain advanced concepts to a non-specialist audience without first developing a conceptual progression.

Assessing the accuracy and appropriateness of a plain-language translation of a scholarly work intended for a non-specialist audience requires expert understanding of the work and a thoughtful understanding of the misunderstandings that could arise. For example, I have never studied physics. If I looked at an automated plain-language translation of a physics text, I would have no means of assessing whether the translation was accurate. I do understand enough medical terminology and enough about scientific and medical research methods to read medical articles, and would have some idea whether a plain-language translation was accurate. However, I have never worked as a health care practitioner or health translation researcher, so I would not be qualified to assess whether the translation could be misread by patients, or by some patients.

In summary, Jason and I share the goal of making all of our scholarly knowledge accessible to everyone, specialists and non-specialists alike. However, in developing tools to accomplish this, it is important to understand the size and nature of the challenge and the potential for serious unforeseen consequences. AI is in its very early stages. Machines are beginning to learn on their own, but what they are learning is not necessarily what we expected or wanted them to learn, and the impact on humans has been described with words like 'creepy', 'horrifying', and 'unethical'. The task of translating complex scholarly knowledge for a non-specialist audience, and of assessing the validity and appropriateness of the translations, is a huge challenge. If this is not understood, and plans are not made to conduct rigorous research on the validity of such translations, the result could be widespread dissemination of incorrect translations.


Heather Morrison
Associate Professor, School of Information Studies, University of Ottawa
Professeur Agrégé, École des Sciences de l'Information, Université d'Ottawa