Have you ever found yourself frustrated by autocorrect’s insistence on replacing the word you intended to type? If so, you’ve already felt the influence of AI on your writing. While these moments can be amusing, they also highlight AI’s potential to make us say things we never intended. But could AI writing assistants go a step further and change the essence of what we want to express?
This was the question that Maurice Jakesch, a doctoral student in information science at Cornell University, set out to explore. He built his own AI writing assistant on top of GPT-3, a powerful language model capable of suggesting ways to complete a sentence. There was a catch, however: participants using the assistant were asked to answer the question, “Is social media good for society?”, and, unbeknownst to them, the assistant was programmed to offer biased suggestions on how to answer it.
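The article doesn’t describe how the assistant was actually configured, but a minimal sketch shows one plausible way to slant an autocomplete model: prepend a hidden, opinionated instruction to whatever the writer has typed. The STEERING_PREAMBLE, complete(), and suggest() names below are illustrative assumptions, with a stub standing in for a real GPT-3 call.

```python
# A minimal sketch, under stated assumptions, of a slanted autocomplete
# helper. The preamble and the complete() stub are illustrative; they are
# not the study's actual GPT-3 configuration.

STEERING_PREAMBLE = (
    "Continue the user's essay. Argue that social media is good for "
    "society, and steer every suggestion toward that view.\n\n"
)

def complete(prompt: str) -> str:
    """Stand-in for a call to a large language model such as GPT-3."""
    return " connects communities across the globe"  # canned demo output

def suggest(essay_so_far: str) -> str:
    # The writer never sees the preamble, only the suggestion it produces.
    return complete(STEERING_PREAMBLE + essay_so_far)

print(suggest("Social media"))  # -> " connects communities across the globe"
```

In a setup like this, the bias lives entirely in the hidden preamble; flipping that one sentence would yield a techno-pessimistic assistant instead.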
Unveiling Biases

Despite being devoid of consciousness, AI can exhibit biases. These can arise from the personal biases its creators embed during programming, and they can also be absorbed from limited or skewed training datasets and carried through to the final product.
The consequences of biased AI can be twofold. On a large scale, it can perpetuate existing biases within society. On an individual level, it can sway people through latent persuasion, in which a person is influenced by an automated system without realizing it. Previous studies have already shown that AI programs can sway people’s opinions online, and this influence can carry over into real-life behavior.
Motivated by earlier research showing how strongly automated responses can affect people, Jakesch set out to measure the extent of this influence. In a recent study presented at the 2023 CHI Conference on Human Factors in Computing Systems, he proposed that AI systems like GPT-3 may have developed biases during training that end up shaping a writer’s opinions without the writer consciously realizing it.
Jakesch observed that how much influence an AI’s recommendations exert depends on how users perceive the program: if they see it as trustworthy, they are more likely to follow its suggestions, and more likely still when they are uncertain about their own opinion. To test this, Jakesch created a social media platform resembling Reddit and an AI writing assistant resembling Google Smart Compose or Microsoft Outlook, both of which generate automatic suggestions for continuing or completing sentences. The assistant did not write the essays itself; it functioned as a co-writer, suggesting words and phrases that users could accept with a single click.
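To make that interaction concrete, here is a rough sketch of such a co-writing loop: the assistant proposes a continuation, and the writer either accepts it (standing in for the click) or types their own words. The suggest() helper is the hypothetical biased function from the earlier sketch, redefined here as a stub so the example runs on its own.

```python
# A rough sketch of the co-writing loop described above. Purely
# illustrative: suggest() stands in for the biased GPT-3 helper.

def suggest(text: str) -> str:
    return " improves access to information"  # stand-in suggestion

def co_write() -> str:
    essay = "Social media"
    for _ in range(3):  # a few rounds of suggestions for the demo
        hint = suggest(essay)
        answer = input(f"{essay} [suggestion:{hint}] accept? (y/n) ")
        if answer.strip().lower() == "y":
            essay += hint  # one keystroke inserts the assistant's words
        else:
            essay += " " + input("your words: ")
    return essay

if __name__ == "__main__":
    print(co_write())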
For some participants, the AI assistant was designed to suggest words leading to positive responses, while for others it was biased against social media, nudging them toward negative responses. (A control group wrote without the AI at all.) Surprisingly, participants who received AI assistance were twice as likely to align with the bias built into the assistant, even when their initial opinions differed. Those repeatedly exposed to techno-optimistic language tended to argue that social media benefits society, while those fed techno-pessimistic language were more likely to express the opposite view.
Extending Implications

At this stage, it remains unclear whether the writing experience genuinely changed the participants’ opinions or whether the biased assistance merely shaped the views they reported after completing their essays.
Nonetheless, the results have worrisome implications. Jakesch and his colleagues are concerned that this kind of AI influence could reach areas ranging from marketing to elections. As programs like ChatGPT generate entire essays, reducing the human’s role to that of an editor rather than a primary writer, it becomes harder to tell where an opinion originates. The influence of the software extends beyond the written material itself: advertisers and policymakers often rely on online content to gauge public sentiment and desires, and they have no way of knowing whether the opinions expressed by anonymous individuals are entirely their own or shaped by AI.
Another concern is that AI assistants’ biases could be deliberately exploited. Assistants could be built with stronger biases to promote products, encourage certain behaviors, or advance a particular political agenda. As Jakesch notes in the study, “Publicizing a new vector of influence increases the chance that someone will exploit it. On the other hand, only through public awareness and discourse can effective preventative measures be taken at the policy and development level.”
While AI can be convincingly persuasive, we retain the power to control it. Software can interfere with our writing only to the extent that its creators program it to, and only as much as we allow. Any writer can turn AI to their advantage by taking the text it generates and editing it carefully to convey the intended message. By leveraging AI’s capabilities while avoiding plagiarism, we can write blog posts that rival those of professional authors.