Text Written by AI: Five Questions and Answers

Students are using generative AI to write papers. It seems scholars are as well. (https://www.thedailybeast.com/how-this-doctor-wrote-dozens-of-science-papers-with-chatgpt)

So, what are we to make of all of this? What is the future of intellectual works when they can be generated by AI?

I wonder, first, how is this different from the other tools we use to extend human cognition outside of our brains? I am one of many people who are much more capable at quantitative tasks when I have access to technology. It helps me with the computations that I don’t have the patience to complete accurately. With the cognition I am not using for computing, I can focus on the results and their interpretation.

When dealing with quantitative problems, we do depend on correct computations; we must both select the algorithm that will give us the answer we need and evaluate the answer to determine if it really is the correct answer. As I understand it, writing with AI assistance is similar: we select the prompt and evaluate the results.

I do see a difference between using technology to compute and using technology to write, however. Computation finds humans following rules: if every human starts with the same inputs and follows the rules accurately, they will all arrive at the same result (I know… let's ignore chaos for the time being). Writing is a human activity in which individuals who start with the same inputs may end with vastly different results. Our experiences influence how we make sense of inputs and prompts when we are writing in ways they do not when we are computing. For this reason, I believe there is a difference.

I wonder, second, how is this different from having conversations with other humans to clarify your thinking? We know humans learn in social settings. Since the earliest days of interactive computing (think Eliza), we have known computers can simulate human interaction.

For the writer, the conversation with AI may not be significantly different from a conversation with a human. I expect there are graduate students working on this question as you read this post. For now, I am not sure the question can be answered. I do believe one can collaborate with AI; we can create something with it that we could not create without it.

I wonder, third, what is the big deal? If it is a good paper, then it is a good paper. If the purpose of academic writing were entertainment, then the fictions introduced by AI would not be problematic. But when we write for academic purposes, the goal is to accurately describe the world (or at least a small part of it). Through writing, we discover how we understand the world. Through editing, we clarify our understanding. Discovering is different from clarifying; it is a big deal if an academic is simply clarifying text a machine generated.

I wonder, fourth, does it save time? If other writers are like me, they find it much more difficult to rewrite a piece than to start from scratch. The work of changing text so that it means what I want it to mean is more difficult and time-consuming than writing exactly what I mean in the first place. For me, then, using AI as a starting point for writing is probably not a time saver.

I also value original insights in writing—both the insights I gain when I write, and the insights gained by authors that make it into their published works. The writing I have seen from AI generally lacks such insights (sure, this may be because of the prompts I write, but I would prefer to think as a human than to decipher the thinking of a machine).

One use of AI-generated text that I have heard colleagues describe is composing grant applications. These tend to require highly formalized writing and to make rather simple arguments (e.g., "for this program, we need support that we do not have"). Ostensibly, this saves time, but we cannot assume it improves efficiency. If those sections of the grant application are so obvious that AI can write them, and if the AI-written text can be deemed acceptable by grant reviewers, then we can safely assume the writing isn't meaningful for the application; and if it isn't meaningful, then why do it at all?

Finally, I wonder, does submitting AI-produced writing threaten credibility? In academic writing, credibility matters more than it does in other types of writing. If I am going to update my version of reality based on what someone has written (and what reviewers and editors have verified), then I want to be sure where the ideas and words originated. At the very least, authors and editors must identify works that include AI-generated text. Collectively, we should treat a failure to make such disclosures as plagiarism.

For now, I am comfortable concluding that AI-generated text does not threaten credibility, but I want to know when it has been used so that I can proceed as an informed consumer.