During my usual morning perusal of news, I came across an article in The Daily Beast entitled, “A Doctor Published Several Research Papers With Breakneck Speed. ChatGPT Wrote Them All.”
As a researcher, I was intrigued. I’m not against using AI to help generate research or even to write papers, but as I read through the article, a disheartened shudder ran through me.
The focal researcher of the piece, University of Tennessee radiologist Som Biswas, had submitted an article to the journal Radiology that was written largely by ChatGPT. Biswas was upfront with the editor about the authorship; the article underwent peer review and was then published. Following this initial success, he used ChatGPT to write several more papers over four months and published five of them in various journals.
While I can certainly see the value in ChatGPT assisting with research and even, perhaps, with writing for those who struggle to produce quality prose, there is an undercurrent in the ideas of Biswas and others that is summed up well in an article by a group of French rheumatologists entitled “ChatGPT: When Artificial Intelligence Replaces the Rheumatologist in Medical Writing.” In an optimistic tone, the authors state: “…for researchers who are already prolific now, one can imagine that with AI, their output could be doubled or even tripled.” They go on to conclude that “AI represents a major step forward in helping to produce original scientific work” (emphasis added).
I’m not arguing that there is necessarily something wrong with using AI to help write research papers. (At the moment, I’m simply unsure about this.) However, I see the mindset in the article as yet another cog in the neoliberal machine that quantifies research and emphasizes volume at the expense of quality.
It’s a truism that citation counts and numbers of publications, along with the perceived prestige of journals as reflected in impact factors, are the primary metrics by which research faculty are measured. Using ChatGPT to increase “productivity” so assessed, where productivity is conceptualized only as the volume of publications rather than the quality of what’s published, further entrenches the neoliberal practices that are polluting institutions of higher education and eroding the quality of research output.
If widespread use of ChatGPT becomes the norm in research publication, the ongoing assault on quality will intensify. More predatory journals will chase junior scholars, in particular, for research papers so that those scholars can appear productive, even if the journals in which they publish are worthless.
And the already exploitative for-profit academic publishing industry will grow more exploitative still as the supply of papers overwhelms demand, pushing up the fees publishers can charge authors for the fruits of their labor. Indeed, ChatGPT will intensify the ways such publishers obtain much of the labor needed to produce their products (in the form of peer reviewers and authors) without paying those doing the work.
Under the current neoliberal regime in higher education, the use of ChatGPT will push research further toward a zero-sum game in which those most productive at cranking out AI-generated research papers win the competition for merit increases, promotions, and research grants, all based on performance measures that overwhelmingly privilege quantity over quality.
In The Daily Beast article, Brett Karlan, a postdoctoral fellow in AI ethics at Stanford University, points out that, “[g]iven the truly crushing pressure to publish, I think academics are going to start relying on ChatGPT to automate some of the more boring parts of writing, and it would be very likely that the same people who churn out barely publishable papers and send them off to predatory journals are going to figure out workflows that automate this with ChatGPT.”
Karlan is right on the mark with this assessment. But the problem lies neither with the researchers nor with ChatGPT; it lies with a system of performance assessment that reinforces the neoliberal obsession with profit and quantification.
As publishing companies profit from this source of free labor, research institutions “measure” the performance of those same laborers by their number of publications. This pushes researchers to publish more, which lowers quality and deepens their exploitation by publishing companies. ChatGPT is perfectly positioned to reinforce a pernicious system that is already doing great damage to the research endeavor, as well as to the mental health of researchers under extreme pressure to publish more, and more, and more.
ChatGPT is being touted as an academic game changer. I agree. And the new game is one in which quality will be almost entirely subordinated to quantity as vast numbers of new journal articles are constantly churned out, many of which no one will have the time to read. That may be okay, since many of those same articles won’t be worth reading in any case.
But what needs to happen now is for institutions of higher education to begin developing a system of professional ethics that shapes how ChatGPT is used in publication, one that is woven into how scholars think about academic integrity.
ChatGPT isn’t going away, but if academic institutions don’t address the problems it poses for research and publication now, it will do further damage to an already badly damaged system. Unfortunately, this will require higher education administrators to do something they may find both frightening and uncomfortable: they will need to assess not only AI tools like ChatGPT but also the neoliberal values and expectations that are profoundly undermining the quality of research and education.