Oxford University Press's
Academic Insights for the Thinking World

Are academic researchers embracing or resisting generative AI? And how should publishers respond?

The most interesting thing about any technology is how it affects humans: how it makes us more or less collaborative, how it accelerates discovery and communication, or how it distracts and frustrates us. We saw this in the 1990s. As the internet became more ubiquitous, researchers began experimenting with collaborative writing tools that allowed multiple authors to work on a single document simultaneously, regardless of their physical locations. Some of the earliest examples were the Collaboratories launched at the University of Michigan in the mid-1990s. These platforms enabled real-time co-authoring, annotation, and discussion, streamlining the research process and fostering international collaborations that would have been unimaginable just a few years earlier.

Most people, but not all, would agree that the internet has benefitted research and researchers’ working lives. But can we be so sure about the role of new technologies today, and, most immediately, generative AI?

Anyone with a stake in research—researchers, societies, and publishers, to name a few—should be considering an AI-enabled future and their role in it. As the largest not-for-profit research publisher, OUP is beginning to define the principles on which we are engaging with companies creating Large Language Models (LLMs). I wrote about this more extensively in the Times Higher Education, but important considerations for us include: a respect for intellectual property, understanding the importance of technology to support pedagogy and research, appropriate compensation and routes to attribution for authors, and robust escalation routes with developers to address errors or problems.

Ultimately, we want to understand what researchers consider important in the decision to engage with generative AI—what excites or concerns them, how they are using or imagining using AI tools, and the role they believe publishers (among other institutional stakeholders) can play in supporting and protecting their published research.

We recently carried out a global survey of researchers to explore how they felt about all aspects of AI—we heard from thousands of researchers across geographies, disciplines, and career stages. The results are revealing in many important ways, and we will be sharing these findings in more detail soon, but the point that struck me immediately was that many researchers are looking for guidance from their institutions, their scholarly societies, and publishers on how to make best use of AI.

Publishers like OUP are uniquely positioned to advocate for the protection of researchers and their research within LLMs. And we are beginning to do so in important ways, because Gen AI and LLM providers want unbiased, high-quality scholarly data to train their models, and the most responsible providers appreciate that seeking permission (and paying for it) is the most sustainable way of building models that will beat the competition. LLMs are not being built with the intention of replacing researchers, nor should they be. However, such tools should benefit from using high-quality scholarly literature, in addition to much of what sits on the public web. And since the Press, like other publishers, will use Gen AI technologies to make its own products and services better and more usable, we want LLMs to be as neutral and unbiased as possible.

As we enter discussions with LLM providers, we have important considerations to guide us. For example, we'd need assurances that there will be no verbatim reproduction of, or citation in connection with the display of, our content (this includes not surfacing the content itself); that the content would not be used to create substantially similar content, including through reverse engineering; and that no services or products would be created for the sole purpose of generating original scholarship. The central theme guiding all of these discussions and potential agreements is to protect research authors against plagiarism in any of its forms.

We know this is a difficult challenge, particularly given how much research content has already been ingested into LLMs by users engaging with these conversational AI tools. But publishers like OUP are well positioned to take this on, and I believe we can make a difference as these tools evolve. And by taking this approach, we hope to ensure that researchers can either begin or continue to make use of the best of AI tools to improve their research outcomes.

Featured image by Alicia Perkins for OUP.
