A generational divide: differences in researcher attitudes to AI

As part of our interest in helping academic researchers harness AI, we surveyed 2,300 researchers to understand their use of AI, along with their attitudes and concerns. Our findings show that a quarter (25%) of those in the early stages of their careers report holding sceptical or challenging views of AI, a proportion that falls to 19% among respondents later in their careers. Early Career Researchers also hold more polarized opinions on AI, with fewer expressing neutral views than Later Career Researchers.

In a roundtable discussion, we asked Dr. Henry Shevlin (Associate Director at the Leverhulme Centre for the Future of Intelligence, University of Cambridge) and Dr. Samantha-Kaye Johnston (Research Associate in the Department of Computer Science, University of Oxford) for their thoughts on why this might be:

Henry Shevlin 

The survey findings provided empirical evidence for something I have observed for some time. My dad is almost 80, and I introduced him to ChatGPT’s voice functionality. He absolutely loves it and uses it every day; he says he has had enough of typing on tiny buttons on a tiny phone screen, so now he just talks to ChatGPT instead.

So, it was really striking to me to see that the most enthusiasm (in terms of the largest number of “Pioneers”) was found in Boomers and Gen X, whilst younger generations were a lot more sceptical. 

There are all sorts of theories for why that could be, but one I will explore here is an analogy with cars and mechanical skills. When I think back to my parents’ generation, who got their first cars in the 1950s and 1960s, they really needed mechanical skills to run a car, because cars were unreliable and broke down all the time. By contrast, I can just about change the oil in my car. My car is pretty reliable, so I have never needed any more skill than that. And if I wanted to do anything fancy or impressive, modern cars are so advanced that I would probably need to plug in a computer to get the diagnostic information.

I think we probably have seen something similar happen with the understanding of computers. For so many people these days (and particularly for young people), the primary way of accessing the internet is through a phone or a tablet. There’s far less need and far less opportunity for the kind of tinkering that my parents’ generation did in the 90s when you had to key in commands or use Boolean search. That kind of tinkering was crucial for building understanding of the fundamentals, and for giving people the confidence to experiment and learn.

These days, a lot of people try something on an LLM for the first time, find that it doesn’t work as they expected or that the output isn’t useful, and assume the LLM can’t help with what they need. I think most AI tools have quite a steep learning curve, because you need to spend time figuring out what questions to ask to get a reliable response.

If you approach AI tools the way my parents’ generation approached cars, setbacks like unhelpful responses are less likely to put you off. If you anticipate that tinkering and experimentation will be required, and you are confident testing out different approaches, you will have better experiences with AI. These intergenerational differences in willingness to tinker are showing up in the AI survey results.

Samantha-Kaye Johnston

It’s important to contextualize the survey’s results within the broader concerns that Early Career Researchers tend to face, even before we consider the impact of AI. ECRs, particularly those who are non-tenured or on short-term contracts, are often grappling with significant issues of job insecurity and work-life balance. With this broader context in mind, it becomes much clearer why Early Career Researchers might be more sceptical about the role of AI.

Someone who is a Research Assistant might fear that AI could take over some of their research roles, and the survey findings do show that Later Career Researchers are using AI for analysis that Research Assistants would normally take on. It’s not unreasonable for this to compound worries about establishing yourself and getting recognition for your work.

And those aren’t unreasonable worries. If you ask an LLM (particularly a free version) “do you know this person,” you typically won’t get responses about ECRs unless they have already made a huge impact on their field. Many ECRs have not yet had the time or the opportunity to establish their name, nor the internet presence needed to appear in responses. The nature of citation websites is that they lead back to established researchers, well-known studies, and landmark papers. This means ECRs are less likely to be included in the training data that LLMs are built on, and as people increasingly use AI tools to discover research or summarize a research topic, it becomes harder for ECRs to establish themselves.

In other cases, ECRs might also worry about being perceived as overly reliant on tools like ChatGPT, which can carry negative connotations. They may fear that this perceived reliance could jeopardize their professional identity and credibility in the academic community, because we have developed a fear-based narrative around using AI. Importantly, I think this may stem from the broader concern of not knowing when and how the use of AI will be accepted, even when there are clear benefits to using AI-driven tools.

If this is the case, then to quell fears around using AI, I think it’s important for us as a community to define what excellence looks like when engaging with AI tools. What defines an excellent researcher in the age of AI? Is excellence characterized by seamlessly integrating AI into the academic workflow, enhancing productivity and innovation? Does an excellent researcher use AI selectively, leveraging it for specific tasks where it adds the most value? Is the mark of excellence knowing when not to use AI, maintaining a balance between human insight and technological assistance, or perhaps not using AI at all in certain contexts? Helping ECRs, and even more seasoned researchers, to understand these dynamics can shape the future of AI acceptance, define what it means to excel in this evolving landscape, and reduce concerns around the role that AI plays in our academic journey, both presently and in the future.

Featured image by Tara Winstead via Pexels.
