
Illustration created by Alfredo Casasola Vázquez.
When ChatGPT burst onto the scene in 2022, I was one month into my master’s degree. Back then, my classmates and I saw generative AI as a fun oddity rather than a serious educational tool. I used it to generate title ideas for my thesis, but I ended up using my own.
At the Reuters Institute, we’ve delved into generative AI’s impact on journalism over the past three years. But GenAI is leaving a significant footprint on higher education as well.
With so much reporting on how GenAI is corrupting the critical thinking and writing skills of students, I was curious to know how this emerging technology is transforming journalism education. So I spoke with six professors from Cambodia, Peru, Serbia, Spain, the UK and the US to take stock of the state of journalism education in a world in which AI can create pitches for students, do their research, and even write their news articles – all with a well-crafted prompt.
All the professors I spoke to say students use AI for different tasks. Over the past three years, students have shifted from timid curiosity to full-fledged usage in a variety of ways. Some use AI for research or finding sources while others rely on it to write their assignments.
Ponleu Soun, a professor and researcher at the Royal University of Phnom Penh in Cambodia, said that he has seen GenAI influencing his students’ work in courses that require critical thinking and analysis, with students increasingly using AI for generating ideas for their writing and even generating writing itself.
“At worst, some fail to fact-check or verify the accuracy of the content, resulting in flawed arguments and incorrect details,” said Soun. “There have also been isolated cases where major AI-generated content was detected in both group and individual assignments.”
Paul Bradshaw, a professor of Data Journalism at Birmingham City University in the UK, has also seen widespread use of generative AI in students’ work. But not every student uses AI in the same way.
“You've got someone who generates the whole article using AI,” he said. “But others might write the article but generate quotes and fabricate a source; and others who might write the article but then get AI to rewrite it; and others who get some feedback from AI… And some of those you wouldn’t necessarily say are bad things.”
Ainara Larrondo, a lecturer at the University of the Basque Country, has seen cases of plagiarism among her students. Some see learning as a means to an end (a passing mark) rather than a journey in itself. So she has tried to reinforce the importance of ethics in journalism with her students.
“With all this AI stuff, it’s important to remind future journalists that the tool you use to translate or write an article is just an assistant for your work,” she said. “But your focus should be on your own work.”
Carolina Albornoz Falcón, professor at the Universidad Nacional Mayor de San Marcos in Peru, is following a similar approach as her students amp up their use of GenAI: “We stress they should use [these tools], but they should do so responsibly, and we must also monitor how they are using these tools. Right now students are often more proficient with these tools than the teachers themselves.”
None of the professors I spoke to demonise AI as a tool and many of them encourage students to use it responsibly. But some of the people I spoke to say some young people are turning against AI.
This is not entirely surprising given that over half of Gen Z adults say artificial intelligence makes them feel anxious, according to a poll by Gallup, with respondents being twice as likely to say AI will harm their critical thinking skills than help them. On the other hand, our own research also shows that younger people are more comfortable using AI chatbots than older cohorts and in fact use AI models more often to verify if something is true.
Dragana Pavlovic, professor at the University of Nis in Serbia, has even seen students write their masters thesis proposals with AI, but she also said that more and more students are now rejecting AI altogether.
Pavlovic, who’s conducted interviews with students about their use of AI, said that they were more optimistic about these tools when they came out in 2022. Now they are more critical about the pitfalls of artificial intelligence as issues like hallucinations continue to be widespread.
“They are more aware of the problems with using ChatGPT and they are avoiding it because sometimes it makes up quotes or references to studies and books that don’t exist,” she said.
This is a sentiment Zhao Peng has also perceived amongst her own students at Emerson College in the US. Peng told me that a little less than half her students are vehemently anti-AI, citing concerns that it will diminish their critical thinking skills, their writing and their ability to do their own research.
“When I introduce AI tools in my class, some of them are very eager to learn how AI can be incorporated in news production and others think of AI as an evil tool that will diminish their abilities, especially in journalism,” she said.
The concern that reliance on AI will diminish users’ critical thinking skills is not an unfounded one. Multiple studies have assessed the impact generative AI has on its users’ cognitive abilities.
A pilot study from MIT showed that ChatGPT users exhibited weaker neural engagement, poorer memory, reduced creativity, and more formulaic writing, while those relying on their own reasoning performed better. Another study from SBS Swiss Business School found that heavy AI reliance notably reduces critical thinking performance across all age groups, especially younger users, primarily due to cognitive offloading or shifting mental effort onto AI tools.
Similarly, Microsoft and Carnegie Mellon surveyed 319 knowledge workers, concluding that higher confidence in AI predicted less critical thinking, while higher self-confidence in one’s own capacities predicted greater critical thinking effort.
Most of Pavlovic’s students acknowledge they have to learn how to use it to stay competitive when they enter the job market. “There will always be maybe 10% of a generation that wants to take a shortcut,” she said. “But most of them do want to learn, so once they are at their jobs, they know how to use ChatGPT.”
At the beginning of the semester, Peng sets the rules very clearly: using AI for writing an entire assignment is prohibited and she will use AI detection tools to enforce this rule. But she’s finding ways to incorporate GenAI into her own classes.
“I tell my students that I will teach them how to use it as an assistant and they should remain in control as an editor at all times,” she said. “I have only found one student using AI to write an entire assignment since 2023.”
While most professors have taken to explaining the perils of AI in journalism to their students, many have also adapted their modules to prevent cheating and plagiarism.
While AI detection tools, such as Quillbot or AI Detector, are now widely available to educators, their accuracy has been called into question. Multiple studies and experiments have concluded that these tools are not reliable and that students can bypass them. Thus, professors have to get creative if they want to minimise the risk of AI-generated assignments being turned in.
Soun, the professor in Cambodia, said that to counter students' unethical and irresponsible use of AI in their schoolwork, he has implemented impromptu oral presentations and question-and-answer sessions to ensure students understand the ideas in their work. Others, like Albornoz Falcón from Peru, have reintroduced in-class handwritten assignments to assess students' creative and writing skills.
“We used to give them those activities to do at home or outside the class and we simply ran them through anti-plagiarism software. Now our work as teachers has to change as well,” she said.
Rather than trying to combat the use of AI, many educators have taken to implementing it in their work. Like newsrooms, professors have created their own AI guidelines outlining permissible uses.
Larrondo, the professor in the Basque Country, asked her students to cite every time they use AI in their assignments, for example.
Bradshaw, the professor from Birmingham City University, has introduced something he calls an ‘AI diary’ – essentially a record of a student’s every interaction (prompt and response) with any generative AI tool. In theory, this means the student submits the full record to Bradshaw, with no selection or editing. The diary requires a log of every prompt and response, but it also requires some form of reflection from the student: why was the prompt written that way? Did the response lead to a new thought?
This method has created an open environment where students feel comfortable disclosing how they used AI, allowing them to show their work. Bradshaw said this transparency mitigates plagiarism, as students are upfront about AI assistance rather than pretending they didn’t use it.
“I was very happy when I got to the first assignment where someone admitted they got ChatGPT to generate a listicle for them. They submitted this listicle that they generated and it wasn’t very good, but they weren’t plagiarising. They were being transparent about the fact that they’d done this and then they could reflect on it and they could think critically about it,” he said.
Since implementing AI diaries, Bradshaw has seen students being more critical and cautious about their use of these tools. The process of using AI (and documenting it in the diary) forced them to pause and think about their ideas more deeply, applying journalistic principles and techniques they have learned in class.
While AI is not yet formally part of any curriculum, all of the professors I spoke to have already incorporated it into their classes informally, through modules rather than standalone courses.
Albornoz Falcón from Peru and Bradshaw from the UK emphasise that there needs to be an integration in courses to teach students how to use the technical aspects of AI to their advantage. Bradshaw, for example, highlights how he is teaching his students to use AI to help them code more quickly.
Peng, from the US, emphasises how journalism education has always been able to adapt to teach new technologies. “Journalism relies on new technologies. So you can think about the coming of television and the coming of social media,” she says. “After AI, of course we need to think of new ways of helping students understand this technology better.”
One of the biggest fears educators expressed is that students will lose their critical thinking skills, which are imperative in journalism, due to over-reliance on these technologies.
Peng, the professor from the US, gave an example of this from an assignment for her graduate class, in which she asked students to use AI to help them determine whether a piece of fake news was true. All the students in her class concluded the fake story was true.
“AI can provide you with facts, but you still need to think about the logical sequence, the missing details,” she said. “Sometimes with AI hallucinations the beginning is true, the end is true but there are changes in the middle and it’s not true anymore. But this is very hard right now for human beings. If you did not look into it very closely, you will mistake it as true information.”
It is not only verification skills that are being put to the test. Creativity and story-generation might suffer too.
Peng pointed out that AI, being trained on existing data, might hinder creative thinking and the ability to find historically neglected areas when brainstorming for new angles. Bradshaw from the UK said one of his concerns is that using AI to draft articles might deny students the opportunity to develop their own writing.
“Writing things out is one way of thinking things through. If you get ChatGPT to do that for you, you are denying yourself an opportunity to work through your own thoughts. You do learn by writing,” he said.
When students submit work that is hyper-perfect in grammar, spelling and argument, said Larrondo from Spain, the learning process and the humanity of the output are lost.
“Students simply want to use the tool to test their work, and that's where the problem lies. When students say, ‘No, I want to do a piece of work and have someone who is supposedly qualified to evaluate it tell me if I’ve done it right or wrong and show me how to improve,’ they don’t understand that the end result is not the only goal,” she said.
Despite these concerns, all of the professors I spoke to think their duty is to properly teach young journalists the advantages and the pitfalls of this new technology.
“I always tell [students] that every medicine is poisonous if not taken properly,” said Pavlovic, the professor from Serbia. “Every technology brings some dangers with it and the only way to properly use it is to be aware of all possible dangers. My goal is to present them with both the advantages and the disadvantages.”
All the educators think that proper training and a focus on core journalistic principles can mitigate risks. The goal, they say, is to equip future journalists with the skills to effectively integrate AI into their work as “an assistant” rather than a replacement for human critical thinking.
“With proper training, there should be little concern,” said Soun from Cambodia. “AI is meant to help journalists realise their full potential, not hinder it.”
The journalism students of today will become the journalists of tomorrow. But what kind of industry will they be entering as AI becomes more ubiquitous? The professors I spoke to are optimistic. They don’t think the rise of AI will bring about the end of journalism, but rather that it will make journalism more important than ever.
Bradshaw from the UK points out how historically journalism education has focused disproportionately on the technical side of journalism as the news industry recruits first and foremost based on technical skill.
“If you can’t write well, it doesn't matter how dogged a news hound or investigator you are. This has the side effect of excluding potential journalists who don’t come from a background in writing and the cultural codes of news language in particular,” he said. “So my hope is that gen AI will shift that as it can help with the technical side of things, but it’s the interviewing and verification where we will need to recruit and where our teaching should shift.”
When it comes to the work ecosystem young journalists will be entering, however, he still thinks there will be a place for journalists in the AI world. Bradshaw expects AI’s biggest impact to fall not on content generation but on problem solving.
“Typically with technological change the short-term change is overestimated and the long-term change underestimated, so I don't expect there to be much change in a few years' time,” he says. “We’ve already seen most news organisations introduce special teams and projects around AI, so students will either use their new skills to get roles in those, or they will go into more traditional parts of the business where routines are well established and ecosystems more resistant to change.”
Larrondo from Spain thinks that current developments suggest that the practice of journalism will be deployed in an information ecosystem that will demand analysis, research and context on the part of journalists, as well as data visualization skills, leaving the most mechanical tasks to algorithms.
She thinks young journalists should focus on verification and information diversity as misinformation becomes more prevalent and news offerings more personalised.
“They are expected to have the technological, creative, critical and ethical skills to generate quality information products in different formats,” she said. “They should be capable of responding to the information needs of all audiences.”
Albornoz Falcón from Peru is convinced AI will transform journalism, as the creation and distribution of journalistic content will increasingly fall to AI. Journalists will have to stay ahead of it, for example by telling stories in different ways to reach their audience’s emotions. While she thinks journalists today face fierce competition from AI, she also believes new professions and new ways of approaching journalism will emerge.
“In this sense, human supervision by journalists is key, as it is a moral duty to provide the audience with responsible information based on ethics and all the journalistic principles required by journalistic practice,” she says.