![Illustration created by Eduardo Suárez using a picture of Walter Lippmann from the Los Angeles Times archive. CC BY 4.0](/sites/default/files/2025-01/lippmann_illustration_0.jpg)
Is it a good idea to use AI to clone real journalists’ voices? It depends how you interpret the question
A Scandinavian news editor recently asked me a pointed question: ‘Do you think it’s a good idea to create AI versions of real journalists’ voices?’
Examples of this have recently popped up in various places. A replica of the late presenter Sir Michael Parkinson in the UK and an AI version of Norwegian news presenter Kjetil H. Dale are two prominent cases.
At the time, I didn’t manage to give this person a good answer – partially because I didn’t have the time, and partially because my instinct was to duck a direct answer and think about the question itself. Hopefully, this article will explain why.
When I hear a question like this, I find it useful to first look more closely at what it asks of us. In this case, it wants us to land on one side or the other from a moral perspective. This gives us several ways to address it.
First, we can look at it from a deontological perspective (from the Greek deon, meaning duty), where the question is about what is due to whom, based on notions of justice, rights, and duties. Alternatively, we can take a utilitarian approach and focus on the outcomes – what benefits and harms might arise.
The former views the question through the lens of a judge, the latter through that of an engineer or policy-maker. Of course, there are other perspectives one could take. But for the sake of argument let us stick with these two.
Any answer is pretty complicated. But why let the fun stop there? We can throw a third level of complexity into the mix by asking about the addressee: for whom is the use of AI versions of real journalists’ voices supposed to be a good (or bad) idea?
If your head wasn’t spinning already, congratulations – you probably did a degree in philosophy (in which case, you may want to stop reading here because I will likely embarrass myself).
But let’s start to take this question apart by looking at the addressee first. Three to five (by no means exhaustive) categories come immediately to mind: the individual journalist providing the voice, the news outlets putting it to use, the audience exposed to it. If one is so inclined, one can add the developers of the technology and the whole of society into the mix, too.
If we take the deontological approach first, we can ask what is owed to each of these parties when AI voices of real journalists are deployed – and therefore whether such use is a good or bad idea, depending on how far it honours those obligations.
One can quickly spot areas where AI-cloned voices might violate several of these obligations. Poor use – for instance, deploying such technology for a salacious story – could harm the dignity of the journalist, especially if they do not agree with its use.
Audiences may balk at the notion that using an AI clone still counts as the accurate and transparent news work they deserve. At the same time, if its use is tightly controlled, journalists might see it as an additional source of income and a new way of reaching audiences with journalism.
News outlets could view it as a useful addition to their toolkit for achieving their aims. Audiences might even appreciate it. On a macro level, there might not be a significant risk to what the news ‘owes’ to society by using the technology for this particular purpose.
Naturally, we should also look at the other side of the coin. Going down the utilitarian road, another tree of possibilities opens up.
AI-cloned voices could bring some benefits. Journalists could grow their recognition and do more with less. Outlets could create new products that serve audiences in more exciting and engaging ways, potentially increasing their revenue prospects, too.
And yet, risks loom. The careless use of such voices could tarnish the reputation of the journalist, damage the news outlet if audiences come to dislike it, and, in a worst-case scenario, increase distrust in the news as a whole. Many people currently feel uncomfortable with overt uses of AI in journalism, especially where it is used to recreate reality. Some research also suggests that people have a hard time identifying AI voice clones, potentially raising the risk of harm if such voices are used in nefarious ways.
Why make things so complicated, you might ask?
As an academic, I’d be inclined to say that part of my job is to poke the bear and get at the complexity behind seemingly simple questions. But I prefer to answer with American journalist Walter Lippmann. He once wrote that we all “stand at some point in space and time and can see not the whole world but only the world as seen from that point.”
If nothing else, taking a question like this apart helps us think through not only where we stand in relation to it but also who else might be affected by the answer we give. Ultimately, how you answer that question says more about your own positions and preferences than it does about the ‘right’ or ‘wrong’ answer.
Just because there is no one definitive answer doesn’t mean a good answer cannot be found – there will always be better and worse responses, as with any question that requires multiple trade-offs.
So where does that leave us? My personal take would be to say, ‘Cloning journalists’ voices with AI is probably fine for all involved if certain conditions are adhered to.’ For example: agreed parameters on the kinds of stories it can be used for, a defined expiry date for the use of the voice after retirement or a job change, controlled access with human oversight to ensure responsible use, and clear labelling in some scenarios.
But part of the answer is also, ‘We don’t know if it’s a good idea,’ as the jury is still out on its effects. Audiences seem fine with some uses of AI, if done transparently; they are already accustomed to digital voices and might grow to accept this too. After all, social norms are not set in stone.
However, it is not impossible that there could be a backlash. Audiences often form a parasocial bond with news presenters and journalists. A major reason why outlets are toying with AI-generated voice clones is that they hope to exploit that pre-existing bond and the trust that comes with it. Yet it’s far from guaranteed that this bond won’t simply evaporate if audiences see through the trick and react poorly to the attempt at artificial authenticity.
This article originally appeared on NIKKEI Digital Governance. It represents the views of the author, not the view of the Reuters Institute.