I first met Robert “Crash” Craddock in 1981, when we were both working in Toowoomba, and we’ve worked together a few times in Brisbane over the years, in newspapers and, once, on radio.
I reckon I’d know his voice anywhere.
Except that the next time I hear it, it may not actually be him.
According to this report, Crash — who is the senior cricket journalist for The Courier-Mail and other News Corporation newspapers — has “trained” an AI robot to clone his voice. That means that when I, or you, next listen to his voice, the words may not be coming directly from his mouth.
It could be a script that he has written but hasn’t had time to record to broadcast standards, so the technology has done it for him. The immediate plan is to use it for voiceovers on cricket highlight reels, but there are other possible applications.
His reasoning is that it lets him work remotely and do more with his time.
As News Corporation Australia’s general manager of editorial innovation, Rod Savage, explains: “It now means Crash can deliver his expert insights to a broad audience across all formats — be that print, digital, video or audio — without having to run around everywhere.
“There’s just not enough time in the day for reporters to produce content in all formats, they need a bit of help from technology.”
Which is all very well, I suppose, because sadly the preferable alternative of hiring more reporters isn’t going to happen in the current economic climate.
And while it could be said to be misleading, I don’t think I have a problem with it, because if Crash wrote the script, then it’s essentially his voice delivering his words.
But what if somebody else writes the script and they use the Crashbot to “read” it?
Does that create an ethical dilemma?
I suppose it’s no different to when I was writing “readers” for radio presenters — except that they were free to add and subtract their own words or go off-script completely.
But what if the technology gets to the point where there’s little or no actual human input at all?
Oh, hang on, it already has. Another great cricket fan, Michael Parkinson, is still interviewing people for a podcast more than a year after his death.
This particular bot, which has been trained on thousands of hours of recordings of the great man, can “think” to the extent that it can interact with the guest much as Parky might have done when he was alive.
The implications for the media, and in other areas of life, are disturbing. Imagine phone scammers getting their hands on this tech and using it to pretend to be a person trusted by their victim.
With the Parkinson example, people know that it’s not him — he is, after all, dead. But is it possible that this technology will become so good that it’s a viable replacement for human beings?
AI newsreaders have been operating in Asia for some time, with varying levels of success and acceptance. But, so far, we haven’t seen, or heard, the technology used here.
Or have we?
Hi Brett, thanks for the post and you highlight many of the ethical and moral dilemmas we wrestle with when it comes to AI use in newsrooms and journalism. With the example of Crash, he'll write the scripts and we will always declare up front to the audience that it's AI Crash doing the reading. We don't want to trick people ... as you accurately quoted me in your post, we want to help him do more content for more platforms. His words, his voice, but he doesn't have to take time out to go into a recording studio to do it. It's all new stuff, we'll see how it goes and assess its success (or otherwise!) after the Tests are done. And I'd be interested to know more of what you think of it as the summer progresses. Thanks again for your interest. Rod Savage.
A friend says he's sure Karl Stefanovic is a robot.