We recently conducted some informal research, asking journalists who cover the electronics, embedded and industrial automation sectors whether they would accept AI-generated content. We promised to keep the journalists’ identities confidential, so we can’t tell you who made any of the comments.
Admittedly, when we started the process our consensus was that a journalist accepting AI content was rather like a turkey voting for Christmas. But we were wrong.
Few Journalists Completely Reject AI
The big surprise was that the vast majority (9 out of 10) do not have a policy of rejecting AI-generated content. In fact, a few journalists are actually using AI themselves, although generally for image generation rather than for writing text. Some also use AI for research, summarising content or even for SEO (presumably for generating meta tags).
In publications where journalists routinely edit content before it is published, there was little resistance to AI-generated content. Journalists in non-English speaking countries were also generally happy to accept AI translation (of human-written content), particularly in titles where they edit as a matter of course.
Of course, the content must be unique: even journalists who will accept AI-generated press releases and feature articles will check for duplication/plagiarism.
Journalists Know What’s AI
Almost all the journalists we surveyed said they regularly see and recognise AI content. And although they said they were open to non-human-generated content, their responses suggested they accept it reluctantly and still don’t like publishing text generated solely by AI.
Quality is What Matters
Almost every journalist talked about quality. Not just the quality of AI-generated content, but also the quality of some of the human-generated content they receive. One journalist, who had clearly received many poorly written releases and articles over the years, said that no one needs AI to generate an awful press release!
Journalists we spoke with feel that AI generates dull and formulaic content, which makes it easy to spot. Even if it’s better than the worst writers, it’s still not as engaging as a good human writer. The point about AI content being obviously formulaic and clichéd was highlighted by several journalists.
Although AI-generated copy is not necessarily the worst content that crosses a tech journalist’s desk, neither is it the best. Almost all the journalists mentioned problems with artificially written text, particularly awkward turns of phrase and technical inaccuracies that make the text unusable for the publication and damage the contributor’s credibility.
Put simply, AI is better than some humans, but it’s still no match for a good writer.
The Need for Different Voices
A significant proportion of the editors made the point that AI-generated content sounds very similar. The core reason that they have, for generations, accepted content from human beings representing brands is to showcase the spectrum of voices in the industry. Their view is that, if you’re using AI to generate text, you just sound like every other AI-generated “voice”. This lack of authenticity is clearly a big issue.
Even worse, AI articles created about the same topic are often unacceptably similar. This is particularly true if you are writing about a niche issue, product or service that is relatively new and isn’t covered by much existing content, so AI draws from a very shallow pool of information. Editors are seeing different companies – some directly competitive, others not – delivering articles that are very similar. AI is well known for churning out the same content for different users, and clearly some companies are using similar prompts to generate that content. The result is articles that are not only not unique, but certainly nothing exceptional and, at worst, bland and wholly lacking in imagination.
We also saw a wider concern around AI-generated articles that are derivative and sound like content that already exists on the web. Even if the words themselves were not identical, journalists have seen enough to know that some articles are just an AI-regurgitated version of someone else’s work.
Lack of Trust
Journalists are concerned about the accuracy of AI-generated content, and they have good reason to be: several said they had seen errors caused by hallucinations, where a generative AI tool invents patterns or facts that don’t exist, creating output that is nonsensical, completely inaccurate, or both. If an article looks unedited, or contains even a single hallucination, editors are far less likely to consider running it: one error calls the truthfulness of the entire piece into question.
Our 20-plus years of working with tech journalists has taught us that they absolutely hate factual errors, so their concerns about hallucinations are not at all surprising.
Should You Use AI When Writing for Tech Publications?
Our answer, based on our research, is a definite “yes”: you should use AI to help you write a better article or release. However, let’s clarify that statement.
AI is great for research, summarising notes, brainstorming and even helping to structure a framework for articles. But the reason a journalist wants to publish your content is that your voice matters. If it’s not in your voice, the content is just the same old vanilla or chocolate, not Baskin-Robbins.
Simply spewing words onto a page and sending them without reading or reviewing them is wholly unacceptable. You’ll get found out, quickly, and it will damage your credibility with journalists, diminishing the coverage you achieve and your hard-won access to those corridors of acceptance.
However, using output from an AI model as a first draft that you heavily edit into your own voice is probably fine and will be acceptable to most journalists. But don’t make the mistake of trying to convince editors who don’t accept AI-generated content that your content is human-written if it’s not. Good writers can generally create better articles in less time by simply writing them themselves, but if you really struggle to write good prose, as even the best writers do on occasion, then using AI to prime the pump is probably a good way to get the words flowing again.
Translation, however, is a little different. If you are using machine translation on a human-written article, it won’t suffer quite as much from the problems that afflict content AI generates from scratch. Machine translation is not yet as good as a human translator’s work, though, so having a subject matter expert review the translation for accuracy (particularly around technical terminology) is essential.
AI is amazing technology and incredibly helpful as a writing assistant, but journalists are almost unanimous in the view that it doesn’t generate content they are proud to publish or that really enhances your brand. Given the rate of change in AI, this may not be true in the future; for now, though, if you want to generate content that gets published in the tech media and helps boost your brand, you probably shouldn’t fire your writers today.
If you’re a journalist and you have a strong view on this topic, please let us know. We’d be happy to publish your views in our blog and newsletter!
Author
In 2001 Mike acquired Napier with Suzy Kenyon. Since that time he has directed major PR and marketing programmes for a wide range of technology clients. He is actively involved in developing the PR and marketing industries, is Chair of the PRCA B2B Group, and lectures in PR at Southampton Solent University. Mike offers a unique blend of technical and marketing expertise, holding a Master’s degree in Electronic and Electrical Engineering from the University of Surrey and an MBA from Kingston University.