Pretty much everyone reading this post will have tried AI, and we’ve all had a similar experience. The first time you see AI-generated content for a technical B2B campaign, it’s amazing: the fact that a computer can generate content about such a specific topic is something most of us never expected to see. Unfortunately, the thrill of seeing content generated by AI large language models (LLMs) is usually short-lived.

There are a few problems with AI-generated articles on niche technical topics. Firstly, by the second or third time you try the technology, the content tends to sound a little repetitive: it’s hard to get an AI model to generate multiple pieces of content that feel fresh and new. It’s also hard to get AI to generate great content about the newest innovations. This shouldn’t be surprising, as the model hasn’t been trained on them and therefore doesn’t have that knowledge. And finally, there is always the concern about whether you can trust the output: does it include “facts” that are simply untrue or, worse, does it plagiarise someone else’s work?

These issues can make the use of AI feel like a lot of hard work, and we haven’t even considered the fact that AI content has an easily recognised style that tends to devalue the writing.

Fixing the Problems

It’s possible to address the problems with AI with the right strategy, allowing AI to be that writing assistant we’d all love to have beside us. Here’s how we do it at Napier.

Generating content directly from the AI model rarely produces great material for deep-technology topics. This isn’t surprising: the AI model can only hold so much information, which is why it tends to produce quite repetitive content if asked for more than one article on a topic. What works well is using other content as source material – in the same way that human writers write great content.

Using source material doesn’t mean retraining the model with new information. A technique called RAG (retrieval-augmented generation) lets LLMs access and use content without needing retraining. This is important, as it eliminates the time and cost that retraining would require.

Using RAG also helps with new products and innovations: you can provide data about things that simply weren’t public when the LLM was created, and therefore couldn’t be included in the model.
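To make the idea concrete, here is a minimal sketch of the RAG pattern: retrieve the most relevant pieces of source material, then put them into the prompt so the model can draw on facts it was never trained on. Real systems use embeddings and a vector database for retrieval; here a toy keyword-overlap score stands in, and the product details (the “XR-500”) are invented purely for illustration.

```python
def score(query, chunk):
    """Toy relevance score: count query words that appear in the chunk."""
    return len(set(query.lower().split()) & set(chunk.lower().split()))

def retrieve(query, chunks, top_k=2):
    """Return the top_k most relevant source chunks for the query."""
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:top_k]

def build_prompt(query, chunks):
    """Assemble the augmented prompt: retrieved context plus the question."""
    context = "\n".join(retrieve(query, chunks))
    return (
        "Use only the context below to answer.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )

# Source material the base model has never seen, e.g. a new product sheet.
source_chunks = [
    "The XR-500 sensor samples at 2 kHz and draws 3 mA in sleep mode.",
    "Our office is open Monday to Friday.",
    "The XR-500 ships with a CAN bus interface and an I2C fallback.",
]

prompt = build_prompt("What does the XR-500 sensor support?", source_chunks)
print(prompt)
```

The irrelevant chunk about office hours scores zero and is left out, while the two product-sheet chunks are placed in front of the model – which is exactly how RAG lets an LLM write accurately about a product launched after its training data was collected.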

Now you have content, you need to ensure that it’s not plagiarised and is true. Many tools exist to check for plagiarism, so that’s a quick and easy fix. Although RAG reduces the number of hallucinations – the name adopted for factual errors made by LLMs – it doesn’t necessarily eliminate them. The best solution is the one serious publications have used for some time: a human fact-checker.

Finally, the output of an LLM can usually be improved with a light edit from a human editor, making it sound less like an AI and more like a subject matter expert.

Although it sounds like a lot of work, if you have the right tools, using AI to generate the first draft can significantly reduce the time it takes to produce written content.

Napier’s AI-Driven Content Service

In many cases our clients want humans to generate their written content, particularly high-value content that could influence a significant amount of business. An expert human can generate better research-driven content than an AI – at least that’s true today.

But we recognise that speed and cost are sometimes important factors when generating content, so we’ve launched our AI-driven content generation service. Built from countless hours of experimentation and the leading tools in the industry (we’re not simply entering a prompt into ChatGPT!), and with the safeguards of plagiarism checking and human editors to optimise the writing and check the facts, our AI-driven content service is designed for the most demanding B2B technology clients.

Currently, the service is in beta, but if you would like to try it to see how it could generate content for your organisation, send me an email.