Catch up with Napier’s webinar and find out whether AI is good enough to replace humans for some marketing tasks. We’ll present the results of our bake-off between humans and AI. You’ll find out:
- Whether AI is a better marketer than a human
- If creating custom tuned AI models improves performance
- How CustomGPTs can give you new insights
- What tasks are best suited to AI
- How to avoid embarrassing AI disasters
Register to view our webinar on demand by clicking here, and why not get in touch to let us know if our insights helped you.
Napier Webinar: ‘Human vs AI Machine’ Transcript
Speaker: Mike Maynard
Hi everyone. Welcome to the latest Napier webinar.
Today we’re going to explore whether AI is better than humans when it comes to marketing. Obviously, this isn’t entirely serious, but we hope you’ll learn a little about the benefits of using AI as well as some of the drawbacks. So let’s get started.
In terms of objectives, we’ve got a human racing a robot, and that’s basically what we’re going to do. We’ve set a number of marketing tasks and we’re going to see whether the human or the robot generally does better. Our human and robot are ready to go, and we’ll be looking at a whole range of activities, from speed and image generation through to translation and content creation. Hopefully, by the end, we’ll have a winner, and we’ll also look back at what we’ve learned.
Almost everything we’ve done in this webinar is based on real projects. In many cases, we didn’t originally use AI, but we’ve gone back and tried it afterwards. As far as possible, these are genuine marketing projects we’ve worked on, so they should provide a good test.
Before we start, just to be clear: whenever we show prompts, the prompt we typed in appears in blue italics and the AI’s response appears in red. We’ve used ChatGPT 5.2. You can argue about the best or worst generative AI tools, but it’s pretty good and fairly representative of what most marketers are using.
The first thing we’re going to look at is speed. This one is easy: the robot wins. AI is unbelievably fast, particularly when it comes to processing large amounts of data. If you want an AI to summarise a long white paper, it can do it in a tiny fraction of the time it would take a human.
One of AI’s biggest benefits is speed. That said, there are some tasks where humans can be quicker. If you’re writing a very short email, it’s often faster to type it yourself than to wait for AI to generate something. But for long, complex emails, AI is definitely much faster. So AI wins the speed race, and we’re not even going to compete against humans on that one.
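To give a concrete sense of the summarisation task, here’s a minimal sketch of asking an LLM to summarise a long white paper programmatically. It assumes the OpenAI Python SDK with an OPENAI_API_KEY set in the environment; the model name and file path are placeholders rather than what we used in the webinar.

```python
# Minimal sketch: summarising a long white paper with an LLM API.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# environment variable; the model name and file path are illustrative only.
from openai import OpenAI

client = OpenAI()

# Load the white paper text (placeholder path).
with open("white_paper.txt", encoding="utf-8") as f:
    paper_text = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; swap in whichever model you use
    messages=[
        {
            "role": "system",
            "content": "You are a marketing assistant who summarises technical documents clearly.",
        },
        {
            "role": "user",
            "content": "Summarise this white paper in about 200 words for a non-specialist audience:\n\n" + paper_text,
        },
    ],
)

print(response.choices[0].message.content)
```

Even wrapped in a script like this, the round trip takes seconds, which is the point of the speed comparison.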
Next, let’s look at images. We’re going to focus on the kinds of industries most of our clients are in, particularly industrial and engineering sectors. Recently, we created a report analysing how well pneumatics companies run email marketing. We asked a number of pneumatics companies to add us to their mailing lists and then reviewed what they sent.
One thing we wanted was a nice cover image for the report. The AI actually did pretty well here. You can argue about the level of realism, and an expert in pneumatics might look at it and say it’s obviously AI-generated. But for people outside the industry, it looks like a decent industrial image. It’s also generated extremely quickly.
What we’ve found is that AI image generation is much faster than humans creating images, and also much faster than searching image libraries. You type in what you want, and AI often gives you something closer to what you had in mind than stock libraries do. However, there are limits. Images can be very clichéd and stereotypical, and they often ignore diversity. If you ask for an engineer, you’ll almost always get a white man unless you explicitly ask for diversity.
Humans need to think carefully about what they want and not just accept stereotypical outputs. We’ve also found that technical images can be wrong, and intellectual property rights are a real issue. In many countries, AI-generated images can’t be copyrighted because the prompt isn’t considered human creativity, which means you don’t truly own the image.
One of my favourite examples of technical problems is American football playbooks. If you ask an AI to draw a football play, it may look okay at first glance, but it’s often completely wrong. Players are offside, positioned incorrectly, or even missing. It looks plausible, but it’s actually rubbish. This is a real risk with technical imagery.
So AI can generate great images when accuracy isn’t critical. With a good prompt, you can get diversity, and it’s fast. But it needs a lot of human input. For this category, there isn’t really a clear winner. It’s a draw. AI is great for ideas, storyboards, and situations where precision doesn’t matter. But when accuracy or ownership matters, you need humans involved. It’s a partnership, not a replacement.
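As a concrete example of prompting for diversity rather than accepting the stereotypical default, here’s a minimal sketch using an image generation API. The SDK call, model name, and prompt wording are assumptions for illustration; they aren’t the exact tool or prompt behind the pneumatics report cover.

```python
# Minimal sketch: generating an industrial cover image and asking explicitly
# for diversity in the prompt. Assumes the OpenAI Python SDK and an
# OPENAI_API_KEY; the model name and prompt wording are illustrative only.
from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="dall-e-3",  # assumed model name
    prompt=(
        "Photorealistic cover image for a B2B report on email marketing in the "
        "pneumatics industry: a diverse team of engineers, men and women of "
        "different ethnicities, inspecting pneumatic cylinders on a factory floor."
    ),
    size="1024x1024",
    n=1,
)

print(result.data[0].url)  # URL of the generated image
```

The key point is in the prompt itself: if you don’t spell out the diversity you want, you’ll usually get the cliché.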
Next, let’s look at Google Ads. We recently ran a campaign for our client Microchip, one of the leading PCI vendors in the world. We provided the same briefing materials to both a human and the AI: a detailed brief, a press release, and a presentation.
Hayden, a member of our digital team, created the human ads. The AI-generated headlines were generally okay. They weren’t particularly attention-grabbing and felt a bit bland, but they did pull out some important features like low latency and security. The descriptions were mostly technically correct, which isn’t always guaranteed.
One headline stood out as over the top: “Sample today and design next-generation AI data center platforms.” To judge which ads were better, I asked the AI to compare its ads to Hayden’s. ChatGPT immediately said that the AI-generated ads were not better, and that Hayden’s ads were stronger.
The reasons were clear. Hayden’s ads had sharper technical signalling, clearer specificity, and less vague language. They aligned better with the client’s goals. Even as a human reader, you’d be more likely to engage with Hayden’s ads, and the AI agreed. So humans win this round.
Next, we tried strategy. We gave the AI a fairly general strategic task. It returned ideas like running video ads, website visit ads, and lead generation campaigns. Nothing was obviously wrong, but it made a simple maths error: it claimed that attracting 10,000 visitors at $15 per subscriber would cost $50,000, when 10,000 × $15 actually comes to $150,000.
This highlights a key issue with AI: it can sound very confident while getting basic things wrong. We then tried a more realistic strategy exercise around gallium nitride technology. The AI suggested advertising in specialist power publications, broader electronics publications, and adding regional support.
On the surface, this wasn’t terrible, but it lacked depth. It assumed global markets without justification, ignored verticals, didn’t consider language differences, and largely repeated publication marketing blurbs. Even when we refined the prompt to focus on Europe and automotive markets, the recommendations barely changed.
It became clear that the AI had latched onto a small set of obvious publications and wasn’t really thinking strategically. Given the maths errors and lack of insight, humans win again on marketing strategy.
We then looked at messaging. AI is very good at processing and summarising large amounts of technical information. It can produce tables, comparisons, and structured insights. However, the messaging often felt clichéd and generic. It didn’t really differentiate between vendors or create standout positioning.
AI is a useful tool for developing messaging, but it shouldn’t be the sole source. This is another draw: humans using AI will outperform humans alone, but AI alone isn’t enough.
Language and translation are another area where AI performs well. Translation quality is generally excellent as a first draft, though it needs checking. Language advice can be odd, however. For example, it suggested that German automotive engineers prefer English, which is questionable. It also listed English words to avoid when advising on German messaging, which didn’t make much sense.
Overall, translation is a tie. AI is incredibly fast, but without human oversight, it can give questionable advice.
Finally, we looked at content generation. Writing articles is hard, and AI is excellent at repurposing content. It can rewrite white papers as articles or adapt tone and style very effectively. But when asked to create something new, it often produces generic content.
This is a serious problem. Journalists have told us they receive near-identical AI-generated articles from different companies, with only the brand names changed. In niche technical areas, where training data is limited, this problem is even worse.
AI also makes factual and stylistic errors. We showed an example involving quantum computing that looked plausible but was wrong in facts, tone, and terminology. While this was an extreme case, it illustrates the need for validation.
Some people suggest using custom GPTs trained on client data. We tried this, but when the client lacked content on data centers, the AI simply avoided the topic altogether. AI struggles to make conceptual leaps. To get good results, you need to provide detailed, well-structured briefs that include the new concepts you want it to address.
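To show what a “detailed, well-structured brief” might look like in practice, here’s a minimal sketch of passing one to the model alongside some client background material. The SDK, model name, and brief contents are assumptions for illustration, not the custom GPT setup described above.

```python
# Minimal sketch: supplying a structured brief plus client background so the
# model can cover a concept (here, data centres) that the source material
# doesn't address. SDK, model name, and brief contents are illustrative only.
from openai import OpenAI

client = OpenAI()

brief = """
Audience: data centre design engineers
Goal: explain why the client's products suit high-density racks
New concept to cover: data centre deployments (not in the client's own content)
Key messages: efficiency, reliability, proven track record
Tone: practical, no hype, around 800 words
"""

# Placeholder path for the client's existing background material.
with open("client_background.txt", encoding="utf-8") as f:
    background = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model
    messages=[
        {"role": "system", "content": "You write technical marketing articles."},
        {
            "role": "user",
            "content": f"Brief:\n{brief}\n\nClient background:\n{background}\n\n"
                       "Write the article, following the brief exactly and covering the new concept.",
        },
    ],
)

print(response.choices[0].message.content)
```

The structured brief does the conceptual leap for the model; without it, as we found, the AI simply sidesteps anything the client’s content doesn’t cover.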
So again, this is a draw. AI is incredibly helpful, especially for summarising interviews and accelerating content creation, but it needs human guidance.
So what’s the final score? Humans win two rounds, AI wins one, with several draws. But in reality, there is no winner. AI is a tool. It’s limited in niche markets and struggles with new concepts, but humans are slow and make mistakes too.
The future is about working together. AI will produce a lot of low-quality content, and people will eventually realise that doesn’t work. Just as with outsourcing purely on cost, quality matters. Marketers who understand how to use AI well will continue to add value.
Thank you very much for your time. I hope this session has been thought-provoking and has shown where AI helps and where human input is essential. Our next webinar will be about generative engine optimisation, or GEO, and how to get mentioned by large language models when people ask technical questions. This is becoming the new SEO gold rush.
Please join us on Wednesday the 18th of February. If you have any questions, feel free to put them in the Q&A or chat. Thank you again for listening.
Author
Hannah is Director of Business Development and Marketing at Napier. She has a passion for marketing and sales, and implements activities to drive the growth of Napier.