16 Jun 2025

Navigating the AI divide


The recent boom in artificial intelligence is of particular significance to the media industry. After all, if the hype around generative AI is to be believed, ChatGPT and its fellows can produce fast, cheap copy that will soon render slow, unreliable human writers obsolete.

While the real picture is far more complex, the implications of AI for content publishers are no less significant. Organisations are at a crossroads: they have to decide whether and how to embrace AI, and what the implications are likely to be for their employees and audiences.

At Beettoo, we’ve noticed that publishers are falling into two broad camps. What those camps are, and which side comes out ahead, will play a huge role in shaping the future of both the media and AI industries. Understanding this divide will be key to navigating the changes ahead.

Embracing AI

From a purely financial perspective, a fundamental lure of generative AI for publishers is clear: cost saving through reduced headcount. On a basic level, it’s true that tools like ChatGPT and Gemini (formerly Bard) can produce content at a rate many orders of magnitude faster than a human being, and at a comparatively negligible cost.

We’re not saying that all publishers invested in the current crop of gen AI tools are seeking to drastically cut their workforces, but from a business perspective, cost-cutting and efficiency are always going to be desirable – and understandably so.

If generative AI is the future of content creation, there is certainly an argument to get on board early rather than waiting too long – to be involved in its development rather than struggling against the tide.

Perhaps this is part of what has motivated some major names in publishing, including Axel Springer and the Associated Press, to enter into deals with OpenAI. These agreements give ChatGPT authorised access to these publishers’ content for AI training purposes, in return for the opportunity to “leverage OpenAI’s technology and product expertise”. While “financial terms of the deals were not disclosed”, we can assume that they were compensated for this access to their valuable archives.

Of course, generative AI isn’t new, and it doesn’t necessarily have to be used to generate content wholesale. “Even five years ago, we were talking about generative AI,” Matt Egan, editorial director at Foundry, tells us. “We used a lot of tools quite early on to do things like copy editing and machine translation, because we publish in 16 languages. Always with a human overseeing the final product.”

We’ve explored the limitations of generative AI before. They are best illustrated by the debacle in which CNET was found to have been quietly publishing AI-written articles – more than half of which contained factual errors. Responsible use of AI requires human oversight, especially for tools like ChatGPT that are trained on the open internet, where quality control of sources is all but impossible.

But it’s clear that many publishers are willing to overlook these limitations and bet on the future of ChatGPT and similar platforms.

Keeping control

The New York Times has become a symbol of the pushback against the boom in generative AI tools, thanks to its lawsuit against OpenAI and Microsoft accusing the companies of unauthorised use of its content to train their technologies. In claiming “billions of dollars in statutory and actual damages”, the NYT makes clear that it fully understands the value of its 172 years’ worth of journalism.

But this second approach is not necessarily one of rejecting artificial intelligence outright. Indeed, given the way systems such as large language models (LLMs) and AI-driven data analytics have permeated the technology and tools that power our businesses, it’s not clear that outright rejection would even be possible. What some publishers are doing, rather, is taking a more cautious approach to adopting these tools and granting access to their content.

A natural outcome of this attitude is that organisations are beginning to launch their own AI tools, such as the image generators developed by Getty Images and Adobe. The advantage that these companies have is not particularly in the technology they employ but the quality of their data: huge libraries of images that they own and control, perfect for training AI models. In these cases, the issues of ownership and quality that dog the likes of OpenAI are much less of a concern. Companies that control quality data are in a much better position to train AI platforms, safe in the knowledge that the sources are accurate and free of any copyright issues.

On the publishing side, Foundry recently collaborated with miso.ai to launch Smart Answers, an AI chatbot for both its B2C and B2B audiences – a tool trained on Foundry content that answers users’ tech questions. By leveraging content from a position of strength, publishers can take advantage of the AI boom without compromising control of their IP – and potentially create tools more reliable and trustworthy than those trawling the entire internet for data.

“We were talking to a customer recently who’s been interviewing CIOs for 10 years,” Egan continues. “There are hundreds of articles – not enough to make a model, but potentially they could blend it with our content. The New York Times is doing good work for all of us. I could see a model in which premium publishers such as the NYT seek similar partners with whom to create modular LLMs containing only trusted independent editorial content.” As organisations that control reams of quality content, such publishers are in many ways better placed to develop useful and trustworthy gen AI tools than OpenAI, which faces all the challenges that come with training ChatGPT on the open internet.

This approach is giving Foundry the opportunity to double down on what it does best: quality content created by human beings. “We’re talking to B2B end users and IT buyers all day, every day,” says Egan. “That is producing a huge volume of content that can be used for insights. The way we generate these insights is that we write articles based on those conversations. It’s quite traditional. We’re not going to stop doing that, I don’t think, ever.”

If publishers and content owners build AI tools that enhance what they do best, rather than replace the creative talent that has underpinned the industry for centuries, the potential outcome looks both positive and sustainable.

Defining your approach

At this stage of the generative AI journey, the split in attitudes between publishers is stark. Egan was invited to speak about artificial intelligence at an event to an audience of publishers, but noted that in the current climate “it’s kind of hard to know where you pitch your generative AI keynote when the two sides of the room are focused on very different things. Are you for investing in accelerating quality content experiences and human insight, or cutting costs and increasing efficiency in a race to the bottom?”

Today, all publishers can do as they define their approach to AI is their due diligence: identifying the opportunities and potential risks of adopting existing tools into their workflows, and carefully scrutinising what will be best for the long-term health of their business.

At Beettoo, we see a strong case for protecting IP and being wary of the current crop of generative AI tools. But not everyone is in a position to develop their own technologies, and many may feel they can’t afford to fall behind on AI adoption. Wherever you sit within the media and marketing landscape, it’s important to understand the limitations of current gen AI platforms – especially in terms of the quality of the data they are trained on – while also being aware of how they can add operational value. Ultimately, this is a time of evaluation and experimentation.