We stopped employing fifteen writers in early 2024. Not because AI was better, but because readers stopped coming in volumes that justified the editorial investment. Since then, we have completely reshaped our publishing operation around AI agents covering research, writing, editorial review, SEO, and GEO optimisation. The initial setup took about four months to build. Operational costs dropped by 80%. Publishing velocity tripled. But revenue is still lower than before.

This article discusses that transformation in detail and lays out what works and what still needs fixing.

What we experienced in digital publishing is an early version of what most industries will face once AI gets better at solving their customers' needs. I think our experience will be valuable for anyone thinking about how to adapt and recalibrate in the face of dramatic technological change.

Why did we stop employing writers?

We stopped employing human writers because our audience had moved, not because AI had improved.

Revenue for our publishing businesses fell sharply in early 2024 as readers increasingly turned to AI chatbots for the online searches that used to bring them to our websites. The queries that had driven millions of visitors to our sites were being answered in a chatbot interface, with our content often cited as a source but our pages never visited.

[Chart: Traffic, revenue, and AI assistant adoption, 2020 to 2030. Organic traffic and revenue are indexed (100 = the 2020–2023 baseline) and reflect the seasonality of our publishing businesses (Q4 strongest, Q1 lowest, modest June and September peaks). AI adoption is the share of US online adults using AI assistants for search-style tasks; 2020–2025 anchored on Pew Research and reported ChatGPT user counts, 2026–2030 extrapolated from McKinsey State of AI projections. Annotations mark the ChatGPT launch and the replacement of the editorial team.]

Cutting expenses was the only way to preserve profits.

It wasn’t easy. We had worked with most of our writers for several years and valued their work and contributions. We had 15 writers on staff at the time, people we onboarded during a hiring bet in 2020 when we received over 800 applications in a single week, interviewed 30 candidates, and confirmed 15 long-term hires. That bet had paid off spectacularly. Revenue grew 10x in six months and 20x in a year. We were publishing over 30 articles per month at 95% margins.

But in 2024, the economics had changed. Revenue was falling. After trying everything we could to improve our ranking, we decided that the issue wasn’t our content. It was the fact that our audience was moving to different platforms. Our entire revenue model had to be recalibrated accordingly.

We stopped employing writers not because AI was better. We stopped because readers stopped coming to read what writers had written.

That distinction matters. This isn’t a story about AI replacing humans because AI writes better. It’s a story about human habits evolving and distribution channels collapsing. We had to move with them.

How did we build the AI infrastructure?

We began replacing our editorial team with an AI pipeline. The first version took about four months to build from the ground up. The infrastructure covers market research, writing, editorial review, SEO, and GEO, structured as a series of specialist agents with quality gates between each stage.

While we are still iterating on the workflow, one non-engineer operator now manages a pipeline that previously required an entire team to execute.

[Diagram: From 15 writers to one operator, the AI pipeline. Research agent (market + audience signals) → Writer agent (drafts against voice guide) → Editorial review (quality gate, notes loop) → SEO + GEO agent (search + AI surfaces) → Publish (one operator ships). Specialist agents with quality gates at each stage; the reviewer is the asset everything upstream learns from.]

Once we’d made the decision, the question became: what replaces a 15-person editorial operation? Not “what tool do we buy?” but “what system do we build?”

The editorial review agent took the longest to bring up to our editorial standards. As you might expect, the very first version produced generic content: the kind of AI slop that readers now spot quickly and turn away from, and not the kind of content that gets published on our websites.

Over several weeks, we fed our entire content library and the publication's style guide to the AI agents as training material. We wrote custom skills and instructions to produce articles tied to specific categories and content types. Each new article improved as we added more data and feedback to our internal models.

We also cut scope along the way. The original build plan included a dedicated fact-checking AI agent, but it added complexity without any real benefit. Instead, we made the human operator at the end of the pipeline responsible for accuracy and fact-checking. Each AI agent prepares a note listing its sources, which makes that step easier for the human operator to complete.
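To make that hand-off concrete, here's a minimal sketch of what a per-article source note and the operator's fact-check worksheet could look like. The class and field names are hypothetical illustrations, not our production schema.

```python
from dataclasses import dataclass, field

@dataclass
class SourceNote:
    """One factual claim an agent made, paired with where it came from."""
    claim: str         # the statement as it appears in the draft
    source_url: str    # where the agent found it
    retrieved_by: str  # which agent in the pipeline added the note

@dataclass
class Draft:
    """An article draft plus the source notes the agents accumulated."""
    title: str
    body: str
    source_notes: list[SourceNote] = field(default_factory=list)

def fact_check_worksheet(draft: Draft) -> str:
    """Render the notes as a checklist for the human operator to work through."""
    lines = [f"Fact-check worksheet: {draft.title}", ""]
    for i, note in enumerate(draft.source_notes, start=1):
        lines.append(f"{i}. [ ] {note.claim}")
        lines.append(f"       source: {note.source_url} (via {note.retrieved_by})")
    return "\n".join(lines)
```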

Every architectural decision came back to the same question: if we were designing this today, from zero, with AI as part of the foundation, would we build it this way?

That is the logic of Zero-Base Operations, the discipline of justifying every workflow and tool from scratch rather than layering AI onto what existed before. Applied to editorial operations, the answer looked nothing like the team it replaced.

What we created over those four months is what I now call AI-native publishing: a model where specialist agents handle research, drafting, editorial review, SEO, and GEO by default, with human operators overseeing strategy and the final quality calls rather than doing the production work themselves.

The key architectural decision was treating it as a pipeline, not a tool.

Each stage has its own agent with specific instructions, quality gates, and feedback loops. The editorial review agent, for example, doesn’t just check grammar. It evaluates against the publication’s style guide, verifies factual claims against source material, and rejects articles that don’t meet the threshold. Rejected content goes back through the writing stage with specific notes on what needs to change. The operator can review the entire history of an article when it gets to them and decide what is worth committing to the framework’s long-term memory to further improve future outputs.
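Here's a minimal sketch of that reject-and-revise loop. The write and review functions are placeholders for the actual LLM calls; the structure (a quality gate, a notes loop, and a retained history the operator can inspect) is the point, not the stubs.

```python
from dataclasses import dataclass, field

MAX_REVISIONS = 3  # assumption: cap the loop so a draft can't cycle forever

@dataclass
class Review:
    approved: bool
    notes: list[str]  # specific feedback the writer agent revises against

@dataclass
class Article:
    brief: str
    draft: str = ""
    history: list[Review] = field(default_factory=list)  # the trail the operator reviews

def write(brief: str, notes: list[str]) -> str:
    """Placeholder for the writer agent: an LLM call with the voice guide and notes."""
    return f"draft for: {brief} (addressed {len(notes)} notes)"

def review(draft: str) -> Review:
    """Placeholder for the editorial review agent: style, claims, and threshold checks."""
    ok = "(addressed 0 notes)" not in draft  # toy gate: approve once notes were addressed
    return Review(approved=ok, notes=[] if ok else ["tighten the intro to match the voice guide"])

def run_pipeline(brief: str) -> Article:
    article = Article(brief=brief)
    notes: list[str] = []
    for _ in range(MAX_REVISIONS):
        article.draft = write(article.brief, notes)
        result = review(article.draft)
        article.history.append(result)  # every rejection and its reasons are kept
        if result.approved:
            break
        notes = result.notes            # rejected drafts loop back with specific notes
    return article
```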

Each agent uses the latest proven AI models as its foundation, with custom system prompts that encode years of editorial knowledge. The voice style guides alone are 3,000+ words per publication, covering tone, vocabulary, sentence structure, and specific rules about what to include and exclude. When new AI models are released, we run them in parallel with our current models to evaluate which performs best and then switch the API to our preferred option.
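The parallel evaluation we run on new model releases might look something like this sketch. The model identifiers, the call_model stub, and the scoring function are all placeholders, not a real provider API; in practice the grading happens against our editorial rubric.

```python
CANDIDATES = ["current-model", "new-model-a", "new-model-b"]  # hypothetical identifiers

def call_model(model: str, prompt: str) -> str:
    """Stand-in for the provider API call an agent makes."""
    return f"[{model}] {prompt}"

def score(output: str, rubric: str) -> float:
    """Stand-in for editorial grading against the rubric."""
    return float(len(output))  # placeholder metric only

def pick_model(eval_prompts: list[str], rubric: str) -> str:
    """Run every candidate over the same eval set and keep the best performer."""
    totals = {model: 0.0 for model in CANDIDATES}
    for prompt in eval_prompts:
        for model in CANDIDATES:
            totals[model] += score(call_model(model, prompt), rubric)
    return max(totals, key=totals.get)

# Switching is then a single configuration change, not a pipeline rebuild:
WRITER_MODEL = pick_model(["draft a buying guide", "draft a product roundup"], "voice guide")
```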

A single operator can now manage what previously required a team of 15.

Did the rebuild actually work?

The honest numbers: operational costs for our publishing businesses are down by 80%, publishing velocity is three to four times higher when we need it to be, but revenue is still lower than before the 2024 decline began. While costs improved dramatically, revenue has not recovered.

| | Before (2020–2024) | After (2024 onwards) |
| --- | --- | --- |
| Team | 15 writers + editors | Small team of operators |
| Monthly output | 30+ articles | 3–4x capacity when needed |
| Operational cost | High editorial payroll | Down 80% |
| Writing | Human-authored | AI-drafted, quality-gated |
| Style enforcement | Editorial oversight | 3,000+ word encoded prompts |
| Revenue | Growing | Still rebuilding |
| Primary bottleneck | Writing throughput | Human time (visuals + review) |

What improved:

  • Operational cost: Down by 80%. The infrastructure costs a fraction of the editorial team payroll.
  • Publishing velocity: We can produce 3–4x the content volume when we need to, though volume is rarely the goal.
  • Consistency: The AI agents follow style guides more reliably than any human team. Every article goes through the same quality gates.
  • Coverage: We can now cover topics that weren’t economically viable with human writers. The marginal cost of an additional article is near zero.

What didn’t improve, or got harder:

  • Original reporting: AI can’t do interviews, attend events, or build source relationships. For publications that depend on original reporting, the entire model doesn’t work. We need additional humans in the loop.
  • Voice distinctiveness: Even with 3,000-word style guides, the output tends toward a median. Getting genuine personality into AI-generated content still requires human editing.
  • The bottleneck shift — the phenomenon where compressing one workflow stage moves the constraint to the next rather than removing it — played out here directly. AI compressed the writing stage but didn’t remove the human review requirement. We now have hundreds of articles ready to publish across Worthbury and a stealth e-commerce brand on Shopify. The constraint is no longer writing. It’s visual creation and human-in-the-loop review for brand safety. The bottleneck moved. It didn’t disappear. I’ve gone into the mechanics of this in The bottleneck shift: why AI in publishing is now a human-time problem.
  • Revenue: This is the big one. The traffic decline that forced the transformation hasn’t reversed. We’re producing better content more efficiently, and we’ve successfully protected our margins, but the audience is still migrating to AI interfaces.

The honest conclusion is that we've fundamentally reimagined the operating model, but we haven't solved the revenue problem yet. Beyond the operating model, this is now a business-model question: what is the future of online publishing when users are gated by LLM companies and AI agents are trained on content scraped from the web without compensating the sources for their work and research? I don't have the answer yet.

What would I do differently?

Three decisions I’d make differently: build the AI infrastructure before revenue pressure forces it, design the quality gates before the writing agent, and plan visual production before writing stops being the bottleneck. The pattern across all three is the same. What isn’t a constraint today becomes one the moment AI removes the one that was.

If I were starting today, three things would change.

Start before revenue forces you. We integrated AI into our workflows defensively, under financial pressure and on a short timeline. It worked because we're a small, agile team; the same probably wouldn't be true for a large organisation. Build the AI infrastructure while revenue is stable. The learning curve is steep, and you want to climb it without the clock running.

Design the quality gates before the writing agent. The writer agent is the easy part. Any reasonable system prompt gets you to a decent first draft. What separates a great AI publishing pipeline from one that produces noise at scale is the editorial review stage. We spent a lot of time on the writer and not enough, initially, on the reviewer. The investment that paid off most was the opposite of that.

We underestimated how much the quality gates teach the rest of the pipeline over time. Every draft the editorial agent rejects with clear notes sharpens the next run. Systematically feeding those notes back into the brief-generation and voice-guide stages is what closes the gap between decent output and output that actually sounds like the publication and is ready to publish.
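As a sketch, promoting a rejection note into long-term memory and prepending it to future briefs could be as simple as the snippet below. PipelineMemory and its methods are illustrative names, not our actual framework.

```python
from dataclasses import dataclass, field

@dataclass
class PipelineMemory:
    """Long-term lessons the operator promotes from reviewer rejection notes."""
    lessons: list[str] = field(default_factory=list)

    def commit(self, note: str) -> None:
        """Keep a reviewer note the operator judged worth remembering."""
        if note not in self.lessons:  # avoid stacking duplicates
            self.lessons.append(note)

    def amend_brief(self, brief: str) -> str:
        """Prepend accumulated lessons so the writer agent sees them on every run."""
        if not self.lessons:
            return brief
        guidance = "\n".join(f"- {lesson}" for lesson in self.lessons)
        return f"{brief}\n\nLessons from past reviews:\n{guidance}"

memory = PipelineMemory()
memory.commit("never open with a rhetorical question")  # promoted from a rejection note
print(memory.amend_brief("brief: best travel tripods, buying-guide format"))
```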

Plan for visual production before you need it. When writing is the bottleneck, visual production is barely an issue: by the time a story is ready, the photos and visuals are too. But once AI removes writing as the constraint, every article still needs images, and a visual workflow designed for the old cadence won't scale with the new one. This is the ceiling we're working against now across several of our properties.

What I'd do differently: Zero-Base Operations

Three principles I'd lock in before writing a line of code or hiring a first reviewer.

  1. Start before pressure.

    Build the AI infrastructure while revenue is stable, not while it's falling. A zero-base operations read of the org, done early, beats a defensive rebuild under pressure.

  2. Quality gates first.

    The reviewer is the asset. Everything upstream improves at the rate the reviewer gets better.

  3. Plan for visuals early.

    Once AI removes writing as the constraint, visual production becomes the new ceiling.

The AI transformation isn’t done when AI can write. It’s done when the entire human workflow has been rebuilt around what AI has changed.

With everyone having access to the same models and tools, differentiation comes down to what unique data and perspective you feed in. The tools are commoditised. The application isn’t.

I’m still figuring this out, but I’m now confident that we’ve taken steps in the right direction. Get in touch if that’s a problem you’re working on, or if you want help fine-tuning the organisational changes AI requires.

I’ll keep writing about this topic as the picture gets clearer. The org-level argument for why most publishers are solving the wrong problem is in a companion piece: Most media publishers are solving the wrong AI problem.


The byline

Simon Beauloye

Twenty years building digital businesses globally. A decade at Google. An $80M+ media portfolio bootstrapped without VC. Now rebuilding with AI at the core, and writing about what works, what doesn't, and what nobody talks about.