# We rebuilt our publishing operation around AI agents. Here's what actually happened.

Source: https://simonbeauloye.com/writing/ai-publishing/ai-rebuild-retrospective/
Published: 2026-04-13
Pillar: ai-publishing
Author: Simon Beauloye (https://simonbeauloye.com)
License: CC-BY-4.0 (attribution required)
Cite as: Simon Beauloye, "We rebuilt our publishing operation around AI agents. Here's what actually happened.", https://simonbeauloye.com/writing/ai-publishing/ai-rebuild-retrospective/
AI-use policy: https://simonbeauloye.com/ai-policy.txt

> When readers moved to AI chatbots for answers, we stopped employing writers and rebuilt our entire operation around AI infrastructure. Here's what worked, what didn't, and what transformation looks like from the inside.

## Takeaways

- The transformation was defensive, not aspirational. Revenue fell when readers moved to AI chatbots, and we had to cut expenses to preserve margins.
- The rebuild is a pipeline of specialist agents with quality gates, not a single tool. Research, outline, writer, editor, SEO: each stage has its own agent and its own standards.
- Operational costs are down 80% and publishing velocity is three to four times higher when needed. One operator now manages what previously required a team of 15.
- Revenue is still lower than before. Traffic hasn't returned to prior peaks. While AI orchestration helped protect our margins, users' shift to AI chatbots is affecting the entire business model of online publishing.
- Tools and software are commoditised. Differentiation now comes from unique data, domain expertise, and how individuals apply AI, not from the AI itself.
## Signals

- Claim: 15 writers on staff before the 2024 AI rebuild
  Year: 2024
  Source: mOOnshot digital operating data
- Claim: 800 resumes reviewed in one week during the 2020 COVID hiring bet
  Year: 2020
  Source: mOOnshot digital operating data
- Claim: Revenue 10x in 6 months and 20x in one year following the 2020 hiring bet
  Year: 2020
  Source: mOOnshot digital operating data
- Claim: 30 or more articles per month at 95% editorial margins
  Year: 2020
  Source: mOOnshot digital operating data
- Claim: Operational costs down 80% following the AI infrastructure rebuild
  Year: 2024
  Source: mOOnshot digital operating data
- Claim: 3 to 4 times publishing velocity achievable when needed post-AI rebuild
  Year: 2024
  Source: mOOnshot digital operating data
- Claim: 3,000 or more words per publication for voice style guides in the AI pipeline
  Year: 2024
  Source: mOOnshot digital operating data
- Claim: Initial AI publishing pipeline to replace a 15-person editorial team built in roughly four months, with ongoing iteration since
  Year: 2024
  Source: mOOnshot digital operating data

## Citations

- https://www.pewresearch.org/short-reads/2025/06/25/34-of-us-adults-have-used-chatgpt-about-double-the-share-in-2023/
- https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai
- https://openai.com/index/how-people-are-using-chatgpt/

## Article

We stopped employing fifteen writers in early 2024. Not because AI was better, but because readers stopped coming in volumes that justified the editorial investment. Since then, we have completely reshaped our publishing operation around AI agents covering research, writing, editorial review, SEO, and [GEO optimisation](/glossary/geo/). The initial setup took about four months to build. Operational costs dropped by 80%. Publishing velocity tripled. But revenue is still lower than before. This article discusses that transformation in detail and lays out what works and what still needs fixing.
What we experienced in digital publishing is at the forefront of what most industries will experience at some point when AI gets better at solving their customers' needs. I think our experience will be valuable for anyone thinking about how to adapt and recalibrate in the face of dramatic technological change.

## Why did we stop *employing writers?*

We stopped employing human writers because our audience had moved, not because AI had improved. Revenue for our publishing businesses fell sharply in early 2024 as readers increasingly turned to AI chatbots for the online searches that used to bring them to our websites. The queries that had driven millions of visitors to our sites were being answered in a chatbot interface, with our content often cited as a source but our pages never visited. Cutting expenses was the only way to preserve profits.

**It wasn't easy.** We had worked with most of our writers for several years and valued their work and contributions. We had 15 writers on staff at the time, people we onboarded during a hiring bet in 2020 when we received over 800 applications in a single week, interviewed 30 candidates, and confirmed 15 long-term hires. That bet had paid off spectacularly. Revenue grew 10x in six months and 20x in a year. We were publishing over 30 articles per month at 95% margins.

But in 2024, the economics had changed. Revenue was falling. After trying everything we could to improve our rankings, we concluded that the issue wasn't our content. It was the fact that our audience was moving to different platforms. Our entire revenue model had to be recalibrated accordingly.

We stopped employing writers not because AI was better. We stopped because readers stopped coming to read what writers had written. That distinction matters. This isn't a story about AI replacing humans because AI writes better. It's a story about human habits evolving and distribution channels collapsing. We had to move with them.
## How did we build the *AI infrastructure?*

We began replacing our editorial team with an AI pipeline. The first version took about four months to build from the ground up. The infrastructure covers market research, writing, editorial review, SEO, and GEO, structured as a series of specialist agents with quality gates between each stage. While we are still iterating on the workflow, one [non-engineer operator](/glossary/non-engineer-operator/) now manages a pipeline that previously required an entire team to execute.

Once we'd made the decision, the question became: what replaces a 15-person editorial operation? Not *"what tool do we buy?"* but *"what system do we build?"*

The editorial review agent took the longest to build to a standard that met our editorial bar. As you might expect, the very first version produced generic content: AI slop that readers now quickly flag and turn away from, and not the kind of content that gets published on our websites. Over several weeks, we used our entire library of content and the publication's style guide as training material for our AI agents. We wrote custom skills and instructions to produce articles tied to specific categories and content types. Each new article improved as we fed more data and feedback into our internal models.

We also cut scope along the way. The original build plan included a dedicated fact-checking AI agent, but it added complexity without any real benefit. Instead, we made the human operator at the end of the pipeline responsible for accuracy and fact-checking. Each AI agent prepares a note listing its sources to make that step easier for the human operator to complete.

Every architectural decision came back to the same question: if we were designing this today, from zero, with AI as part of the foundation, would we build it this way?
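To make that hand-off concrete, the source-note convention can be sketched as a small data contract. This is an illustrative sketch only: `AgentOutput`, `fact_check_note`, and the field names are hypothetical, not the names used in our actual system.

```python
from dataclasses import dataclass, field

@dataclass
class AgentOutput:
    """What each pipeline agent hands to the next stage (hypothetical shape)."""
    stage: str                  # e.g. "research", "writer", "editor"
    content: str                # the draft or revision itself
    sources: list[str] = field(default_factory=list)  # URLs the agent relied on

def fact_check_note(outputs: list[AgentOutput]) -> str:
    """Collate every agent's sources into one note for the human operator."""
    lines = []
    for out in outputs:
        for url in out.sources:
            lines.append(f"[{out.stage}] {url}")
    return "\n".join(lines)
```

The point of the contract is that accuracy stays a human responsibility: the agents don't verify, they only surface what they used, so the operator's fact-check at the end of the pipeline starts from a complete source list rather than a blank page.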
That is the logic of [Zero-Base Operations](/glossary/zero-base-operations/), the discipline of justifying every workflow and tool from scratch rather than layering AI onto what existed before. Applied to editorial operations, the answer looked nothing like the team it replaced.

What we created over those four months is what I now call [AI-native publishing](/glossary/ai-native-publishing/): a model where specialist agents handle research, drafting, editorial review, SEO, and GEO by default, with human operators overseeing strategy and the final quality calls rather than doing the production work themselves.

The key architectural decision was **treating it as a pipeline, not a tool**. Each stage has its own agent with specific instructions, quality gates, and feedback loops. The editorial review agent, for example, doesn't just check grammar. It evaluates against the publication's style guide, verifies factual claims against source material, and rejects articles that don't meet the threshold. Rejected content goes back through the writing stage with specific notes on what needs to change. The operator can review the entire history of an article when it reaches them and decide what is worth committing to the framework's long-term memory to improve future outputs.

Each agent uses the latest proven AI models as its foundation, with custom system prompts that encode years of editorial knowledge. The voice style guides alone are 3,000+ words per publication, covering tone, vocabulary, sentence structure, and specific rules about what to include and exclude. When new AI models are released, we run them in parallel with our current models, evaluate which performs best, and switch the API to the preferred option. A single operator can now manage what previously required a team of 15.
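A minimal sketch of that pipeline-and-gates shape, in Python. Everything here is hypothetical and stubbed: real stages would call LLM APIs with the system prompts described above, and the review gate would evaluate against the style guide and source material rather than the toy length check used here.

```python
from typing import Callable

# Each stage is an agent: draft in, revised draft out. Stubbed for illustration.
Stage = Callable[[str], str]

def review(draft: str) -> tuple[bool, str]:
    """Editorial gate: approve, or reject with notes.
    Stubbed as a length check; the real gate checks style, voice, and claims."""
    if len(draft) < 20:
        return False, "Too thin: expand with specifics from the brief."
    return True, ""

def run_pipeline(brief: str, stages: dict[str, Stage], max_revisions: int = 3) -> str:
    # Run every specialist stage in order (dicts preserve insertion order).
    draft = brief
    for stage in stages.values():
        draft = stage(draft)
    # Quality gate: rejected drafts loop back through the writer with notes
    # attached, so each revision is guided rather than a blind retry.
    for _ in range(max_revisions):
        ok, notes = review(draft)
        if ok:
            return draft
        draft = stages["writer"](draft + "\n[editor notes] " + notes)
    # Drafts that never clear the gate escalate to the human operator.
    raise RuntimeError("Draft failed editorial review; escalate to human operator.")
```

The loop captures the two properties described above: rejection always comes with actionable notes, and the gate, not the writer, decides when a piece is done.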
## Did the rebuild *actually work?*

The honest numbers: operational costs for our publishing businesses are down by 80%, publishing velocity is three to four times higher when we need it to be, but revenue is still lower than before the 2024 decline began. While costs improved dramatically, revenue has not recovered.

| | Before (2020–2024) | After (2024 onwards) |
|---|---|---|
| **Team** | 15 writers + editors | Small team of operators |
| **Monthly output** | 30+ articles | 3–4x capacity when needed |
| **Operational cost** | High editorial payroll | Down 80% |
| **Writing** | Human-authored | AI-drafted, quality-gated |
| **Style enforcement** | Editorial oversight | 3,000+ word encoded prompts |
| **Revenue** | Growing | Still rebuilding |
| **Primary bottleneck** | Writing throughput | Human time (visuals + review) |

What improved:

- **Operational cost:** Down by 80%. The infrastructure costs a fraction of the editorial team's payroll.
- **Publishing velocity:** We can produce 3–4x the content volume when we need to, though volume is rarely the goal.
- **Consistency:** The AI agents follow style guides more reliably than any human team. Every article goes through the same quality gates.
- **Coverage:** We can now cover topics that weren't economically viable with human writers. The marginal cost of an additional article is near zero.

What didn't improve, or got harder:

- **Original reporting:** AI can't do interviews, attend events, or build source relationships. For publications that depend on original reporting, the model doesn't work on its own; additional humans are needed in the loop.
- **Voice distinctiveness:** Even with 3,000-word style guides, the output tends toward a median. Getting genuine personality into AI-generated content still requires human editing.
- **[The bottleneck shift](/glossary/bottleneck-shift/):** the phenomenon where compressing one workflow stage moves the constraint to the next rather than removing it. It played out here directly.
AI compressed the writing stage but didn't remove the human review requirement. We now have hundreds of articles ready to publish across [Worthbury](https://worthbury.com) and a stealth e-commerce brand on Shopify. The constraint is no longer writing; it's visual creation and human-in-the-loop review for brand safety. The bottleneck moved. It didn't disappear. I've gone into the mechanics of this in [The bottleneck shift: why AI in publishing is now a human-time problem](/writing/ai-publishing/ai-publishing-bottleneck-shift/).

**Revenue:** This is the big one. The traffic decline that forced the transformation hasn't reversed. We're producing better content more efficiently, and we've protected our margins, but the audience is still migrating to AI interfaces.

The honest conclusion is that we've fundamentally reimagined the operating model, but we haven't solved the revenue problem yet. Beyond our operating model, this is now a business model question: what is the future of online publishing when users are gated by LLM companies, and AI agents are trained on content scraped from the web without compensating the sources for their work and research? I don't have the answer yet.

## What would I *do differently?*

Three decisions I'd make differently: build the AI infrastructure before revenue pressure forces it, design the quality gates before the writing agent, and plan visual production before writing stops being the bottleneck. The pattern across all three is the same: what isn't a constraint today becomes one the moment AI removes the one that was.

If I were starting today, three things would change.

**Start before revenue forces you.** We integrated AI into our workflows defensively, under financial pressure and on a short timeline. It worked because we're a small, agile team; the same probably wouldn't hold for a large organisation. Build the AI infrastructure while revenue is still stable.
The learning curve is steep. You want to climb it without the clock running.

**Design the quality gates before the writing agent.** The writer agent is the easy part. Any reasonable system prompt gets you to a decent first draft. What separates a great AI publishing pipeline from one that produces noise at scale is the editorial review stage. We spent a lot of time on the writer and, initially, not enough on the reviewer. The investment that paid off most was the opposite. We underestimated how much the quality gates teach the rest of the pipeline over time. Every draft the editorial agent rejects with clear notes and feedback helps refine the next run. Systematically feeding those notes back into the brief-generation and voice-guide stages is what closes the gap between decent output and output that actually sounds like the publication and is ready to publish.

**Plan for visual production before you need it.** When writing is the bottleneck, visual production is barely an issue. By the time a story is ready, our photos and visuals are too. But once AI removes writing as the constraint, every article still needs images, and your visual workflow wasn't designed to scale with that. This is the ceiling we're working against now across several of our properties.

The three lessons, in short:

1. **Start early.** A zero-base operations read of the org, done early, beats a defensive rebuild under pressure.
2. **Quality gates first.** The reviewer is the asset. Everything upstream improves at the rate the reviewer gets better.
3. **Plan for visuals early.** Once AI removes writing as the constraint, visual production becomes the new ceiling.

The AI transformation isn't done when AI can write. It's done when the entire human workflow has been rebuilt around what AI has changed. With everyone having access to the same models and tools, differentiation comes down to what [unique data and perspective](/glossary/data-as-moat/) you feed in.
The tools are commoditised. The application isn't. I'm still figuring this out, but I'm now confident that we've taken steps in the right direction.

**[Get in touch](/contact/)** if that's a problem you're working on, or if you want help fine-tuning the organisational changes AI requires. I'll keep writing about this topic as the picture gets clearer.

The org-level argument for why most publishers are solving the wrong problem is in a companion piece: [Most media publishers are solving the wrong AI problem](/writing/future-media/ai-restructures-publishing/).

## See also

- [Site index](https://simonbeauloye.com/llms.txt)
- [Full corpus](https://simonbeauloye.com/llms-full.txt)
- [Pillar index (ai-publishing)](https://simonbeauloye.com/llms/ai-publishing/llms.txt)
- [Pillar hub (ai-publishing)](https://simonbeauloye.com/writing/ai-publishing/)
- [AI-use policy](https://simonbeauloye.com/ai-policy.txt)

### Related essays

- [The bottleneck shift: why AI in publishing is now a human-time problem](https://simonbeauloye.com/writing/ai-publishing/ai-publishing-bottleneck-shift/)
- [Most media publishers are solving the wrong AI problem](https://simonbeauloye.com/writing/future-media/ai-restructures-publishing/)
- [Zero-Base Operations: How to build a bootstrapped business with AI as the operating system](https://simonbeauloye.com/writing/bootstrapping/zero-base-operations/)

### Glossary terms referenced

- [AI-native publishing](https://simonbeauloye.com/glossary/ai-native-publishing/) — A publishing operating model where AI agents handle research, drafting, editorial review, SEO/GEO, and programming as default, with human operators overseeing strategy and judgement calls.
- [Generative Engine Optimisation (GEO)](https://simonbeauloye.com/glossary/geo/) — The practice of structuring a website so AI answer engines (ChatGPT, Claude, Perplexity, Google AI Overviews) can ingest, ground, and cite its content reliably.
- [Zero-Base Operations](https://simonbeauloye.com/glossary/zero-base-operations/) — Simon Beauloye's framework for building businesses by justifying every process, tool, and hire from zero, with AI as the foundation rather than an add-on.
- [The bottleneck shift](https://simonbeauloye.com/glossary/bottleneck-shift/) — Simon Beauloye's framing for what happens after AI compresses one stage of a workflow: the constraint doesn't disappear, it moves.
- [The non-engineer operator](https://simonbeauloye.com/glossary/non-engineer-operator/) — The persona of someone who ships production software without an engineering background, using AI-assisted development tools to build the systems they would previously have hired engineers to build.