The content atom pipeline: one article feeds every platform

TL;DR

Every blog post gets distilled into 6–10 standalone "atoms" stored in a Notion database. A content agent builds platform-native carousels and captions from those atoms, a validator enforces the slide template's character limits before anything is generated, and every published piece traces back to its source atom.
Many content teams I have observed are solving the wrong problem.

They have a blog post. They want social content. So they extract a quote, post it on LinkedIn, screenshot it for Instagram, and call that a repurposing strategy. Each derivative loses a bit of the original argument. The Instagram post is a weaker version of the LinkedIn post, which is a weaker version of the article. Nobody notices because the audience differs on each platform. But the internal logic is gone.

The underlying issue is that there is no canonical unit of content. The blog post is too long to use directly. The quote is too thin to build from. Everything in between is a judgment call made at speed, and judgment calls made at speed produce inconsistency.

I solved this by building what I call a content atom pipeline.

An atom is a distilled insight from a long-form piece — a specific claim, decision, or proof point that can stand alone. “At ALTHERR, we replaced two marketing roles with N8N workflows in December. Headcount dropped. Output held.” That is an atom. It contains a subject, a claim, and a proof. It is not a quote; it is a compressed argument.

Every blog post I publish now gets run through an extraction step. Gemini 2.5 Flash reads the article and identifies 6–10 atoms. Each atom lands in a Notion database with metadata: source post, type (data point / provocation / decision / mechanism), platform fit, and status. That database is the source of truth for all content planning.
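The shape of an atom can be sketched as a small record. The field names below mirror the metadata listed above, but they are illustrative; the post describes the Notion properties, not an exact schema.

```python
from dataclasses import dataclass, field
from typing import List

# One extracted atom plus its Notion metadata. Field names are
# illustrative -- the actual database schema is not shown in the post.
@dataclass
class Atom:
    text: str                 # the compressed argument itself
    source_post: str          # link back to the long-form article
    type: str                 # "data point" | "provocation" | "decision" | "mechanism"
    platform_fit: List[str] = field(default_factory=list)
    status: str = "extracted"

# The example atom from the post, as a record (placeholder URL).
atom = Atom(
    text=(
        "We replaced two marketing roles with N8N workflows in December. "
        "Headcount dropped. Output held."
    ),
    source_post="https://example.com/blog/n8n-workflows",
    type="data point",
    platform_fit=["linkedin", "instagram"],
)
```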

How the pipeline works

When Gary — my content agent, running on OpenClaw — picks an atom to build from, the first step is always reading the full source article. The atom is the entry point, not the brief. A good content brief without the surrounding context produces thin copy. The article carries the detail, the failure, the specific reasoning that makes a post worth reading.

From there, the brief has seven fields: hook, tension label, tension body, proof label, proof body, reframe, CTA headline. Before any slide gets generated, a validator runs against the actual character limits derived from the Google Slides template — font sizes and box dimensions measured in EMUs.

The limits are not arbitrary. TENSION_BODY has a 160-character ceiling at 28pt in a 3.28-inch box. HOOK and REFRAME have 80 characters at 50pt. If the brief is over, the generator stops and reports the exact overage, field by field.
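The validator can be sketched in a few lines. The 160-character TENSION_BODY ceiling and the 80-character HOOK and REFRAME ceilings come straight from the post, and PROOF_BODY's 160 follows from the post's own numbers (307 characters, 147 over); the label and CTA limits are assumptions for illustration.

```python
# Character ceilings derived from the slide template. tension_body,
# hook, and reframe come from the post; proof_body is implied by the
# post's overage numbers; the rest are assumed for illustration.
LIMITS = {
    "hook": 80,
    "tension_label": 40,   # assumed
    "tension_body": 160,
    "proof_label": 40,     # assumed
    "proof_body": 160,
    "reframe": 80,
    "cta_headline": 60,    # assumed
}

def validate_brief(brief: dict) -> dict:
    """Return {field: chars_over_limit} for every field over its ceiling."""
    return {
        name: len(text) - LIMITS[name]
        for name, text in brief.items()
        if len(text) > LIMITS.get(name, float("inf"))
    }
```

A brief whose tension_body runs 253 characters and proof_body 307 comes back as `{"tension_body": 93, "proof_body": 147}`, and the generator can report exactly that, field by field.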

The validator caught a real problem on the first run. TENSION_BODY came in at 253 characters, 93 over the limit; PROOF_BODY was 307, 147 over. Both fields were full paragraphs that needed to compress to two or three short sentences. Google Slides would have auto-shrunk both and broken the design. The validator flagged the overages before a single API call hit the presentation, not after.

The double-click principle

The rule I care most about: slides and caption are two different information layers.

Slides stand alone. Someone who never reads the caption gets the full idea. The hook stops the scroll, the tension frames the problem, the proof is specific and named, the reframe lands the insight.

The caption carries what the slides do not show. Not a written summary of what is already visible. The story behind the decision, the failure that preceded the success, the context that makes the proof meaningful. Someone who swipes through the carousel and then reads the caption should learn something new.

Many content teams do the opposite. They write the slides, then summarize them in the caption. The result is redundant — people who read captions stop reading because they already saw the point.

LinkedIn and Instagram also get separate copy. Different platform, different character range, different hashtag logic, different tone. The atom is the shared idea. The execution is platform-native.
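A per-platform rule set might look like the following. Every value here is an assumption for illustration; the post says only that the platforms differ in character range, hashtag logic, and tone.

```python
# Hypothetical per-platform copy rules -- all values are assumptions,
# not the post's actual settings. The atom stays the shared idea;
# these rules shape the platform-native execution.
PLATFORM_RULES = {
    "linkedin": {
        "caption_chars": (900, 1300),        # assumed target range
        "max_hashtags": 3,                   # assumed
        "tone": "first-person, analytical",  # assumed
    },
    "instagram": {
        "caption_chars": (500, 900),         # assumed target range
        "max_hashtags": 8,                   # assumed
        "tone": "direct, punchy",            # assumed
    },
}
```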

The review flow

When Gary finishes a carousel, he builds a review page with a swipeable slide preview — the same swipe mechanic as LinkedIn and Instagram. He sends a link via Telegram. Not the content itself — the link. I click through, swipe the slides, read both captions, and approve or request a revision.

On approval, the Netlify function publishes directly to LinkedIn and Instagram via their respective APIs. A Content Library entry is created in Notion linking the published piece back to the source atom. Over time, that data shows which argument types perform best on which formats — not just whether a post performed, but whether this type of insight works on this type of format.
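The approval step could be sketched as below. Every helper name (publish_linkedin, publish_instagram, create_library_entry) is a hypothetical stand-in; the post names the platform APIs and the Notion Content Library but shows no code.

```python
# Sketch of the approval handler, under assumed names. The two publish
# functions are stand-ins for the real API calls inside the Netlify
# function; the library list stands in for the Notion database.
def publish_linkedin(slides: list, caption: str) -> str:
    return "li-post-placeholder"   # would return the LinkedIn post id

def publish_instagram(slides: list, caption: str) -> str:
    return "ig-post-placeholder"   # would return the Instagram media id

library = []  # stand-in for the Notion Content Library database

def create_library_entry(atom_id: str, post_ids: dict) -> None:
    # Link the published piece back to its source atom, so performance
    # data stays traceable per atom and per format.
    library.append({"atom_id": atom_id, "post_ids": post_ids})

def on_approval(carousel: dict) -> dict:
    post_ids = {
        "linkedin": publish_linkedin(
            carousel["slides"], carousel["caption_linkedin"]
        ),
        "instagram": publish_instagram(
            carousel["slides"], carousel["caption_instagram"]
        ),
    }
    create_library_entry(carousel["atom_id"], post_ids)
    return post_ids
```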

What this solves

The alternative — generating content without a source of truth — produces output that degrades on every iteration. By the time an idea has been through a LinkedIn post, an Instagram caption, and a carousel brief written from scratch, the original argument is mostly gone.

The atom preserves it. Every piece traces back. The paper trail makes the data meaningful.

This is still early — 46 atoms extracted from 5 posts, two carousels generated, one approval flow end-to-end tested. But the architecture is there and the logic holds.

If you want to understand where your content operation sits on the automation curve, the AI Readiness Assessment shows which parts of your marketing are candidates for this kind of system. Takes three minutes.




→ Take the AI Readiness Assessment

