The bait, then the rug-pull.
Nate Herk opens with the promise stated as a fait accompli: Higgsfield has every state-of-the-art image and video model, Claude knows how to talk to it, and together they ideate and generate 'a hundred times faster than the average human could.' Then he immediately cuts to a wall of finished ads — headphone halos, sleep-supplement bottles, hyper-motion launch videos — generated 'in literally five minutes with one prompt.' Promise + proof, in the first thirty seconds.
What the video promised.
Stated at 00:12: “When we combine these tools, we're able to actually scale up our content because we can ideate and we can generate a hundred times faster than the average human could. So in today's video, I'm gonna show you guys how we're able to turn Claude into a creative agency.” Delivered at 31:00.
Where the time goes.

01 · Cold open + promise wall
Promise stated (Claude + Higgsfield = a creative agency at scale) and backed by a wall of five-minute outputs: Murmur headphone ads, Sleep Support bottle ads, hyper-motion videos. Sets the bait: 'all of this from one prompt.'

02 · Connect Higgsfield to Claude.ai (custom connector / MCP)
Walks through claude.ai → Settings → Connectors → Add custom connector. Paste the Higgsfield MCP URL, OAuth in, set permissions. Demonstrates the connector is live and ready to prompt against.

03 · One-prompt brand generation in Claude.ai
Single prompt: 'Build me a headphone brand from scratch — do research, branding, catalog, and for each product generate a product photo, IG ad, and UGC video via Higgsfield MCP.' Claude returns a brand called Murmur with positioning, target buyer, voice, visual identity, and three SKUs (over-ear Halo, wireless earbuds Drift, open-back wired).

04 · Review the auto-generated catalog + iterate inline
Walks the Halo / Drift / open-back outputs: product photos, Instagram ads, UGC videos. Shows how to fix mistakes by replying in-thread ('two headers — remove one'), how 'Animate' takes the same prompt graph into a video, and what the video model gets right (realism) vs wrong (duplicated text).

05 · Marketing Studio: hyper-motion launch video
Asks Claude to use Higgsfield's Marketing Studio to make a launch video for the Halo. First pass is too quiet/intimate; second pass with 'hyper-motion variant, 16x9, more engaging' lands the cinematic ad seen in the intro. Brief detour about a 'sensitive content' block and how to debug the prompt with Claude.

06 · Second product run: Sleep Support from a reference image
Drops in an existing product photo (blue Sleep Support bottle), asks for Instagram-ready ads. First pass loses the on-bottle text — lesson surfaced: 'be more specific about telling it not to change the reference image.' Iterates to ads with headlines like 'Asleep in 12 minutes' and 'Stop counting sheep — start sleeping through the night.'

07 · Pivot to Claude Code (desktop) for power features
Why move off claude.ai: more control, reusable skills, true automations. Important nuance: this is the Claude Code desktop app, not the CLI terminal — it still has chat + projects, with skills, files, and routines layered on top. Sets the scene for the Claude Code build-out.

08 · Install the Higgsfield CLI + agent skills
Open a blank folder ('Higgsfield Studio'). Grab three commands from Higgsfield > MCP & CLI: install CLI, run hf auth (browser OAuth), install Higgsfield agent skills. Paste all three into Claude Code in one prompt, let it run. Side-explainer: CLI > MCP for token cost and agent efficiency.

09 · Bring in outside expertise as a research markdown
Pre-built advertising_masterclass.md (617 lines) via a deep-research prompt: best organic ad strategies for 2026 across TikTok / Meta / X, what captures attention, what converts, platform differences. Lives in the project so agents reference it when ideating. Joe-relevant: 'utilize other people's expertise' — swipe-file thinking.

10 · Master Sheet: log every Higgsfield generation via GWS CLI
Asks Claude to read all 45 assets from the Higgsfield account and write them into a Google Sheet (tabs: generations, by product, by style, planning). Builds a creative-ops database — product, style, image/video, model, prompt, status, result URL, job ID. GWS CLI is the unlock: lets the agent move across Sheets/Docs/Gmail/Calendar/Drive without MCP overhead.
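
The video drives this through the GWS CLI, whose exact invocations stay off-screen; as a rough stand-in, here is a minimal Python sketch with the gspread library that appends one generation record using the same columns. The sheet name, tab name, and sample values are assumptions, not the video's.

```python
# Minimal sketch of the master-sheet logging step, using gspread instead of
# the GWS CLI the video uses. Sheet/tab names and the sample row are assumptions.
import gspread

gc = gspread.service_account()  # reads credentials from gspread's default config path
sheet = gc.open("Higgsfield Master Sheet")  # hypothetical sheet name
generations = sheet.worksheet("generations")

# Columns mirror the schema described above:
# product, style, image/video, model, prompt, status, result URL, job ID
generations.append_row([
    "Sleep Support",                     # product
    "pattern interrupt",                 # style
    "image",                             # image/video
    "nano-banana-2",                     # model
    "Bottle on a nightstand, headline 'Asleep in 12 minutes'",  # prompt
    "complete",                          # status
    "https://example.com/result.png",    # result URL (placeholder)
    "job-001",                           # job ID (placeholder)
])
```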

11 · Test matrix: 30+ variants ideated from masterclass + data
@-tags advertising_masterclass.md and the existing generations, asks Claude to mix-and-match variables (header, style, content type) into testable variants. Sheet gets a new 'creative slate' tab with priority-ranked ideas across products. Frames the philosophy: 'we’re not the bottleneck on creativity or production anymore.'
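
The mix-and-match step is a cartesian product over creative variables. A toy Python illustration (the variable values are examples, not the video's list):

```python
# Toy illustration of the test-matrix idea: cross creative variables into
# testable variants. The specific values here are examples, not the video's.
from itertools import product

headers = ["Asleep in 12 minutes", "Stop counting sheep"]
styles = ["curiosity", "contrarian", "pattern interrupt", "stat flash"]
content_types = ["product photo", "IG ad", "UGC video"]

variants = [
    {"header": h, "style": s, "content_type": c}
    for h, s, c in product(headers, styles, content_types)
]
print(len(variants), "variants")  # 2 x 4 x 3 = 24 rows for the creative-slate tab
```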

12 · Generate rows 3–7 + add status tracking
'Create prompts for rows 3-7, add a status column, generate them in Higgsfield, then mark them complete on the sheet.' Sponsor break for Glido (Nate is joining the Glido team, switched from Whisper). When the batch returns, the Sleep Support bottle drifts off-brand because the reference wasn’t locked.

13 · Lock the brand asset + regenerate
Drags the canonical Sleep Support product image directly into Claude Code: 'every advertisement must show the product exactly like this — same color, same text, don’t change anything.' Regenerates. New batch comes back consistent and on-brand, mixing nano-banana-2 and gpt-image-2 across angles (curiosity, contrarian, pattern interrupt, stat flash).

14 · Skills: reverse-engineer a recipe from a winning prompt
Definition: a skill is a recipe for an agent (the pancake analogy). Workflow: find a generation you love, copy its exact prompt back into Claude, say 'turn this prompt into a skill that lives in .claude/skills so anytime I ask for a hyper-motion video you use this.' Highlights the meta-loop: bake learned constraints (e.g. words that get flagged for sensitive content) into the skill so the agent improves over time.

15 · Invoke the new skill on the locked reference
@-tags the saved Sleep Support reference plus tries /hypermotion. First run uses the wrong skill (default higgsfield_generate). Restarts the Claude Code session; new run reads the .claude/skills/HyperMotion-Video file, asks the right clarifying question ('model in the ad, UGC, or product only?'), and produces the cinematic hyper-motion clip he wanted.

16 · Output review + honest critique
Output is strong — cinematic feel, real-looking product — but the label text is mangled in image-to-video. Nate names the limitation directly: 'this is the worst AI video generation will ever be,' suggests a workaround (simpler label for hero shots), and pushes back on knee-jerk 'AI slop' framing.

17 · Routines: schedule the agency to run while you sleep
Claude Code Routines inject prompts on a cadence. The proposed pipeline: Sunday night routine — analyze the Sheet + platform data, add 50 new generation ideas. Monday morning routine — pick 30 blank-status rows, generate, mark complete. Scale to twice-weekly planning + generation. Optional final step: pipe winners into Potato or Meta Ads Manager for full auto-posting. CTA: like + watch the routines deep-dive.
Named ideas worth stealing.
MCP vs CLI for Agents
MCP exposes every tool by default — token-heavy. The CLI is the lean alternative: same capabilities, faster, cheaper, better for agents that loop. Default to CLI when wiring tools into Claude Code; reserve MCP for ad-hoc claude.ai exploration.
Skill = Recipe (pancake analogy)
A skill is to an AI agent what a recipe is to a cook: lock the inputs, ordering, and constraints so output stays consistent run-over-run. Bake in negative constraints too (words that triggered moderation flags, brand assets that must lock).
Reverse-engineer a skill from a winning prompt
- Generate 5+ variants of a creative
- Pick the 1–3 outputs you actually love
- Copy the exact prompt that produced them
- Paste back into a fresh chat and say: 'turn this into a skill in .claude/skills so anytime I ask for X, use this'
- Iterate the skill every run — tell it what you liked/didn't and have it self-update
Don’t write skills from scratch — mine them from outputs that already worked. Outputs first, recipes second.
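
For a sense of what the mined recipe might look like on disk, here is a sketch that writes one, following the video's `.claude/skills` convention. The frontmatter fields and the rule list are assumptions drawn from the hyper-motion walkthrough, not the actual file:

```python
# Sketch of saving a mined prompt as a reusable skill. The path follows the
# video's .claude/skills convention; the frontmatter fields and rule list are
# assumptions, not the file from the video.
from pathlib import Path

SKILL = """\
---
name: hypermotion-video
description: Hyper-motion launch videos via Higgsfield from a locked product reference
---
When asked for a hyper-motion video:
1. Require a reference image; keep label text, color, and shape exactly as shown.
2. Use the hyper-motion variant, 16:9, high-energy pacing.
3. Avoid words that previously tripped the 'sensitive content' block (append here as they surface).
4. Ask one clarifying question first: model in the ad, UGC, or product only?
"""

path = Path(".claude/skills/hypermotion-video.md")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(SKILL)
```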
Bring outside expertise into the project as Markdown
Don’t expect the base model to be a master copywriter. Run a deep-research prompt against Twitter threads / YouTube / Perplexity / books, save the result as `advertising_masterclass.md` in the project, then @-tag it whenever you ideate. The model now has subject-matter expertise on tap.
Master Sheet + status column = creative ops database
- Tabs: generations, by-product, by-style, planning, creative slate
- Columns: product, style, image/video, model, prompt, status, result URL, job ID
- Status drives which rows the next routine picks up
- Sheet is queryable by the agent for analysis later
One source of truth makes the rest of the automation possible. The status column is the glue — it’s what stops agents from duplicating work across routine runs.
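
Because the status column is what routines key off, the 'pick blank rows' query is worth sketching too; again gspread as a stand-in for the GWS CLI, with the tab name and column positions assumed from the schema above:

```python
# Sketch of the status-driven row picker: find the rows the next routine
# should process. Assumes the generations tab and schema described above.
import gspread

gc = gspread.service_account()
ws = gc.open("Higgsfield Master Sheet").worksheet("generations")  # names assumed

rows = ws.get_all_records()  # list of dicts keyed by the header row
todo = [
    (i, row)
    for i, row in enumerate(rows, start=2)  # start=2: 1-based rows, skip header
    if not row.get("status")
][:30]  # the Monday routine's batch size

for sheet_row, row in todo:
    # ... generate the asset for row["prompt"] via Higgsfield here ...
    ws.update_cell(sheet_row, 6, "complete")  # status is the 6th column above
```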
Routines: ideate Sunday → generate Monday
- Sunday 9pm — 'analyze sheet + platform data, add 50 new generation ideas with empty status'
- Monday 8am — 'pick 30 blank-status rows, generate prompts + assets, mark complete'
- Add Thursday/Friday as a second cycle to double throughput
- Optional: pipe completed assets to Potato or Meta Ads Manager for posting
Two cron-style prompts and you have an autonomous ad-generation loop. The cadence (Sun plan / Mon execute) prevents the agent from generating against stale ideas.
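
Routines are native to Claude Code, so no extra plumbing is needed there. As a rough stand-in on a plain machine, the same cadence could be approximated by cron-triggered calls to the Claude Code CLI's non-interactive print mode; this wrapper is an assumption, not the video's setup:

```python
# Stand-in for Claude Code Routines: fire one of the two weekly prompts
# non-interactively. Assumes the Claude Code CLI's -p (print) mode; schedule
# this script with cron or Task Scheduler on whatever cadence you want.
import subprocess
import sys

PROMPTS = {
    "plan": (      # Sunday 9pm
        "Analyze the master sheet plus platform data and add 50 new "
        "generation ideas as rows with an empty status."
    ),
    "execute": (   # Monday 8am
        "Pick 30 blank-status rows, generate prompts and assets in "
        "Higgsfield, then mark each row complete on the sheet."
    ),
}

routine = sys.argv[1] if len(sys.argv) > 1 else "plan"
subprocess.run(["claude", "-p", PROMPTS[routine]], check=True)
```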
Lines you could clip.
“We can ideate and we can generate a hundred times faster than the average human could.”
“We're pulling the lever on the slot machine, which is AI. If we don't have guidelines, if we don't have recipes — skills — then they're not gonna be super consistent.”
“A skill is essentially a recipe for an AI agent.”
“This is the worst that AI video generation models will ever be. Every day, every month, they're going to get better.”
“I could set an agent off to generate all this stuff, and then I could go to bed, and I could wake up with a hundred different ad copies and creatives ready to go.”
“We're not the bottleneck on creativity, and we're also not the bottleneck on production.”
How they spent the runtime.
- 20:33–21:05 · Glido (voice dictation — Nate disclosed he joined the team)
How they asked for the click.
“If you guys wanna dive a little bit deeper into routines, I'll tag a full video right here where I dive into how you set them up… if you guys enjoyed, please give it a like.”
Soft CTA — anchors on the next-video tag for routines and a like ask. No newsletter push, no product (other than the embedded Glido sponsor at 20:33). Trade-off: zero conversion pressure, but also no lead capture from a 35-min watch.
Steal the Claude Code creative agency stack.
Wire Higgsfield (or any media-gen) into Claude Code via CLI, drop your swipe files in as project markdown, log every generation to a Sheet with a status column, then let two routines plan and execute on a weekly cadence.
- Default to the CLI, not the MCP, whenever an agent will loop — token cost compounds across 100-row batches.
- Land your swipe files (Maria Wendt 255-email, HTSS, Hoffmann's Reward Funnel) inside the project as `.md` so every agent run has the subject-matter expertise pre-loaded.
- Mine skills from outputs you already love — copy the winning prompt back into chat, ask Claude to convert it into `.claude/skills/<name>.md`. Don't author skills from scratch.
- Bake negative constraints into skills too (banned words that triggered moderation, brand assets that must lock to a reference image). Skills should encode failure modes, not just successes.
- A Google Sheet with status / result URL / job ID columns is a v1 creative-ops database. Don't build a custom DB until the Sheet is the bottleneck.
- Two routines beat a permanent agent: Sunday 'plan 50 new ideas,' Monday 'execute 30 blank-status rows.' Scale by adding more weekly slots, not by making routines smarter.
- Reframe AI-generated weirdness with the 'this is the worst it'll ever be' line — it's a softer counter to AI-slop critiques than arguing on quality.
What this means if you're trying to ship more creative without a team.
You don't need to be technical to copy this — you need one Claude subscription, a Higgsfield account, and the patience to run the same prompt five times until you have one output good enough to turn into a skill.
- Start in claude.ai (not Claude Code) — add Higgsfield as a custom connector under Settings > Connectors and prove the workflow with one prompt before you touch any CLI.
- Use one starter prompt: 'Build me a [product type] brand from scratch — do research, branding, catalog, and generate a product photo, IG ad, and UGC video for each item via Higgsfield.' Edit the noun, keep the structure.
- When an image is wrong, reply in the same thread with the fix ('two headers — remove one,' 'use this reference image, don't change the bottle') instead of starting over. Iteration is the workflow.
- Always drop in a reference image when the product already exists — it cuts the 'looks nothing like my product' problem in half.
- Keep a running Google Sheet of every prompt + result + which one converted. The sheet is the moat — it's how you compound across weeks.
- Treat the first hyper-motion video as a draft, not a deliverable. Save the prompts that work; throw away the ones that don't.
