The bait, then the rug-pull.
Tao Prompts opens by playing the finished pieces — a 44-second fight scene and a 71-second short film — before he names a single tool. The pitch is the proof, not the promise. Then the two title cards drop: GPT Image 2 for the storyboard, Seedance 2.0 for the motion.
What the video promised.
Stated at 00:05: "I'm showing you exactly how I generated this realistic forty four second long fight scene and how I made the seventy one second long short film, and I'll show you how to extend them even more to be as long as you want." Delivered at 15:00.
Where the time goes.

01 · Cold open
Plays both finished AI films (44s fight scene, 71s short) before naming any tools. Proof-first hook.

02 · Step 1: AI storyboard with GPT Image 2
Inside Higgsfield, uploads two reference photos (scientist in hazmat suit + robot companion) and prompts for a 12-panel storyboard at 16:9. Discusses fixing repetition by re-prompting individual panels.

03 · Step 2: Animate rows with Seedance 2.0
Seedance maxes out at 15s, so the storyboard is cropped into rows of 4 shots layered on a 16:9 canvas. Each prompt assigns a per-shot time range using the storyboard's own text descriptions, and adds a 'no music, no subtitles' tail. Introduces the 4-column character reference sheet for cross-clip consistency.

04 · First 45 seconds assembled
Three 15-second clips animated and combined into a continuous 45-second sequence covering the full 12-panel storyboard.

05 · Step 3: Extend the storyboard
Upload original storyboard + character reference sheets back into GPT Image 2; prompt for the next 12 panels with a continuation hint. Animate the new page the same way for a ~90s total, trimmed to 71s to cut repetitive shots.

06 · Step 4: Seamless transitions for action scenes
For motion that crosses clip boundaries (a chokehold mid-fight), use the Video Frame Extractor tool to save the last frame of clip N and upload it as the first-frame seed for clip N+1. Demonstrated on the fight-scene example.

07 · Higgsfield eligibility note + outro CTA
Side note about Higgsfield rejecting some uploaded reference images for copyright reasons (retry tends to work). Plays the full fight-scene example. Ends pointing to his 10-practical-tips video.
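The four steps above reduce to one loop. Here is a sketch of that pipeline; `animate_row` and `extract_last_frame` are hypothetical placeholders standing in for the manual Seedance and frame-extractor steps, not real APIs:

```python
# Sketch of the overall pipeline, assuming hypothetical callables for the
# manual tool steps: animate one storyboard row into a <=15s clip, then seed
# the next clip with the previous clip's last frame.
def chain_rows(rows, animate_row, extract_last_frame):
    """rows: 4-shot storyboard strips, in order.
    animate_row(row, first_frame): produces one clip (the Seedance step).
    extract_last_frame(clip): grabs the clip's final frame (the extractor step)."""
    clips, seed = [], None  # the first row has no seed frame
    for row in rows:
        clip = animate_row(row, first_frame=seed)
        seed = extract_last_frame(clip)  # anchors the next clip's opening
        clips.append(clip)
    return clips  # concatenate in order for the continuous sequence
```

With three rows this yields the 45-second first assembly; extending the storyboard just appends more rows to the same loop.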
Named ideas worth stealing.
Storyboard → Rows of 4 → Chain by Last Frame
The whole pipeline. Generate a 12-panel storyboard, split into rows of 4 shots, animate each row inside one 15-second Seedance clip with per-shot time ranges, chain clips together using last-frame seeds.
Four Shots Per 15-Second Clip
Turn Seedance's 15-second cap into a structural feature by cropping the storyboard into 4-shot strips and prompting the model with explicit time ranges ([00:00-00:04], [00:04-00:07], ...) plus each panel's text description verbatim.
4-Column Character Reference Sheet
Use GPT Image 2 with a prompt that specifies four vertical columns showing front view, left profile, right profile, back view of the character. Plain barren background. Reusable as a character bible across image and video generations.
Tagged References in Prompts
Higgsfield lets you @-tag uploaded images inside the prompt (@image_1, @image_2). Lets you say 'the robot @char_ref_sheet walks along @storyboard_1' so the model knows which reference grounds which entity.
Last-Frame Seed for Seamless Transitions
When motion crosses a clip boundary (chokehold, fall, swing), extract the literal last frame of clip N with the Video Frame Extractor tool and upload it as the first-frame anchor for clip N+1. Eliminates the jump-cut tell.
Tail-Append 'No Music, No Subtitles'
Standard tail string added to every Seedance prompt so the model doesn't bake music or subtitle artifacts into the clip that you'd have to remove in post.
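The 'four shots per clip' and tail-append ideas combine into a simple prompt builder. The helper below and its exact prompt layout are an assumption, not Tao's literal wording, but the ingredients (per-shot time ranges, verbatim panel text, the 'no music, no subtitles' tail) come straight from the video:

```python
# Hypothetical helper: assembles one Seedance prompt for a row of storyboard
# shots. The prompt layout is an assumption; the per-shot time ranges and the
# tail string are the technique described above.

CLIP_SECONDS = 15  # Seedance 2.0's hard cap per generation
TAIL = "No music, no subtitles."

def fmt(seconds: int) -> str:
    """Format seconds as MM:SS for a [start-end] time range."""
    return f"{seconds // 60:02d}:{seconds % 60:02d}"

def build_row_prompt(panel_texts: list[str], cut_points: list[int]) -> str:
    """panel_texts: storyboard descriptions for this row (usually 4).
    cut_points: per-shot start times in seconds, e.g. [0, 4, 7, 11];
    the row always ends at the 15-second cap."""
    ends = cut_points[1:] + [CLIP_SECONDS]
    lines = [
        f"[{fmt(start)}-{fmt(end)}] {text}"
        for start, end, text in zip(cut_points, ends, panel_texts)
    ]
    lines.append(TAIL)  # tail-append so no music/subtitles get baked in
    return "\n".join(lines)
```

Paste each panel's text description in verbatim, as the video recommends; only the cut points change from row to row.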
Lines you could clip.
“There just isn't enough time inside those fifteen seconds to animate this entire storyboard. So I'm gonna split the storyboard up.”
“We need an additional character reference sheet so that when we generate the long AI video sequence of them, they actually stay consistent throughout the entire scene.”
“Just a one sentence description is gonna be enough to create a full storyboard.”
“Tell the AI to use the screenshot I just saved as the first frame and generate those next four shots starting from that initial screenshot.”
“Using this method, you can generate endless continuous shots for your AI films.”
How they asked for the click.
“I'm gonna put a link in the description for Higgsfield AI if you wanna go and generate your own long video sequences using GPT image two and Seedance two point o.”
Soft affiliate drop right after the 71-second payoff lands — value-delivered first, ask second. End screen at 15:21 cross-promotes his '10 practical tips' video.
Steal the 'turn the limit into the structure' move.
The 15-second cap is Seedance's biggest limitation. Tao turns it into the unit of the entire workflow, and the whole tutorial has a shape because of it.
- When demoing an AI tool with a hard ceiling (15s clips, 4 image refs, 4k token context), build the lesson around how that ceiling becomes the structural unit. 'Four shots per 15-second clip' is the entire video's spine.
- Open with the receipts. Play the finished 44s + 71s clips before naming a single tool. Demo-first hook earns the 15 minutes that follow.
- Name your workflows. 'Character reference sheet,' 'last-frame seed,' 'four-shots-per-clip' are all coinable phrases viewers can re-use. This is how a tutorial becomes a meme that other creators cite.
- Bury the most valuable thing 70% in. The transition-fix at 10:44 is the part of this video most likely to go viral as a short — Tao left it where the algorithm has to reward the watch-time first. Mod-Boss / JoeFlow tutorials should do the same.
- Tail-append your prompt boilerplate. 'No music, no subtitles' is a copy-paste rule that gets used 100% of the time. JoeFlow vocab + Mod-Boss session templates already do this; lean harder.
- Use ratio-of-effort to amplify the pitch: 'one sentence prompt → 12-panel storyboard' is the math that hooks the audience. Always state the leverage explicitly.
If you want to make a 30-second-to-2-minute AI film yourself.
Here's the recipe for making short AI films with consistent characters and no jump cuts.
- Use GPT Image 2 (inside Higgsfield) with two reference photos and a one-sentence prompt to generate a 12-panel storyboard at 16:9.
- Make a 4-column character reference sheet (front / left profile / right profile / back) for any character that needs to stay consistent across shots.
- Crop the storyboard into rows of 4 shots. For each row, prompt Seedance 2.0 with explicit time ranges per shot ([00:00-00:04], [00:04-00:07], etc.) and the panel's own text description.
- Always append 'No music, no subtitles' to the prompt.
- If a shot's motion crosses into the next clip (someone falls, gets grabbed, swings), save the last frame of the previous clip with a frame-extractor tool and upload it as the first frame for the next generation. This is what kills the 'AI demo reel' look.
- To go longer than ~45 seconds, upload your original storyboard back into GPT Image 2 with the character refs and ask for 'the next 12 panels' — then animate that page the same way.
- Plan on Higgsfield occasionally rejecting your reference uploads on eligibility grounds; retrying the upload usually works on the second attempt.
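If you'd rather not round-trip through a frame-extractor web tool, the same last-frame grab can be done locally. This sketch assumes ffmpeg is installed and on PATH; the filenames are placeholders:

```python
# Build an ffmpeg command that saves the literal last frame of a clip, as a
# local stand-in for Higgsfield's Video Frame Extractor. Assumes ffmpeg is on
# PATH; filenames below are placeholders.
import subprocess

def last_frame_cmd(clip_path: str, out_image: str) -> list[str]:
    return [
        "ffmpeg",
        "-sseof", "-1",   # start decoding 1 second before the end of the file
        "-i", clip_path,
        "-update", "1",   # keep overwriting the output image...
        "-q:v", "1",      # ...at top quality, so the final write is the last frame
        out_image,
    ]

# To actually run it:
# subprocess.run(last_frame_cmd("clip_02.mp4", "clip_02_last.jpg"), check=True)
```

Upload the saved frame as the first-frame anchor for the next generation, exactly as in the transitions step.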









































































