Modern Creator Network
Mansel Scheffel · YouTube · 14:22

Claude Skills VS Agents Finally Solved

A 14-minute reframe that says you've been asking the wrong question — skills live inside agents, not next to them.

Posted: 6 days ago
Duration: 14:22
Format: Tutorial · educational
Channel: Mansel Scheffel
§ · The Hook

The bait, then the rug-pull.

The whole framing is a misdirection. Mansel opens by promising to answer 'skill vs agent,' then pulls the rug — the question itself is wrong because skills don't sit next to agents, they sit inside them. Fourteen minutes later you walk away with a six-rung decision ladder, a CPU/OS analogy that finally makes 'harness' click, and the asymmetry sentence everyone will quote: agents work without skills, skills do not work without agents.

§ · Stated Promise

What the video promised.

Stated at 00:14: "I'm gonna demystify that question because it's not actually the right question, as you'll soon see why." Delivered at 12:21.
§ · Chapters

Where the time goes.

00:00–00:22

01 · Pattern interrupt

Promises to settle 'skill vs agent' then says it's not the right question.

00:22–01:30

02 · What is an agent

Agent = LLM + tools + agentic loop (ReAct). Claude is always an agent.

01:30–02:55

03 · What is a harness

Harness = the OS around the model. Six parts: guides, sensors, tools, memory, state, file system. Same model + different harness = different output quality.

02:55–03:55

04 · Skill defined

A skill lives in the 'guides' slot of the harness — it specializes a general agent for one task.

03:55–06:14

05 · Live demo: news brief

Two VS Code windows, same 'find me AI news' prompt — one with a daily-AI-news-brief skill, one without. Skill version is slower but scored, niched, repeatable. Vanilla version is faster but generic.

06:14–07:22

06 · Anthropic's rule

Known path → use a skill (deterministic, repeatable). Unknown path → let the agentic loop figure it out.

07:22–08:54

07 · The hybrid case

Workflow shell with agentic guts: intake → classify → pull record → RESOLVE (agent yolo) → log → close. Most production systems are workflow shells with agentic guts.

08:54–09:24

08 · The asymmetry

Agents work without skills. Skills do not work without agents — a skill is just a markdown file until something cooks the recipe.

09:24–10:14

09 · What is a subagent

A subagent is a saved agent configuration on disk (.claude/agents/<name>.md) — system prompt + tool allowlist + pinned model. Not a magic concept, just a fork off the main chat.

10:14–10:57

10 · When a subagent earns its keep

Three reasons: preserve context (don't flood main session), tool boundary (security/pin allowed tools at identity), orthogonal job (genuinely different tool belts). Everything else = skill + references.

10:57–12:21

11 · Pre-engineered harnesses

Anthropic Managed Agents (developer-first, SDK/CLI) vs. Airtable HyperAgent (UI-first, no-code dashboards/Slack/MCP/cron). Same trichotomy, two roads — depends on whether you write code or configure through a UI.

12:21–14:22

12 · Cheat-sheet ladder

Decision ladder: pure determinism → n8n/Make/Zapier; known repeatable → skill on agent; unknown repeatable → agentic loop + notes; mostly known with edge cases → hybrid scaffold; need context isolation/tool boundary → subagent; don't want to manage any of it → pre-engineered harness.

§ · Storyboard

Visual structure at a glance.

00:00 · hook · Cold open studio
00:10 · hook · Skills vs Agents title slide
00:20 · value · What is an agent
01:35 · value · What is a harness
04:00 · value · Live demo VS Code
06:00 · value · Side-by-side output
06:20 · value · Anthropic's Rule
07:24 · value · Hybrid Case
08:50 · value · The asymmetry
09:25 · value · What is a subagent
10:14 · value · Subagent triggers
10:57 · value · Pre-engineered harnesses
12:31 · cta · Cheat-sheet ladder + Claude Console
§ · Frameworks

Named ideas worth stealing.

00:22 · model

Agent anatomy

  1. LLM (the brain)
  2. Tools (functions it can call)
  3. Agentic loop (think, act, observe, repeat)

Three-part definition of an agent, anchored to the ReAct paper (Princeton + Google, 2022).

Steal for: Any explainer that needs a one-line, three-icon definition of 'agent'; kills the buzzword fog instantly.
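The three parts map directly onto a loop. A minimal sketch in Python, where `call_llm` and the `web_search` tool are hypothetical stand-ins for illustration, not any real SDK:

```python
# Minimal sketch of the think-act-observe (ReAct) loop the video describes.
# `call_llm` and the tool below are hypothetical stand-ins, not a real API.

def call_llm(history):
    # Stand-in for the model (the "brain"): returns an action or a final answer.
    # Hard-coded to make one tool call, then answer, purely for illustration.
    if any(step.startswith("observe:") for step in history):
        return {"type": "final", "answer": "42 results found"}
    return {"type": "tool", "name": "web_search", "input": "AI news today"}

TOOLS = {
    "web_search": lambda query: f"results for {query!r}",  # stand-in tool
}

def agent_loop(goal, max_steps=5):
    history = [f"goal: {goal}"]
    for _ in range(max_steps):           # repeat until the goal is achieved
        decision = call_llm(history)     # think
        if decision["type"] == "final":
            return decision["answer"]
        output = TOOLS[decision["name"]](decision["input"])  # act
        history.append(f"observe: {output}")                 # observe
    return "gave up"
```

The loop is the whole trick: without it you have a one-shot chatbot, with it you have something that keeps going until the goal is met or the step budget runs out.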
01:40 · model

The Harness (six-part OS around the model)

  1. Guides
  2. Sensors
  3. Tools
  4. Memory
  5. State
  6. File system

CPU/OS analogy: model = CPU, harness = OS. Same CPU + different OS = different output quality. Skills are one entry in the 'Guides' slot.

Steal for: Joe's $6 Stack framing ('own your harness, not just your model'). Could be a whole sales angle.
06:14 · concept

Anthropic's Rule

  1. Known path → skill
  2. Unknown path → agentic loop
  3. Both happen INSIDE an agent

The simplest decision rule in the video. Examples on slide: cold email = known, why isn't it converting = unknown, lead research = mixed.

Steal for: Decision-tree opener for any 'when do I use X vs Y' content.
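When the path is known, the 'skill' rung boils down to writing the steps into a skill file. As a sketch only: Anthropic's Agent Skills convention puts the instructions in a `SKILL.md` with YAML frontmatter (`name`, `description`); the steps, the scoring rubric, and the `references/niches.md` file below are hypothetical, loosely modeled on the daily-news-brief demo.

```markdown
---
name: daily-ai-news-brief
description: Fetch, filter, and score today's AI news into a fixed brief format.
---

# Daily AI News Brief

1. Search Reddit, GitHub, and Hacker News for AI releases from the last 24 hours.
2. Keep only items relevant to the niches listed in references/niches.md.
3. Score each item 1-5 for relevance; discard anything below 3.
4. Output a brief per item: headline, one-line summary, score, source link.
```

This is the 'stashed prompt' point from the demo: a repeatable workflow lives in a file, not in whatever you happen to type that day.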
07:22 · model

Hybrid Case (workflow shell + agentic guts)

  1. Intake
  2. Classify
  3. Pull record
  4. Resolve (agentic loop fills in)
  5. Log
  6. Close

Most production systems are deterministic workflow shells with one or two agentic steps for the unpredictable bits (password reset, billing, refund).

Steal for: Drop into any 'build your own ops system' pitch; explains why pure-agent demos don't survive production.
10:14 · list

When a Subagent Earns Its Keep

  1. Preserve context (keep main session clean)
  2. Tool boundary (security, allowed tools pinned at identity)
  3. Orthogonal job (genuinely different tool belt)

Default is NOT a subagent — default is skill + references. Subagent only when one of these three triggers fires.

Steal for: Anti-overengineering frame; gives permission to NOT spin up a subagent for every use case.
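For reference, a saved subagent configuration of the kind the video describes (`.claude/agents/<name>.md`) is a markdown file with YAML frontmatter: a name, a description, a tool allowlist, and an optional pinned model. The `dm-writer` example below is hypothetical, sketched for the lead-gen case Mansel mentions:

```markdown
---
name: dm-writer
description: Drafts outreach DMs in its own context window. Use for lead-gen runs.
tools: Read, Write
model: haiku
---

You write short, personal outreach DMs. Follow the voice guide you are given,
work only on the files handed to you, and return nothing but the draft.
```

Note how the file itself encodes all three triggers: its own context window (context preservation), a two-tool allowlist (tool boundary), and a job that shares nothing with the main session (orthogonal work).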
12:21 · list

The Cheat-Sheet Ladder

  1. Pure determinism, high volume → n8n / Make / Zapier (no AI needed)
  2. Known repeatable task → add a skill on top of an agent
  3. Repeatable but path unknown → agentic loop + self-learning notes
  4. Mostly known with unpredictable steps → hybrid skill scaffold + agentic sub-steps
  5. Need security boundary or context isolation → custom subagent
  6. Don't want to manage any of this → pre-engineered harness (HyperAgent / managed agents)

Six-rung decision ladder, ordered from least-AI to most-AI. The entire video compressed into one screen.

Steal for: Last-30-seconds template; end every explainer with a single-screen ladder viewers can screenshot.
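The ladder reads naturally as a single decision function. A sketch with illustrative flag names and return labels, not any tool's API:

```python
# The six-rung ladder as one function. The boolean flags and return strings
# are illustrative labels for the video's rungs, not part of any real tool.

def pick_rung(path_known: bool, needs_ai: bool, edge_cases: bool = False,
              needs_isolation: bool = False, self_managed: bool = True) -> str:
    if not self_managed:
        return "pre-engineered harness (HyperAgent / managed agents)"
    if not needs_ai:
        return "n8n / Make / Zapier"               # dumb plumbing, no tokens
    if needs_isolation:
        return "custom subagent"                    # context/tool boundary
    if path_known and edge_cases:
        return "hybrid: skill scaffold + agentic sub-steps"
    if path_known:
        return "skill on top of an agent"           # known, repeatable
    return "agentic loop + self-learning notes"     # unknown path
```

Walking the conditions top to bottom mirrors the screenshot: rule out the no-AI and hands-off cases first, then let the known/unknown-path split decide between skill, hybrid, and raw agentic loop.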
§ · Quotables

Lines you could clip.

00:14
It's not actually the right question.
Pattern interrupt + curiosity gap in one sentence. · Clip as: TikTok hook
02:55
Same model plus a specific harness means you're going to get different output quality.
Quotable thesis statement, kills the 'best model' debate in one line. · Clip as: IG reel cold open
06:14
If the path is known, use a skill. If you don't know the path, that's when you rely on the agentic loop.
The actual decision rule, screenshot-ready. · Clip as: newsletter pull-quote
06:28
Bridge the gap between the probabilistic nature of AI and the determinism we need as a business.
One-line business rationale for everything Joe sells. · Clip as: newsletter pull-quote
07:40
Most production systems are workflow shells with agentic guts.
Sharp, contrarian, business-y. Reframes the whole 'AI agent' hype cycle. · Clip as: IG reel cold open
08:54
An agent can work without a skill, but a skill cannot work without an agent.
The asymmetry punchline; should be the title of a follow-up post. · Clip as: TikTok hook
13:13
You don't need AI at all for that. It's dumb plumbing. Don't waste AI where it doesn't need to be.
Permission-giving line for builders who feel they have to AI-everything. · Clip as: TikTok hook
§ · Pacing

How they spent the runtime.

Hook length: 22s
Info density: high
Filler: 8%
§ · Resources Mentioned

Things they pointed at.

00:32 · tool · ReAct paper (Princeton + Google, 2022)
09:40 · tool · .claude/agents/<name>.md
10:57 · product · Airtable HyperAgent
11:52 · product · Anthropic Managed Agents
12:50 · tool · n8n
12:50 · tool · Make
12:50 · tool · Zapier
§ · CTA Breakdown

How they asked for the click.

13:52 · next-video
Obviously, there are far more complexities behind each of these. I have got deep dives in the description below. Otherwise, you can leave some comments and I'll get back to you as soon as possible or check out the videos on the screen now.

Soft three-way: descriptions / comments / end-screen videos. No subscribe ask, no product pitch — content credibility play.

§ · The Script

Word for word.

Legend: HOOK = opening / re-engagement · CTA = the pitch · metaphor = analogy
00:00 [HOOK] So over the last few weeks, I've been asked the same question over and over again. What is the difference between a skill and an agent and how do I know when to use which one? In this video, I'm gonna demystify that question because it's not actually the right question, as you'll soon see why. First things first, what is an agent? So if we look at our image over here, you'll see that it's comprised of three parts. We have our large language model over here, which is essentially the brain. That's Claude, Codex, Gemini, whatever it is that you're using. Then we have tools, and these are the functions that it can call. So in our agentic landscape, it might be something like web search. It might be an MCP server or using an API to go out there and do something. Tools are what the LLM uses to get the job done. If we didn't have tools, all we would have is this really smart brain trapped inside its own little ecosystem. And if you remember when AI first came out, that was kind of what we had. We didn't even have internet access that the large language model could go out there and use. We didn't have MCP. We couldn't go out to external systems back in the early days. So tools added to the functionality of what an LLM can do. Then the third component here is this ReAct loop, the agentic loop that the agent goes through in order to achieve its goals. This gives it the ability to think, act, observe, and repeat, and it will carry on doing this until it has achieved its goal. So by modern standards, whenever you are using Claude or Codex or something, you are already using an agent.
01:12 But there's a lot more nuance to this because sometimes when people go out there and try to connect to an external system, they think, my agent's gonna go and do all this work for me. We need to understand how to expand the functionality of an agent to actually do the task that we want to do. And that's where a harness comes in. So I'm sure you've heard this terminology before. If you haven't, you can just think of a harness as something that wraps around the agent or the large language model. So on the left over here, the CPU in this analogy would be the model itself, so Claude or Gemini. And the harness is the operating system, and this provides the context and the structure and the capabilities
01:44 that govern the way that the AI model is going to behave. So if you've been watching my channel for a while, you've seen we've been building our operating systems for business for a few months now. And inside there, all we're essentially doing is encapsulating Claude in its own harness so it understands the business context, how to operate within the business, and how to achieve specific goals based on whatever it is that our business does. That in itself is a harness. Equally, you could wrap it in any harness that makes sense to you because all it's doing is orchestrating how this model thinks, uses its tools in order to achieve those specific goals. And if we look down here, what's inside the harness, you can see that it's comprised of six parts. You wouldn't have to use every single one of these. Not every single harness does. The point is these are just the main ones that are out there. And each one of these plays a different role in helping the agent be better at whatever task it's trying to achieve. And inside these six components, you'll see we have guides, sensors, tools, memory, state, and file system. And we've spoken about various aspects of each of these things on this channel before.
02:39 [HOOK] Particularly, when we're talking about the viewpoint of agents versus skills, you'll soon see that this is no longer the argument because on our guides section over here, the skills actually form part of this harness that we're talking about here in order to make our agent even more specialized at a certain task. So the TLDR from this slide is that the same model
02:59 [HOOK] plus a specific harness means you're going to get different output quality. It all depends on whatever you've baked inside your harness. So like I said in the previous slide, all a skill does is transform a general purpose agent into something that is specialized in order to achieve a specific goal. If we look at this from a practical perspective, on the left over here, we have one of my environments that has specific skills in it, and on the right, we have one that doesn't have those skills. Now let's say I wanted to go and get all the AI news for today. On the left side, I have a skill purpose built for that that walks Claude through this step by step so it knows exactly what type of news I want, exactly what format to present it to me, so on and so forth. It's exactly what I want essentially. So I have turned Claude into the specialized news fetcher for this thing. So if I come over here and say,
03:42 run my daily AI news brief, it will go and run that. Now in the same sense, over here on the right, I don't have that skill. But because I've given this thing the capability to access the web and go out there and do what it needs to do, I can also say to this thing, go out there and find me daily AI news. But if I had to ask this exact question, it wouldn't know what the hell I'm talking about because it doesn't have a skill. So it doesn't have the context for what a daily news brief would look like, and I suspect if I asked it, it would say, I don't know what that is. We first need to talk about it. Can you explain more?
04:14 And then I would give it more context. But equally, I wouldn't need to do that if I just changed my original question for the agent and told it to go out there and find me AI news and gave it perhaps a specific prompt. Go online and research the latest news from Reddit, GitHub and Hacker News on releases that Anthropic has for today. Now, I was a little bit specific there and obviously the skill that I have on the left is way more specific than that. But if I didn't give it even just this little bit of context, it would pull out news that is completely irrelevant for me. And for the sake of this video, all I'm trying to do here is illustrate the difference. You see this thing is going to do very specific steps. It is highly specialized at grabbing me the news. This thing is going out there based on this little bit of information I gave it and you can see it's using tools to achieve the goal that we need. I don't need to give it any more than that. The only time I need to give it more is if I want something very, very specific. So more importantly, you can see something that's happened here is on the right, we finished much quicker. On the left, this thing is still running because it doesn't just look for news. It still profiles it for me. It scores it based on how relevant it is to me. I've made it more specialized. And that is the key takeaway from here. This doesn't really have the context or anything else that it needs in order to be better than just a news finder. Obviously, if my original prompt was way better, I would have had better output and I could obviously just give it a step by step guide as a prompt instead of a skill, but that's not the right way to do this because it's a repeatable workflow. It should be stashed. You shouldn't type this in manually every time. And so here you can see the skill's output where it has scored everything for my most important things related to my niche and what I want to talk about.
There is obviously some overlap because news is news, but that's not really the point of this part of the video. All I'm trying to do here is show you that an agent can do this regardless of whether we have a skill or not. It all just comes down to the quality and the type of output that you need. That's why you start to go down a skill path. If we flip on back over here, you'll see the rule that Anthropic has for deciding between whether to use a skill or an agent for the type of work that you have. If the path is known, use a skill because you're gonna get much better output. It's far more deterministic.
06:14 The agent is never gonna YOLO, which is somewhat inefficient. If we tell this thing exactly what we need to do step by step, our output is going to be far better, exactly the same every single time, and we need that reliability and that predictability in business. That is the whole goal here, to bridge the gap between the probabilistic nature of AI and the determinism that we need as a business. So if the known path exists, we take it. If we don't know the path, that's when we might rely on the agentic loop like I did in the second window of my VS Code. If we didn't have a skill for the daily AI news brief, the agent could go out there and figure out what it needs to do. Of course, the more complex this gets, the harder it'll be for the agent to do that and the more room for hallucination there is, which is why it's much better if there is a known path to use a skill. There are obviously several other things that come into play here like quality of output. Again, the agent doesn't know what your definition of good looks like. Without that, your DMs would probably look a bit sloppy. They wouldn't sound like you. Things like that. So using an agent without any form of harness or any form of skill behind it for me is a last resort.
07:14 But there is a little caveat to that. And that's where the hybrid case comes in, where we have our skill scaffolding, where we walk through each step as much as we can, and then parts where there is a little bit of unknown or edge cases or unpredictability, that is where we would let the agent kind of YOLO its way through it. So imagine we had this customer facing system with a chatbot that was trying to solve queries.
07:34 We would give it as much context and information and everything it needed about the business and how things work. But if it got to a point where a customer asked perhaps a question that it didn't know an answer to, it could think for a minute and it could say, okay, this person could be talking about a password reset. I should probably go and access that system. If we had a predetermined process for this step, that would obviously be way better and increase accuracy.
07:55 But if we didn't have something for password resets, the agent could figure out what the customer was talking about and then say, this is probably a password reset case. Let me go to that database and reset this customer's password based on the information that they've given me. Solve that for you, log it, and then it would go back to the customer and say that this has been done. So that's just one example where you could have this hybrid case for most things that are known and some things that are unknown. But something else to think about here, if there are unknown steps in your workflow,
08:22 [HOOK] before you resort to just having the agentic platform YOLO it for you, do as much research as you can breaking this thing down to understand why you cannot resolve it with a more deterministic step as opposed to having an agent do this for you. Because again, you want as much determinism as possible and the AI is just there to orchestrate the show. That is how we get to our reliable system. Something to note though, it is a very one-sided relationship. An agent can work without a skill, but a skill cannot work without an agent. All you would have is this really smart looking markdown file that is essentially a business operating procedure
08:54 [HOOK] with no one to actually go and do the work. Whereas an agent doesn't need the skill in order to do its job, as you've seen in this video. Okay. So we now understand agents. We understand harnesses and we understand what a skill is. But I'm sure you've also heard the term of a sub agent. And a sub agent is literally just an agent like we have in our chat window here in VS Code. But what we're doing with a sub agent is we are creating a fork from the main branch. So instead of us having this main chat window over here where we've been talking about our daily news, if I wanted to, I could run a sub agent that would essentially
09:24 clip off of this main chat over here and go and do a specific task or something that I needed it to do so that it doesn't bloat our main context window. Because remember, the context window is very important. We don't want to bloat it with things that we don't need. But if we can be way more efficient and put certain parts of our workflow process in their own little isolated windows,
09:44 that means we don't bloat this main context window. So it's a lot more efficient from a token usage perspective, especially if it's part of a really long skill chain. And I'm not gonna go into that in too much depth in this video. I have a whole one that I'll link in the description below for skill chaining and we talk about sub agents in depth. Just know that a sub agent is literally the same thing. You can define a specific model. So if you didn't want it to run in Opus, you could make it run in Haiku or Sonnet. It will run in its own little context window. We do that to preserve the session context from our main window. Or we might want to do it for a specific tool boundary, so we could give it the scope of only accessing specific tools which helps with security and things like that. Or we could have it when we have specific sub agents that go and run tasks. If I'm running an entire lead gen workflow,
10:28 I might have something that goes and writes DMs for me, one for LinkedIn, one for YouTube, however it is that I'm reaching out to people. We could have specific little sub agents that run in their own context window as part of a larger workflow. But for the most part, it is the exact same thing as the main agent that we're using inside Claude or Codex. Okay. Then there's one final layer to this. What about Anthropic Managed Agents? And what about this new thing called HyperAgent
10:50 that Airtable just brought out? If you go back to the beginning of our video, this will make a lot more sense because you remember I spoke about harnesses. If we start with HyperAgent, all this thing is doing is taking that agentic loop that we spoke about and it's packaged it up in a way that is way more user friendly for the average person out there to go and complete a specific goal. So they have their own version of that harness, their own version of that agentic loop. And if you remember, we had those six components that form part of that agent harness.
11:19 All they've done here is packaged that up into their own version of that for non technical people. So it's way easier to just come in here and do whatever it is. It's like this real estate listing kit. If we had to preview this, this thing would probably be tailored specifically to help people build these kinds of projects. Same thing as Lovable. It's really good at building web apps and things like that because that is mostly what it's been trained on. Of course, you can use it for other things and it will figure that out too, but its main goal was for building apps with no code required. So their harness was developed specifically for that which made it really good at that specific thing. When it comes to Claude managed agents, that's a little bit of a different story. This is managed infrastructure
11:56 [CTA] for more technical people who are coming out here and perhaps building workflows or building specific types of software stores where they need agents to come and do tasks for them. You would host that on Anthropic's cloud and just like we had before, we would have our agents with specific tools that they use in order to get the job done. We have our prompt, we have our tools, and then we have our environments that we set up for them. We are more in control of this than we are with HyperAgent.
12:21 [CTA] HyperAgent, we don't have absolute control over everything in there like we would inside Claude Code or inside managed agents over here, because this is for more technically inclined people who want that form of control. So that I can come in here and set this thing up exactly how I needed to for my infrastructure, for my business, or for my clients in larger enterprises or mid market. Fantastic. You made it to the end of the video. Now I'm gonna give you a very simple cheat sheet on how to know what to use and when. So the first thing here, if you are looking for pure determinism plumbing on high volume, you don't need AI at all for that. You wanna focus on n8n, Make, or Zapier. They were built long before AI came along for that specific use case. It's dumb plumbing. Don't waste AI where it doesn't need to be. The next step of this ladder, if you needed more than that pure determinism,
13:03 [CTA] if you had a repeatable task and the path was known, it should be clear by now that you need to add a skill on top of your agent. If you still have that repeatable task but you don't know the path, you're going to have to rely on this agentic loop and just give it as much information as you can and ideally, whatever this thing learns along the way, you're going to get it to make notes of that so it can start training its own methodology around the unknowns to turn them into knowns. And that would help you build a more reliable workflow process or a skill, which is what we want. But then if you had a mostly known path but it had unpredictable steps inside it, again, that's where you would mix the two and take the hybrid approach of a skill scaffolding with the agentic sub step, again, tying into the part where it should self learn and ultimately build something more predictable.
13:44 [CTA] Then if you needed a security boundary or you wanted context isolation as part of skill chaining, that's where you want to start bringing in custom sub agents for specific tasks that you would then feed back up to the top if they needed to as a part of that chain. And then finally, if you didn't want to manage any of this stuff yourself or you weren't really technically inclined to do it, go with a pre engineered harness because that's going to be your fastest way to get you where you want to go just by using natural language to get there. And that's pretty much it in a nutshell. Obviously, there are far more complexities behind each of these. I have got deep dives in the description below. Otherwise, you can leave some comments and I'll get back to you as soon as possible or check out the videos on the screen now. They will definitely help you on your journey. Thanks very much for watching. See you guys later.
§ · For Joe

Steal the reframe-then-ladder structure.

Mansel Scheffel playbook

Open by killing the question viewers came in with; close with a six-rung ladder they'll screenshot.

  • Lead with 'you're asking the wrong question' — it earns 14 minutes of attention faster than 'here are the 5 differences.'
  • Build one core analogy and keep returning to it. CPU/OS is doing the heavy lifting every time he says 'harness' — Joe should pick one (e.g., 'plumbing you own vs. utilities you rent') and use it ten times.
  • Hand-drawn ribbon-titled sketch slides outperform polished Figma graphics for technical concepts. They feel like a whiteboard session, not a pitch deck.
  • When showing 'with vs without,' run them side-by-side in real time. The skill-less version finishing FASTER but worse is the actual punchline — it would be a worse video without that timing.
  • End every long explainer with a single-screen decision ladder ordered least-effort to most-effort. That's the only frame viewers will rewind to.
  • Coin one asymmetry sentence per video. 'Agents work without skills; skills do not work without agents' is the line that gets screenshotted.
  • Use the 'workflow shells with agentic guts' frame as a slide in any 'how I built it' video — it instantly elevates the conversation past 'AI agent' hype.
§ · For You

If you're trying to actually use Claude Code in your business.

Decision rules from the video

Don't pick 'skill' or 'agent' — pick which slot you're filling in your harness, then walk the ladder.

  • If the work is dumb, high-volume, and the steps never change — use n8n / Make / Zapier. No AI needed. Don't pay LLM tokens for plumbing.
  • If the path is known and repeatable (cold email, daily news brief, customer intake), write a SKILL. You'll get the same good output every time.
  • If the path is unknown (debugging, research, edge cases), let the agentic loop work it out — and have the agent take notes so unknowns become knowns over time.
  • If most of the workflow is known but one step is unpredictable (e.g., resolve a customer ticket), build a hybrid: deterministic shell, one agentic step inside it.
  • Only spin up a subagent when you need (a) to keep your main chat context clean, (b) a hard security boundary on tools, or (c) a genuinely different tool belt. Otherwise just write a skill.
  • If you don't want to manage any of this, use a pre-engineered harness — HyperAgent if you'd rather click a UI, Anthropic Managed Agents if you'd rather write code.
§ · Frame Gallery

Visual moments.