Your Skills Folder Is a Junk Drawer
I live in these tools. AI coding assistants are most of my workflow at this point. I have a few skills. I don't like them. And I think the energy the community is pouring into building and collecting them is mostly wasted.
Not a shortage of skills, an excess. People are building skills for everything. Frontend design skills. Auth skills. Testing skills. Deployment skills. Code review skills. There are curated lists of "20 skills you should install" and tutorials on building skill libraries for your team. Whole marketplaces now. Community repos. Essential skill packs.
Most of these skills shouldn't exist.
What Skills Actually Are
Let's be honest about what a skill is. It's a prompt you got tired of typing. You kept pasting the same instructions into the chat: "use our component patterns," "follow this test structure," "review for these things." At some point you saved them to a markdown file so you wouldn't have to keep repeating yourself. That's all a skill is. A saved prompt with a name.
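Concretely, a Claude Code skill is a markdown file with a short frontmatter header. Something roughly like this, where the name, description, and contents are all made up for illustration (check the current docs for the exact format):

```markdown
---
name: code-review
description: Review a diff for security issues, missing tests, and style violations
---

When reviewing code:
- Check authorization on every new endpoint.
- Flag any query built by string concatenation.
- Confirm new behavior has a corresponding test.
```

That file would live somewhere like .claude/skills/code-review/SKILL.md. Instructions in, instructions out. There's no logic here, just saved text.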
I'm going to talk about this in terms of Claude Code skills because that's what I use every day, but the problem isn't specific to Claude Code. Cursor has rules files. Windsurf has its own configuration. Claude.ai's chat interface has custom instructions and project knowledge. Every AI tool now has some mechanism for "write markdown, shape behavior," and every community around these tools is building and sharing libraries of them. The failure mode is the same everywhere.
Skills are supposed to be smarter than a saved prompt. The pitch is that the LLM recognizes when to invoke a skill based on its description. You're working on frontend code, it notices it has a frontend-design skill, and reaches for it automatically. This doesn't work. Skills get ignored when they're relevant and occasionally triggered when they're not. The only reliable way to use a skill is to invoke it manually, which makes the "auto-invocation" promise meaningless. You don't have an intelligent tool. You have a slash command with extra steps.
And the chat interface variants are even worse. Custom instructions and project knowledge don't have any invocation mechanism at all. They're either always loaded into context or they're not. You're burning context window on instructions that may or may not be relevant to what you're doing right now, with no way to scope them to specific tasks.
The feature isn't broken in some dramatic way. It just doesn't do what people think it does. And that gap between expectation and reality is driving a lot of misplaced effort.
The Accumulation Problem
Here's what happens: someone discovers skills, gets excited, and starts building them for everything. "I keep giving the AI the same instructions for code review" → skill. "It doesn't follow our design system" → skill. "I need it to handle deployments" → skill.
Before long you've got a .claude/skills/ directory with fifteen markdown files, a personal ~/.claude/skills/ with ten more, and you're browsing GitHub repos for skill packs to install. It feels productive. You're customizing your tools. You're building a workflow.
And the dynamic is the same one that gave us left-pad as an npm package: someone solved a problem, packaged it, and now thousands of people are importing it without asking whether they actually have that problem, or whether it's even hard to solve. If the first thing you do when creating a new project is npm install lodash, skill marketplaces are going to feel like home. That's not a compliment.
But step back. You now have twenty-five markdown files that the LLM will almost certainly never invoke at the right time. You have to remember which skills exist and when to use them. You have to maintain them as your codebase changes. And the instructions in those files are doing work that should be happening somewhere else entirely.
Skills are the junk drawer of AI coding tool configuration. They're the path of least resistance for "I want the AI to do X better," so everything gets thrown in there without asking whether a skill is actually the right solution.
The Workflow You're Actually Building
Think about what a skill-heavy workflow looks like in practice:
/understand-project
/implement-feature XYZ
/security-review XYZ
/code-review XYZ
/write-tests XYZ
/deploy
You're the orchestrator. You're deciding what to do next, in what order, invoking the right command at the right time. The LLM does the work inside each step, but you're managing the process.
You've become a human scheduler for an AI worker. You've turned an autonomous agent back into a manual tool.
My sessions don't look like that. I describe what I want (the feature, the problem, the goal) and the LLM works until it's done. It reads the codebase, figures out the approach, implements it, runs the tests, fixes what breaks. If something needs to happen automatically, hooks handle that. If there are project conventions it should know, they're in a config file. The system is set up so that it can operate autonomously, not so that I can feed it one micro-task at a time.
"Add rate limiting to the API endpoints. We're using Express and Redis is already in the stack. Check how the existing middleware is structured and follow that pattern. The tests should cover the rate limit headers and the 429 response."
One prompt. The LLM reads the middleware directory, sees the patterns, implements rate limiting, writes the tests, runs them, fixes what fails. No skills invoked. The codebase provided the conventions. The hooks ran the linter. The config told it to run tests after changes.
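The logic itself is small, which is part of the point. Here's a minimal sketch of the kind of thing the LLM ends up writing, using an in-memory fixed window instead of Redis, with all names hypothetical:

```typescript
// Hypothetical sketch: fixed-window rate limiting, in-memory instead of Redis.
// A real implementation would follow the project's existing middleware pattern.
type WindowState = { count: number; resetAt: number };

class RateLimiter {
  private windows = new Map<string, WindowState>();

  constructor(private limit: number, private windowMs: number) {}

  // Returns the remaining quota for this window, or null when the
  // caller should respond with 429 Too Many Requests.
  check(key: string, now: number = Date.now()): number | null {
    const w = this.windows.get(key);
    if (!w || now >= w.resetAt) {
      // First request, or the previous window expired: start a fresh window.
      this.windows.set(key, { count: 1, resetAt: now + this.windowMs });
      return this.limit - 1;
    }
    if (w.count >= this.limit) return null; // over the limit
    w.count += 1;
    return this.limit - w.count;
  }
}
```

None of this needed a skill to produce. The prompt named the task and the constraints; the codebase supplied the conventions.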
The difference isn't smarter prompting. It's putting the work into the system instead of the instructions.
Where Skills Go Wrong
Most skills I've seen are compensating for a problem that should be solved somewhere else.
The "frontend design" skill is the canonical example, a 200-line markdown file full of generic instructions about component patterns, design tokens, and naming conventions. And the tell is right there: it's generic. If someone had actually sat down and written a skill for their project, they would have naturally ended up pointing at their own components, their own patterns, their own code. "Follow the pattern in components/Button.tsx" is a better instruction than 200 lines of prose about how components should theoretically be structured. But that's not what people do. They grab a skill pack off GitHub that describes a hypothetical component library instead of their component library, and wonder why the output feels off.
The code is the specification. It's always up to date, it can't drift from reality, and the LLM can read it directly. And if you don't have a good example to point at, write one. A template file (templates/component.tsx, templates/api-route.ts) will get you further than any amount of borrowed instructions about naming conventions.
Same pattern elsewhere. "Always use our logging format" and "follow our commit conventions" aren't task-specific workflows. They're project norms. That's what your config file is for. If the instruction is always relevant, it doesn't belong in a skill you have to remember to invoke. "Lint before committing" is a hook. A hook that runs your linter on every save is infinitely more reliable than a skill that says "remember to lint." The hook doesn't forget. It doesn't decide this one's probably fine.
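In Claude Code, that hook is a few lines of settings. Roughly like this, where the matcher and command are illustrative and you should check the current docs for the exact schema:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [{ "type": "command", "command": "npm run lint" }]
      }
    ]
  }
}
```

Because the hook fires on the tool event itself, there's no instruction for the model to follow, and therefore nothing for it to forget.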
In every case, the skill is hiding the problem, not solving it.
Should This Be a Skill?
Before you build your next one, run it through this:

1. Is the instruction always relevant? Then it's a project norm. Put it in your config file.
2. Should it happen automatically? Then it's a hook.
3. Is it a pattern your code could demonstrate? Then point at the code, or write a template file.
4. Is it a discrete workflow you run deliberately and infrequently? Only then does it deserve to be a skill.

Most things land in the first three buckets. The fourth is real but rare.
I have a code review skill. I invoke it manually before PRs. It works because code review is a discrete, intentional action with instructions different from default behavior. That's the only shape where skills make sense: deliberate, infrequent workflows. Not ambient behavior. Not standing instructions. Not automation.
The Real Issue
The skills obsession is a symptom of a deeper pattern: people reaching for configuration when they should be reaching for architecture.
"The LLM doesn't write good tests." Don't write a testing skill. Are your existing tests inconsistent? Is the test setup complex? Fix those things and it'll write good tests without being told how. Point it at a test file you're proud of. Code is clearer than English.
"The LLM gets confused by our architecture." Don't write an architecture skill. Why is it confusing? Unclear boundaries? Too many layers of indirection? A well-named directory structure and a template file will get you further than any amount of markdown.
The best setup is one where you barely need to configure the LLM at all. A clean codebase with clear patterns, a short project config for the non-obvious stuff, hooks for automation, and maybe one or two skills for specific workflows you run intentionally. That's it.
Stop accumulating skills. Start building a system you can walk away from.