
Writing Custom Skills

Advanced patterns for creating custom Claude Code skills: multi-phase workflows, interactive discovery, diagnostic skills, agent integration, and design principles for effective AI-assisted development.

The Skills System page covers the basics — what skills are, where they live, and the standard anatomy. This page goes deeper into advanced patterns for writing skills that reliably produce correct output across complex domains.


Skill Design Principles

Before writing a skill, understand the three categories of skills in the project and choose the right one for your use case.

Category 1 — Context Skills

Context skills provide domain knowledge without prescribing a workflow. They load facts, invariants, and rules that the agent internalizes for the duration of the session.

Examples: /docs/billing, /domain-model, /api-contracts, /docs/multi-tenancy

When to use: When the domain has non-obvious rules or invariants that the agent would violate without explicit guidance. If a junior developer would make specific, predictable mistakes in this area, a context skill prevents the same mistakes from an AI agent.

Structure:

`.claude/skills/your-domain/SKILL.md`:

```markdown
# Domain Title

## Mental Model
Key invariants — what is always true. State the boundaries clearly.

## Critical Rules
NEVER/MUST/ALWAYS rules. Each prevents a specific class of mistake.

## Code Architecture
Where relevant code lives. Directory structure, key classes, data flow.

## Common Mistakes
Anti-patterns. Each mistake should be plausible without the skill's guidance.

## References
Pointers to convention files the skill depends on.
```

Start with the Common Mistakes section. Think about what goes wrong when someone unfamiliar with the domain writes code in this area. Those mistakes become your guardrails. Work backwards to define the Mental Model and Critical Rules.
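
As a concrete illustration, here is a minimal context skill following that structure. The `refund-handling` domain, its rules, and the referenced path are hypothetical, invented for this sketch:

```markdown
# Refund Handling

## Mental Model
A refund never deletes the original invoice; it creates a compensating
credit note. Refund state is derived from credit notes, never stored directly.

## Critical Rules
- NEVER mutate a paid invoice. ALWAYS issue a credit note instead.
- MUST scope every refund query to the current tenant.

## Common Mistakes
- Updating the invoice total in place instead of creating a credit note.

## References
- `docs/conventions/billing.md` (hypothetical path)
```

Note how each Critical Rule maps to a mistake a developer could plausibly make without the skill loaded.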

Category 2 — Workflow Skills

Workflow skills guide the agent through a multi-step process with explicit phases, decision points, and quality gates. They produce code as output.

Examples: /new-feature, /new-api-endpoint, /new-migration, /write-tests, /refactor

When to use: When a task follows a repeatable pattern with multiple steps that must happen in a specific order. The skill prevents the agent from skipping steps (especially tests and quality checks).

Structure:

`.claude/skills/your-workflow/SKILL.md`:

```markdown
# Workflow Title

## Before Starting
Preconditions to verify. Questions to ask the user.

## Step-by-Step Checklist
Ordered steps with checkboxes. Group by layer or phase.

## Common Mistakes
Task-specific anti-patterns.

## References
Convention files and pattern examples.
```
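
For instance, the Step-by-Step Checklist section might use checkbox items grouped by layer. The steps below are illustrative placeholders, not from an actual project skill:

```markdown
## Step-by-Step Checklist

### Backend
- [ ] Create the Action class
- [ ] Register the route
- [ ] Write the feature test alongside the code

### Frontend
- [ ] Add the Zod schema
- [ ] Add the composable
- [ ] Wire the page component
```

Checkboxes matter: they give the agent an explicit artifact to track, which makes skipped steps visible.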

Category 3 — Orchestration Skills

Orchestration skills manage complex, multi-phase processes with user interaction, codebase analysis, and artifact generation. They coordinate multiple steps across sessions.

Examples: /create-workplan, /implement-wu

When to use: When the task involves discovery (gathering requirements from the user), analysis (reading the codebase to inform decisions), and generation (producing structured output). These are the most complex skills.

Structure:

`.claude/skills/your-orchestration/SKILL.md`:

```markdown
# Orchestration Title

## Phase 1 — Discovery (Interactive)
Questions to ask the user. Validation checkpoint.

## Phase 2 — Analysis (Read-only)
Codebase exploration. Pattern identification. Conflict detection.

## Phase 3 — Generation
Artifact creation based on validated inputs and analysis results.

## Phase 4 — Quality Gate
Verification of outputs against acceptance criteria.

## Critical Rules
Hard constraints on the workflow itself (not just the domain).
```

Advanced Pattern: Interactive Discovery

The /create-workplan skill demonstrates the interactive discovery pattern. The key technique is asking questions sequentially, using answers from prior steps to inform the next question.

```markdown
## Phase 1 — Discovery

### Step 1: Overview
Ask the user:
1. Feature name
2. Description (2-3 sentences)
3. Milestone ID (suggest the next available)

### Step 2: Technical Scope
First, scan the codebase to present real options:
- List existing domains in `backend/app/Application/`
- List existing features in `frontend/features/core/`

Then ask:
1. Stack scope — Backend only / Frontend only / Full-stack?
2. Impacted domains — (present real directory listing as options)
3. New models/tables?
4. New API endpoints?

### Step 3: Constraints
1. Prerequisites?
2. New packages?
3. Non-goals (require at least 2-3 items)

### Step 4: Validation
Present a structured summary. Wait for explicit user confirmation
before proceeding to Phase 2.
```

Why sequential questions matter: If you ask everything at once, the user provides less context and the agent cannot tailor follow-up questions. Step 2 above scans the codebase first so it can present actual directory names as options — this is impossible if all questions are asked upfront.

The validation checkpoint: Phase 1 always ends with a summary that the user must explicitly approve. This prevents the agent from generating artifacts based on misunderstood requirements.
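
The Step 4 summary works best as a fixed template, so the user always reviews the same shape before approving. A sketch, with placeholder field names:

```markdown
## Summary — please confirm before I proceed

- **Feature:** <name>
- **Milestone:** <id>
- **Scope:** <Backend / Frontend / Full-stack>
- **Impacted domains:** <list>
- **Non-goals:** <at least 2-3 items>

Reply "confirmed" to start Phase 2, or correct any item above.
```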


Advanced Pattern: Reference-Based Implementation

The /implement-wu skill demonstrates the reference-based implementation pattern. Instead of generating code from scratch, the agent finds an existing sibling implementation and matches its structure.

```markdown
## Phase 2 — Plan

### 2.2 Find reference patterns
For each file to create, find a sibling or similar file in the project.
This is your pattern reference — follow its structure, naming,
and conventions.

Examples:
- New Action? Find an existing Action in the same or similar domain.
- New Resource? Find an existing Resource with similar fields.
- New composable? Look at `features/core/docs/billing/composables/`.
- New Zod schema? Look at `features/core/docs/billing/schemas.ts`.
```

Why this works: Reference-based implementation produces consistent code because the agent copies patterns from the codebase rather than inventing them. It also catches convention changes — if the project's patterns have evolved since the skill was written, the agent uses the current version, not the skill's potentially outdated examples.

How to encode it in your skill: Don't provide code templates directly in the skill. Instead, tell the agent where to find the reference implementation:

```markdown
## Code Patterns

For each file type, use these reference implementations:

| File Type | Reference | Location |
|-----------|-----------|----------|
| Action | `CreateSubscription` | `backend/app/Application/Billing/Actions/CreateSubscription.php` |
| Query | `ListInvoices` | `backend/app/Application/Billing/Queries/ListInvoices.php` |
| Resource | `SubscriptionResource` | `backend/app/Http/Resources/Api/V1/Tenant/SubscriptionResource.php` |
| Schema | `subscriptionSchema` | `frontend/features/core/docs/billing/schemas.ts` |
```

Advanced Pattern: Phased Execution with Gates

The /implement-wu skill enforces a strict 5-phase workflow where each phase has a gate that must pass before the next begins.

```markdown
## Phase 1 — Context (READ ONLY)
Read workplan, tracking file, conventions. Check dependencies.
Gate: All dependencies completed? If not → STOP.

## Phase 2 — Plan (PRESENT TO USER)
List files, find patterns, identify tests.
Gate: User explicitly validates? If not → adjust plan.

## Phase 3 — Implementation
Code + tests, following the validated plan.
Gate: All files created? Tests written alongside code?

## Phase 4 — Quality Gate
Run formatters, static analysis, tests.
Gate: All checks pass? If not → fix and re-run.

## Phase 5 — Tracking
Update progress file with completion record.
```

The critical gates:

  1. Dependency gate (Phase 1): Prevents starting work when prerequisites are incomplete. This catches ordering errors in workplan execution.
  2. User validation gate (Phase 2): The agent presents its plan and waits for explicit approval. This is the most important gate — it prevents the agent from implementing the wrong thing.
  3. Quality gate (Phase 4): Automated checks that must pass before the WU is marked complete. Encode the specific commands:
```markdown
### Backend checks
- `docker compose exec php vendor/bin/pint --dirty`
- `docker compose exec php vendor/bin/phpstan analyse --memory-limit=1G`
- `docker compose exec php php artisan test --compact --filter=<Relevant>`

### Frontend checks
- `docker compose exec node pnpm typecheck`
- `docker compose exec node pnpm lint`
- `docker compose exec node pnpm test -- --run`
```

Advanced Pattern: Diagnostic Skills

The /debug skill demonstrates the diagnostic pattern — a skill designed to identify and fix problems rather than create new code.

```markdown
## Process
1. **Understand** — What's the symptom? What's expected?
2. **Reproduce** — Write a failing test or reproduce manually
3. **Isolate** — Narrow down the root cause
4. **Fix** — Implement the smallest correct fix
5. **Verify** — Run tests, check manually
6. **Prevent** — Add regression test

## Common Bug Patterns

### Tenant Data Leakage (CRITICAL)
**Symptom:** User sees data from another tenant.
**Check:**
1. Model uses `BelongsToTenant` trait?
2. Global scope applied?
3. Background job preserves tenant context?
4. Cache keys include `tenant_id`?

### SSR Silent Failures (Frontend)
**Symptom:** Data is null on page refresh but works on client navigation.
**Cause:** Using `useAsyncData` instead of `useAuthenticatedAsyncData`.
**Fix:** Use `useAuthenticatedAsyncData()` which forces `server: false`.
```

Why the pattern catalog matters: Each entry in "Common Bug Patterns" matches a symptom to a diagnosis and fix. When the agent encounters the symptom, it jumps directly to the relevant pattern instead of debugging from scratch. Add patterns as you discover recurring issues in your domain.
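
New catalog entries should keep the same symptom → cause → check → fix shape, so the agent can pattern-match on the symptom. A blank template to copy:

```markdown
### <Pattern Name> (<severity, if critical>)
**Symptom:** What the user or a test observes.
**Cause:** The underlying mistake, if known.
**Check:**
1. First thing to inspect.
2. Second thing to inspect.
**Fix:** The smallest correct change.
```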


Integrating Skills with Agents

Agent personas (stored in `.claude/agents/`) can declare which skills they have pre-loaded. This creates specialized agents with domain expertise.

`.claude/agents/backend-dev.md`:

```markdown
---
name: backend-dev
description: "Laravel backend developer for SaaS4Builders."
tools: Read, Write, Edit, Bash, Grep, Glob
model: sonnet
skills:
  - billing
  - api-contracts
  - domain-model
  - multi-tenancy
memory: project
---
```

The `skills` field lists skill names that the agent loads automatically. When `backend-dev` starts a session, it has the billing, API contracts, domain model, and multi-tenancy context already loaded — without the user needing to invoke /docs/billing manually.

When to add skills to an agent:

  • The agent always needs this domain context (billing knowledge for a backend dev)
  • The agent would make mistakes without the skill's guardrails
  • The skill is a context skill, not a workflow skill (agents should not auto-execute workflows)

When NOT to add skills to an agent:

  • The skill is a workflow (like /implement-wu) — these should be user-invoked
  • The domain context is only occasionally needed
  • Adding the skill would consume too much of the agent's context window

Skill Design Checklist

When creating a new skill, verify these items:

Content Quality

  • Mental Model section states invariants clearly, not implementation details
  • Critical Rules use NEVER/MUST/ALWAYS and each prevents a specific mistake
  • Common Mistakes lists plausible errors, not obscure edge cases
  • Code references point to real files that exist in the repository
  • Workflow steps are ordered correctly with explicit gates

Metadata

  • `name` is kebab-case and unique across all skills
  • `description` includes keywords for discovery (Claude Code searches descriptions)
  • `argument-hint` is set if the skill accepts arguments
  • `context: fork` is set for multi-step workflow skills
  • `allowed-tools` restricts tools to what the skill actually needs
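
Put together, a workflow skill's frontmatter covering this checklist might look like the following sketch — every value is a placeholder:

```markdown
---
name: your-workflow
description: "Scaffold <task> for <domain>. Keywords: <terms users would search>."
argument-hint: "<feature-name>"
context: fork
allowed-tools: Read, Grep, Glob, Edit, Bash
---
```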

Integration

  • The skill is tested by invoking it with Claude Code and verifying the output
  • Related agents list the skill in their skills field (if applicable)
  • The root CLAUDE.md lists the skill in the available skills section

What's Next