Claude Code is Anthropic's agentic coding assistant that lives in your terminal. It reads your codebase, writes files, runs commands, and iterates — autonomously.
◈
Core insight
Claude Code is an agent, not a chatbot.
Most people use it like autocomplete. The 100× engineer uses it as an autonomous collaborator that plans, implements, tests, and fixes code across thousands of files — while they focus on architecture and decisions.
🗂️
Core
Full Codebase Awareness
Reads your entire repo — every file, every import, every relationship. Not just what you paste.
🤖
Core
Autonomous Execution
Give it a goal. It creates files, installs packages, writes tests, updates configs — in a loop.
🔧
Core
Real Tool Use
Runs shell commands, sees actual output, installs dependencies, reads error logs. Not guesses.
🔄
Core
Self-Correction Loop
Runs tests, sees failures, fixes them. Same loop a senior engineer uses to debug.
💬
Core
Natural Language
No special syntax. Describe what you want in plain English. It handles the translation.
🛡️
Core
Permission Controls
Asks before destructive actions. Restrict to read-only, whitelist commands, or go fully automated in CI.
◈
Philosophy
Claude is the horse. Claude Code is the harness.
Claude (the model) has immense raw capability. Claude Code wraps it with file access, tool use, memory, and autonomy loops — giving you precise control over that power. You're the rider: you set the direction, the pace, and the guardrails. The horse does the heavy lifting.
Claude Code vs. Regular AI Chat
✓ Claude Code
Reads your entire project
Runs code, sees real output
Writes to multiple files
Fixes its own test failures
Persistent context across sessions
Learns your team conventions via CLAUDE.md
✗ Regular AI Chat
Only sees what you paste
Guesses — can't verify output
One file, no cross-file refactors
No self-correction loop
Forgets everything on refresh
Generic style — doesn't know your patterns
The Autonomy Spectrum — Where Claude Code Fits
Tool
Autonomy
Controllability
Best for
GitHub Copilot
Low — inline autocomplete
High — you drive every keystroke
Line-level speed boost, boilerplate
Cursor / Windsurf
Medium — chat + edit-in-place
Medium — IDE-integrated suggestions
File-level edits, quick refactors
Claude Code
High — autonomous agent loop
High — CLAUDE.md, permissions, hooks
Multi-file features, architecture, CI/CD
Devin
Very High — full autopilot
Low — fire and forget
Greenfield tasks, isolated tickets
◈
Claude Code's sweet spot
Maximum autonomy with maximum controllability. You can let it run fully autonomous in CI, or keep it on a tight leash interactively. No other tool gives you both ends of the spectrum.
When to Use Which Model
🧠
Model
Opus — The Architect
Complex architecture decisions, multi-file refactors, system design, deep debugging. Use --model opus. Slower but significantly deeper reasoning.
⚡
Model
Sonnet — The Workhorse
Daily coding, feature implementation, test writing, code review. The default. Best balance of speed and quality for 90% of tasks.
🏃
Model
Haiku — The Sprinter
Quick lookups, simple edits, commit messages, file explanations. Use --model haiku. Fastest response, lowest cost, great for simple tasks.
Chapter 02
Install & Configure
From zero to a fully configured Claude Code environment in under 10 minutes. Don't skip CLAUDE.md — it's the most impactful thing you'll do.
1
Install — pick your method
Three ways to install. npm is the most common. All require Node 18+.
option A — npm (recommended)
npm install -g @anthropic-ai/claude-code
option B — curl (one-liner)
curl -fsSL https://claude.ai/install.sh | sh
option C — homebrew (macOS)
brew install --cask claude-code
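All three methods need Node 18 or newer; a quick preflight sketch (guarded so it degrades gracefully when node is missing):

```shell
# Preflight check: confirm Node 18+ before installing.
if command -v node >/dev/null 2>&1; then
  major=$(node --version | sed 's/^v//' | cut -d. -f1)
  if [ "$major" -ge 18 ]; then node_status="ok"; else node_status="too-old"; fi
else
  node_status="missing"   # install Node 18+ first
fi
echo "node check: $node_status"
```

If the check reports too-old or missing, install Node 18+ with your version manager of choice before continuing.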
◈
IDE Extensions & Claude Desktop
Claude Code also integrates as a VSCode extension and JetBrains plugin. Install from the respective marketplaces. Claude Desktop (macOS/Windows) includes Claude Code built-in — great for non-terminal users.
2
Authenticate
Run claude — a browser OAuth flow opens. Log in with your Anthropic account. Your credentials are stored in the system keychain.
terminal
claude
3
Create CLAUDE.md — the most important step
This file lives in your project root. Claude reads it at the start of every session. It's your team's onboarding doc for Claude. A well-written CLAUDE.md eliminates 80% of correction loops. Use /init inside the REPL to auto-generate one from your codebase.
CLAUDE.md — copy and customize
# Project: [App Name]
## Stack
- Next.js 14 (App Router) · TypeScript strict
- Tailwind CSS · shadcn/ui
- PostgreSQL via Prisma · NextAuth.js v5
- Vitest + Playwright
## Architecture
- Repository pattern for data access
- Server components by default; client only when needed
- Zod for all input validation
- API routes in /app/api only — no direct DB in components
## Conventions
- Named exports (not default) for components
- All async functions need try/catch
- No `any` — use `unknown` and narrow
- TDD: write tests before implementation
- Conventional commits: feat/fix/chore/docs
## DO NOT
- Use class components
- Install lodash (use native JS)
- Write raw SQL (use Prisma)
- Use console.log in production (use logger)
## Structure
src/app → Next.js pages
src/components → Reusable UI
src/lib → Utilities
src/server → Server-only (DB, auth)
src/types → Shared TypeScript types
◈
Pro tip
Go deeper in your CLAUDE.md
The best teams include: feature specs for current sprint work, API definitions with endpoint contracts, data flow diagrams (even as ASCII), security guardrails (never expose PII, always sanitize), and testing procedures (coverage thresholds, required test types).
4
CLAUDE.md Memory Hierarchy
Claude Code loads memory files in a specific priority order. Higher priority overrides lower. Understanding this hierarchy is key for teams.
Priority Stack (highest → lowest)
Enterprise
Admin console — security / IT controls
Project
./CLAUDE.md — repo root, team-owned
Shared
.claude/settings.json — committed to git
User
~/.claude/CLAUDE.md — individual prefs
Level
File
Path
Who controls
1. Enterprise
Enterprise policy
Admin console (managed)
Security / IT admin
2. Project
CLAUDE.md
./CLAUDE.md (repo root)
Tech lead / team
3. Shared
.claude/settings.json
.claude/settings.json
Team (committed to git)
4. User global
~/.claude/CLAUDE.md
~/.claude/CLAUDE.md
Individual developer
◈
Use /init to bootstrap
Run /init inside the REPL and Claude will scan your project, detect your stack, and auto-generate a CLAUDE.md. It's the fastest way to get a solid starting point — then customize from there.
claude "Read CLAUDE.md and src/. Describe this codebase in 3 sentences, then list the top 3 areas to improve."
◈
If Claude gives an accurate description of your project, you're set up correctly. The quality of this first response is a direct measure of how good your CLAUDE.md is.
Chapter 03
Commands & CLI Reference
Every launch mode, slash command, and flag — with notes on when to actually use them.
Launch Modes
claude
Start interactive REPL — the primary way to work. Full context, conversational.
claude "task here"
One-shot mode. Run a single task and exit. Great for scripts and CI pipelines.
claude -p "prompt"
Print mode — output to stdout only. Pipe into other commands.
claude --continue
Resume most recent session. Never lose context between terminal sessions.
claude --resume [id]
Resume a specific session by ID. Use claude --list-sessions to find IDs.
claude --model opus
Force a specific model. Opus for complex architecture, Sonnet for daily work.
Slash Commands (inside REPL)
Command
Action
When to use
/plan
Plan before acting — lists files, steps, risks
Before any complex task
/review
Show all pending changes before applying
Before every git commit
/undo
Revert the last change Claude made
Quick rollback without git
/compact
Compress conversation to save context window
Long sessions getting slow
/status
Show token usage, model, session ID
Monitor context health
/cost
Show accumulated cost for this session
Budget tracking
/clear
Clear conversation, keep file context
Start a new sub-task
/add-file [path]
Explicitly add a file to context
Files outside auto-detection
/add-dir [path]
Add an entire directory to context
Bulk context loading
/remove-file [path]
Remove file from context
Reduce noise, focus attention
/context
Show all files currently in context
Debug context issues
/init
Auto-generate CLAUDE.md from your codebase
First-time setup
/model
Switch model mid-session (opus/sonnet/haiku)
Adjust speed vs depth
/agents
List and manage subagent definitions
Orchestration setup
/hooks
Manage event-driven hooks
Automation configuration
/mcp
Manage MCP server connections
Tool integration
/permissions
View/modify tool permission settings
Security control
/sandbox
Toggle sandbox mode on/off
Safe experimentation
/security-review
Run security audit on recent changes
Pre-commit security check
/doctor
Diagnose Claude Code installation issues
Troubleshooting
/terminal-setup
Configure terminal integration
Shell integration setup
/pr-comments
Load PR review comments into context
Address PR feedback
/help
All available commands
Exploring new features
/bug
Report a bug with full session context
Something behaving strangely
Plan Mode — Deep Dive
◈
Press Ctrl+G to toggle Plan Mode
Plan Mode changes Claude's behavior: instead of immediately writing code, it explores the codebase, asks clarifying questions, and produces a detailed implementation plan for your approval. Toggle it on before complex tasks, toggle it off to execute.
1
Enter Plan Mode
Press Ctrl+G or type /plan. Claude will read files, search code, and map dependencies — but won't write anything.
2
Interactive Questioning
Claude asks you clarifying questions about requirements, edge cases, and preferences before committing to an approach. This prevents wasted work.
3
Review the Plan
Claude presents a step-by-step implementation plan: files to create/modify, dependencies, risks, and estimated complexity.
4
Approve & Execute
Press Ctrl+G again to exit Plan Mode. Say "execute the plan" and Claude implements everything it planned — with full context.
Custom Slash Commands
📂 .claude/commands/ — Team-Wide Reusable Prompts
Create markdown files in .claude/commands/ to define custom slash commands your entire team can use. Each file becomes a /command-name in the REPL. Commit them to git for team-wide sharing.
.claude/commands/review-pr.md
# Custom command: /review-pr
Review the current git diff as a Staff Engineer.
Check for:
- Bugs, logic errors, race conditions
- Security vulnerabilities (OWASP Top 10)
- Performance issues (N+1, missing indexes, re-renders)
- Missing error handling and edge cases
- Test coverage gaps
For each issue: file:line, severity, exact fix.
End with a go/no-go recommendation.
.claude/commands/deploy-check.md
# Custom command: /deploy-check
Pre-deployment checklist:
1. Run all tests and report failures
2. Check for console.log / debugger statements
3. Verify no hardcoded secrets or API keys
4. Check for missing environment variables
5. Review database migrations are reversible
6. Confirm no TODO/FIXME in changed files
Output: READY TO DEPLOY or BLOCKED (with reasons).
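Scaffolding a new command is mechanical; a sketch that creates the directory and a starter file (the /changelog command name and its body are illustrative, not from the examples above):

```shell
# Scaffold a hypothetical /changelog custom command.
mkdir -p .claude/commands
cat > .claude/commands/changelog.md <<'EOF'
# Custom command: /changelog
Summarize the last 20 commits as a user-facing changelog.
Group by: Features, Fixes, Internal. One line per entry.
EOF
ls .claude/commands
```

Commit the file and every teammate gets /changelog in their REPL.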
Key CLI Flags
--allowedTools "Bash,Read"
Restrict tools Claude can use. Max safety for sensitive environments.
--disallowedTools "Bash"
Block specific tools. Prevent shell execution if you only want file edits.
--max-turns 20
Limit autonomous loop turns. Prevents runaway agents on complex tasks.
--output-format json
Structured JSON output. Perfect for piping into scripts or CI systems.
--dangerously-skip-permissions
Auto-approve all. Use only in sandboxed CI/CD — never locally.
Chapter 04
Shortcuts & Shell Aliases
The keyboard shortcuts that eliminate friction, and shell aliases that automate your most common Claude Code patterns.
Shortcut
Action
Note
↑↓
Navigate command history
Re-run or modify past prompts fast
Ctrl+C
Stop current action
Safe interrupt — won't corrupt edits
Ctrl+R
Fuzzy search command history
Find prompts from past sessions
Ctrl+L
Clear terminal display
Visual only — context preserved
Tab
Autocomplete file paths
Works for /add-file and inline refs
Ctrl+D
Exit Claude Code REPL
Saves session automatically
Ctrl+K
Delete from cursor to end of line
Faster than holding backspace
Alt+←/→
Jump word by word
Precise editing in long prompts
Ctrl+A / E
Jump to start / end of line
Fast navigation without mouse
Esc
Cancel current input
Changed your mind mid-prompt
Shell Aliases — Add to .zshrc or .bashrc
~/.zshrc
# Core aliases
alias cc="claude"
alias ccr="claude --continue"
alias ccp="claude -p"
# Review staged diff before committing
alias cc-review='git diff --staged | claude -p "Review this diff. List: bugs, security issues, style violations. Be specific."'
# Generate conventional commit message → clipboard (pbcopy is macOS-only; use xclip or wl-copy on Linux)
alias cc-commit='git diff --staged | claude -p "Write a conventional commit message. Format: type(scope): description. One line only." | pbcopy && echo "Commit message copied"'
# Explain any file
cc-explain() { claude -p "Explain this file in detail — purpose, key functions, dependencies, gotchas: $(cat "$1")"; }
# Find and fix bugs in a file
cc-fix() { claude "Find and fix all bugs in $1. Write tests for each fix. Explain every change."; }
# Generate tests for a file
cc-test() { claude "Write comprehensive Vitest unit tests for $1. Cover happy path, edge cases, errors."; }
◈
cc-commit is a workflow changer
This alias pipes your staged diff to Claude, generates a perfect conventional commit message, and puts it in your clipboard. Your git history becomes readable documentation instantly.
Chapter 05
Prompting Mastery
The difference between a 1× and 100× user is almost entirely in how they communicate intent. This is the most important chapter in the guide.
◈
The golden rule
Be specific about outcomes, not steps. Don't say "add a button." Say "Add a primary CTA button to the hero section — text: 'Get Started Free', links to /signup, has a loading state, tracks click in Segment, accessible." The more context, the fewer revisions.
The 5 Prompt Structures That Always Work
1
Context → Goal → Constraints
The most reliable universal structure. Give context before asking. Works for code, docs, planning — everything.
template
Context: SaaS dashboard. Users are data analysts. Stack: React + TypeScript + Recharts.
Goal: Add a line chart showing daily active users for the last 30 days.
Constraints:
- Responsive (mobile-first)
- Loading skeleton state required
- Use existing design tokens from tokens.css
- Vitest unit tests included
- No new dependencies
2
Plan First, Then Execute
For any non-trivial task, get the plan approved before code gets written. Surfaces misunderstandings early.
template
Before writing code, give me a step-by-step plan for [feature]:
1. Files to create or modify
2. Dependencies needed
3. Edge cases to handle
4. Estimated complexity (1–5)
Wait for my approval before starting.
3
Role Assignment
Assigning an expert role sets implicit expectations about depth, rigor, and what to prioritize.
examples
# Code review:
"Act as a Staff Engineer at a FAANG doing a thorough PR review..."
# Security:
"Act as a pentester focused on OWASP Top 10. Find every vulnerability..."
# Architecture:
"Act as a Principal Architect who has designed systems for 10M+ users..."
# UX empathy:
"Act as a non-technical user who is frustrated. Try to use this feature..."
4
Explicit DO / DO NOT
Negative constraints are often more precise than positive ones. They prevent the most common failure modes.
template
Refactor auth logic in src/auth.ts.
DO NOT:
- Change function signatures (other files depend on them)
- Add dependencies
- Add obvious comments
DO:
- Extract token validation into a pure function
- Add proper TypeScript types (no `any`)
- Handle the expired token edge case
5
Specify Output Format
Tell Claude exactly what you want back — structure, length, tone. Removes ambiguity and makes output immediately usable.
template
Analyze homepage performance issues.
Return as:
## Critical (must fix before launch)
## Moderate (next sprint)
## Nice to have (backlog)
Each item: description (2 sentences max) · Impact: High/Med/Low · Effort: [hours]
Be direct. No padding.
✗ Weak prompts
"Fix this" — no context, no outcome
"Make it better" — meaningless
"Write tests" — what kind? coverage target?
"Do the login feature" — no spec
✓ Strong prompts
"Fix race condition in payment flow where two requests..."
"Improve API response time — target <100ms p99"
"Vitest unit tests, 100% branch coverage, null/undefined inputs"
"Email login per this spec. NextAuth. Rate limit 5 attempts."
Chapter 06
Context & Memory
Claude Code has a context window. Mastering how to manage it is the difference between intermediate and expert users.
📊
Monitor Usage
Run /status to see tokens used vs. available. At 80%, start compacting.
🗜️
/compact Strategically
Summarizes the conversation without losing codebase awareness. Run before new sub-tasks.
🎯
Target File References
/add-file path/to/file.ts — be surgical. Only add what's needed for the current task.
🔄
Session Continuity
--continue resumes yesterday's session. Full conversation memory restored.
📝
Scratchpad Files
Ask Claude to log progress to scratch.md. Persists across context resets and sessions.
🏗️
Task Decomposition
Break large features into context-sized chunks. Each sub-task gets its own focused session.
long session workflow
# 1. Orient at session start
"Read CLAUDE.md, scan src/. Give a 2-sentence summary of this codebase."
# 2. Set the session goal
"Today: implement user dashboard. 4 sub-tasks. Starting with data layer."
# 3. Check context health mid-session
/status
# 4. When context fills up
/compact
"Summarize progress to scratch.md, then continue with sub-task 3."
# 5. End of session
"Write a progress note to scratch.md: what we completed, what's next, decisions made."
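Step 5 can also be made mechanical; a minimal sketch of appending a timestamped note to the scratchpad (the note text is illustrative):

```shell
# Append a dated progress note to scratch.md (the scratchpad file used above).
note="Completed: data layer. Next: dashboard UI (sub-task 2). Decision: SSE over polling."
printf '\n## %s\n%s\n' "$(date +%F)" "$note" >> scratch.md
tail -n 2 scratch.md
```

At the next session start, "Read scratch.md" restores the decisions and next steps instantly.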
Chapter 07 · Power Feature
Skills — Reusable AI Capabilities
Skills are saved, versioned prompt templates that encode your team's best practices into reusable units. Think of them as functions for Claude — parameterized, shareable, and composable.
✦
What Skills unlock
Your best prompts become team assets.
Instead of retyping complex instructions every session, you define a Skill once — with parameters, examples, and guardrails — and invoke it with a single command. Skills accumulate institutional knowledge about how your team wants Claude to behave.
📖 What a Skill looks like
Skills are defined as YAML or markdown files in .claude/skills/. They contain a prompt template with typed parameters, examples, and output format specifications.
.claude/skills/pr-review.md
---
name: pr-review
description: "Thorough PR review as a Staff Engineer"
params:
focus: string # e.g. "security", "performance", "all"
severity: enum [critical, major, minor, all]
---
You are a Staff Engineer doing a rigorous PR review.
Review the current git diff. Focus on: {{focus}}
Report issues of severity: {{severity}} and above.
For each issue:
- Location (file:line)
- What's wrong and why it matters
- The exact fix (show code)
- Severity: Critical / Major / Minor
End with: overall assessment and top 3 must-fix items.
⚡ Invoking Skills
terminal
# Run a skill with parameters
claude skill pr-review --focus security --severity critical
# List all available skills
claude skill list
# Run skill on specific files
claude skill pr-review --files src/auth.ts src/api/payments.ts
Skills to Build for Your Team
🔍
pr-review
Staff engineer review: bugs, security, perf, style. Parameterized by focus area and severity threshold.
🧪
generate-tests
Comprehensive test generation with your exact framework, patterns, and coverage requirements baked in.
📝
write-prd
Structured PRD generation from raw notes. Encodes your team's exact PRD template and quality bar.
📄
write-docs
Auto-documentation: JSDoc, README, architecture notes. Matches your team's doc standards.
🔒
security-audit
OWASP Top 10 check on any file or module. Reports with exact line numbers and fixes.
SKILL.md File Structure
📂 Recommended Skill Directory Layout
A well-structured skill has multiple supporting files. The SKILL.md is the entrypoint, but reference docs, examples, and scripts make the skill significantly more effective.
skill directory structure
.claude/skills/pr-review/
├── SKILL.md         # Main prompt template with params
├── reference.md     # Team standards, style guide excerpts
├── examples.md      # Good/bad examples for few-shot learning
├── scripts/
│   └── post-review.sh   # Post-processing scripts
└── templates/
    └── report.md        # Output format template
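What might scripts/post-review.sh do? A sketch, under the assumption that the report template emits one `Severity:` line per finding (the file name and report format are assumptions, not a documented interface):

```shell
# post-review.sh (sketch): block when the generated report contains Critical findings.
report="review-report.md"
# Sample report so the sketch is self-contained; normally Claude writes this file.
printf 'Severity: Critical\nSeverity: Minor\n' > "$report"
criticals=$(grep -c 'Severity: Critical' "$report")
if [ "$criticals" -gt 0 ]; then
  echo "BLOCKED: $criticals critical finding(s)"   # in CI you would exit 1 here
else
  echo "READY: no critical findings"
fi
```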
◈
Skills demonstrably improve output quality
Teams that define domain-specific skills see measurably better results than ad-hoc prompting. The skill encodes your team's hard-won knowledge — naming conventions, error handling patterns, security checklists — so every team member gets expert-level output, regardless of their prompting ability.
Plugin Marketplace
🔌 Discover & Install Community Skills
The plugin marketplace lets you browse, install, and share skills created by the community. Install pre-built skills for common workflows without writing them yourself.
plugin commands
# Browse the marketplace
/plugin marketplace
# Search for a specific skill type
/plugin marketplace search "code review"
# Install a community plugin
/plugin install @claude-community/pr-review
# List installed plugins
/plugin list
Chapter 08 · Power Feature
Subagents — Parallel AI Workers
Subagents let Claude spawn parallel workers to tackle parts of a problem simultaneously. Instead of one agent working linearly, you orchestrate a team of specialized AI workers.
✦
The paradigm shift
Think in parallel, not in sequence.
A single agent writes the frontend, then the backend, then the tests — sequentially. With subagents, Claude orchestrates a frontend agent, a backend agent, and a test agent working simultaneously. Complex features that take hours become minutes.
Subagent Fan-out / Fan-in
Orchestrator (decomposes task)
Agent 1 (backend)
Agent 2 (frontend)
Agent 3 (tests)
Merge (assemble result)
🏗️ How Subagents Work
You describe a complex goal. Claude (the orchestrator) decomposes it into parallel workstreams, spawns subagents for each, monitors their progress, resolves conflicts, and assembles the final result.
orchestration prompt
Use subagents to implement the user notifications feature in parallel:
Agent 1 — Backend:
- Notifications table (Prisma schema)
- CRUD API endpoints in /app/api/notifications
- Real-time via Server-Sent Events
- Email queue integration (Resend)
Agent 2 — Frontend:
- NotificationBell component (unread count badge)
- NotificationPanel (dropdown list, mark as read)
- Real-time SSE subscription hook
- Optimistic UI updates
Agent 3 — Tests:
- API endpoint tests (happy path + errors)
- Component tests (render, interactions)
- E2E test for the full notification flow
Coordinate: Backend completes schema first, then Frontend and Tests can proceed in parallel.
Report conflicts immediately. Merge when all three are green.
Best Use Cases for Subagents
⚡
Feature Sprints
Decompose a full feature into frontend, backend, tests, and docs — built in parallel. Ship in hours, not days.
🔄
Codebase Migrations
Split a large migration across file groups. Each subagent migrates a module, parent validates consistency.
🔍
Multi-Angle Audits
Run security, performance, and accessibility audits in parallel. Get a unified report combining all findings.
📚
Bulk Documentation
Split your codebase into modules. Each subagent documents a module. Orchestrator ensures consistency.
◈
Orchestrator prompt tip
Always define the dependency order explicitly. Tell Claude which agents must complete first before others can start. Specify exactly how conflicts should be resolved — don't leave it ambiguous.
The /agents Command & Agent Definitions
📂 .claude/agents/ — Persistent Agent Definitions
Define reusable agent types in .claude/agents/. Each agent gets its own system prompt, tool restrictions, and permission scope — isolated from the orchestrator and other agents.
.claude/agents/security-auditor.md
---
name: security-auditor
tools: [Read, Glob, Grep] # Read-only — no writes
model: opus
---
You are a security auditor. Your job is to find vulnerabilities.
Rules:
- Focus on OWASP Top 10
- Check for injection, XSS, CSRF, auth bypass
- Report file:line for every finding
- Classify: Critical / High / Medium / Low
- Suggest exact code fixes
You cannot:
- Write or modify files
- Run shell commands
- Access network resources
agent management
# List all defined agents
/agents
# Spawn a specific agent
/agents spawn security-auditor
# Orchestrate multiple agents from a prompt
"Use the security-auditor agent on src/auth/ and the test-writer agent on src/api/"
Advanced Orchestration Patterns
🔀
Fan-Out / Fan-In
Spawn N agents for N modules (fan-out). Orchestrator collects all results and merges into a unified report (fan-in). Great for audits and documentation.
🔗
Pipeline Chain
Agent A generates code → Agent B writes tests → Agent C runs security review. Each agent's output feeds the next. Sequential but specialized.
🏁
Race & Pick Best
Spawn 3 agents with different approaches to the same problem. Compare outputs. Pick the best implementation. Useful for algorithm design.
👁️
Supervisor Pattern
One agent writes code, another monitors for quality issues in real-time. The supervisor can halt the worker if standards are violated.
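The fan-out / fan-in pattern can also be driven from a plain shell script in headless mode; a minimal sketch where the `worker` function is a placeholder standing in for real `claude -p` calls:

```shell
# Fan-out: one background worker per module. Fan-in: merge the per-module reports.
worker() {
  # Placeholder for: claude -p "Audit $1 for security issues" > "report-$1.txt"
  echo "findings for $1" > "report-$1.txt"
}
for module in auth payments api; do
  worker "$module" &   # fan-out: launch workers in parallel
done
wait                   # fan-in starts only after every worker finishes
cat report-auth.txt report-payments.txt report-api.txt > merged-report.txt
```

The same shape works for the audit and documentation use cases above: one worker per module, one merge step at the end.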
Chapter 09 · Power Feature
Hooks — Automate Claude's Workflow
Hooks are event-driven triggers that run automatically at specific points in Claude's workflow. They let you enforce team standards, automate quality checks, and integrate Claude into your existing tooling — without any manual prompting.
✦
What hooks enable
Encode your team's quality standards into Claude's DNA.
Hooks run before or after Claude's actions — automatically. Format code before writing. Run tests after editing. Post to Slack when a PR is ready. Log everything to your audit trail. Zero manual steps required.
🎨
Auto-Format on Write
pre_write hook runs Prettier before every file write. Every Claude-generated file is always formatted correctly.
✅
Test-on-Change Loop
post_write runs related tests. Claude sees failures and auto-iterates. A fully automated TDD loop.
🔒
Security Gate
pre_write hook scans for secrets (git-secrets, gitleaks). Blocks Claude from writing if secrets are detected.
📡
Real-time Slack Updates
post_session posts a summary of what Claude built to your team Slack. Zero-friction async collaboration.
📋
Audit Trail
pre_bash logs every command Claude runs. Full auditability for compliance and debugging runaway agents.
🚀
Auto PR Creation
post_session hook commits Claude's work, pushes to a branch, and opens a GitHub PR — fully automated.
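As a concrete sketch of the security-gate idea: a hook script that scans a file for secret-shaped strings and returns non-zero to block the write. The interface (file path as first argument) and the patterns are assumptions for illustration; real gates use gitleaks or git-secrets.

```shell
# pre_write secrets gate (sketch): a non-zero return blocks the write.
check_secrets() {
  file="$1"
  # Naive illustrative patterns: AWS access key IDs and PEM private key headers.
  if grep -Eq 'AKIA[0-9A-Z]{16}|BEGIN (RSA|EC|OPENSSH) PRIVATE KEY' "$file"; then
    echo "BLOCKED: possible secret in $file"
    return 1
  fi
  echo "OK: $file"
}

# Demo: a file containing a fake AWS key is rejected.
printf 'aws_key = "AKIAABCDEFGHIJKLMNOP"\n' > demo-config.ts
check_secrets demo-config.ts || echo "write rejected"
```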
Chapter 10 · Power Feature
MCP — Model Context Protocol
MCP is the open standard that lets Claude connect to any external tool, API, or data source. It's the integration layer that makes Claude Code a universal AI workspace — not just a code editor.
✦
Why MCP matters
Claude becomes the interface to your entire stack.
With MCP, Claude can query your database, create JIRA tickets, search Confluence, post to Slack, query Datadog metrics, trigger GitHub Actions, and look up customer data in Salesforce — all in one conversation, without switching tools.
1
Database-Aware Development
Claude queries your actual database schema, writes migrations, validates queries against real data — no copy-pasting schemas.
with postgres MCP
Look at our users table schema. Write a Prisma migration to add a `last_seen_at` timestamp column, backfill it from the `sessions` table, and add an index for our "active users in last 30 days" query. Show the query plan before applying.
2
GitHub-Native Workflows
Create PRs, review issues, analyze CI failures — Claude acts directly on your GitHub repo.
with github MCP
Look at CI failures on the last 5 PRs. Identify the most common failure pattern. Write a fix, then create a PR with a description explaining the root cause and what we're doing to prevent recurrence.
3
Ticket-to-Code Automation
Pull a JIRA ticket, implement it, and close it — in one Claude session.
with jira + github MCP
Read PROJ-847 from JIRA. Implement what's described. When done: create a PR on GitHub, link it to the ticket, move the ticket to "In Review", and post a summary in the ticket comments.
4
Production Intelligence
Query your monitoring tools to understand production behavior before writing code.
with datadog MCP
Query Datadog for the slowest API endpoints in the last 7 days. For the top 3, look at the code and identify why they're slow. Write targeted optimizations and predict the performance improvement for each.
5
Figma → Code in One Shot
Turn any Figma design into production code. Claude reads the actual design file — layout, spacing, colors, components — and writes pixel-accurate code directly.
with figma MCP
Look at my current Figma selection. Implement it as a responsive React component using our design system tokens from `src/styles/tokens.ts`. Match the spacing, typography, and colors exactly. Use CSS modules for styling.
✦
Deep Dive
Setting up Figma MCP — the most popular MCP integration
Figma's official MCP server lets Claude read your actual design files — frames, layers, variables, components, and layout data. Two setup options:
Option A — Remote Server (no desktop app needed)
1
Add the remote Figma MCP server
terminal
# Add for current project only
claude mcp add --transport http figma https://mcp.figma.com/mcp
# Or add globally (all projects)
claude mcp add --transport http --scope user figma https://mcp.figma.com/mcp
2
Authenticate in Claude Code
Type /mcp in Claude → select figma → select Authenticate → click Allow Access in the browser. You'll see "Authentication successful. Connected to figma" back in your terminal.
Option B — Desktop Server (via Figma app)
1
Enable in Figma Desktop
Open Figma desktop → Menu → Preferences → Enable Dev Mode MCP Server. This starts a local server at http://127.0.0.1:3845/sse.
2
Connect Claude Code to the local server
terminal
claude mcp add --transport sse figma-dev-mode http://127.0.0.1:3845/sse
Figma MCP Workflow Examples
🎯
Selection → Code
Select a frame in Figma, then: "Implement my current Figma selection as a Vue component." Claude reads the exact frame data — no screenshots needed.
🔗
Link → Code
Copy a Figma frame URL, then: "Build this design: [figma-link]. Use Tailwind, match colors and spacing exactly."
🎨
Design Tokens Extraction
"Read the Figma variables and generate a design tokens file (colors, spacing, typography) as CSS custom properties."
📐
Multi-Frame Flows
"Look at the 5 frames in the 'Onboarding' page. Implement the complete onboarding flow with routing between screens."
🔄
Code → Figma (Reverse)
Figma also supports turning production code back into editable Figma designs — useful for documenting existing UIs.
⚡
Iterate Visually
"The card padding is off. Recheck my Figma selection and fix the spacing to match the design exactly."
◈
Pro tip: Figma + your design system
Put your design system mapping in CLAUDE.md: "When implementing Figma designs, use components from src/components/ui/ and tokens from src/styles/tokens.ts. Never use raw hex colors — always map to our design tokens." This ensures Claude generates code that fits your codebase, not just matches the pixels.
Any HTTP API can become an MCP server using the @modelcontextprotocol/sdk. If your internal tool has a REST API, Claude can be using it after about 30 minutes of setup. This is how you make Claude an expert in your proprietary systems.
Chapter 11 · Power Feature
Security & Sandbox
Claude Code gives you fine-grained control over what it can and cannot do. Sandbox mode, permission controls, and security guardrails let you experiment safely — even with untested code.
✦
Why this matters
Trust, but verify — and constrain.
An autonomous agent that can execute shell commands and write files needs guardrails. Claude Code's security model gives you three layers: sandbox isolation, permission gates, and CLAUDE.md guardrails. Together they let you run Claude with confidence on sensitive codebases.
Sandbox Mode
🔒 Isolated Execution Environment
Sandbox mode restricts Claude to an isolated environment where file writes and shell commands are contained. Network access is limited, file system changes are tracked, and side effects are verified before applying.
enable sandbox
# Toggle sandbox in REPL
/sandbox
# Launch with sandbox enabled
claude --sandbox
# Sandbox restricts:
Network: No outbound requests (API calls blocked)
Files: Writes tracked and reversible
Shell: Commands run in isolated context
Side effects: All changes shown for review before applying
Permission Modes
🛡️
Ask Every Time (Default)
Claude asks for approval before every file write and shell command. Maximum control, slightly slower workflow. Best for sensitive codebases.
⚡
Auto-Approve Safe Actions
File reads and searches are auto-approved. Writes and shell commands still require approval. Best balance of speed and safety.
🚀
Full Auto (CI/CD Only)
All actions auto-approved via --dangerously-skip-permissions. Only for sandboxed CI/CD environments. Never use locally.
permission management
# View current permissions
/permissions
# Restrict to specific tools only
claude --allowedTools "Read,Glob,Grep"
# Block specific tools
claude --disallowedTools "Bash"
# Run security review on recent changes
/security-review
Security Guardrails in CLAUDE.md
CLAUDE.md security section
## Security Guardrails
- NEVER hardcode API keys, tokens, or secrets
- NEVER commit .env files or credentials
- NEVER use eval() or dynamic code execution
- NEVER disable CORS protections
- Always sanitize user input (use Zod schemas)
- Always use parameterized queries (Prisma only, no raw SQL)
- Always validate JWT tokens before accessing protected data
- Log security-relevant events (login, permission changes)
- Use Content-Security-Policy headers
- Rate-limit all authentication endpoints
Safe Experimentation Patterns
🧪
Branch-First Development
Always have Claude work on a feature branch. If anything goes wrong, switch back with git checkout main and delete the branch. Zero risk to your main branch.
📋
Review Before Apply
Use /review before committing. Claude shows all pending changes as a diff. Reject anything suspicious before it touches your repo.
⏪
Instant Rollback
/undo reverts the last change immediately. For bigger rollbacks, use git stash, or git checkout -- . to discard all uncommitted changes to tracked files.
🔐
Read-Only Exploration
Launch with --allowedTools "Read,Glob,Grep" for pure exploration. Claude can analyze your codebase but can't modify anything.
🎯 Product Manager Guide
PM Playbook
Claude Code isn't just for engineers. Use it to write tighter specs, validate faster, align teams instantly, and prototype before engineering starts.
1
Brain dump → Structured PRD
prompt
Raw notes — structure these into a PRD:
[paste your Slack messages, interview notes, whatever]
Structure: Executive Summary · Problem Statement · Goals & Non-Goals · User Stories (Given/When/Then) · Success Metrics · Risks · Open Questions · Out of Scope
Be specific. No fluff. Flag anything unclear as an Open Question.
2
Pressure-test before sharing
prompt
Act as a skeptical Staff Engineer who has seen PRDs fail. Read this:
[paste PRD]
List: top 5 technical assumptions that could be wrong · 3 questions engineering will ask that I can't answer · edge cases in user stories that would break the feature · realistic scope risk assessment. Be direct.
3
One PRD → four audiences
prompt
Rewrite this PRD for 4 audiences:
1. CEO — 1 page: business impact, revenue, strategic fit, timeline
2. Engineering — technical requirements, APIs, performance targets
3. Design — user flows, key screens, edge cases, accessibility
4. Support — what's changing, FAQs, what users will ask
Cut anything irrelevant to each audience.
4
RICE prioritization
prompt
Score this backlog using RICE. Our DAU is [X]:
[paste items]
Show formula and score per item. Rank them. Flag low-confidence estimates. Identify quick wins (high impact, low effort). Output as a table.
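Behind the prompt, RICE is simple arithmetic: score = (Reach × Impact × Confidence) / Effort. A minimal sketch of the scoring Claude applies — the backlog items and numbers here are invented for illustration:

```typescript
// RICE score: (Reach × Impact × Confidence) / Effort.
// Reach: users affected per quarter. Impact: 0.25 (minimal) to 3 (massive).
// Confidence: 0 to 1. Effort: person-months.
interface BacklogItem {
  name: string;
  reach: number;
  impact: number;
  confidence: number;
  effort: number;
}

function riceScore(item: BacklogItem): number {
  return (item.reach * item.impact * item.confidence) / item.effort;
}

// Rank a backlog: highest score first.
function rankByRice(items: BacklogItem[]): BacklogItem[] {
  return [...items].sort((a, b) => riceScore(b) - riceScore(a));
}

const backlog: BacklogItem[] = [
  { name: "SSO login", reach: 5000, impact: 2, confidence: 0.5, effort: 4 },
  { name: "Dark mode", reach: 8000, impact: 0.5, confidence: 0.75, effort: 1 },
];
// Dark mode: 8000 × 0.5 × 0.75 / 1 = 3000; SSO: 5000 × 2 × 0.5 / 4 = 1250
```

The quick win surfaces immediately: lower effort beats higher impact when confidence is solid.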
5
Competitive landscape analysis
prompt
Analyze the competitive landscape for [feature/product area].
Competitors to analyze: [list or "identify top 5 in this space"]
For each competitor:
- Feature parity matrix (has / doesn't have / partial)
- Pricing model and positioning
- UX approach — what they do better / worse
- Market share / traction signals
Output: comparison table + strategic recommendations for differentiation.
What can we learn? What should we avoid? Where's the gap we can own?
6
Sprint planning & story breakdown
prompt
Break this feature into sprint-sized user stories:
[paste feature spec]
For each story:
- Title (As a [user], I want [action] so that [value])
- Acceptance criteria (Given/When/Then)
- Story points estimate (1/2/3/5/8/13)
- Dependencies on other stories
- Technical notes for engineering
Group into sprints (2-week cycles). Flag stories that need design input.
Total estimate and critical path. Identify what can be parallelized.
7
Stakeholder update generator
prompt
Generate a weekly stakeholder update from these raw notes:
[paste standup notes, PR links, blockers, metrics]
Format:
## Status: 🟢 On Track / 🟡 At Risk / 🔴 Blocked
## Shipped This Week — bullet list with links
## In Progress — what's actively being worked on, % done
## Blockers — what needs help and from whom
## Next Week — upcoming priorities
## Metrics — key numbers with trend arrows (↑↓→)
Tone: confident, concise, no fluff. Executive-ready.
8
User interview question generator
prompt
Generate user interview questions for: [feature/problem area]
User persona: [describe target user]
Research goal: [what we want to learn]
Create:
- 5 warm-up questions (build rapport, understand context)
- 8 core questions (open-ended, non-leading, about behavior not opinions)
- 3 closing questions (surprises, priorities, referrals)
- Follow-up probes for each core question
Rules: No yes/no questions. No "would you use X?" — ask about past behavior instead.
Include a 1-page discussion guide I can print.
◈
PM superpower — prototype before engineering
Use Claude to generate clickable HTML prototypes from your PRD in minutes. Share with stakeholders for feedback before a single engineering sprint is planned. This collapses the feedback loop from weeks to hours.
PM Use Cases at a Glance
📋
PRDs
Brain-dump to structured spec in seconds. Claude formats, identifies gaps, and flags open questions automatically.
📊
Prioritization
RICE, ICE, MoSCoW — any framework. Paste your backlog and get a scored, ranked table with confidence levels.
🎙️
User Research
Generate interview guides, analyze transcripts, extract themes, build persona cards — all from raw notes.
📈
Metrics
Define success metrics, build measurement plans, generate stakeholder dashboards from raw data.
📝
Release Notes
Paste changelog + PR descriptions. Get polished, user-facing release notes segmented by audience.
🎨 Designer Guide
Design Playbook
You don't need to code. You need to describe design with precision. Learn to narrate color, spacing, hierarchy, states, and motion — Claude handles the implementation.
1
Precise component spec → live code
prompt
Build a data table component.
Visual: Linear.app's table — clean, dense, professional
Colors: bg #fafafa, row hover rgba(0,0,0,.02), selected row left-border 2px #1d4ed8 + bg rgba(29,78,216,.04)
Type: 13px Inter, headers 11px uppercase 600 color #71717a
Sorting: click header, animated direction arrow
Pagination: bottom-right, "1–10 of 243 results" style
States: loading (skeleton rows) · empty (centered message + CTA) · error
Responsive: horizontal scroll mobile, sticky first column
Tailwind CSS · TypeScript · named export
♿
Accessibility audit
prompt
WCAG 2.1 AA audit of this component:
[paste code]
Per violation: WCAG criterion · current state · exact code fix (before/after) · severity
Also check: keyboard navigation · screen reader experience · focus indicators · touch target sizes (44×44px min)
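The color-contrast part of such an audit is mechanical: WCAG 2.1 defines contrast ratio in terms of relative luminance, and AA requires at least 4.5:1 for normal text (3:1 for large text). A minimal sketch of the formula, so you can sanity-check what Claude reports:

```typescript
// WCAG 2.1 relative luminance of an sRGB color (channels 0-255).
function relativeLuminance(r: number, g: number, b: number): number {
  const lin = (c: number) => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
}

// Contrast ratio: (lighter + 0.05) / (darker + 0.05). Ranges from 1 to 21.
function contrastRatio(
  fg: [number, number, number],
  bg: [number, number, number]
): number {
  const l1 = relativeLuminance(...fg);
  const l2 = relativeLuminance(...bg);
  return (Math.max(l1, l2) + 0.05) / (Math.min(l1, l2) + 0.05);
}

// AA pass for normal text?
const passesAA = (fg: [number, number, number], bg: [number, number, number]) =>
  contrastRatio(fg, bg) >= 4.5;

contrastRatio([0, 0, 0], [255, 255, 255]); // ≈ 21 — black on white, the maximum
```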
4
Motion spec → Framer Motion code
prompt
Page transitions for Next.js:
- Feel: purposeful, not flashy — like Vercel dashboard
- Entry: elements cascade from bottom, staggered 40ms
- Exit: quick fade 150ms
- Scroll reveals: fade + translateY(20px→0) on viewport entry
- Micro: buttons scale 0.97 on click
- Respect prefers-reduced-motion
Framer Motion. Output: AnimatePresence wrapper + scroll reveal hook.
5
Dark mode from light tokens
prompt
Generate a dark mode token system from these light mode tokens:
[paste your CSS custom properties or design tokens]
Rules:
- Don't just invert colors — reduce contrast, soften whites to gray-100
- Backgrounds: #0a0a0b base, #18181b surface, #27272a elevated
- Text: #fafafa primary, #a1a1aa secondary, #71717a muted
- Preserve brand colors but adjust lightness for dark backgrounds
- Shadows become darker/larger, not lighter
- Borders become more subtle (opacity-based)
Output: CSS custom properties inside [data-theme="dark"] selector.
Include a toggle component with smooth transition (150ms).
6
Responsive breakpoint strategy
prompt
Design a responsive strategy for this component:
[paste component code or description]
Breakpoints: 320px (small mobile) · 375px (mobile) · 768px (tablet) · 1024px (desktop) · 1440px (wide)
For each breakpoint specify:
- Layout changes (stack, grid columns, sidebar behavior)
- Typography scaling (clamp or media queries)
- Spacing adjustments
- What hides / collapses / becomes a menu
- Touch target sizing (min 44px on mobile)
- Image sizing and art direction
Output: Tailwind responsive classes OR CSS media queries.
Mobile-first approach. Test at every breakpoint.
7
Design system documentation
prompt
Generate design system documentation for this component library:
[paste component directory or list of components]
For each component document:
- Purpose and when to use (and when NOT to)
- Props/API with types and defaults
- Visual variants (size, color, state)
- Code examples (React/Vue/HTML)
- Do's and Don'ts with visual examples
- Accessibility notes (ARIA, keyboard, screen reader)
- Related components
Output format: MDX files compatible with Storybook.
Include a component status table (Stable / Beta / Deprecated).
8
UI state inventory
prompt
Audit every component in [directory] and generate a UI state inventory.
For each component, ensure these states exist:
- Loading (skeleton / spinner / placeholder)
- Empty (helpful message + CTA, not just "No data")
- Error (clear message + retry action + fallback)
- Success (confirmation + next action)
- Partial (some data loaded, some pending)
- Offline (graceful degradation)
Output: a table with component name, which states exist, which are missing.
For every missing state, generate the implementation code.
Priority: components users see most → least.
◈
The best design prompts describe the FEELING, not just the pixels
Instead of "make a card with 8px padding," say "it should feel like a Linear card — clean, dense, professional with room to breathe." Claude understands visual metaphors and will translate them into precise CSS. Describe the vibe, then refine the details.
Design Use Cases
🧩
Components
Describe any component with precision — colors, spacing, states, interactions — and get production-ready code instantly.
🎨
Tokens
Generate complete design token systems. Colors, typography, spacing, shadows, motion — as CSS vars, Tailwind config, or Figma JSON.
✨
Animations
Spec motion as feelings and get Framer Motion, CSS transitions, or GSAP code. Micro-interactions, page transitions, scroll reveals.
♿
Accessibility
WCAG audits with exact code fixes. Keyboard nav, screen readers, focus management, color contrast — all checked automatically.
📱
Responsive
Breakpoint strategies, container queries, fluid typography. Ensure every component works from 320px to 2560px.
📖
Documentation
Auto-generate Storybook docs, usage guides, prop tables, and Do/Don't examples for your entire component library.
💻 Developer Guide
Developer Playbook
The exact workflow used by the fastest engineers. From ticket to merged PR — plan, implement with TDD, self-review, ship.
1
Orient Claude at session start
prompt
Read CLAUDE.md. Scan src/. Today: [feature name].
Ticket: [paste ticket]
Starting point: [relevant files]
Confirm you understand the goal. List any assumptions you're making before we start.
2
Plan — files, risks, complexity
prompt
Implementation plan:
1. Files to create (path + purpose)
2. Files to modify (what changes)
3. Schema changes if any
4. New dependencies
5. Risks and gotchas
6. Complexity: Simple / Medium / Complex
Do not write code yet. Wait for my approval.
3
TDD — tests first, always
prompt
Approved. Start with failing tests.
Write Vitest tests for [feature]. Cover:
- Happy path
- Edge cases (null, empty, max values, concurrent)
- Error states (API failure, validation, timeout)
- Auth/permission edge cases
Run tests to confirm they fail. Then implement until green. Show output at each stage.
4
Self-review before committing
prompt
Review all changes as a Staff Engineer.
Check: bugs / logic errors · security vulnerabilities · performance (N+1, re-renders) · style vs CLAUDE.md · test coverage gaps · missing error handling · accessibility
Per issue: severity (Critical/Major/Minor) + exact fix.
5
Generate PR description
prompt
Write a PR description for this session's changes.
## What — brief description
## Why — problem solved + ticket link
## How — key implementation decisions
## Testing — manual steps + automated coverage
## Screenshots — list states to capture
## Checklist — tests, types, a11y, mobile
6
Debugging workflow
prompt
Debug this issue systematically:
[paste error message, stack trace, or bug description]
Follow this workflow:
1. REPRODUCE — write a minimal test that triggers the bug
2. ISOLATE — trace the root cause through the call stack
3. FIX — implement the smallest change that resolves the issue
4. PREVENT — add a regression test + update error handling
Show your reasoning at each step. Don't guess — trace the actual execution path.
If the root cause isn't clear, list hypotheses ranked by likelihood and how to verify each.
7
Performance optimization
prompt
Optimize the performance of [component/endpoint/query]:
[paste code or describe the bottleneck]
Follow this workflow:
1. PROFILE — identify what's slow (measure, don't guess)
2. IDENTIFY — list bottlenecks ranked by impact
3. FIX — implement optimizations one at a time
4. VERIFY — benchmark before/after with real data
Check for: N+1 queries · unnecessary re-renders · missing indexes · unoptimized images · blocking I/O · memory leaks · bundle size bloat
Show metrics before and after each optimization.
Don't optimize prematurely. Only fix measured bottlenecks.
8
Database migration
prompt
Create a database migration for:
[describe schema change — add column, rename table, change type, etc.]
Deliverables:
1. Migration file — forward migration with proper SQL
2. Rollback — reverse migration that undoes the change cleanly
3. Backfill script — if data needs transformation, include a batched backfill
4. Zero-downtime plan — ensure no locks on large tables, use concurrent index creation
5. Validation — query to verify migration succeeded
Consider: existing data, foreign keys, indexes, NULL handling, default values.
Test rollback works. Estimate migration time for [X] rows.
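The batching in deliverable 3 is just range math: split the table into ID windows so each backfill statement touches a bounded number of rows. A minimal sketch of the windowing (the batch sizes are illustrative):

```typescript
// Split [1, totalRows] into inclusive ID ranges of at most batchSize rows,
// so a backfill can run as many small UPDATEs instead of one giant one.
function batchRanges(totalRows: number, batchSize: number): [number, number][] {
  const ranges: [number, number][] = [];
  for (let start = 1; start <= totalRows; start += batchSize) {
    ranges.push([start, Math.min(start + batchSize - 1, totalRows)]);
  }
  return ranges;
}

batchRanges(10, 4); // [[1, 4], [5, 8], [9, 10]]
```

Each range becomes one `UPDATE ... WHERE id BETWEEN start AND end`, keeping lock times short on large tables.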
9
API design
prompt
Design an API endpoint for [feature]:
[paste requirements]
Deliverables:
1. Endpoint — method, path, naming convention matching existing API
2. Types — request/response TypeScript interfaces with JSDoc
3. Validation — Zod schema with meaningful error messages
4. Error handling — proper HTTP status codes, error response format
5. Documentation — OpenAPI/Swagger spec
6. Tests — unit + integration tests covering happy path + edge cases
Follow REST conventions. Idempotent where possible.
Include rate limiting and pagination if it returns a list.
Match the patterns in our existing API routes.
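Deliverable 3's validation, sketched framework-free so the shape is visible — in a real codebase this would be a Zod schema as the prompt specifies, and the parameter names and limits here are invented:

```typescript
// Sketch: validate and clamp pagination params for a list endpoint.
const MAX_LIMIT = 100;
const DEFAULT_LIMIT = 20;

interface PageParams { limit: number; offset: number }
type ParseResult =
  | { ok: true; params: PageParams }
  | { ok: false; status: 400; error: string };

function parsePageParams(query: Record<string, string | undefined>): ParseResult {
  const limit = query.limit === undefined ? DEFAULT_LIMIT : Number(query.limit);
  const offset = query.offset === undefined ? 0 : Number(query.offset);
  if (!Number.isInteger(limit) || limit < 1) {
    return { ok: false, status: 400, error: "limit must be a positive integer" };
  }
  if (!Number.isInteger(offset) || offset < 0) {
    return { ok: false, status: 400, error: "offset must be a non-negative integer" };
  }
  // Clamp oversized limits rather than rejecting them.
  return { ok: true, params: { limit: Math.min(limit, MAX_LIMIT), offset } };
}
```

The key design choice: meaningful 400 errors for malformed input, silent clamping for out-of-range but well-formed input.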
10
Legacy code refactoring
prompt
Refactor this legacy code safely:
[paste code or point to file]
Follow this workflow:
1. UNDERSTAND — explain what this code does, line by line
2. TEST — write characterization tests that capture current behavior (even if buggy)
3. REFACTOR — improve structure without changing behavior
4. VERIFY — all characterization tests still pass
Improvements to make:
- Extract functions (single responsibility)
- Replace magic numbers with named constants
- Add TypeScript types
- Remove dead code paths
- Simplify nested conditionals
Do not change behavior. Tests must pass before and after.
◈
The 100x loop: Plan → Test → Implement → Review → Ship
The best developers don't type faster — they iterate smarter. Plan with Claude, write failing tests first, implement until green, self-review for bugs and security, then ship with confidence. Every step has a prompt. Every prompt has a checkpoint.
Developer Superpowers
🧪
TDD
Write failing tests first. Claude implements until green. Coverage is a byproduct, not an afterthought.
🔧
Refactoring
Safely modernize legacy code. Characterization tests first, then refactor with confidence. Zero behavior changes.
🐛
Debugging
Reproduce → Isolate → Fix → Prevent. Systematic debugging with regression tests built in automatically.
⚡
Performance
Profile before optimizing. Fix N+1s, re-renders, bundle size. Measure before and after every change.
🔌
API Design
Endpoints with types, validation, docs, and tests. RESTful conventions, proper error handling, OpenAPI specs.
🔬 QA Engineer Guide
QA Playbook
The best QA thinks like an adversarial user. Claude helps you cover 10× more ground and find edge cases humans miss.
◈
Start every session with the adversarial mindset prompt
Frame Claude as an adversarial QA who has prevented billion-dollar outages. This single framing unlocks failure modes — race conditions, malformed inputs, chaotic user behavior — that polite testers miss.
1
Adversarial test matrix
prompt
You are an adversarial QA engineer. Break this feature in every way possible.
Feature: [paste PRD or description]
Generate a test matrix:
😊 Happy path · 😤 Frustrated user (rage click, back button, refresh mid-flow)
🤔 Confused user (wrong order, skipped steps, duplicates)
😈 Malicious inputs (XSS, SQLi, oversized payloads, special chars)
🌐 Environmental chaos (slow network, offline, timezone, locale)
⚡ Race conditions (concurrent requests, rapid clicks, tab duplication)
📱 Device edge cases (iOS Safari, small screens, touch vs mouse)
♿ Accessibility (keyboard-only, screen reader, high contrast)
2
Playwright E2E tests
prompt
Write Playwright E2E tests for: [user journey]
Requirements:
- Page Object Model (tests/pages/)
- data-testid selectors only (no CSS — fragile)
- Explicit waits: waitForSelector, not waitForTimeout
- Screenshot on failure (configured in playwright.config.ts)
- Parallel-safe — no shared state
- Cover happy path + top 3 failure scenarios
Generate test data factories if they don't exist.
3
Structured bug reports
prompt
Structure this observation as a JIRA bug ticket:
[what I saw]
Format: Title (component — behavior when condition) · Priority + reasoning · Steps to reproduce (exact, numbered) · Expected vs Actual · Impact · Possible cause · Files to investigate · Workaround if any · Attachments needed
4
Smart regression planning
prompt
This PR changes: [paste PR description]
Analyze risk surface:
1. Directly impacted areas (must test)
2. Indirectly impacted (shared code, downstream)
3. Risk per area: Critical / High / Medium / Low
4. Top 15 test cases ordered by risk
5. What's safe to skip and why
Time budget: [2 hours]. Optimize for maximum risk coverage.
5
API contract testing
prompt
Generate API contract tests for: [paste OpenAPI spec or endpoint list]
For each endpoint:
1. Schema validation — response matches documented types exactly
2. Required fields — never null/missing
3. Backward compatibility — new fields are additive, old fields unchanged
4. Status codes — correct codes for success, validation errors, auth failures, not found
5. Error format — consistent error response structure across all endpoints
6. Pagination — correct cursors, limits, total counts
7. Rate limiting — proper 429 responses with retry-after headers
Framework: [Jest/Vitest/Playwright]. Generate test data factories.
Flag any discrepancies between docs and actual responses.
6
Performance & load test scenarios
prompt
Generate performance test scenarios for: [feature/service]
Expected load: [X] concurrent users, [Y] requests/sec
Create:
1. Baseline test — normal load, measure p50/p95/p99 latency
2. Stress test — ramp to 3x expected load, find breaking point
3. Soak test — sustained load for 1 hour, detect memory leaks
4. Spike test — sudden 10x burst, verify recovery time
5. Endurance test — 24-hour run at normal load
For each scenario: k6/Artillery script, success criteria, what to monitor.
Include: CPU, memory, DB connections, response times, error rates.
Alert thresholds: when should we page on-call?
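The p50/p95/p99 numbers these scenarios track are percentiles over the latency samples. A minimal nearest-rank sketch, useful for checking what your load tool reports:

```typescript
// Nearest-rank percentile over latency samples (e.g., milliseconds).
function percentile(samples: number[], p: number): number {
  if (samples.length === 0) throw new Error("no samples");
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// 100 samples: 1ms through 100ms.
const latencies = Array.from({ length: 100 }, (_, i) => i + 1);
percentile(latencies, 50); // 50
percentile(latencies, 95); // 95
percentile(latencies, 99); // 99
```

Note the asymmetry this exposes: p99 can be an order of magnitude above p50 on real traffic, which is exactly why SLAs are written against tail percentiles, not averages.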
7
Test data factory generation
prompt
Generate test data factories for these models:
[paste TypeScript interfaces or database schema]
Requirements:
- Use faker.js for realistic data (names, emails, dates — not "test123")
- Overridable defaults: factory({ name: "custom" })
- Relationship builders: createUserWithOrders(), createTeamWithMembers()
- State variants: factory.active(), factory.suspended(), factory.trial()
- Batch creation: factory.createMany(50)
- Cleanup helpers: factory.cleanup() to remove created records
Output: TypeScript factory files using @faker-js/faker.
Include seed data for local development (10 users, 50 orders, etc.).
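The override and variant patterns the prompt asks for, sketched without faker so it stays self-contained — the User shape is invented, and in real use you would swap the hardcoded defaults for @faker-js/faker calls:

```typescript
interface User {
  id: number;
  name: string;
  email: string;
  status: "active" | "suspended" | "trial";
}

let seq = 0;

// Overridable defaults: userFactory({ name: "custom" }).
// Replace the hardcoded values with faker calls for realistic data.
function userFactory(overrides: Partial<User> = {}): User {
  seq += 1;
  return {
    id: seq,
    name: `User ${seq}`,
    email: `user${seq}@example.com`,
    status: "active",
    ...overrides,
  };
}

// State variants, per the prompt's requirements.
const suspendedUser = () => userFactory({ status: "suspended" });

// Batch creation: createManyUsers(50).
const createManyUsers = (n: number, overrides: Partial<User> = {}) =>
  Array.from({ length: n }, () => userFactory(overrides));
```

The spread-at-the-end pattern is what makes every field overridable without per-field plumbing.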
8
Release certification checklist
prompt
Generate a release certification checklist for: [version/feature]
Changes in this release: [paste changelog or PR list]
Checklist categories:
- [ ] Smoke tests — critical user journeys pass
- [ ] Regression — no existing functionality broken
- [ ] Performance — latency within SLA (p99 < [X]ms)
- [ ] Security — no new vulnerabilities (OWASP top 10)
- [ ] Data — migrations run cleanly, rollback tested
- [ ] Monitoring — alerts configured, dashboards updated
- [ ] Documentation — API docs, changelog, user-facing docs updated
- [ ] Rollback plan — tested, documented, rehearsed
Output: markdown checklist with owner assignments and sign-off fields.
Include a go/no-go decision matrix.
◈
QA superpower — Claude finds the bugs humans rationalize away
Humans skip edge cases because "nobody would do that." Claude doesn't rationalize — it systematically explores every path. Race conditions, timezone bugs, Unicode edge cases, concurrent mutations — the bugs that cause 2 AM incidents are exactly the ones Claude catches.
QA Arsenal
📋
Test Matrix
Adversarial test matrices covering happy paths, frustrated users, malicious inputs, environmental chaos, and device edge cases.
🎭
E2E Tests
Playwright tests with Page Object Model, data-testid selectors, parallel-safe execution, and screenshot-on-failure.
🔗
API Tests
Contract testing for schema validation, backward compatibility, proper status codes, and consistent error formats.
📊
Perf Tests
Load, stress, soak, and spike tests with k6/Artillery. Measure p50/p95/p99, find breaking points, detect memory leaks.
🐛
Bug Reports
Structured JIRA tickets with reproduction steps, severity, root cause analysis, and files to investigate.
🔍 Codebase Health Audit
prompt
Comprehensive codebase audit across all files in src/.
Report:
🚨 Security vulnerabilities — file:line, severity, fix
⚡ Performance bottlenecks — N+1s, missing indexes, bundle issues
🐛 Likely bugs — race conditions, null risks, edge case gaps
📊 Tech debt score per module (1–10), worst 5 explained
✅ What's working well — patterns to preserve
Priority: fix immediately / this sprint / this quarter.
Output as markdown report to share with the team.
🗃️ Database Migration Pipeline
Claude generates the migration, tests the rollback, validates data integrity, and estimates execution time — all before you run it in production.
prompt
Database migration pipeline for: [describe schema change]
Execute these steps in order:
1. Generate migration SQL (forward + rollback)
2. Run migration on test database
3. Run rollback on test database — verify clean reversal
4. Re-run migration — verify idempotency
5. Run data integrity checks (foreign keys, constraints, orphaned records)
6. Benchmark: time the migration on [X] rows
7. Generate zero-downtime deployment plan
If any step fails, stop and report. Do not proceed to production.
Output: migration file, rollback file, validation queries, deployment runbook.
📦 Dependency Upgrade Workflow
Upgrade dependencies one at a time. Run tests after each. Report breaking changes. Never ship a broken upgrade.
prompt
Upgrade all outdated dependencies in this project safely.
Workflow per dependency:
1. Check current version vs latest
2. Read the changelog for breaking changes
3. Upgrade the dependency
4. Run the full test suite
5. If tests fail: fix the breaking change or roll back
6. If tests pass: commit with message "chore: upgrade [pkg] from vX to vY"
Order: patch updates first, then minor, then major.
Skip dependencies with known incompatibilities (list them).
Output: summary table of all upgrades with status (✅ upgraded / ⚠️ breaking / ❌ skipped).
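The "patch first, then minor, then major" ordering falls straight out of the version strings. A simplified sketch of the classification — it assumes plain "x.y.z" versions and ignores pre-release and build tags:

```typescript
type UpgradeKind = "patch" | "minor" | "major" | "none";

// Classify an upgrade by comparing semver components.
function classifyUpgrade(current: string, latest: string): UpgradeKind {
  const [cMaj, cMin, cPat] = current.split(".").map(Number);
  const [lMaj, lMin, lPat] = latest.split(".").map(Number);
  if (lMaj > cMaj) return "major";
  if (lMaj === cMaj && lMin > cMin) return "minor";
  if (lMaj === cMaj && lMin === cMin && lPat > cPat) return "patch";
  return "none";
}

// Order upgrades safest-first: patch, then minor, then major.
function sortUpgrades(deps: { name: string; current: string; latest: string }[]) {
  const order: UpgradeKind[] = ["patch", "minor", "major"];
  return deps
    .filter((d) => classifyUpgrade(d.current, d.latest) !== "none")
    .sort(
      (a, b) =>
        order.indexOf(classifyUpgrade(a.current, a.latest)) -
        order.indexOf(classifyUpgrade(b.current, b.latest))
    );
}
```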
🚨 Incident Response Automation
Claude reads error logs, traces root cause, proposes a fix, and drafts the postmortem — while you focus on communication and coordination.
prompt
Incident response — production issue detected:
[paste error logs, alerts, or symptoms]
Execute:
1. TRIAGE — severity (P0–P3), blast radius, user impact estimate
2. ROOT CAUSE — trace the error through logs, identify the failing component
3. MITIGATION — propose immediate fix (hotfix, rollback, feature flag)
4. FIX — implement the fix, run tests
5. POSTMORTEM — draft a blameless postmortem:
- Timeline of events
- Root cause analysis (5 Whys)
- What went well / what didn't
- Action items with owners and deadlines
- How to prevent recurrence
Prioritize speed. Get the mitigation out first, then investigate.
📝
Auto-Documentation
Point Claude at legacy code. Get JSDoc, README updates, architecture diagrams (Mermaid), and API reference docs generated automatically.
🔄
Safe Mass Refactoring
Rename a type used in 50 files. Claude identifies all call sites, updates them, regenerates affected tests. One command, full repo.
🏦
Tech Debt Sprints
"Find the 10 worst functions by complexity. Refactor one by one with tests." A repeatable process for systematic quality improvement.
🗃️
Migration Pipelines
Generate, test, rollback-test, and benchmark database migrations before they touch production. Planned for zero downtime.
📦
Dependency Upgrades
Upgrade deps one at a time with automated test runs. Patch first, then minor, then major. Breaking changes caught immediately.
🚨
Incident Response
From error logs to postmortem in minutes. Triage, root cause, mitigation, fix, and blameless retrospective — all automated.
Chapter 16
Team Playbook
How high-performing teams adopt Claude Code at scale. From spec-driven development to shared guardrails — this is the organizational playbook.
◈
Core principle
Spec-Driven Development (SDD): Humans write specs, AI executes.
The most effective teams don't have engineers writing code by hand anymore. They write precise specs — feature requirements, API contracts, test criteria — and Claude implements them. The human's job shifts from "writing code" to "defining outcomes and reviewing results."
The SDD Workflow
Swimlane — Spec-Driven Development
Stages: 1. Spec → 2. Review → 3. Build → 4. Ship
PM: writes the spec, iterates on feedback
Engineer: validates the spec, reviews the output, merges & ships
Claude: implements everything
1
PM writes a structured spec
Feature requirements, acceptance criteria, edge cases, API contracts — in a format Claude can execute directly. The spec IS the prompt.
2
Engineer reviews spec, feeds to Claude
The engineer validates the spec is technically feasible, adds implementation constraints, then feeds it to Claude with Plan Mode enabled.
3
Claude implements, engineer reviews
Claude builds the feature end-to-end: code, tests, docs. The engineer reviews the output — not the process. Focus on correctness and quality, not keystrokes.
4
Ship & iterate
The reviewed code ships. Feedback loops improve the CLAUDE.md, the specs, and the team's skill definitions. Each iteration gets faster.
Share CLAUDE.md for Standardized Onboarding
◈
One CLAUDE.md, entire team aligned
Commit your CLAUDE.md to git. Every team member — and every Claude session — gets the same conventions, patterns, and guardrails. New developers (and new AI sessions) onboard instantly. No more "how do we do X here?" questions.
Guardrails & Permission Approval Workflows
🔐
Permission Tiers by Role
Junior devs get read-only + sandbox. Seniors get auto-approve on safe actions. CI/CD gets full auto. Set via .claude/settings.json committed to git.
🚧
Approval Gates
Use hooks to require approval before Claude touches production configs, database schemas, or auth logic. Block destructive actions by default.
📊
Audit Everything
Log all Claude actions to an audit trail. What commands it ran, what files it modified, what it was asked to do. Compliance-ready from day one.
Hooks for Enforcing Team Standards
team enforcement hooks
# Every file Claude writes gets auto-formatted
pre_write → prettier --write {{file}}
# Every file edit triggers related tests
post_write → vitest run --related {{file}}
# Block commits without test coverage
pre_bash (on git commit) → check-coverage.sh
# Notify team lead on sensitive file changes
post_write (on src/auth/*) → notify-security-team.sh
# Auto-generate changelog entry on PR creation
post_session → generate-changelog.sh
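The arrows above are shorthand. In Claude Code's actual configuration these rules live in .claude/settings.json as hooks on tool events — a hedged sketch of the first rule, matching the documented PostToolUse shape (verify event names and payload fields against the current hooks docs before copying):

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "hooks": [
          {
            "type": "command",
            "command": "jq -r '.tool_input.file_path' | xargs prettier --write"
          }
        ]
      }
    ]
  }
}
```

Hooks receive the tool call as JSON on stdin, which is why the command extracts the file path with jq rather than a template variable.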
Settings Hierarchy Bridges Skill Gaps
◈
Junior devs produce senior-level output
When CLAUDE.md encodes your architecture patterns, naming conventions, and quality standards — every team member gets output at that level. The settings hierarchy (Enterprise → Project → Shared → User) ensures that team-level standards override individual preferences. Skill gaps between team members shrink dramatically.
Sample Team CLAUDE.md
📄 Real-world CLAUDE.md template
Copy this as a starting point for your team. Customize the stack, conventions, and rules to match your project.
CLAUDE.md
# Project: [Your App Name]
## Stack
- Framework: Next.js 14 (App Router)
- Language: TypeScript (strict mode)
- Styling: Tailwind CSS + shadcn/ui
- Database: PostgreSQL + Prisma ORM
- Testing: Vitest + Playwright
- CI: GitHub Actions
## Architecture
- src/app/ — Next.js pages and layouts (App Router)
- src/components/ — Reusable UI components (atomic design)
- src/lib/ — Business logic, utilities, types
- src/server/ — API routes, server actions, DB queries
- prisma/ — Schema and migrations
## Code Conventions
- Use named exports, never default exports
- Components: PascalCase files, co-located tests (.test.tsx)
- Hooks: use[Name] pattern in src/hooks/
- Types: co-locate with usage, shared types in src/lib/types.ts
- Prefer server components. Use "use client" only when needed.
- Error handling: Result pattern, never throw in business logic
## DO NOT
- Do not use any, suppress TypeScript errors, or add @ts-ignore
- Do not use var — always const or let
- Do not write console.log (use the logger utility)
- Do not commit .env files or secrets
- Do not modify prisma/schema.prisma without a migration plan
- Do not skip tests. Every feature needs unit + integration tests.
## Testing
- Run: npm run test (unit) / npm run test:e2e (Playwright)
- Coverage threshold: 80% lines
- Test naming: "should [expected behavior] when [condition]"
## Git
- Branch naming: feat/ticket-123-short-description
- Commit format: type(scope): description
- Always squash merge to main
Onboarding a New Developer with Claude
prompt
I'm a new developer joining this project. Help me onboard.
1. Read CLAUDE.md and summarize the key conventions
2. Scan the project structure and explain the architecture
3. Identify the 5 most important files I should understand first
4. List the main abstractions and patterns used (state management, data fetching, error handling)
5. Show me a typical feature implementation end-to-end (pick a recent one from git log)
6. What are the common gotchas new developers hit?
7. Set up my local dev environment — walk me through each step
I learn best by example. Show me real code from this repo, not generic examples.
Cross-Team Code Review
prompt
Review this PR as someone from a different team who owns the shared libraries:
[paste PR diff or gh pr view URL]
Focus on cross-team concerns:
1. Does this change break any shared interfaces or APIs?
2. Are there backward-compatibility issues for other consumers?
3. Does it follow the shared conventions in CLAUDE.md?
4. Are there performance implications for other services?
5. Does it add coupling between teams that shouldn't exist?
6. Is the public API surface clean and well-documented?
Tone: collaborative, not adversarial. Suggest, don't demand.
Flag anything that needs discussion before merge.
◈
Metrics dashboard
Before & after Claude Code adoption
Ticket → PR time: 4.2 days → 1.1 days (74% faster)
PR revision rounds: 3.1 → 1.4 (55% fewer)
Test coverage: 62% → 87% (+25 points)
Bug escape rate: 12/sprint → 3/sprint (75% reduction)
Onboarding time: 3 weeks → 4 days (80% faster)
Source: Aggregated from teams using SDD workflow for 3+ months
Team Adoption Strategies
🧪
"No Manual Code" Week
Challenge: one week where all code must go through Claude. Forces the team to learn effective prompting, write better specs, and discover where AI excels (and where it doesn't).
📢
Knowledge-Sharing Forums
Weekly "prompt show & tell" — share your best Claude workflows, skills, and CLAUDE.md improvements. The team learns 10x faster from shared experience.
📏
Measure What Matters
Track: time from ticket to PR, number of revision rounds, test coverage delta, bug escape rate. These metrics prove the ROI and guide adoption.
🔄
Iterate the CLAUDE.md
Treat CLAUDE.md like code: review it in PRs, version it, improve it weekly. Every mistake Claude makes should result in a new rule added to the file.
Chapter 17
Common Mistakes
These eight patterns explain 90% of the gap between average and expert Claude Code users. Avoid them and you'll leap ahead immediately.
✗
Skipping CLAUDE.md
Starting without a CLAUDE.md is like onboarding a new developer with zero documentation. Every session wastes 20 minutes on corrections.
◈
Fix: Create CLAUDE.md today. When Claude makes a mistake you'd want it not to repeat, add it to the DO NOT section. It becomes more valuable over time.
✗
Committing without reviewing
Claude is highly capable but not infallible. Shipping its output without review is how bugs reach production.
◈
Fix: Run git diff --staged before every commit. Or use the cc-review alias. 2 minutes of review saves hours of incident response.
✗
Vague prompts
"Fix this" or "make it better" generates mediocre output. The quality of Claude's response is a direct reflection of prompt specificity.
◈
Fix: Always specify what, why, constraints, and output format. If you get a bad response, the prompt was underspecified — not Claude's fault.
✗
Big tasks without planning
"Build the entire dashboard" in one shot produces sprawling, inconsistent code. Big tasks need architectural checkpoints.
◈
Fix: Always ask for a plan first. "Wait for my approval before starting." Review the plan. Break into sub-tasks. This is how senior engineers work.
✗
Ignoring context limits
When the context window fills, Claude's output quality degrades noticeably. Most users don't realize this and wonder why Claude seems "worse."
◈
Fix: Run /status regularly. Compact at 80% full. Break long features across sessions with progress notes in scratch.md.
✗
Using Claude as a search engine
Asking "How does React useEffect work?" wastes a powerful coding agent on something Google does better. Claude excels at doing, not explaining basics.
◈
Fix: Give tasks, not questions. Instead of "What is the best way to handle auth?" say "Implement JWT auth with refresh tokens for our Express API. Follow the patterns in src/auth/." Claude is a 100x engineer, not a search engine.
✗
Not iterating on CLAUDE.md
Writing a CLAUDE.md once and never updating it. Your codebase evolves — your AI instructions should too. Stale rules lead to stale output.
◈
Fix: Treat CLAUDE.md like code: review it in PRs, update it weekly. Every time Claude makes a mistake you'd want it to avoid, add a rule. Every time you remove a library, update the stack section. Set a calendar reminder.
✗
Copy-pasting instead of referencing files
Pasting 200 lines of code into the prompt instead of letting Claude read the file directly. This wastes context, introduces copy errors, and misses surrounding code that Claude needs for full understanding.
◈
Fix: Say "Read src/components/UserTable.tsx and refactor it" instead of pasting the file. Claude can read files, understand imports, and follow the dependency chain. It gets better context this way — and you save precious tokens.
Chapter 19 · Power Users Only
Under the Hood
How Claude Code actually works at a systems level. Understanding the internals helps you debug weird behavior, optimize token usage, and push the tool to its limits.
⚠
Heads up
This section covers undocumented internals that can change between versions.
These details are sourced from official docs, community analysis, and traced LLM traffic. They'll help you reason about behavior, but don't build workflows that depend on exact implementation details.
The Agentic Loop
At its core, Claude Code is a single-threaded loop: think → act → observe → repeat. Each iteration, the model decides what tool to call (or whether to respond with text). The loop terminates when Claude produces a plain text response with no tool calls.
Agentic Loop
Think: read context
Act: call tool
Observe: read result
Decide: loop or exit?
Loop back — unless Claude responds with text only (exit)
1
Your prompt enters the loop
Your message is appended to the conversation history. The system prompt, CLAUDE.md contents, tool definitions, and any loaded skill descriptions are assembled into the full context and sent to the Claude API.
2
Claude reasons and picks a tool
The model reads the full context and decides what to do. It might call Read to look at a file, Bash to run a command, Edit to modify code, or Grep to search. If extended thinking is active ("think hard", "ultrathink"), the model spends more internal reasoning tokens before deciding.
3
The tool executes locally
Claude Code runs the tool on your machine — it reads your actual files, runs real commands in your shell, hits live APIs. The result (file contents, command output, search results) is appended to the conversation as a tool result message.
4
Result feeds back, loop repeats
The tool result becomes new context. Claude reads it, decides the next action, and the loop continues. A single user prompt can trigger dozens of tool calls — read → search → read → edit → bash → read → edit → bash — before Claude is satisfied.
5
Text response = loop exit
When Claude responds with plain text (no tool call), the loop terminates and you see the response. You can interrupt at any point — your input enters a real-time steering queue that lets Claude adjust mid-loop without restarting.
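The five steps above condense into a toy loop. This sketches the control flow only; the `callModel` and `runTool` callbacks stand in for the real API and tool layer, and all names are illustrative:

```typescript
type ToolCall = { tool: string; args: Record<string, unknown> };
type ModelTurn = { text?: string; toolCall?: ToolCall };

// Toy version of the think → act → observe loop: keep calling the model,
// execute whatever tool it requests, and exit on a text-only response.
function agentLoop(
  callModel: (history: string[]) => ModelTurn,
  runTool: (call: ToolCall) => string,
): string {
  const history: string[] = [];
  for (;;) {
    const turn = callModel(history);            // think: model reads full context
    if (!turn.toolCall) return turn.text ?? ""; // text only: loop exits
    const result = runTool(turn.toolCall);      // act: tool runs locally
    history.push(result);                       // observe: result becomes context
  }
}
```

A single user prompt can spin this loop dozens of times before the model decides it's done.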
The Built-in Toolkit
Claude Code ships with ~18 built-in tools. Each is a JSON schema sent in the system prompt — the model "knows" about tools only because their definitions are in its context. Here's the real toolkit:
Category
Tool
What it actually does
Read
Read
Reads up to 2000 lines per call. Can read images (multimodal), PDFs (max 20 pages), and Jupyter notebooks. Lines over 2000 chars are truncated.
Glob / Grep
Glob finds files by pattern. Grep does regex search via ripgrep (not grep). Chosen over vector DB for speed — no embeddings, just fast regex.
Write
Edit
Surgical string replacement — finds exact match of old_string, replaces with new_string. Fails if match isn't unique. This is why Claude prefers small targeted edits over full rewrites.
Write
Overwrites entire file. Claude must Read before Write (enforced). Used for new files or when Edit can't work.
Execute
Bash
Runs shell commands in your actual shell with your environment. Working directory persists between calls, but shell state doesn't. Commands timeout after 2 minutes by default (max 10 min). Each command is risk-classified before execution.
Web
WebFetch / WebSearch
WebFetch retrieves a URL and processes content through a smaller model. WebSearch performs web queries. Both are restricted — can't access authenticated pages.
Planning
TodoWrite
Creates structured task lists with IDs, status, and priorities. The UI renders these as interactive checklists. System messages reinject TODO state after each tool call so Claude doesn't lose track.
Orchestration
Task
Spawns subagents — isolated Claude instances with their own context window. Can't spawn their own subagents (no recursive explosion). Types: Explore (read-only), Plan (architect), and specialized agents.
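The Edit tool's unique-match constraint from the table is easy to picture in code. A sketch of the rule as described, not the actual implementation:

```typescript
// Edit-style replacement: succeed only if oldString occurs exactly once.
function applyEdit(content: string, oldString: string, newString: string): string {
  const first = content.indexOf(oldString);
  if (first === -1) throw new Error("old_string not found in file");
  if (content.indexOf(oldString, first + 1) !== -1) {
    throw new Error("old_string is not unique; add surrounding context");
  }
  return content.slice(0, first) + newString + content.slice(first + oldString.length);
}
```

This is why Claude includes surrounding lines in its old_string: extra context is what makes a match unique.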
💡
Why this matters
Every tool definition consumes tokens in every API call (~16.8K total for system tools). This is why disabling unused MCP servers via /mcp frees context — their tool schemas stop being sent with each request.
Context Window Anatomy
The 200K context window (1M for Opus 4.6) isn't all yours. Here's the real breakdown of where tokens go:
Token Budget (200K)
Sys 1.3%
Tools 8.4%
CLAUDE.md ~3%
Response 16.5%
Your conversation ~72%
Component
Tokens
% of 200K
Notes
System prompt
~2.7K
1.3%
Base instructions. Grows with environment config.
Tool definitions
~16.8K
8.4%
18 built-in tools. Each MCP server adds more.
CLAUDE.md files
1–10K
0.5–5%
All hierarchical CLAUDE.md files, loaded every request.
Response buffer
~33K
16.5%
Reserved for Claude's thinking + output. Reduced from 45K.
Your conversation
~140–150K
~72%
Prompts, tool results, Claude responses, file contents.
◈
Check yours: /context
Run /context anytime to see exactly what's consuming your context window — per-MCP-server tool costs, CLAUDE.md size, and conversation usage. This is the single best debugging tool for "why is Claude forgetting things."
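The table's arithmetic checks out with a back-of-envelope sketch. The figures below are the approximate values from the table; the CLAUDE.md number is an assumed midpoint:

```typescript
// Approximate fixed overhead from the table above, in tokens.
const BUDGET = {
  total: 200_000,
  systemPrompt: 2_700,
  toolDefinitions: 16_800,
  claudeMd: 5_000, // assumed midpoint of the 1–10K range
  responseBuffer: 33_000,
};

// Whatever the overhead doesn't consume is left for your conversation.
function conversationBudget(b: typeof BUDGET): number {
  return b.total - (b.systemPrompt + b.toolDefinitions + b.claudeMd + b.responseBuffer);
}
```

That works out to roughly 142K tokens, matching the ~140–150K row, and every extra MCP server's tool schemas eat into it.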
How Compaction Really Works
Compaction Pipeline
Threshold: ~92–98%
Summarize: smaller model
Replace: history → summary
Continue: loop resumes
1
Threshold hit (~92–98% usage)
Auto-compact triggers when your conversation approaches the context limit. The system monitors usage continuously. At ~92%, older tool outputs are cleared first. At ~98%, full compaction begins.
2
Summarization via a dedicated prompt
A separate summarization call asks a smaller model: "Write a summary of this transcript for continuity. Include state, next steps, and learnings." This is why manual /compact with focus instructions produces better summaries — you control what's preserved.
3
History replaced, critical state preserved
The raw conversation history is replaced with the compact summary. File modification records and TODO state are preserved separately so Claude doesn't "forget what it changed." But nuanced instructions from early in the conversation often get lost.
4
Session continues seamlessly
The loop resumes with the compacted context. Claude has the summary + preserved state + your next message. To prevent information loss: put key rules in CLAUDE.md (reloaded fresh, never compacted), and run /compact focus on [what matters] at logical breakpoints.
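The pipeline above can be sketched as a single function. The threshold and behavior follow the description; the `summarize` callback stands in for the smaller model, and everything here is a toy model, not the real code:

```typescript
// Toy compaction: below the threshold the history passes through untouched;
// above it, raw history collapses to a summary plus preserved state.
function maybeCompact(
  history: string[],
  usedTokens: number,
  limit: number,
  summarize: (h: string[]) => string,
  preservedState: string[], // file-modification records, TODO state
): string[] {
  if (usedTokens / limit < 0.92) return history;
  return [summarize(history), ...preservedState];
}
```

Anything not captured by `summarize` or `preservedState` is gone, which is exactly why critical rules belong in CLAUDE.md rather than early conversation turns.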
Subagent Architecture
When Claude spawns a subagent via the Task tool, it creates an entirely separate Claude instance with its own fresh context window. This is key to understanding why subagents help with long sessions.
🔍
Agent Type
Explore Agent
Read-only — has Glob, Grep, Read, WebFetch, WebSearch. No Edit, Write, or Bash. Used for codebase research without risk of changes. Fastest and cheapest.
📋
Agent Type
Plan Agent
Read-only + architecture. Can analyze code and produce implementation plans. No write access. Used for design decisions before committing to implementation.
⚡
Agent Type
Task Agent (Full)
Has all tools — can read, write, edit, and run commands. Used for delegating complete subtasks. Cannot spawn its own subagents (prevents recursive explosion).
💰
Key Insight
Separate Context = Free Space
Subagent work doesn't bloat your main context. When done, only a summary returns. This is why delegating research to subagents keeps long sessions healthy.
The Permission Pipeline
Every tool call passes through a multi-layered permission check before executing. Understanding this pipeline explains why some things get blocked and how to configure it:
Permission Pipeline
Deny Rules
blocked
Allow Rules
pass
Risk Classify
flagged
Hooks
blocked / pass
1
Deny rules checked first
Deny rules always win, regardless of any allow rules. If a file matches a deny pattern (e.g., **/.env*), access is blocked — period. This is your security backstop.
2
Allow rules checked next
If no deny rule matches, allow rules are checked in order. Matching an allow rule grants access silently — no user prompt. Configure these for trusted operations like npm test or git status.
3
Bash commands get risk-classified
Shell commands receive an additional risk classification. Claude Code categorizes each command and appends safety notes. Destructive commands (rm, git push --force) get flagged even if Bash is generally allowed.
4
Hooks execute post-permission
After permission is granted, any matching hooks fire (pre_write, pre_bash, etc.). Hooks provide additional validation on top of permissions — not a replacement. A hook can still block an operation that permissions allowed.
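Steps 1 and 2 amount to a deny-first rule check. A sketch, assuming rules are simple regexes (the real matcher and rule syntax differ):

```typescript
type Decision = "blocked" | "allowed" | "ask-user";

// Deny rules are evaluated first and always win; allow rules grant
// silent access; anything unmatched falls through to a user prompt.
function checkPermission(target: string, deny: RegExp[], allow: RegExp[]): Decision {
  if (deny.some((rule) => rule.test(target))) return "blocked";
  if (allow.some((rule) => rule.test(target))) return "allowed";
  return "ask-user";
}
```

Note that a target matching both a deny and an allow rule is still blocked, which is the "deny rules always win" guarantee.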
Model Selection Internals
Claude Code doesn't use one model for everything. Different models serve different roles behind the scenes:
Role
Model
Why
Main reasoning
Your selected model (Sonnet/Opus)
Handles your prompts, tool decisions, and code generation. This is where /model switches.
Compaction summaries
Smaller/cheaper model
Summarization doesn't need the strongest model. Using a lighter model keeps compaction fast and cheap.
WebFetch processing
Small, fast model
When fetching URLs, content is processed through a lighter model to extract relevant info before entering your context.
Subagent tasks
Configurable per-agent
Explore agents often use Haiku for speed. Full Task agents default to your main model. Override with model selection in agent config.
💡
The cost implication
Your main model runs for every iteration of the agentic loop — and it reads the entire context each time. A task that triggers 20 tool calls means 20 full-context API requests. This is why context management matters so much: a bloated context doesn't just slow things down, it multiplies your token spend across every loop iteration.
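The multiplication effect is simple arithmetic. A rough sketch with illustrative token counts (it also understates the cost, since context grows as tool results accumulate):

```typescript
// Every loop iteration re-sends the full context as input tokens,
// so input cost scales with context size times tool-call count.
function totalInputTokens(contextTokens: number, toolCalls: number): number {
  return contextTokens * toolCalls;
}

// A 100K-token context across 20 tool calls means ~2M input tokens read.
const example = totalInputTokens(100_000, 20);
```

Halving your context roughly halves the bill for the same number of tool calls, which is the economic case for /compact and for delegating research to subagents.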
Debugging Checklist for Power Users
🔎
"Claude forgot my instruction"
Compaction likely dropped it. Run /context — if usage is high, it was compacted. Move the instruction to CLAUDE.md so it reloads every request.
🐌
"Responses getting slower"
Context is full. More tokens = slower inference on every loop iteration. Run /compact or /clear. Check /mcp — unused MCP servers add tool schemas to every request.
💸
"Session cost is high"
Run /cost. Each tool call = full context re-read. Long conversations with many tool calls multiply cost. Use subagents for research (separate context), and /compact before long implementation phases.
🔄
"Claude keeps looping on the same error"
The agentic loop has no built-in retry limit for most tools. Interrupt with Esc and reframe: "Stop. The issue is X. Try Y instead." Give it a new direction rather than letting it repeat.
📦
"Edit tool keeps failing"
Edit requires an exact unique string match. If the file was modified since Claude last read it, the match fails. Ask Claude to re-read the file, or use Write for a full replacement.
🔇
"Subagent returned nothing useful"
Subagents start with a fresh context — they don't inherit your conversation. Give them complete, self-contained instructions. "Investigate the auth bug" fails; "Read src/auth/ and find why login fails when session token is expired" succeeds.
Reference
Master Cheat Sheet
Everything important on one page. Bookmark this.
🚀
Launch
claude — REPL
claude "task" — one-shot
claude --continue — resume
claude -p — pipe output
⌨️
Keyboard
↑↓ history · Ctrl+C stop · Ctrl+R search · Ctrl+G plan mode · Ctrl+D exit · Tab autocomplete
Spec-Driven Development
Shared CLAUDE.md in git
Custom commands in .claude/commands/
Hooks enforce standards
⚠️
Always Do
Review before committing
/status in long sessions
/compact at 80% context
Plan before large tasks
🔑
Power Phrases
"Wait for approval before starting"
"Act as a Staff Engineer"
"Plan first, list files"
"Self-review for security"
🏆
The Mindset
You are the architect. Claude is your best engineer. Your job: direction + review. Their job: everything else.
Reference
Frequently Asked Questions
Real questions devs ask online — the non-obvious stuff that docs don't make clear.
Context & Memory
Context
Claude Code auto-compacts your conversation when the context window fills up (at ~92–98% usage). It summarizes earlier messages to free space — but that summary can lose nuances. Workaround: run /compact manually at logical breakpoints (after finishing a feature, before starting the next). Put critical info in CLAUDE.md so it's always loaded fresh, never compacted away.
Config
They serve different purposes. CLAUDE.md = what Claude should know (your stack, conventions, coding patterns, DO/DON'T rules). settings.json = what Claude can do (permissions, allowed tools, environment variables, MCP servers). Think of it as: CLAUDE.md is the briefing doc, settings.json is the access badge.
Config
Yes. Claude loads CLAUDE.md hierarchically — project root first, then any CLAUDE.md in subdirectories. The most specific (deepest nested) file takes priority when instructions conflict. This is great for monorepos: put shared rules at the root and package-specific rules in each package folder.
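The precedence behaves like object spread: most specific last. A toy sketch — real CLAUDE.md files are free-form text, so the rule maps here are purely illustrative:

```typescript
// Deeper files override shallower ones when instructions conflict.
// Pass files in root-first order; later (more nested) entries win.
function effectiveRules(...filesRootFirst: Record<string, string>[]): Record<string, string> {
  return Object.assign({}, ...filesRootFirst);
}
```

In a monorepo, the root file carries shared rules and each package file overrides only what differs.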
Context
This means you're at the limit and auto-compact is about to trigger. Before it does: 1) Run /compact yourself for a better summary. 2) Check /context — disable unused MCP servers eating context space. 3) If the task is done, /clear and start fresh. 4) For long sessions, use /rename before clearing so you can /resume later.
Cost & Usage
Cost
With an API key: the average is ~$6/day per developer using Sonnet, with 90% of users staying under $12/day. That's roughly $100–200/month. Opus costs significantly more. With a Max subscription ($100 or $200/month), usage is included — no per-token charges. Run /cost anytime to see your current session spend.
Cost
Sonnet handles 80%+ of daily coding tasks well — edits, refactors, tests, bug fixes. Switch to Opus for: complex architectural decisions, multi-step reasoning across many files, or when Sonnet keeps getting it wrong. Use /model to switch mid-session. Don't default to Opus "just in case" — Sonnet is faster and cheaper.
Cost
No. You need either a Claude Pro ($20/month), Max ($100 or $200/month), or Team/Enterprise subscription — or use your own API key. The free plan doesn't include terminal access.
Workflows & Productivity
Workflow
Use git worktrees. Each worktree is an independent checkout sharing the same git history, so multiple Claude sessions can work on different branches without conflicts. Run git worktree add ../feature-a feature-a, then open a Claude session in each directory. The Claude Code team runs 3–5 in parallel — it's their single biggest productivity unlock.
Workflow
These control extended thinking — how much internal reasoning Claude does before answering. think = light reasoning, think hard = deeper analysis, ultrathink = maximum reasoning budget. Use strategically: don't ultrathink a typo fix. Reserve deeper thinking for architecture decisions, complex debugging, or multi-file refactors where getting it right the first time saves you 3 rounds of back-and-forth.
Config
Use /permissions to configure allow rules for specific tools. For full autonomy, run claude --dangerously-skip-permissions — but only in sandboxed/CI environments. A safer middle ground: set up permission hooks in .claude/settings.json that auto-approve safe operations (file edits, reads) while still prompting for destructive ones (bash commands, git pushes).
Workflow
Yes — use headless mode with claude -p "your task". It runs non-interactively: takes input, does the work, outputs the result. Pair it with --dangerously-skip-permissions in CI (where the sandbox protects you). Common uses: automated code review on PRs, generating changelogs, running migration scripts, and auto-fixing lint errors.
Workflow
Don't accept the first suggestion — iterate. Say "No, use X instead" or explain why the approach doesn't fit. Also: tell Claude to explore first: "Read the project structure and understand the architecture before making changes." And put your key constraints in CLAUDE.md so they're always loaded. The #1 cause of bad output is insufficient context, not model capability.
Workflow
Yes — and it's a power pattern. Run Claude Code locally as the "coordinator" and have it spawn another Claude instance inside a Docker container as a "worker." The worker runs with --dangerously-skip-permissions since it's sandboxed. Use this for experimental or long-running tasks: if something goes wrong, it's all contained. The local instance reviews the worker's output before applying changes.
Permissions & Security
Config
Add deny rules in your .claude/settings.json. Deny rules are checked first and block access regardless of other rules. Also: Claude Code respects .gitignore by default for read/edit permissions, so if your .env is gitignored, it's already excluded. For extra safety, explicitly deny patterns like **/.env* and **/credentials*.
Config
/sandbox enables an isolated execution environment where Claude's actions are contained — file changes, commands, and network access are restricted to the sandbox. Use it when: experimenting with unfamiliar code, running untrusted scripts, letting Claude try risky refactors you want to review before applying, or any time you want a "dry run" safety net.
Setup & Environment
Config
No — terminal choice doesn't affect functionality. Claude Code runs in any terminal where Node.js works. That said, some terminals render Claude's markdown output better than others. Pick whatever you're comfortable with.
Config
VS Code, Cursor (and other VS Code forks), IntelliJ, PyCharm, and other JetBrains IDEs. The VS Code extension gives you an integrated terminal panel. But Claude Code is terminal-first — the IDE integration is a convenience, not a requirement. Many power users prefer it in a standalone terminal alongside their editor.
Config
Don't use sudo — fix npm's global directory permissions instead. Run mkdir ~/.npm-global && npm config set prefix '~/.npm-global', then add ~/.npm-global/bin to your PATH. Alternatively, install via brew install --cask claude-code on macOS, or curl -fsSL https://claude.ai/install.sh | sh which handles permissions correctly.