How I Use Claude Code
How I Approach Problems with Claude Code
After building two full platforms in single sessions (ai-hedge-fund market data pipeline and the Threads analysis dashboard), here are the patterns that actually work.
Pattern 1: Start With the Outcome, Not the Steps
I don’t plan the implementation. I describe what I want to exist:
“Create a system in Docker that auto-pulls from Threads every X minutes, and a Postgres DB to store every single thing the API can pull.”
Claude Code figures out the architecture: Docker Compose, init.sql schema, sync worker loop, seed script. If I’d specified every file upfront, I’d have spent 30 minutes planning what took 5 minutes to build.
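The stack that prompt produces boils down to something like this Compose file — a sketch only; the service, file, and volume names here are my illustration, not the session's exact output:

```yaml
services:
  postgres:
    image: postgres:16
    ports: ["5433:5432"]
    volumes:
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql  # schema applied on first boot
      - pgdata:/var/lib/postgresql/data
  sync:
    build: .
    command: node sync-worker.mjs   # loop: pull from Threads every X minutes, write to Postgres
    depends_on: [postgres]
volumes:
  pgdata:
```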
Pattern 2: The CTO Delegation
When the project gets big enough that sequential work is slow, I say:
“Use multiple agents to tackle all of them. You’re the CTO, delegate.”
This works because:
- Claude Code understands file-level dependencies
- Non-overlapping files = safe parallelism
- Each agent gets a focused brief with just enough context
The Threads session ran 4 build agents + 1 validator simultaneously. The ai-hedge-fund session ran agents for Docker, Postgres schema, and data pipelines in parallel.
Pattern 3: Blind Validation
After building, I spawn a fresh agent with zero context and tell it to discover and validate the project. No briefing on what was built, no hints about architecture. It reads everything from scratch.
Why this works: it has no sunk-cost bias. It examines what actually exists, not what was intended. When the blind validator says “production-quality” after 61 tool calls, that means something different from the build agent saying “done.”
The Threads blind validator caught that the analysis tables in Postgres were empty — the pipeline wrote JSON but not to the DB. Real gap, caught by fresh eyes.
Pattern 4: /simplify Before Building
I run /simplify (a code-review skill) before the main build sprint. It launches 3 review agents in parallel, checking for:
- Code reuse opportunities (found 150 lines of duplication across 4 files)
- Quality issues (N+1 insert patterns, confusing dual exports)
- Efficiency problems (unbounded arrays, blocked graceful shutdown)
Fixing these first means the dashboard is built on a cleaner foundation. The shared threads-api.mjs module extracted by /simplify was imported by every script that followed.
Pattern 5: Dev Teams With a Workboard
For quick wins, I create named developers with a shared .workboard.md:
| Dev | Task | Status | Files Owned |
|-----|------|--------|-------------|
| Alex | Mutual Information UI | IN PROGRESS | api/mutual-info.ts |
| Jordan | Engagement Heatmap | IN PROGRESS | EngagementHeatmap.tsx |
| Sam | Surprise Scatter | IN PROGRESS | SurpriseScatter.tsx |
Rules: read the workboard first, only edit your files, append (don’t overwrite) shared files. Then QA agents validate each dev’s work independently.
Pattern 6: Cross-Project Pollination
The Threads audit agent (social media + info theory + marketing expert) proposed 14 features. One of them — using the Threads keyword search API for sentiment analysis — connected directly to the ai-hedge-fund project. That led to an integration doc mapping Threads social data as a signal source for the trading system.
I didn’t plan that connection. It emerged from giving an agent enough domain context to see across projects.
Pattern 7: Scope Check Before Building
Before building anything non-trivial, I run /scope-check:
- How many users does this serve?
- Is it a differentiator or table stakes?
- Effort estimate (S/M/L)?
- Does it block or unlock other work?
Verdict: BUILD NOW / BACKLOG / KILL IT. One line of rationale.
This prevents building things that sound cool but don’t matter.
The Lazy Commands
Everything ends up in Docker for maximum laziness:
```shell
npm run docker:all   # Postgres + sync + dashboard
# Go to localhost:4321. Done.
```
If I can’t run the whole thing with one command, the automation isn’t done yet.
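In practice the one-command rule is just an npm script that fans out to Compose — a sketch, with service names assumed rather than taken from the actual package.json:

```json
{
  "scripts": {
    "docker:all": "docker compose up -d postgres sync web"
  }
}
```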
The Docker Dashboard Moment
The payoff of building everything in Docker: open Docker Desktop and see both project stacks running side by side.
```
threads-analysis                    ai-hedge-fund
├── postgres-1   :5433              ├── market-data-db   :5434
├── sync-1       (0% CPU)
└── web-1        :4321
```

Total: 122 MB RAM, 1.66% CPU across 5 containers
Two full data platforms — one analyzing 49K social media posts with information theory, the other pulling market data from Finviz and Yahoo Finance — running simultaneously on a laptop using less memory than a single Chrome tab. Both built in single Claude Code sessions. Both auto-syncing on schedules. Both serving dashboards.
This is what “infrastructure as code” actually feels like when AI compresses the build cycle.
What Doesn’t Work
- Asking Claude to decide what to build. It compresses implementation distance, not product judgment.
- Micromanaging agent work. The CTO pattern works because you delegate, not because you specify every line.
- Skipping validation. Build agents are optimistic. They say “done” when the code compiles. Blind validators check if it actually works.
- Single-threaded sessions for multi-file work. If files don’t overlap, parallelize. The speedup is real.
Pattern 8: Multi-Machine Orchestration
My MacBook Air is the coding device. My Mac mini is the command center. Claude Code on the MacBook controls both via SSH over Tailscale.
The pattern:
```shell
# Claude on the MacBook runs commands on the Mac mini remotely
ssh weixiangzhang@Weixiangs-Mac-mini.local "docker compose up -d"
ssh ... "pg_restore < market_data_backup.dump"
scp scripts/*.sh weixiangzhang@Mac-mini:~/project/scripts/
```
Claude Code doesn’t care which machine it’s running commands on. SSH is just another shell. So I treat the Mac mini as a deployment target — Claude builds scripts locally, copies them over, installs cron jobs, starts daemons, verifies everything is running. One session, two machines.
The key enabler is Tailscale. No port forwarding, no firewall rules, no dynamic DNS. Just 100.71.141.45 from anywhere.
Pattern 9: Notification-Driven Development
Self-hosted ntfy on the Mac mini + Claude Code hooks = Claude tells me what it did when it’s done.
The hook upgrade was the breakthrough: Claude Code’s Stop hook sends last_assistant_message in the JSON payload. My notify script now parses that and sends the actual summary instead of a generic “done.” A background claude -p --model haiku call (free — it’s within Claude Code’s own auth) sends a polished 10-word version a few seconds later.
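The notify script reduces to a few lines of shell. This is a minimal sketch, not my actual hook: the payload field name matches what the Stop hook sends, but the helper names and the ntfy topic URL are assumptions (and it leans on jq being installed):

```shell
#!/bin/sh
# Sketch of a Stop-hook notifier. Requires jq and curl.

# Pull the assistant's final summary out of the hook's JSON payload,
# falling back to "done" if the field is missing.
summary_from_payload() {
  printf '%s' "$1" | jq -r '.last_assistant_message // "done"'
}

# Push one line to a self-hosted ntfy topic (NTFY_TOPIC is hypothetical).
notify() {
  curl -s -H "Title: Claude [$(hostname -s)]" -d "$1" \
    "${NTFY_TOPIC:-http://100.71.141.45/claude}" >/dev/null
}

# Hook entry point: Claude Code pipes the JSON payload on stdin at Stop.
# payload=$(cat); notify "$(summary_from_payload "$payload")"
```

The `// "done"` fallback is what keeps notifications flowing even when a session ends without a final message.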
This means I can spawn 5 background agents, go do something else, and get Apple Watch notifications like:
```
Claude [MBA] — voxlight: Fixed iPad layout crash in reader view
Claude [Mini] — ai-hedge-fund: Alpaca pipeline caught up, 2499 rows
```
Each notification tells me which machine, which project, and what happened. I come back when something needs my attention, not when I think something might be done.
Pattern 10: The Swarm Deploy
Tonight I spawned 9 agents in parallel — each building one notification integration script. Docker crash alerts, heartbeat monitor, pipeline failures, trade signals, resource monitoring, channel personality design, Docker hosting audit, command listener, and Apple Shortcuts guide.
The pattern: decompose the work into fully independent pieces, give each agent a complete brief, launch them all at once, collect results as they finish. No agent depends on another. No shared files. Pure parallelism.
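Stripped of the agent layer, the shape of a swarm deploy is just independent jobs fanned out and then joined — a sketch in plain shell, with illustrative task names standing in for the real briefs:

```shell
#!/bin/sh
# Fan out: every task is fully independent and owns its own output file,
# so each can run in the background with no coordination.
for task in crash-alerts heartbeat pipeline-failures; do
  { printf 'built %s\n' "$task" > "out-$task.log"; } &
done
wait               # join: block until every background job has exited
cat out-*.log      # collect results as one report
```

The discipline that makes this safe is the same as in the CTO pattern: no shared files, so no job can clobber another's work.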
9 agents, ~15 minutes wall time, 8 production scripts + 3 docs. This is the CTO delegation pattern scaled up — you’re not delegating to one agent, you’re delegating to a team.