Cracking the Coding Interview
Comprehensive technical interview prep: Big O analysis, data structures (hash tables, trees, graphs, heaps, tries), algorithm patterns (sorting, searching, DP, recursion, bit manipulation), system design, and behavioral storytelling — with company-specific interview guidance.
- Apply systematic problem-solving framework: listen → example → brute force → optimize → code → test
- Analyze time and space complexity with Big O rules (add vs. multiply, amortized, memoization)
- Identify optimal data structure for interview problems (hash table, heap, trie, graph)
- Apply algorithm patterns: BFS/DFS, two pointers, sliding window, divide and conquer, DP
- Prepare behavioral answers using STAR stories and the Interview Prep Grid
- Recognize and apply dynamic programming patterns: top-down vs. bottom-up
Install this skill and Claude can walk through any interview problem using the seven-step framework — from brute force through BUD optimization to clean coded solution with edge-case tests — and coach STAR behavioral stories tailored to company-specific principles.
Replaces ad-hoc interview prep with a repeatable, structured system — covering algorithmic rigor, Big O analysis, and behavioral storytelling in one framework calibrated to FAANG hiring bars
- Running a mock interview where Claude poses a graph traversal problem, evaluates brute-force complexity, asks optimization probes, and gives structured feedback on edge-case coverage
- Auditing a submitted solution to find a hidden O(N²) inner loop inside an O(N log N) outer sort and explaining amortized dynamic array costs
- Restructuring a raw production-incident story into a tight STAR narrative aligned with Amazon's Bias for Action leadership principle
Cracking the Coding Interview
A complete framework for software engineering interview preparation, covering problem-solving methodology, data structures, algorithm patterns, Big O analysis, and behavioral techniques drawn directly from Gayle Laakmann McDowell’s 6th edition.
Source type: technical | Domain: software engineering interviews | Audience: software engineers preparing for technical interviews
Core Philosophy
The book’s central premise: interviewers evaluate problem-solving process, not just correct answers. A candidate who struggles but demonstrates structured thinking, communicates trade-offs, and iterates toward a solution often outperforms one who silently produces a correct answer.
Key principles:
- Stumbling is normal. Even strong candidates at Google rarely produce flawless interviews. Performance is judged relative to other candidates, not as binary pass/fail.
- Show your thinking. Talk out loud. The interviewer is watching how you think, not just what you produce.
- Don’t memorize solutions. Practicing problems by reading answer keys is like learning calculus by reading solved problems. You must practice deriving solutions.
- Practice writing code on paper. Whiteboard coding lacks syntax highlighting, autocomplete, and debugging tools. Practice the slow, manual process in advance.
- Your first solution doesn’t need to be perfect. State the brute force, explain its complexity, then optimize.
Key Concepts & Frameworks
The Interview Process
Screening → On-site → Decision
- Phone screens typically involve the same difficulty bar as on-site. An engineer runs them; expect real coding questions.
- On-site: 3–6 interviews, often including one lunch interview (usually non-technical — good opportunity to ask honest questions about culture).
- After interviews: feedback is submitted, then reviewed by a hiring committee (Google, Facebook) or a hiring manager (Microsoft, Amazon).
- Not hearing back within 3–5 business days = follow up politely. Silence does not mean rejection.
- Rejected candidates can often reapply after 6–12 months. Many engineers get rejected from Google then later receive offers.
Company-specific notes:
- Microsoft: You may meet a hiring manager (“as app” = as appropriate) at the end — a good sign. Teams have individual control; experiences vary.
- Amazon: Watch for the “bar raiser” — an interviewer from another team with veto power. Prepare for scalability and OOD questions.
- Google: 4–6 interviewers; feedback goes to a hiring committee separate from your interviewers. Strong focus on analytical/algorithm skills regardless of experience level. Prepare for system design.
- Facebook: Interviewers play roles — “Jedi” (behavioral), “Ninja” (coding/algorithms), “Pirate” (system design). Hired for the company, not a specific team; 6-week bootcamp follows.
- Palantir: Harder algorithm questions than most. Often includes a coding challenge via HackerRank. Know data structures inside and out.
Preparation timeline:
- Implement all core data structures and algorithms from scratch
- Learn and master Big O
- Build projects; write code regularly
- Do mock interviews (practice talking out loud)
- Create an Interview Prep Grid (see Behavioral section)
- Review mistakes list as you practice
Big O Time & Space Complexity
What it means: Big O describes the rate of increase of an algorithm’s resource usage as input size grows — not exact measurements.
The key rules:
- Drop constants: O(2N) = O(N). Two sequential for-loops over the same array is still O(N), not O(2N).
- Drop non-dominant terms: O(N² + N) = O(N²). O(N + log N) = O(N).
- Add vs. multiply:
  - Sequential work (do A, then do B) → add: O(A + B)
  - Nested work (do B for each element in A) → multiply: O(A × B)
  - Common mistake: two nested loops over different arrays is O(A × B), not O(N²).
- Amortized time: ArrayList insertion is O(1) amortized. The array doubles when full (O(N) for that insert), but doubling happens so rarely that the total cost of N insertions is O(N), so each insertion is O(1) amortized.
- O(log N) pattern: when the problem space halves with each step (binary search, balanced BST lookup), the runtime is O(log N). The number of times you can halve N until reaching 1 is log₂N.
- Recursive runtime pattern: when a recursive function branches b times and recurses d levels deep, runtime is typically O(b^d). Two recursive calls → O(2^N) unless memoized.
- Space complexity: recursive calls consume stack space. A recursion of depth N uses O(N) space even if the work per frame is constant.
Common runtimes in order (best to worst): O(1) → O(log N) → O(N) → O(N log N) → O(N²) → O(2^N) → O(N!)
Best Conceivable Runtime (BCR): Before optimizing, ask “what’s the minimum work any algorithm must do?” If you must touch every element once, BCR ≥ O(N). If you’ve hit BCR and still have extra space usage, focus on space optimization. If you’ve hit BCR with O(1) space, you’re done.
Memoization: Converting exponential recursive algorithms to O(N) by caching subproblem results. Fibonacci’s naive O(2^N) recursion becomes O(N) with a memo array.
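The memoization idea above can be sketched in a few lines of Python (the book's examples are in Java; this is an equivalent illustration):

```python
def fib(n, memo=None):
    # Cache subproblem results so each fib(i) is computed once: O(N) time,
    # versus O(2^N) for the naive two-branch recursion.
    if memo is None:
        memo = {}
    if n <= 1:
        return n
    if n not in memo:
        memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]

print(fib(50))  # returns instantly; the naive version would make ~2^50 calls
```

The call tree collapses because every repeated subproblem (`fib(n-2)`, `fib(n-3)`, …) is answered from the cache after its first computation.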
Data Structures
Hash Tables
- Maps keys to values using a hash function + array of linked lists (for collision handling)
- Average lookup/insert/delete: O(1). Worst case (many collisions): O(N)
- Alternative implementation via balanced BST: O(log N) but uses less space and supports ordered iteration
- Interview priority: Hash tables are the most commonly useful data structure for optimization. When stuck, ask: “Can I use a hash table here?”
- StringBuilder uses a similar resizable-array approach to avoid the O(xn²) cost of naively concatenating n strings of length x
Arrays & ArrayLists
- Fixed arrays: O(1) access, fixed size
- ArrayList (dynamic array): O(1) amortized insertion, O(1) access, O(N) worst-case insertion (on resize)
- Resize factor is typically 2×. Total copies for N insertions = N/2 + N/4 + … + 1 < N, so amortized O(1)
Linked Lists
- Singly linked: each node has a `next` pointer. No constant-time random access (must traverse to find the Kth element).
- Benefit: O(1) insert/delete at head
- Runner technique: Two pointers at different speeds to find midpoints, detect cycles, or interleave lists
- Key implementation notes: always check for null pointers; update head/tail when deleting; singly vs. doubly linked matters
- Recursive linked list problems: remember recursive algorithms use O(N) space
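The runner technique above can be sketched as follows (a minimal Python illustration; the `Node` class and function name are ours, not the book's):

```python
class Node:
    def __init__(self, val, nxt=None):
        self.val = val
        self.next = nxt

def middle(head):
    # Runner technique: fast advances two steps for every one step of slow.
    # When fast reaches the end, slow sits at the midpoint.
    slow = fast = head
    while fast is not None and fast.next is not None:
        slow = slow.next
        fast = fast.next.next
    return slow

# Build 1 -> 2 -> 3 -> 4 -> 5
head = Node(1, Node(2, Node(3, Node(4, Node(5)))))
print(middle(head).val)  # 3
```

The same two-speed idea underlies cycle detection: if the list has a cycle, the fast pointer eventually laps the slow one and they meet.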
Stacks
- LIFO (last-in, first-out)
- Operations: push, pop, peek, isEmpty — all O(1)
- Use when: you need to reverse order, implement DFS iteratively, track recursive backtracking state
- Can be implemented with a linked list (add/remove from same side)
Queues
- FIFO (first-in, first-out)
- Operations: add, remove, peek, isEmpty — all O(1)
- Use when: BFS traversal, processing items in arrival order, implementing a cache
- Common bug: forgetting to update both `first` and `last` pointers on edge cases
Trees
- A tree is a connected acyclic graph. Root → children → leaves.
- Binary tree: each node has ≤ 2 children
- Binary Search Tree (BST): all left descendants ≤ node < all right descendants (for every node, not just immediate children)
- Balanced tree: height difference between any node’s subtrees is at most 1 → guarantees O(log N) insert/find
- Complete binary tree: all levels filled except possibly the last (filled left to right)
- Full binary tree: every node has 0 or 2 children
- Perfect binary tree: full + complete; has exactly 2^k − 1 nodes
Tree traversals:
- In-order (left, current, right): visits BST nodes in ascending order
- Pre-order (current, left, right): root is always first
- Post-order (left, right, current): root is always last
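The three traversal orders can be seen side by side in a small sketch (Python for illustration; the book presents these in Java):

```python
class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def in_order(node, out=None):
    # Left, current, right: on a BST this yields values in ascending order.
    if out is None:
        out = []
    if node:
        in_order(node.left, out)
        out.append(node.val)
        in_order(node.right, out)
    return out

#        4
#      2   6
#     1 3 5 7
root = TreeNode(4, TreeNode(2, TreeNode(1), TreeNode(3)),
                   TreeNode(6, TreeNode(5), TreeNode(7)))
print(in_order(root))  # [1, 2, 3, 4, 5, 6, 7]
```

Swapping the `append` to before both recursive calls gives pre-order (root first); placing it after both gives post-order (root last).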
Heaps (Min-Heap):
- Complete binary tree where each node ≤ its children; root = minimum element
- Insert: add at bottom-right, bubble up → O(log N)
- Extract min: swap root with last element, bubble down → O(log N)
- Use for: tracking median with two heaps (max-heap for lower half, min-heap for upper half), priority queues
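The two-heap median trick can be sketched with Python's `heapq` (a min-heap only, so the lower half stores negated values to simulate a max-heap; the class name is illustrative):

```python
import heapq

class MedianTracker:
    # Max-heap (negated values) holds the lower half; min-heap holds the upper half.
    def __init__(self):
        self.lo, self.hi = [], []

    def add(self, x):
        # Push through the lower half, then rebalance so len(lo) >= len(hi).
        heapq.heappush(self.lo, -x)
        heapq.heappush(self.hi, -heapq.heappop(self.lo))
        if len(self.hi) > len(self.lo):
            heapq.heappush(self.lo, -heapq.heappop(self.hi))

    def median(self):
        if len(self.lo) > len(self.hi):
            return -self.lo[0]
        return (-self.lo[0] + self.hi[0]) / 2

t = MedianTracker()
for x in [5, 2, 8, 1]:
    t.add(x)
print(t.median())  # 3.5 (median of 1, 2, 5, 8)
```

Each `add` costs O(log N) and `median` is O(1), which is the payoff over re-sorting on every insertion.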
Tries (Prefix Trees):
- N-ary tree storing characters at each node; paths represent words
- Prefix lookup: O(K) where K = string length (same as hash table, but trie can answer “is this a valid prefix?” which hash tables cannot)
- Use when: searching through lists of valid words, autocomplete, prefix matching
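A trie can be sketched compactly with nested dicts (an illustrative Python implementation, not the book's code):

```python
class Trie:
    def __init__(self):
        self.root = {}

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.setdefault(ch, {})
        node["$"] = True  # end-of-word marker

    def is_prefix(self, prefix):
        # O(K) where K = len(prefix) -- the query hash tables can't answer.
        node = self.root
        for ch in prefix:
            if ch not in node:
                return False
            node = node[ch]
        return True

    def is_word(self, word):
        node = self.root
        for ch in word:
            if ch not in node:
                return False
            node = node[ch]
        return "$" in node

t = Trie()
for w in ["car", "card", "care"]:
    t.insert(w)
print(t.is_prefix("ca"), t.is_word("ca"))  # True False
```

The `"$"` sentinel distinguishes a complete word from a mere prefix along the same path.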
Graphs
- Collection of nodes with edges. Trees are a special case of graphs (connected, no cycles).
- Directed vs undirected edges
- Adjacency list: most common representation. Each node stores list of adjacent nodes. Space-efficient, fast neighbor traversal.
- Adjacency matrix: NxN boolean matrix. Uses more space; neighbor lookup requires iterating all nodes.
Graph search:
- DFS (Depth-First Search): go deep before going wide. Use when visiting all nodes. Implemented recursively (or with an explicit stack). Must track `visited` to avoid infinite loops.
- BFS (Breadth-First Search): explore all neighbors before their children. Use when finding shortest path. Implemented with a queue (not recursively — a common mistake).
- Bidirectional BFS: two simultaneous BFS searches from both endpoints. Reduces time from O(k^d) to O(k^(d/2)) — effectively doubles the searchable path length.
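A minimal BFS shortest-path sketch in Python, showing the queue plus the `visited` set the notes above call out (graph and function names are illustrative):

```python
from collections import deque

def bfs_shortest(graph, start, target):
    # Queue (not recursion) plus a visited set to avoid revisiting nodes.
    visited = {start}
    queue = deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == target:
            return dist
        for nbr in graph[node]:
            if nbr not in visited:
                visited.add(nbr)
                queue.append((nbr, dist + 1))
    return -1  # unreachable

# Adjacency-list representation
g = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(bfs_shortest(g, "a", "d"))  # 2
```

Because BFS expands nodes in order of distance from the start, the first time the target is dequeued its distance is guaranteed minimal (for unweighted edges).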
Algorithm Patterns
Sorting Algorithms
- Merge Sort: O(N log N) time, O(N) space. Divide array in half, sort each half, merge. Half-and-half approach.
- Quick Sort: pick a pivot, partition around it, recurse. O(N log N) expected; O(N²) worst case, when the pivot is always the largest/smallest element.
- Binary Search: O(log N). Only works on sorted arrays. When problem space halves each step.
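The halving pattern behind binary search, as a short Python sketch:

```python
def binary_search(arr, target):
    # Each iteration halves the search space: O(log N) on a sorted array.
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        if arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # not found

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3
```

The classic off-by-one hot spots to test are the `lo <= hi` condition and the `mid ± 1` updates; getting either wrong causes an infinite loop or a missed element.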
Recursion Approaches
- Bottom-up: solve base case, build up (iterative DP)
- Top-down: divide problem into subproblems (memoized recursion)
- Half-and-half: divide data set in half (merge sort, binary search)
Hint that a problem is recursive: “compute the Nth…”, “list the first N…”, “implement a method to compute all…”
Dynamic Programming
- Identify overlapping subproblems in a recursive solution → cache results
- Top-down (memoization): recursive + cache array. Fibonacci with memo: O(N) time vs. O(2^N) naive.
- Bottom-up: iterative, fill cache from base cases up. Often achieves same time complexity with O(1) space if only last few results needed.
- Rule: if you see the same recursive calls appearing repeatedly in the call tree, DP is applicable.
Bit Manipulation
- Useful for space-efficient problems (bit vectors), performance optimization, and low-level problems
- Key operations: AND, OR, XOR (^), NOT (~), left shift (<<), logical right shift (>>>), arithmetic right shift (>>)
- Two’s complement: negative K as an N-bit number is ~K + 1 (invert the bits of K, then add 1)
- Arithmetic right shift divides by 2 (preserves sign bit). Logical right shift fills with 0.
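The core operations above, illustrated in Python (note Python has only the arithmetic right shift `>>`; the logical shift `>>>` exists in Java, the book's language, but not here):

```python
x = 13             # 0b1101
print(x & 1)       # 1: lowest bit set, so x is odd
print(x | 0b0010)  # 15: OR sets bit 1
print(x ^ x)       # 0: XOR of a value with itself clears all bits
print(x << 1)      # 26: left shift multiplies by 2
print(-16 >> 2)    # -4: arithmetic right shift preserves the sign
print(~x)          # -14: NOT equals -(x + 1) under two's complement
```

The `~x == -(x + 1)` identity is exactly the "invert bits, add 1" rule for negation read in reverse.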
Problem-Solving Approach
The book’s 7-step flowchart for solving any coding problem:
Step 1: Listen Carefully. Every detail in the problem statement is usually there for a reason. “Given two sorted arrays…” means sorting matters for your algorithm. Write key constraints on the whiteboard. If you find yourself stuck, ask: “Have I used all the information in the problem?”
Step 2: Draw a Concrete Example. Draw an example that is:
- Specific: use real numbers/strings, not abstract boxes
- Sufficiently large: most examples candidates draw are too small by ~50%
- Not a special case: avoid accidentally choosing a perfectly balanced tree, an empty input, or an already-sorted array
Step 3: State the Brute Force. Say it out loud even if it’s obvious and terrible. State the time and space complexity. This shows you’re not stuck, establishes a baseline, and gives you something to optimize.
Step 4: Optimize (BUD Framework). Look for:
- Bottlenecks: the step that dominates runtime. Don’t optimize anything else until you address it.
- Unnecessary work: computing values you don’t need (e.g., looping over `d` when you could compute it directly)
- Duplicated work: recomputing the same subproblem repeatedly (use hash tables or memoization)
Additional optimization tactics:
- Look for unused information from the problem statement
- Try a fresh example — sometimes a different example reveals the pattern
- Solve “incorrectly” first, then think about why it fails — can you fix those failure cases?
- Make a time vs. space trade-off: store more state to reduce computation
- Precompute/reorganize data (sorting, hashing) upfront if it saves time later
- Use a hash table — hash tables solve a huge fraction of interview optimization problems
- Think about the Best Conceivable Runtime — if you’ve reached it, stop optimizing runtime and look at space
DIY technique: Mentally “do it yourself” on a big example before writing code. Your intuition will often give you a better algorithm than abstract reasoning. Then reverse-engineer what you just did.
Simplify and Generalize: Solve a simplified version (e.g., with characters instead of words), then generalize.
Base Case and Build: Solve for n=1, then n=2, then derive the general pattern. Often leads to natural recursive solutions.
Data Structure Brainstorm: When stuck, systematically consider: linked list, array, hash table, binary tree, heap, trie, graph. Sometimes just asking “what if I use a heap here?” unlocks the solution.
Step 5: Walk Through. Before writing code, walk through your algorithm in detail. Know exactly what variables exist and how they change. Writing whiteboard code is slow — get it right before you start.
Step 6: Implement. Write beautiful code:
- Modularize: extract helper functions even if you don’t implement them yet
- Use meaningful variable names: abbreviate after first use, but explain the abbreviation
- Use appropriate data structures: create a class/struct when returning multiple values
- Note error checks: add a TODO comment for edge cases if full validation would take too long
- Flexible/general: write for NxN rather than 3x3 when the general case is not significantly harder
- Good code is correct, efficient, simple, readable, and maintainable
Step 7: Test. Test in this order:
- Conceptual test: read through the code like a code review. Does each line do what you intend?
- Weird-looking code: double-check `x = length - 2`, loops starting at `i = 1` rather than `0`, etc.
- Hot spots: base cases in recursion, integer division, null nodes in trees, start/end of linked list
- Small test cases: use a 3–4 element input (not the big example from step 2) — finds bugs faster
- Special cases: null, single element, extreme values, duplicates
When you find a bug: don’t just apply the first fix that comes to mind. Understand why the bug occurred, then fix it carefully.
Behavioral Questions
The Interview Prep Grid. Build a 2D grid:
- Columns: each major project/job on your resume
- Rows: Challenges, Mistakes/Failures, What I Enjoyed, Leadership, Conflicts, What I’d Do Differently
Study this grid before the interview. Rehearse each story. For 2–3 projects, be able to go deep technically: the challenges, mistakes, technical decisions, technology tradeoffs, and how you’d scale it.
Response techniques:
Nugget First: Open with a one-sentence “headline” of what your story is about. This orients the interviewer and forces you to be focused.
S.A.R. (Situation, Action, Result):
- Situation: brief context
- Action: most important part — break it into multiple specific steps (“I did three things. First… Second… Third…”)
- Result: outcome, ideally measurable
The action must dominate your answer. Most candidates over-explain the situation and rush through the action. The action reveals your thought process and personality.
What does your story say about you? After preparing a story, ask: what personality attributes does this demonstrate? Initiative? Empathy? Teamwork? Humility? If none, rework how you communicate it. You don’t say “I did X because I have empathy” — instead: “I made sure to call the client myself because I knew he’d appreciate hearing it directly from me.”
Weaknesses: Give a real weakness. “I work too hard” signals arrogance and poor self-awareness. A good answer names a genuine flaw, then describes how you actively work to overcome it.
“Tell me about yourself” structure:
- Current role headline
- Education
- Post-graduation / career progression
- Current role details
- Outside interests (only if technical or uniquely interesting)
- Why you’re looking now
Drop in proof of success naturally (recruited by a former manager, launched key feature, system scaled well) rather than claiming “I was great.”
Questions to ask the interviewer: Prepare genuine, insightful, and passion-based questions. Questions that demonstrate advance research about the company’s tech stack signal serious interest. Never have zero questions.
Decision Frameworks
Choosing an Algorithm
| Situation | Consider |
|---|---|
| Sorted input | Binary search, two-pointer merge |
| Need fast lookup | Hash table |
| Need ordered lookup | BST or sorted array |
| Find min/max repeatedly | Heap |
| Path/connectivity between nodes | BFS (shortest path) or DFS (any path / all nodes) |
| Count prefix matches | Trie |
| Overlapping subproblems | Dynamic programming |
| “Compute Nth…” or “all combinations…” | Recursion + possibly memoization |
| Need O(1) space optimization | Look for ways to encode state in input, two-pointer, or rolling variables |
Choosing a Data Structure
| Need | Use |
|---|---|
| Fast lookup by key | Hash table |
| Ordered data, range queries | BST |
| Track running min/max | Heap |
| Process in arrival order | Queue |
| Reverse order / undo operations | Stack |
| Prefix/word matching | Trie |
| Flexible sequence with fast head insert | Linked list |
| Track median dynamically | Two heaps (max-heap + min-heap) |
| Connectivity between entities | Graph (adjacency list) |
When to Use Hash Tables
Hash tables are the most universally useful optimization tool. Apply them when:
- You need to eliminate an O(N) repeated search (replace with O(1) lookup)
- You need to count frequencies
- You want to precompute a mapping upfront to answer queries in O(1)
- You’re eliminating “duplicated work” in a BUD analysis
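The "replace a repeated O(N) search with an O(1) lookup" pattern above, as a short Python sketch (a two-sum-style example; the function name is ours):

```python
def has_pair_with_sum(nums, target):
    # Brute force would check every pair: O(N^2).
    # A hash set replaces the inner O(N) search with an O(1) lookup: O(N) total.
    seen = set()
    for x in nums:
        if target - x in seen:
            return True
        seen.add(x)
    return False

print(has_pair_with_sum([8, 1, 5, 3], 9))  # True (1 + 8)
```

This is the BUD "duplicated work" fix in miniature: the set precomputes membership answers the inner loop would otherwise re-derive.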
Runtime Optimization Heuristic
If your algorithm is O(N × N):
- Can the second N become O(log N)? → use binary search or a BST
- Can the second N become O(1)? → use a hash table

If you’ve hit BCR, stop optimizing runtime and consider space.
Vocabulary
| Term | Definition |
|---|---|
| Big O | Upper-bound asymptotic description of runtime growth rate |
| Amortized time | Average time per operation over a sequence (e.g., O(1) amortized for ArrayList insert) |
| BCR (Best Conceivable Runtime) | The fastest any algorithm could solve the problem given the required input/output |
| BUD | Bottlenecks, Unnecessary work, Duplicated work — framework for optimization |
| Memoization | Caching results of recursive subproblems (top-down DP) |
| Dynamic programming | Solving problems by breaking into overlapping subproblems and caching results |
| In-order traversal | Left → current → right; visits BST nodes in ascending order |
| DFS | Depth-first search: go deep before wide; use stack/recursion |
| BFS | Breadth-first search: go wide before deep; use queue |
| Bar raiser (Amazon) | Interviewer from another team with veto power; calibrates the hiring bar |
| S.A.R. | Situation, Action, Result — behavioral answer framework |
| Trie | Prefix tree; O(K) prefix lookup where K = string length |
| Min-heap | Complete binary tree where every node ≤ its children; root = minimum |
| Two’s complement | Standard binary representation for negative integers |
| Adjacency list | Graph representation as array of neighbor lists; space-efficient |
| Runner technique | Two-pointer technique for linked lists at different speeds |
Anti-Patterns
Problem-solving anti-patterns:
- Jumping to code before fully understanding the problem or having an optimal algorithm
- Drawing a special-case or too-small example (e.g., a perfect 3-node BST)
- Forgetting to use information the interviewer gave you (sorted array, unique elements, etc.)
- Assuming O(N²) is optimal without checking if a hash table can reduce it to O(N)
- Saying O(N²) for two nested loops over different arrays — it’s O(A×B), not O(N²)
- Mixing up “best case” and “best conceivable runtime” — they are completely unrelated concepts
- Treating Big O as multiple-choice; derive the runtime, don’t guess
Complexity analysis anti-patterns:
- Keeping constants (O(2N) when O(N) is correct)
- Using the same variable N for two different input sizes
- Assuming a recursive function that “visits each node” is O(N log N) just because it’s a tree
- Forgetting that recursive stack space counts as O(N) space
- Thinking BFS is recursive (it’s not — use a queue)
Coding anti-patterns:
- Using single-letter variables everywhere except loop counters
- Not modularizing: writing everything in one function makes bugs harder to find
- Not testing base cases, null inputs, and edge cases
- Forgetting to update `head`/`tail` pointers in linked list operations
- Forgetting `visited` flags in graph traversal (leads to infinite loops)
Behavioral anti-patterns:
- Saying “we” throughout behavioral answers — interviewers can’t evaluate your individual contribution
- Giving fake weaknesses (“I work too hard”)
- Not having 1–3 technical projects you can discuss at depth
- Preparing stories that don’t demonstrate any specific positive attribute
- Asking zero questions to the interviewer — signals disinterest
Interview process anti-patterns:
- Assuming silence from a recruiter means rejection
- Not following up after one week with no word
- Treating a rejection as permanent — most companies welcome re-application in 6–12 months
- Stopping optimization once you have a working solution — always ask “can this be better?”
- Not admitting you’ve seen a question before — interviewers find this dishonest
How to Apply This Knowledge
When a user asks for interview preparation help:
- Apply the 7-step problem-solving flowchart explicitly (Listen → Example → Brute Force → Optimize → Walk Through → Implement → Test)
- Before optimizing, analyze the brute force with BUD (Bottlenecks, Unnecessary work, Duplicated work)
- Calculate Big O for each solution proposed — time and space both
- Suggest hash tables as a first optimization instinct for any O(N) search that gets repeated
- Ask “what’s the BCR?” before deciding an algorithm is optimal
- When data structures are ambiguous, suggest the brainstorm approach
When a user asks to review code:
- Evaluate: correct, efficient, simple, readable, maintainable
- Check for modularization, variable naming, error handling, edge cases
- Suggest the testing sequence: conceptual → weird code → hot spots → small cases → special cases
When a user asks about behavioral questions:
- Apply S.A.R. structure (Situation, Action, Result)
- Remind them the Action is the most important part — break it into numbered steps
- Help them identify what personality attribute the story demonstrates
- Check for overuse of “we” instead of “I”
When a user asks about specific data structures:
- Provide implementation notes, time/space complexity, and when to use each
- Highlight common interview gotchas for that structure (e.g., null checks in linked lists, visited flags in graphs, BFS uses a queue not recursion)
When a user asks about company-specific prep:
- Reference the company-specific guidance (Amazon → scalability + OOD, Google → algorithms + system design, Facebook → behavioral + entrepreneurial spirit, etc.)
Source Attribution
This skill was generated from: Cracking the Coding Interview, 6th Edition by Gayle Laakmann McDowell
- Type: technical
- Domain: software engineering interviews
- Generated by: claude-code-skill (pdf-to-claude-skills)