Ace coding interviews with 50+ questions on arrays, trees, graphs, dynamic programming, and algorithm patterns.
Array: contiguous memory, O(1) random access, O(n) insert/delete (shifting). LinkedList: non-contiguous, O(n) access (traverse), O(1) insert/delete at a known position. Array better for: random access, cache efficiency, known size. LinkedList better for: frequent insert/delete, unknown size, no memory reallocation. In practice, arrays are usually preferred.
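The cost difference above can be sketched in a few lines of Python (the `Node`/`insert_after` names are illustrative, not a standard API):

```python
class Node:
    """Minimal singly linked list node."""
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def insert_after(node, value):
    """O(1): splice a new node in after a known node; nothing shifts."""
    node.next = Node(value, node.next)

def array_insert(arr, index, value):
    """O(n): Python's list.insert shifts every later element right."""
    arr.insert(index, value)
    return arr

# Build 1 -> 3, then insert 2 after the head in O(1).
head = Node(1, Node(3))
insert_after(head, 2)
```

The linked-list insert touches two pointers regardless of list length, while the array insert moves every element after the index.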
Hash Table: average case: insert O(1), delete O(1), search O(1). Worst case (all collisions): O(n). Hash function converts key to index. Collision handling: chaining (linked lists) or open addressing (probing). Load factor (entries/buckets) affects performance; rehash when it exceeds a threshold (commonly ~0.75). Good hash functions distribute keys evenly. Used in: dictionaries, sets, caches, database indexing.
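A toy chaining hash map makes these mechanics concrete — buckets, collision chains, and a load-factor-triggered rehash. The class name, bucket count, and 0.75 threshold are illustrative choices, not a fixed standard:

```python
class ChainedHashMap:
    """Toy hash map: chaining for collisions, doubling rehash at load 0.75."""

    def __init__(self, capacity=8):
        self.buckets = [[] for _ in range(capacity)]
        self.size = 0

    def _bucket(self, key):
        # Hash function maps key -> bucket index.
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)   # update existing key in place
                return
        bucket.append((key, value))        # collision -> append to chain
        self.size += 1
        if self.size / len(self.buckets) > 0.75:
            self._rehash()                 # load factor exceeded

    def get(self, key, default=None):
        for k, v in self._bucket(key):     # O(chain length) scan
            if k == key:
                return v
        return default

    def _rehash(self):
        # Double capacity and redistribute every entry.
        old = [pair for bucket in self.buckets for pair in bucket]
        self.buckets = [[] for _ in range(2 * len(self.buckets))]
        self.size = 0
        for k, v in old:
            self.put(k, v)
```

With evenly distributed keys, chains stay short and operations stay O(1) amortized; the worst case (every key in one bucket) degrades `get` to O(n).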
BST: left children smaller, right children larger. Operations (balanced): search O(log n), insert O(log n), delete O(log n). Worst case (unbalanced/linear): O(n). Traversals: inorder gives sorted order. Self-balancing trees (AVL, Red-Black) maintain O(log n). Delete cases: leaf (remove), one child (replace with the child), two children (replace with inorder successor/predecessor, then delete that node).
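A short sketch of BST insert plus an inorder traversal demonstrates the "inorder gives sorted order" property (names are illustrative):

```python
class TreeNode:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

def insert(root, value):
    """Walk left for smaller, right for larger; O(log n) if balanced."""
    if root is None:
        return TreeNode(value)
    if value < root.value:
        root.left = insert(root.left, value)
    else:
        root.right = insert(root.right, value)
    return root

def inorder(root):
    """Left, node, right -> values come out in sorted order."""
    if root is None:
        return []
    return inorder(root.left) + [root.value] + inorder(root.right)
```

Note that inserting already-sorted input degrades this unbalanced version to a linked list (the O(n) worst case); AVL or Red-Black trees avoid that by rebalancing.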
Heap: complete binary tree with heap property (max-heap: parent >= children, min-heap: parent <= children). Operations: insert O(log n), extract O(log n), peek O(1). Usually implemented as array (parent at i, children at 2i+1, 2i+2). Applications: priority queues, Dijkstra's algorithm, heap sort, finding k-th largest, median maintenance.
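One of the listed applications, finding the k-th largest element, is a one-function sketch with Python's standard `heapq` (a min-heap over a plain list):

```python
import heapq

def kth_largest(nums, k):
    """Keep a min-heap of the k largest seen so far: O(n log k)."""
    heap = []
    for n in nums:
        heapq.heappush(heap, n)       # O(log k) insert
        if len(heap) > k:
            heapq.heappop(heap)       # discard the smallest of the k+1
    return heap[0]                    # min-heap root = k-th largest
```

The min-heap root is the smallest of the k retained values, which is exactly the k-th largest overall.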
Adjacency Matrix: O(V²) space, O(1) edge lookup, good for dense graphs. Adjacency List: O(V+E) space, O(degree) edge lookup, good for sparse graphs (most real graphs). Edge List: O(E) space, simple, good for edge-based algorithms (Kruskal). Most interviews use adjacency list. For directed graphs, separate lists for in/out edges may help.
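Converting between the two common interview representations is a few lines; this sketch builds an adjacency list from an edge list (function name is illustrative):

```python
from collections import defaultdict

def build_adjacency_list(edges, directed=False):
    """Edge list -> adjacency list; O(V+E) space, good for sparse graphs."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        if not directed:
            adj[v].append(u)   # undirected: store the edge both ways
    return adj
```

For a directed graph, building a second list with the edges reversed gives the in-edges when an algorithm needs them.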
BFS (Breadth-First): uses queue, explores neighbors first, finds shortest path in unweighted graphs, more memory for wide graphs. DFS (Depth-First): uses stack/recursion, explores deeply first, less memory, good for: topological sort, cycle detection, path existence. Choose BFS for shortest path, level-order; DFS for exhaustive search, backtracking.
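BFS's shortest-path guarantee on unweighted graphs can be sketched with a queue, assuming the adjacency-list representation above (names are illustrative):

```python
from collections import deque

def bfs_shortest_path(adj, start, goal):
    """Fewest edges from start to goal in an unweighted graph; -1 if unreachable."""
    queue = deque([(start, 0)])
    visited = {start}
    while queue:
        node, dist = queue.popleft()    # FIFO: nearest nodes first
        if node == goal:
            return dist
        for nxt in adj.get(node, []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, dist + 1))
    return -1
```

Swapping the deque for a stack (pop from the end) turns this into DFS, which loses the shortest-path guarantee but uses less memory on wide graphs.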
DP solves problems by breaking into overlapping subproblems, storing solutions to avoid recomputation. Signs: optimal substructure, overlapping subproblems. Approaches: top-down (memoization, recursion + cache) or bottom-up (tabulation, iterative). Examples: Fibonacci, knapsack, longest common subsequence, coin change. Start with recursion, add memoization, convert to tabulation if needed.
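The coin change example above, done bottom-up (tabulation), is a compact illustration of the pattern:

```python
def coin_change(coins, amount):
    """Fewest coins summing to amount, or -1 if impossible. O(amount * len(coins))."""
    INF = float("inf")
    dp = [0] + [INF] * amount          # dp[a] = min coins for amount a
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and dp[a - c] + 1 < dp[a]:
                dp[a] = dp[a - c] + 1  # use coin c on top of subproblem a-c
    return dp[amount] if dp[amount] != INF else -1
```

The overlapping subproblems (each smaller amount) are solved once and reused, turning an exponential recursion into a polynomial table fill.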
Two pointers traverse array/string simultaneously, often from opposite ends or same direction at different speeds. Examples: (1) Two Sum II - converge from ends, (2) Remove duplicates - slow/fast pointers, (3) Linked list cycle - fast moves 2x, (4) Container with most water - shrink from sides. Reduces O(n²) to O(n) in many cases.
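The first example in the list, Two Sum II on a sorted array, shows the converge-from-ends pattern (1-based indices, as in the LeetCode version of the problem):

```python
def two_sum_sorted(nums, target):
    """Return 1-based indices of the pair summing to target; O(n), O(1) space."""
    lo, hi = 0, len(nums) - 1
    while lo < hi:
        s = nums[lo] + nums[hi]
        if s == target:
            return [lo + 1, hi + 1]
        if s < target:
            lo += 1    # sum too small: advance the smaller end
        else:
            hi -= 1    # sum too big: retreat the larger end
    return []
```

Sortedness is what makes each pointer move safe to skip, replacing the O(n²) pair scan with a single O(n) pass.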
Sliding window maintains a subset of elements that "slides" through the array. Types: fixed size (sum of k elements) or variable size (longest substring without repeat). Pattern: expand window (add element), shrink when condition violated (remove from start). Reduces O(n²) to O(n). Examples: max sum subarray, longest substring, minimum window substring.
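The variable-size case, longest substring without repeating characters, shows the expand/shrink pattern in a few lines (names are illustrative):

```python
def longest_unique_substring(s):
    """Length of the longest substring with no repeated characters; O(n)."""
    seen = {}            # char -> index of its last occurrence
    left = best = 0
    for right, ch in enumerate(s):
        if ch in seen and seen[ch] >= left:
            left = seen[ch] + 1          # shrink: jump past the repeat
        seen[ch] = right                 # expand: include s[right]
        best = max(best, right - left + 1)
    return best
```

Each index enters and leaves the window at most once, which is why the nested-loop O(n²) check collapses to O(n).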
Recursion: function calls itself, uses call stack, elegant for tree/divide-conquer problems, risk of stack overflow. Iteration: loops, uses explicit stack/queue if needed, more memory efficient, sometimes more complex code. Every recursion can be converted to iteration. Use recursion for: trees, backtracking, when it's clearer. Optimize with tail recursion (where the language eliminates tail calls; Python does not) or convert to iteration.
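The trade-off can be shown with one tiny function written both ways (a deliberately trivial example; the names are illustrative):

```python
def sum_recursive(n):
    """Elegant, but each call consumes a stack frame: O(n) stack depth."""
    return 0 if n == 0 else n + sum_recursive(n - 1)

def sum_iterative(n):
    """Same result with a loop and O(1) extra memory."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total
```

Both return the same value, but the recursive version hits Python's recursion limit (about 1000 frames by default) long before the iterative one runs into any trouble.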
Practice with interactive quizzes and get instant feedback.
Start Free Practice