Understand the importance of maintaining original order when sorting data with duplicate keys.
A sorting algorithm is stable if it preserves the relative order of elements that compare equal.
Suppose two elements have the same key. If one appears before the other in the input, a stable sort guarantees that this order remains unchanged in the output. Stability is not visible when elements are simple numbers. It becomes meaningful when elements carry identity beyond their sorting key.
In real systems, we rarely sort primitive values. We sort objects — users, transactions, logs, records.
Stability allows sorting to become composable. You can sort by one attribute, then another, and rely on predictable behavior. This is not convenience — it is structural integrity.
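This composability is easy to demonstrate in Python, whose built-in sort is stable. A minimal sketch (the record fields here are illustrative, not from the article): sort by the secondary key first, then by the primary key, and stability preserves the secondary ordering within each primary group.

```python
# Sketch: building a multi-key ordering from two single-key passes.
# The "dept"/"name" fields are illustrative examples.
records = [
    {"dept": "Sales", "name": "Zoe"},
    {"dept": "Eng",   "name": "Ann"},
    {"dept": "Sales", "name": "Al"},
    {"dept": "Eng",   "name": "Bo"},
]

# Pass 1: order by the secondary key (name).
records.sort(key=lambda r: r["name"])

# Pass 2: order by the primary key (dept). Because the sort is
# stable, ties on dept keep the name ordering from pass 1.
records.sort(key=lambda r: r["dept"])

for r in records:
    print(r["dept"], r["name"])
# Eng Ann
# Eng Bo
# Sales Al
# Sales Zoe
```

The same result could be achieved in one pass with a tuple key, but the two-pass form only works because the sort is stable; with an unstable sort, the second pass could scramble the first.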
Input Array:     [ (A, 2), (B, 1), (C, 2) ]
Stable Result:   [ (B, 1), (A, 2), (C, 2) ]
Unstable Result: [ (B, 1), (C, 2), (A, 2) ]
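The stable case can be checked directly in Python, since `sorted` is guaranteed to be stable:

```python
# Pairs of (label, key), sorted by key only.
data = [("A", 2), ("B", 1), ("C", 2)]

# sorted() is stable, so ("A", 2) stays ahead of ("C", 2)
# even though their keys compare equal.
result = sorted(data, key=lambda pair: pair[1])
print(result)  # [('B', 1), ('A', 2), ('C', 2)]
```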
Notice something subtle: many simple, incremental algorithms (insertion sort, bubble sort, merge sort) are naturally stable. Algorithms built around swapping distant elements, such as selection sort, quicksort, and heapsort, typically are not.
If you are sorting user-facing data, logs, reports, or layered attributes — stability is usually desirable.
If you are sorting raw numeric data where equal values have no identity — stability may not matter.
Sorting is not merely rearrangement. It is transformation under constraints. Stability defines one such constraint — preserving meaning while imposing order. And in engineering, preserving meaning is often more important than achieving speed.
© 2025 See Algorithms. Code licensed under MIT, content under CC BY-NC 4.0.