
Documentation


Overview

VisualSorting is an interactive learning and analysis environment for sorting algorithms. It renders array operations in real time so you can study algorithm behavior, compare trade-offs, and validate performance expectations under controlled conditions.

Visual Language

The visualizer uses consistent operation states across modes:

  • Comparison (yellow): Two indices are being evaluated.
  • Swap/Write (red): Data is being moved or overwritten.
  • Finalized (green): An element has reached a confirmed sorted position.

Quick Start Workflow

  1. Select an algorithm from the primary dropdown.
  2. Configure array size and playback speed.
  3. Generate a dataset using your preferred distribution.
  4. Run the sort and observe comparisons, writes, and completion pattern.

Features

Advanced Options

The Advanced Options panel provides controls for reproducibility, accessibility, and presentation quality.

Array Distribution

Configure initial data shape to evaluate best-case, average-case, and adversarial behavior:

  • Random: Baseline distribution for general comparison.
  • Nearly Sorted: Useful for adaptive algorithms such as Insertion and Tim Sort.
  • Reversed: Common stress case for many quadratic approaches.
  • Few Unique: Highlights duplicate-handling behavior.
  • Already Sorted: Validates early-exit and adaptive optimizations.

Sound Effects

Optional auditory feedback maps element magnitude to pitch, helping users detect execution rhythm and operation density while observing the visualization.

Show Bar Values

Displays numeric values above bars, which is especially useful when validating correctness on small or custom arrays.

Per-Tab Onboarding Tours

Context-aware onboarding introduces controls and expected workflows for each tab, reducing setup time for new users and improving feature discoverability.

Dynamic WebGL Particle Background

The landing experience includes an optimized WebGL particle background designed to maintain a polished presentation without interfering with core visualization performance.

Advanced Audio Synthesizer

An enhanced synthesis pipeline applies scale mapping and envelopes to provide clearer audio differentiation across operations and value ranges.

Time-Scrubbing Engine

Replay controls support bidirectional scrubbing so users can revisit specific execution segments and inspect transitions frame by frame.

More Customization

Additional controls include accessibility palettes, alternative visual styles, and array persistence/import tools for repeatable experiments.

Benchmarking

The Benchmark tab measures algorithm performance without visualization delay. Execution occurs in Web Workers, isolating heavy computation from UI rendering and improving result consistency for comparative testing.

Benchmark Procedure

  1. Choose array size and distribution.
  2. Set run count to reduce one-off timing variance.
  3. Select algorithm set and start the benchmark.
  4. Review per-algorithm averages for time, comparisons, and writes.

Scoring Model

Scores are normalized by relative speed. The fastest algorithm in a run receives 100 and all others are scaled proportionally by average execution time.

Metric       | How It's Used        | Description
Average Time | Primary score driver | Mean execution time (ms) across configured runs
Comparisons  | Reported in results  | Average element comparisons; informative for complexity analysis
Swaps/Writes | Reported in results  | Average write operations; useful for memory-sensitive scenarios
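
The normalization rule above can be sketched in a few lines of JavaScript; `normalizeScores` is an illustrative name, not the app's actual function.

```javascript
// Sketch of the relative-speed scoring described above. Input: a map of
// algorithm name -> average time in ms. Output: name -> score, where the
// fastest algorithm scores 100 and slower ones scale proportionally.
function normalizeScores(avgTimes) {
  const fastest = Math.min(...Object.values(avgTimes));
  const scores = {};
  for (const [name, time] of Object.entries(avgTimes)) {
    scores[name] = Math.round((fastest / time) * 100);
  }
  return scores;
}

// Example: normalizeScores({ quick: 5, bubble: 50 }) -> { quick: 100, bubble: 10 }
```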

Algorithm Race (Compare)

The Compare tab executes up to 25 algorithms on identical array copies in parallel visual panels, allowing direct qualitative comparison of convergence speed and operation patterns.

Features

  • Live Statistics: Each panel tracks comparisons and writes in real time.
  • Completion State: Panels are marked immediately when sorting finishes.
  • Podium Ranking: Top finishers are labeled for quick visual interpretation.
  • Run Control: Pause and resume to inspect intermediate states.

Custom Algorithms

The Custom tab provides a sandbox for authoring and visualizing user-defined sorting routines. Execution is isolated in worker contexts, and operations are captured for deterministic playback and metric tracking.

Supported Languages

  • JavaScript: Native worker execution.
  • Python: Executed via Pyodide (WebAssembly runtime).
  • C#: Experimental compatibility through lightweight translation.
  • Blockly: Visual block authoring for introductory workflows.

Available API

Function      | Description
compare(i, j) | Highlights indices i and j, returns the comparison result, and updates statistics.
swap(i, j)    | Exchanges the values at i and j and records a write operation.
set(i, val)   | Writes val to index i as a direct assignment operation.
n or length   | Total number of elements in the working array.
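
As an illustration, a custom Bubble Sort written against this API might look like the sketch below. The two stub definitions at the top only emulate the sandbox (which provides compare, swap, and n itself) so the snippet runs standalone; the compare contract (positive when the first element is larger) is an assumption and may differ in the actual sandbox.

```javascript
// Standalone sketch of a custom algorithm using the sandbox API.
const arr = [5, 2, 4, 1, 3];
const n = arr.length;
// Stubs: in the real sandbox these also highlight indices and track stats.
const compare = (i, j) => arr[i] - arr[j];
const swap = (i, j) => { [arr[i], arr[j]] = [arr[j], arr[i]]; };

for (let pass = 0; pass < n - 1; pass++) {
  for (let i = 0; i < n - 1 - pass; i++) {
    if (compare(i, i + 1) > 0) swap(i, i + 1); // adjacent out of order
  }
}
// arr is now sorted ascending.
```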

Collapsible Console

An integrated console surfaces runtime logs and execution errors from the sandbox, simplifying debugging and iteration while developing custom logic.

Bubble Sort

O(n²) O(1)

Repeatedly steps through the list, compares adjacent elements, and swaps them if they are in the wrong order. Named because smaller elements "bubble" to the top. Note: First analyzed in the 1950s.

✓ Stats

  • Best: O(n)
  • Worst: O(n²)

⚠ Info

  • Stable: Yes
  • In-Place: Yes
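
The adjacent compare-and-swap behavior described above can be sketched in plain JavaScript; the early-exit flag is what produces the O(n) best case on sorted input.

```javascript
// Minimal bubble sort with early exit when a full pass makes no swaps.
function bubbleSort(a) {
  const arr = [...a];
  for (let end = arr.length - 1; end > 0; end--) {
    let swapped = false;
    for (let i = 0; i < end; i++) {
      if (arr[i] > arr[i + 1]) {
        [arr[i], arr[i + 1]] = [arr[i + 1], arr[i]]; // bubble larger right
        swapped = true;
      }
    }
    if (!swapped) break; // no swaps: already sorted
  }
  return arr;
}
```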

Selection Sort

O(n²) O(1)

Finds the minimum element from the unsorted part and puts it at the beginning. Simple but inefficient for large lists. Note: A natural human sorting method, intuitively used for arranging physical objects.

✓ Stats

  • Best: O(n²)
  • Worst: O(n²)

⚠ Info

  • Stable: No
  • In-Place: Yes
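
The minimum-selection pass can be sketched as follows; the comparison count is Θ(n²) regardless of input order, which is why the best and worst cases above are identical.

```javascript
// Minimal selection sort: at most n-1 swaps, always Θ(n²) comparisons.
function selectionSort(a) {
  const arr = [...a];
  for (let i = 0; i < arr.length - 1; i++) {
    let min = i;
    for (let j = i + 1; j < arr.length; j++) {
      if (arr[j] < arr[min]) min = j; // track minimum of unsorted suffix
    }
    if (min !== i) [arr[i], arr[min]] = [arr[min], arr[i]];
  }
  return arr;
}
```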

Double Selection Sort

O(n²) O(1)

Selects both the minimum and maximum in each pass, placing them at the beginning and end respectively. Requires roughly half as many passes as standard selection sort. Note: A logical extension that reduces overhead without altering the fundamental bounds.

✓ Stats

  • Best: O(n²)
  • Worst: O(n²)

⚠ Info

  • Stable: No
  • In-Place: Yes

Insertion Sort

O(n²) O(1)

Builds the sorted array one element at a time by inserting each element into its correct position. Very efficient for small or nearly sorted data. Note: Mirrors how humans sort a hand of playing cards.

✓ Stats

  • Best: O(n)
  • Worst: O(n²)

⚠ Info

  • Stable: Yes
  • In-Place: Yes
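
The insert-into-sorted-prefix step can be sketched as follows; on nearly sorted data the inner while loop rarely runs, which gives the O(n) best case.

```javascript
// Minimal insertion sort: shift larger elements right, drop the key in the gap.
function insertionSort(a) {
  const arr = [...a];
  for (let i = 1; i < arr.length; i++) {
    const key = arr[i];
    let j = i - 1;
    while (j >= 0 && arr[j] > key) {
      arr[j + 1] = arr[j]; // shift right
      j--;
    }
    arr[j + 1] = key; // insert into the gap
  }
  return arr;
}
```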

Cocktail Shaker Sort

O(n²) O(1)

A variant of bubble sort that traverses the list in both directions alternately. Helps avoid the "turtle" problem (small values at the end). Note: Also known as bidirectional bubble sort.

✓ Stats

  • Best: O(n)
  • Worst: O(n²)

⚠ Info

  • Stable: Yes
  • In-Place: Yes

Gnome Sort

O(n²) O(1)

Works like a garden gnome sorting flower pots — moves forward when things are in order, steps back to fix when they are not. Note: Proposed by Hamid Sarbazi-Azad in 2000 (as "stupid sort"); Dick Grune later popularized the name gnome sort.

✓ Stats

  • Best: O(n)
  • Worst: O(n²)

⚠ Info

  • Stable: Yes
  • In-Place: Yes

Odd-Even Sort

O(n²) O(1)

Alternates between odd-indexed and even-indexed passes, comparing and swapping adjacent pairs. Note: Originally developed for parallel processors with local interconnections, where all odd (or even) pairs can be compared simultaneously.

✓ Stats

  • Best: O(n)
  • Worst: O(n²)

⚠ Info

  • Stable: Yes
  • In-Place: Yes

Cycle Sort

O(n²) O(1)

Theoretically optimal in terms of writes. Minimizes the total number of memory writes by rotating elements into their correct positions. Note: Based on the concept of tracking permutations as disjoint cycles.

✓ Stats

  • Best: O(n²)
  • Worst: O(n²)

⚠ Info

  • Stable: No
  • In-Place: Yes
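
The cycle rotation described above can be sketched as follows: each element's final slot is found by counting how many elements are smaller, and each cycle is rotated with exactly one write per element.

```javascript
// Minimal cycle sort: minimizes array writes by rotating disjoint cycles.
function cycleSort(a) {
  const arr = [...a];
  for (let start = 0; start < arr.length - 1; start++) {
    let item = arr[start];
    // Destination = start + (count of smaller elements to the right).
    let pos = start;
    for (let i = start + 1; i < arr.length; i++) if (arr[i] < item) pos++;
    if (pos === start) continue;          // already in place
    while (arr[pos] === item) pos++;      // skip over duplicates
    [arr[pos], item] = [item, arr[pos]];  // place item, pick up displaced value
    while (pos !== start) {               // rotate the rest of the cycle
      pos = start;
      for (let i = start + 1; i < arr.length; i++) if (arr[i] < item) pos++;
      while (arr[pos] === item) pos++;
      [arr[pos], item] = [item, arr[pos]];
    }
  }
  return arr;
}
```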

Pancake Sort

O(n²) O(1)

Sorts by repeatedly flipping sub-arrays, like arranging pancakes by size. Only uses a "flip" operation (reversing a prefix of the array). Note: Made famous by a 1979 paper by Bill Gates and Christos Papadimitriou on the worst-case number of flips.

✓ Stats

  • Best: O(n)
  • Worst: O(n²)

⚠ Info

  • Stable: No
  • In-Place: Yes
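
The flip-only strategy can be sketched as follows: bring the largest remaining element to the front with one flip, then flip it into its final position.

```javascript
// Minimal pancake sort: the only mutation is flip(k), reversing arr[0..k].
function pancakeSort(a) {
  const arr = [...a];
  const flip = (k) => {
    for (let i = 0, j = k; i < j; i++, j--) [arr[i], arr[j]] = [arr[j], arr[i]];
  };
  for (let end = arr.length - 1; end > 0; end--) {
    let maxIdx = 0;
    for (let i = 1; i <= end; i++) if (arr[i] > arr[maxIdx]) maxIdx = i;
    if (maxIdx !== end) {
      if (maxIdx > 0) flip(maxIdx); // bring max to the front
      flip(end);                    // flip max into its final slot
    }
  }
  return arr;
}
```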

Comb Sort

O(n²/2^p) O(1)

Improves on bubble sort by using a gap that shrinks by factor 1.3 each pass. Eliminates turtles (small values near end) much faster. Note: Invented by Włodzimierz Dobosiewicz in 1980.

✓ Stats

  • Best: O(n log n)
  • Worst: O(n²)

⚠ Info

  • Stable: No
  • In-Place: Yes
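
The shrinking-gap idea can be sketched as follows; the gap divides by 1.3 each pass, and once it reaches 1 the loop behaves like bubble sort until no swaps occur.

```javascript
// Minimal comb sort: bubble sort with a gap that shrinks by a factor of 1.3.
function combSort(a) {
  const arr = [...a];
  let gap = arr.length;
  let swapped = true;
  while (gap > 1 || swapped) {
    gap = Math.max(1, Math.floor(gap / 1.3)); // shrink the comparison gap
    swapped = false;
    for (let i = 0; i + gap < arr.length; i++) {
      if (arr[i] > arr[i + gap]) {
        [arr[i], arr[i + gap]] = [arr[i + gap], arr[i]];
        swapped = true;
      }
    }
  }
  return arr;
}
```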

Strand Sort

O(n²) O(n)

Pulls sorted "strands" from the input and merges them into the output list. Very efficient for partially sorted data. Note: A variant of natural merge sort that actively hunts for sorted sublists.

✓ Stats

  • Best: O(n)
  • Worst: O(n²)

⚠ Info

  • Stable: Yes
  • In-Place: No

Bingo Sort

O(n²) O(1)

Selection sort adapted for duplicates: each pass moves every element equal to the current extreme value into place at once. Note: Most effective when the input data has a very small universe of possible values.

✓ Stats

  • Best: O(n+k)
  • Worst: O(n²)

⚠ Info

  • Stable: No
  • In-Place: Yes

Exchange Sort

O(n²) O(1)

Compares the element at each position against every subsequent element, swapping whenever a pair is out of order. Note: Sometimes referred to as 'Standard Macaroni Sort' in older literature.

✓ Stats

  • Best: O(n²)
  • Worst: O(n²)

⚠ Info

  • Stable: No
  • In-Place: Yes

Merge Sort

O(n log n) O(n)

Divides the array in half, sorts each half, then merges them. Guaranteed O(n log n) performance regardless of input. Note: Invented by John von Neumann in 1945.

✓ Stats

  • Best: O(n log n)
  • Worst: O(n log n)

⚠ Info

  • Stable: Yes
  • In-Place: No
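
The split-and-merge recursion can be sketched as follows; taking from the left half on ties (<=) is what keeps the sort stable.

```javascript
// Minimal top-down merge sort: split, recurse, merge two sorted halves.
function mergeSort(a) {
  if (a.length <= 1) return [...a];
  const mid = Math.floor(a.length / 2);
  const left = mergeSort(a.slice(0, mid));
  const right = mergeSort(a.slice(mid));
  const out = [];
  let i = 0, j = 0;
  while (i < left.length && j < right.length) {
    out.push(left[i] <= right[j] ? left[i++] : right[j++]); // <= keeps stability
  }
  return out.concat(left.slice(i), right.slice(j)); // append leftovers
}
```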

3-Way Merge Sort

O(n log₃ n) O(n)

Splits into three parts instead of two, reducing recursion depth to log₃ n at the cost of more comparison work per merge step. Note: An exploration of optimization boundaries in comparison sorts.

✓ Stats

  • Best: O(n log₃ n)
  • Worst: O(n log₃ n)

⚠ Info

  • Stable: Yes
  • In-Place: No

Quick Sort

O(n log n) O(log n)

Picks a pivot, partitions around it, and recursively sorts the partitions. One of the fastest general-purpose sorts in practice. Note: Developed by Tony Hoare in 1959.

✓ Stats

  • Best: O(n log n)
  • Worst: O(n²)

⚠ Info

  • Stable: No
  • In-Place: Yes
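
The partition-and-recurse scheme can be sketched with Lomuto partitioning around the last element; a randomized or median-of-three pivot choice is the usual way to avoid the O(n²) worst case on already-sorted input.

```javascript
// Minimal in-place quicksort with Lomuto partitioning (last element as pivot).
function quickSort(arr, lo = 0, hi = arr.length - 1) {
  if (lo >= hi) return arr;
  const pivot = arr[hi];
  let p = lo; // boundary of the "less than pivot" region
  for (let i = lo; i < hi; i++) {
    if (arr[i] < pivot) {
      [arr[i], arr[p]] = [arr[p], arr[i]];
      p++;
    }
  }
  [arr[p], arr[hi]] = [arr[hi], arr[p]]; // pivot lands in final position
  quickSort(arr, lo, p - 1);
  quickSort(arr, p + 1, hi);
  return arr;
}
```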

Dual-Pivot Quick Sort

O(n log n) O(log n)

Uses two pivots to partition into three segments. Used by Java Arrays.sort() for primitives. Fewer swaps on average. Note: Vladimir Yaroslavskiy's optimization.

✓ Stats

  • Best: O(n log n)
  • Worst: O(n²)

⚠ Info

  • Stable: No
  • In-Place: Yes

IntroSort

O(n log n) O(log n)

Starts as quicksort, switches to heapsort if recursion gets too deep, and uses insertion sort for small partitions. Used by the C++ STL. Note: Invented by David Musser in 1997 to provide a seamless hybrid of Quick Sort and Heap Sort.

✓ Stats

  • Best: O(n log n)
  • Worst: O(n log n)

⚠ Info

  • Stable: No
  • In-Place: Yes

Heap Sort

O(n log n) O(1)

Builds a max-heap from the array, then repeatedly extracts the maximum. Guaranteed O(n log n) with no extra memory. Note: Introduced by J. W. J. Williams in 1964.

✓ Stats

  • Best: O(n log n)
  • Worst: O(n log n)

⚠ Info

  • Stable: No
  • In-Place: Yes
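
The build-then-extract phases can be sketched as follows: heapify the whole array bottom-up, then repeatedly swap the root (current maximum) to the end and sift the new root down.

```javascript
// Minimal in-place heap sort over a max-heap stored in the array itself.
function heapSort(arr) {
  const siftDown = (i, size) => {
    while (true) {
      let largest = i;
      const l = 2 * i + 1, r = 2 * i + 2; // children in the implicit tree
      if (l < size && arr[l] > arr[largest]) largest = l;
      if (r < size && arr[r] > arr[largest]) largest = r;
      if (largest === i) return;
      [arr[i], arr[largest]] = [arr[largest], arr[i]];
      i = largest;
    }
  };
  // Build max-heap bottom-up.
  for (let i = Math.floor(arr.length / 2) - 1; i >= 0; i--) siftDown(i, arr.length);
  // Repeatedly move the max to the sorted suffix.
  for (let end = arr.length - 1; end > 0; end--) {
    [arr[0], arr[end]] = [arr[end], arr[0]];
    siftDown(0, end);
  }
  return arr;
}
```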

Shell Sort

O(n^(4/3)) O(1)

Generalizes insertion sort to allow exchanges of elements far apart. Uses a decreasing sequence of gap values. Note: Invented by Donald Shell in 1959.

✓ Stats

  • Best: O(n log n)
  • Worst: O(n^(3/2))

⚠ Info

  • Stable: No
  • In-Place: Yes

Tim Sort

O(n log n) O(n)

Hybrid of merge sort and insertion sort. Finds natural runs, extends them, and merges with a sophisticated merge strategy. Used by Python and Java. Note: Created by Tim Peters for the Python programming language in 2002.

✓ Stats

  • Best: O(n)
  • Worst: O(n log n)

⚠ Info

  • Stable: Yes
  • In-Place: No

Bitonic Sort

O(n log² n) O(1)

Creates bitonic sequences then merges them. Highly parallelizable — designed for hardware implementations and GPU sorting. Note: Invented by Ken Batcher in 1968.

✓ Stats

  • Best: O(n log² n)
  • Worst: O(n log² n)

⚠ Info

  • Stable: No
  • In-Place: Yes

Circle Sort

O(n log² n) O(log n)

Compares elements equidistant from both ends, swapping if out of order. Repeats until no swaps are needed. Simple and elegant. Note: A more recent structural exploration of recursive swap networks.

✓ Stats

  • Best: O(n log n)
  • Worst: O(n log² n)

⚠ Info

  • Stable: No
  • In-Place: Yes

Smoothsort

O(n log n) O(1)

A heapsort variant built on a sequence of Leonardo heaps that adapts to existing order, approaching O(n) on nearly sorted input. Note: Designed by Edsger W. Dijkstra in 1981.

✓ Stats

  • Best: O(n)
  • Worst: O(n log n)

⚠ Info

  • Stable: No
  • In-Place: Yes

Patience Sorting

O(n log n) O(n)

Distribution sort using card game patience rules. Note: Derived from the Patience card game.

✓ Stats

  • Best: O(n)
  • Worst: O(n log n)

⚠ Info

  • Stable: No
  • In-Place: No

Tournament Sort

O(n log n) O(n)

Builds a tournament (winner) tree of pairwise comparisons and repeatedly extracts the overall winner, similar in spirit to heapsort. Note: Often used as a precursor phase for highly optimized external sorting models.

✓ Stats

  • Best: O(n log n)
  • Worst: O(n log n)

⚠ Info

  • Stable: No
  • In-Place: No

Tree Sort

O(n log n) O(n)

Inserts every element into a binary search tree, then reads them back in sorted order via an in-order traversal. Note: Provides an intuitive bridge between data-structure operations and array sorting.

✓ Stats

  • Best: O(n log n)
  • Worst: O(n²)

⚠ Info

  • Stable: Yes
  • In-Place: No

Block Sort

O(n log n) O(1)

A stable merge sort that works in place by organizing the array into fixed-size blocks and using them as internal buffers during merging. Note: A highly complex algorithm demonstrating the practical boundaries of in-place merging.

✓ Stats

  • Best: O(n)
  • Worst: O(n log n)

⚠ Info

  • Stable: Yes
  • In-Place: Yes

Pattern-Defeating Quicksort

O(n log n) O(log n)

A modern quicksort hybrid that detects adversarial patterns and falls back to heapsort, guaranteeing O(n log n) while staying fast on common inputs. Note: Created by Orson Peters; the basis of Rust's unstable sort.

✓ Stats

  • Best: O(n)
  • Worst: O(n log n)

⚠ Info

  • Stable: No
  • In-Place: Yes

Library Sort

O(n log n) O(n)

Insertion sort that maintains evenly spaced gaps in the array so each insertion shifts only a few elements. Note: Also known as gapped insertion sort.

✓ Stats

  • Best: O(n)
  • Worst: O(n²)

⚠ Info

  • Stable: Yes
  • In-Place: No

Drop-Merge Sort

O(n log n) O(n)

Adaptive sort that "drops" out-of-place elements into a side buffer, sorts the buffer, and merges it back. Note: Designed to target scenarios with extremely few inversions.

✓ Stats

  • Best: O(n)
  • Worst: O(n log n)

⚠ Info

  • Stable: Yes
  • In-Place: No

Cartesian Tree Sort

O(n log n) O(n)

Builds a Cartesian tree over the input and extracts elements with a min priority queue, adapting to pre-existing order. Note: Establishes a deep theoretical link between tree topology and array permutations.

✓ Stats

  • Best: O(n log n)
  • Worst: O(n log n)

⚠ Info

  • Stable: No
  • In-Place: No

Weak-Heap Sort

O(n log n) O(n)

Heapsort on a weak heap, a relaxed heap structure that brings the number of comparisons close to the theoretical minimum. Note: A refinement of Williams' original heapsort paradigm, introduced by Ronald D. Dutton in 1993.

✓ Stats

  • Best: O(n log n)
  • Worst: O(n log n)

⚠ Info

  • Stable: No
  • In-Place: Yes

Ford-Johnson

O(n log n) O(n)

Combines merging with binary insertion to approach the information-theoretic minimum number of comparisons. Note: Often referred to as the merge-insertion sort.

✓ Stats

  • Best: O(n log n)
  • Worst: O(n log n)

⚠ Info

  • Stable: No
  • In-Place: No

Batcher's Odd-Even Merge Sort

O(n log² n) O(1)

A sorting network: the full sequence of compare-exchange operations is fixed in advance, independent of the data. Note: Another of Ken Batcher's parallel network innovations from 1968.

✓ Stats

  • Best: O(n log² n)
  • Worst: O(n log² n)

⚠ Info

  • Stable: No
  • In-Place: Yes

Counting Sort

O(n+k) O(k)

Counts occurrences of each value and uses arithmetic to place elements. Very fast when the range k is not much larger than n. Note: Created by Harold Seward in 1954.

✓ Stats

  • Best: O(n+k)
  • Worst: O(n+k)

⚠ Info

  • Stable: Yes
  • In-Place: No
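
The counting-and-placement arithmetic can be sketched as follows; the prefix-sum pass plus the backwards placement loop are what make the sort stable.

```javascript
// Minimal counting sort for non-negative integers in the range [0, k).
function countingSort(a, k) {
  const count = new Array(k).fill(0);
  for (const v of a) count[v]++;                        // tally each value
  for (let i = 1; i < k; i++) count[i] += count[i - 1]; // prefix sums = end positions
  const out = new Array(a.length);
  // Walk backwards so equal elements keep their relative order (stability).
  for (let i = a.length - 1; i >= 0; i--) out[--count[a[i]]] = a[i];
  return out;
}
```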

Radix Sort (LSD)

O(d(n+k)) O(n+k)

Sorts by processing digits from least significant to most significant, using counting sort as a subroutine. Note: Radix sorting predates electronic computers; it was used with punched-card tabulating machines.

✓ Stats

  • Best: O(d(n+k))
  • Worst: O(d(n+k))

⚠ Info

  • Stable: Yes
  • In-Place: No
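
The digit-by-digit passes can be sketched with bucket distribution in base 10; because each pass is stable, the ordering established by earlier (less significant) digits survives later passes.

```javascript
// Minimal LSD radix sort for non-negative integers, base 10.
function radixSortLSD(a) {
  let arr = [...a];
  const max = Math.max(...arr, 0);
  for (let exp = 1; exp <= max; exp *= 10) {            // one pass per digit
    const buckets = Array.from({ length: 10 }, () => []);
    for (const v of arr) buckets[Math.floor(v / exp) % 10].push(v);
    arr = buckets.flat(); // stable: within-bucket order is preserved
  }
  return arr;
}
```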

Radix Sort (MSD)

O(d(n+k)) O(n+k)

Like LSD Radix Sort but processes from the most significant digit first. Can short-circuit on already-sorted data. Note: Often preferred for string sorting algorithms.

✓ Stats

  • Best: O(d(n+k))
  • Worst: O(d(n+k))

⚠ Info

  • Stable: Yes
  • In-Place: No

Bucket Sort

O(n+k) O(n)

Distributes elements into buckets, sorts each bucket individually (usually with insertion sort), then concatenates. Note: Particularly effective for uniformly distributed floating-point inputs.

✓ Stats

  • Best: O(n+k)
  • Worst: O(n²)

⚠ Info

  • Stable: Yes
  • In-Place: No

Gravity (Bead) Sort

O(S) O(n*m)

Simulates beads falling under gravity on an abacus. Each value is represented as a row of beads that drop into place. Here S denotes the sum of the input values and m the maximum value. Note: Also known as Bead Sort.

✓ Stats

  • Best: O(n)
  • Worst: O(S)

⚠ Info

  • Stable: No
  • In-Place: No

Pigeonhole Sort

O(n+N) O(N)

Distribution sort suitable when the number of elements and the range of possible values are roughly the same. Note: An elegant close relative of counting sort.

✓ Stats

  • Best: O(n+N)
  • Worst: O(n+N)

⚠ Info

  • Stable: Yes
  • In-Place: No

Flashsort

O(n) O(n)

A distribution sort that classifies elements into buckets by linear interpolation of their values, then finishes with insertion sort; near-linear on uniformly distributed data. Note: Published by Karl-Dietrich Neubert in 1998.

✓ Stats

  • Best: O(n)
  • Worst: O(n²)

⚠ Info

  • Stable: No
  • In-Place: Yes

Stooge Sort

O(n^2.71) O(log n)

Recursively sorts the first 2/3, the last 2/3, then the first 2/3 again. Extremely slow — purely educational. Note: Named after the Three Stooges comedy act due to its deliberate absurdity.

✓ Stats

  • Best: O(n^2.71)
  • Worst: O(n^2.71)

⚠ Info

  • Stable: No
  • In-Place: Yes

Sleep Sort

O(max(n)) O(n)

Starts a timer for each element with a delay proportional to its value; elements are emitted in sorted order as the timers fire. Note: A joke algorithm born on an anonymous tech message board.

✓ Stats

  • Best: O(max(n))
  • Worst: O(max(n))

⚠ Info

  • Stable: No
  • In-Place: No

Slowsort

O(n^(log n)) O(n)

A deliberately pessimal "multiply and surrender" strategy: recursively sorts both halves, moves the larger last element into place, then recurses again on the rest. Note: Introduced by Andrei Broder and Jorge Stolfi in "Pessimal Algorithms and Simplexity Analysis" as a parody of divide and conquer.

✓ Stats

  • Best: O(n^(log n))
  • Worst: O(n^(log n))

⚠ Info

  • Stable: No
  • In-Place: Yes

Stalin Sort

O(n) O(1)

Walks the array once and discards every element that is smaller than its predecessor, yielding a sorted but shortened list. Note: A widespread internet meme demonstrating aggressive data loss disguised as optimization.

✓ Stats

  • Best: O(n)
  • Worst: O(n)

⚠ Info

  • Stable: Yes
  • In-Place: Yes

Bogo Sort

O(n*n!) O(1)

Randomly shuffles the array and checks if sorted. Expected time is factorial. Also known as "stupid sort" or "monkey sort". Use with very small arrays only. Note: The quintessential bad algorithm.

✓ Stats

  • Best: O(n)
  • Worst: O(∞)

⚠ Info

  • Stable: No
  • In-Place: Yes
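
The shuffle-and-check loop can be sketched as follows; the Fisher–Yates shuffle gives each permutation equal probability, and the loop only terminates when one of them happens to be sorted.

```javascript
// Bogo sort: shuffle until sorted. Only ever run this on tiny arrays;
// expected running time grows with n!.
function bogoSort(arr) {
  const isSorted = (a) => a.every((v, i) => i === 0 || a[i - 1] <= v);
  while (!isSorted(arr)) {
    for (let i = arr.length - 1; i > 0; i--) { // Fisher–Yates shuffle
      const j = Math.floor(Math.random() * (i + 1));
      [arr[i], arr[j]] = [arr[j], arr[i]];
    }
  }
  return arr;
}
```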