I built a music visualization system that maps audio frequencies onto a Fermat spiral, mimicking how the human cochlea processes sound.
How it works:
- 381 logarithmically spaced frequency bins (20 Hz to 8 kHz)
- Bins mapped onto a Fermat spiral (r = √θ), echoing the cochlea's spiral structure
- ISO 226 equal-loudness normalization for perceptual accuracy
- Chromesthesia color mapping: frequency → hue (low = red, high = cyan)
- 60 FPS rendering with melodic trails, rhythm pulses, and harmonic auras
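For reference, the log-spaced bin layout from the first bullet can be sketched in a few lines. NumPy's `geomspace` is one way to get it; the bin count and frequency range come from the post, while the variable names are mine:

```python
import numpy as np

# 381 logarithmically spaced bin centers from 20 Hz to 8 kHz
# (parameters from the post; this is a sketch, not the project's code).
N_BINS = 381
freqs = np.geomspace(20.0, 8000.0, N_BINS)

# Log spacing means a constant ratio between adjacent bins,
# matching the roughly logarithmic pitch sense of the ear.
ratio = freqs[1] / freqs[0]
```

Because the ratio between neighbors is constant, each octave gets the same number of bins, which is what makes musical intervals look regular on the spiral.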
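The Fermat spiral placement might look roughly like this; `turns` and `scale` are illustrative knobs I made up, not values from the project:

```python
import numpy as np

# Map bin index i to a point on a Fermat spiral r = sqrt(theta).
# A minimal sketch: angle grows linearly with bin index, radius as sqrt.
def spiral_xy(i, n_bins=381, turns=6.0, scale=40.0):
    theta = 2.0 * np.pi * turns * (i / (n_bins - 1))
    r = scale * np.sqrt(theta)  # Fermat spiral: r = sqrt(theta)
    return r * np.cos(theta), r * np.sin(theta)

# One natural orientation: low frequencies near the center, high
# frequencies winding outward, loosely echoing the cochlea's layout.
```

A useful property of r = √θ is that successive turns get closer together, so the spiral fills the plane at a fairly even density rather than spreading out like an Archimedean spiral.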
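True ISO 226 normalization needs the standard's tabulated contours, which I won't reproduce here. As a simplified stand-in that captures the same idea (attenuate frequencies the ear hears as quieter), the classic A-weighting curve can be written in closed form:

```python
import math

# Simplified stand-in for equal-loudness weighting. This is the
# A-weighting curve (IEC 61672), NOT ISO 226, but it serves the same
# role: de-emphasize lows/highs, keep the midrange near 0 dB.
def a_weight_db(f):
    f2 = f * f
    ra = (12194.0**2 * f2 * f2) / (
        (f2 + 20.6**2)
        * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    return 20.0 * math.log10(ra) + 2.0  # normalized to ~0 dB at 1 kHz
```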
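The chromesthesia mapping could be approximated with a hue ramp over log frequency. The actual palette is the author's; this just sketches the low-red-to-high-cyan idea with the standard-library `colorsys` module:

```python
import colorsys
import math

# Hypothetical frequency -> color ramp: hue runs linearly in log
# frequency from 0.0 (red) at 20 Hz to 0.5 (cyan) at 8 kHz.
def freq_to_rgb(f, f_lo=20.0, f_hi=8000.0):
    t = (math.log(f) - math.log(f_lo)) / (math.log(f_hi) - math.log(f_lo))
    hue = 0.5 * min(max(t, 0.0), 1.0)  # clamp, then scale to red..cyan
    return colorsys.hsv_to_rgb(hue, 1.0, 1.0)
```

Ramping in log frequency rather than linear frequency keeps each octave the same width in color space, consistent with the log-spaced bins.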
The result: you can literally see harmony. When notes align, the spiral lights up in symmetric patterns; dissonance creates beautiful chaos.

Built in Python with scipy (FFT), PIL (rendering), and FFmpeg (encoding).
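As a rough sketch of how those pieces could fit together for a single frame, here is a stand-in 440 Hz tone run through scipy's FFT and painted onto a PIL image (spiral constants are illustrative, loudness weighting and trails omitted; frames would then be piped to FFmpeg):

```python
import numpy as np
from scipy.fft import rfft, rfftfreq
from PIL import Image, ImageDraw

# Stand-in input: a 440 Hz sine at 44.1 kHz, one 2048-sample window.
SR = 44100
samples = np.sin(2 * np.pi * 440.0 * np.arange(2048) / SR)

# Windowed magnitude spectrum for this frame.
spectrum = np.abs(rfft(samples * np.hanning(len(samples))))
freqs = rfftfreq(len(samples), 1.0 / SR)

# Paint each in-range frequency as a dot on a Fermat spiral,
# brightness proportional to magnitude.
img = Image.new("RGB", (400, 400), "black")
draw = ImageDraw.Draw(img)
for f, mag in zip(freqs, spectrum):
    if not 20.0 <= f <= 8000.0:
        continue
    theta = 12.0 * np.pi * np.log(f / 20.0) / np.log(8000.0 / 20.0)
    r = 15.0 * np.sqrt(theta)  # Fermat spiral placement
    x, y = 200 + r * np.cos(theta), 200 + r * np.sin(theta)
    b = int(min(mag / spectrum.max(), 1.0) * 255)
    draw.ellipse([x - 2, y - 2, x + 2, y + 2], fill=(b, b, b))
# img.save("frame.png")  # one frame of the 60 FPS sequence
```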
- Watch it in action: https://www.youtube.com/watch?v=66RiYBl7aQY
- More examples on the channel: youtube.com/@NivDvir-ND
I’d love to hear thoughts from the creative coding community — especially anyone working with audio-reactive visuals or signal processing.