I’ve done a few experiments with sound visualization in Processing using the Sound and Minim libraries, but I always run into the same problem: the frequency amplitudes vary a lot from frame to frame, and the result is a jittery transition that isn’t easy on the eyes.
I’m wondering whether there’s a better approach to getting smoothed values out of an FFT analysis, so that the transitions aren’t so abrupt.
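One common trick, independent of either library, is to keep a persistent copy of the spectrum and interpolate each bin toward its new value every frame (an exponential moving average, similar to what Processing's `lerp()` does). A minimal sketch in plain Java; `FftSmoother`, `bands`, and the `alpha` value are my own illustrative names, not API from Sound or Minim:

```java
// Per-bin exponential smoothing of FFT magnitudes.
// Lower alpha = smoother but slower to react; higher alpha = more responsive.
public class FftSmoother {
    private final float[] smoothed;
    private final float alpha; // between 0 and 1

    public FftSmoother(int bands, float alpha) {
        this.smoothed = new float[bands];
        this.alpha = alpha;
    }

    // Call once per draw frame with the raw spectrum; returns the smoothed spectrum.
    public float[] update(float[] raw) {
        for (int i = 0; i < smoothed.length; i++) {
            // Move each bin a fraction of the way toward its new value.
            smoothed[i] += alpha * (raw[i] - smoothed[i]);
        }
        return smoothed;
    }
}
```

In a sketch you would call `update()` with whatever array the library's FFT analysis gives you each frame and draw the returned values instead of the raw ones.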
Sure, one possibility is to read a song file once just to collect the values, smooth them, and then draw the smoothed values while the song plays. But what if I need smoothed values from a real-time analysis?
Following the same logic, maybe what I need is a buffer holding something like one second of sound, and then the same process: read, smooth the values, and draw. It wouldn’t be truly real-time, but perhaps the buffer size could be fine-tuned so the values come out smooth while keeping an apparent synchrony with the sound.
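That buffered idea could be sketched as a per-bin moving average over the last N analysis frames, held in a ring buffer; the delay you'd see is roughly half the window length. Everything here (`SpectrumAverager`, `frames`, `bands`) is a hypothetical helper I made up for illustration, not library API:

```java
// Moving average of the last N spectra, kept in a ring buffer.
// With e.g. 60 frames at 60 fps, this averages about one second of analysis.
public class SpectrumAverager {
    private final float[][] history; // ring buffer of recent spectra
    private final float[] sum;       // running per-bin sum of the buffer
    private int index = 0;           // next slot to overwrite
    private int filled = 0;          // how many slots hold data so far

    public SpectrumAverager(int frames, int bands) {
        history = new float[frames][bands];
        sum = new float[bands];
    }

    // Call once per frame with the raw spectrum; returns the per-bin average.
    public float[] update(float[] raw) {
        for (int i = 0; i < sum.length; i++) {
            sum[i] += raw[i] - history[index][i]; // replace the oldest frame
            history[index][i] = raw[i];
        }
        index = (index + 1) % history.length;
        if (filled < history.length) filled++;
        float[] avg = new float[sum.length];
        for (int i = 0; i < sum.length; i++) {
            avg[i] = sum[i] / filled;
        }
        return avg;
    }
}
```

A longer window gives smoother values but lags further behind the audio, so the window length is exactly the tuning knob described above.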
I’m not very familiar with these libraries, so I’m wondering whether they have a proper method for this, or whether someone here can suggest a better approach.