Is there a function or way to use the FFT object without playing the sound? For example, could a loop pull the data out of the audio source before a sound is played? I know there are tools out there that can generate CSVs and such, but I'm wondering whether the FFT object can natively do this without playing the sound. An example would be to analyze the audio data of a sound object and then do something with that data later, without the user having played or heard the sound.
Under the hood the FFT class creates an AnalyserNode [sic], which is intended for processing real-time audio streams. However, this StackOverflow answer hints at a way to do it using OfflineAudioContext, though it looks like that will only work in Chrome-based browsers and possibly Safari.
Unfortunately, I looked into doing something like this with p5.sound's SoundFile and FFT classes, and I think it is pretty much impossible: p5.sound contains many hard-coded references to a single global AudioContext.
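If you don't need p5.sound specifically, one workaround (a rough sketch, not anything the library provides) is to grab the raw samples yourself and compute the spectrum offline with a plain DFT. In a browser you would get the `Float32Array` of samples via `decodeAudioData` and `AudioBuffer.getChannelData`; here a synthetic 440 Hz sine stands in for decoded audio:

```javascript
// Offline magnitude spectrum via a naive DFT -- no playback required.
// In a real sketch `samples` would come from
// audioBuffer.getChannelData(0) after decodeAudioData; here we
// synthesize one 1024-sample frame of a 440 Hz sine at 44.1 kHz.
const sampleRate = 44100;
const fftSize = 1024;
const freq = 440;

const samples = new Float32Array(fftSize);
for (let n = 0; n < fftSize; n++) {
  samples[n] = Math.sin((2 * Math.PI * freq * n) / sampleRate);
}

// Naive DFT: O(N^2), but fine for a one-off preprocessing pass
// over short frames before the song ever plays.
function magnitudeSpectrum(frame) {
  const N = frame.length;
  const mags = new Float32Array(N / 2);
  for (let k = 0; k < N / 2; k++) {
    let re = 0, im = 0;
    for (let n = 0; n < N; n++) {
      const phase = (2 * Math.PI * k * n) / N;
      re += frame[n] * Math.cos(phase);
      im -= frame[n] * Math.sin(phase);
    }
    mags[k] = Math.sqrt(re * re + im * im) / N;
  }
  return mags;
}

const mags = magnitudeSpectrum(samples);

// The loudest bin should sit near 440 Hz (bin * sampleRate / fftSize).
let peak = 0;
for (let k = 1; k < mags.length; k++) {
  if (mags[k] > mags[peak]) peak = k;
}
console.log(peak * sampleRate / fftSize); // ≈ 430.66 Hz, the closest bin to 440
```

For a whole song you would slide this over the file frame by frame (and swap in a real FFT library for speed), building the time/frequency data set before playback starts.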
Thanks so much for your response, really appreciate it. I figured this would not be possible, but thought I’d ask.
I’m doing a music-visualizer project for school, and I had an idea to approach the music as a function of time (like creating an infographic) from the data. I’ll just have to let the infographic develop while the song is playing. I suppose another way is to analyze the song using a predetermined CSV file, but I think that takes the beauty out of it and makes it less dynamic.
You can, however, set the volume to 0 and still produce an FFT analysis. Playback speed should be kept at the original rate, because the FFT's effective resolution depends on playback speed: the faster the playback, the lower the resolution, and the slower the playback, the higher the resolution.
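To make that trade-off concrete: an analyser's bin width is `sampleRate / fftSize`, and playing the file at `speed` times the original rate scales every frequency in the material by that factor, so each fixed-width bin covers proportionally more of the original song's spectrum. A small arithmetic sketch (the function name is just for illustration):

```javascript
// Effective frequency resolution, measured against the original
// recording, when the file is played at `speed` times normal rate.
// Assumes the analyser's sample rate and fftSize stay fixed.
function effectiveResolutionHz(sampleRate, fftSize, speed) {
  const binWidth = sampleRate / fftSize; // Hz per FFT bin at 1x speed
  return binWidth * speed;               // each bin spans more of the song
}

console.log(effectiveResolutionHz(44100, 1024, 1)); // ~43.07 Hz at normal speed
console.log(effectiveResolutionHz(44100, 1024, 4)); // ~172.27 Hz at 4x: 4x coarser
```

So a fast "silent pre-scan" would smear out exactly the frequency detail a visualizer wants.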
I had this same thought, but the resolution issue is a deal-breaker. The idea would have been to populate the data during a brief preloading period, but playing the sound super fast would give me a bad data set.