However, there are two problems. First, I’m assuming that your array is the spectrum from the FFT object. That spectrum has already lost the phase information: in the article the FFT output has real and imaginary parts, which are the “raw” output of the FFT, but Processing only exposes the magnitude, which is abs(real + i * imag), or mag(real, imag) if you think of each bin as a vector.
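Just to illustrate what I mean (re and im are made-up values here, since Processing never gives you the raw complex parts):

```
// how the magnitude relates to one bin's real and imaginary parts
float re = 0.6;                         // hypothetical real part
float im = 0.8;                         // hypothetical imaginary part
float magnitude = mag(re, im);          // Processing's built-in, same as:
float same = sqrt(re*re + im*im);       // abs(real + i * imag)
println(magnitude, same);               // both print 1.0
```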
The other problem is that an FFT is usually a “snapshot” in time. If you compute the FFT of a whole song of a few minutes, for example, you get one huge array (with real and imaginary parts it holds double the number of values as there are samples in the waveform). But this is not convenient because it doesn’t represent time: if you visualize it, you will have stationary spectrum bars for the whole song. What people usually do is calculate a spectrogram, which is the FFT of a small chunk of the song at a time, so each slice represents the frequency content of that short duration. That’s what Processing does (which is why it can animate). Unless you keep all of the FFT snapshots (so the data becomes an array of arrays), you cannot recover the original song.
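If you did want to collect those snapshots yourself, it could look something like this minimal sketch, assuming the processing.sound library and a placeholder file name "song.mp3":

```
import processing.sound.*;

SoundFile song;
FFT fft;
int bands = 512;  // number of frequency bands (must be a power of 2)
ArrayList<float[]> spectrogram = new ArrayList<float[]>();

void setup() {
  size(512, 360);
  song = new SoundFile(this, "song.mp3");  // placeholder file name
  song.play();
  fft = new FFT(this, bands);
  fft.input(song);
}

void draw() {
  // each frame, store a copy of the current spectrum snapshot;
  // the collected array of arrays is effectively a spectrogram
  float[] snapshot = new float[bands];
  fft.analyze(snapshot);
  spectrogram.add(snapshot);
}
```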
But I’m curious whether you can simply generate a waveform from the spectrum. The array you have probably contains the amplitude of each frequency. For example, if the data is [0.9, 0.5, 0.1, 0.2] and you already know that these correspond to 0, 4, 8, 16 Hz, then the original waveform (without phase) is a sum of cosines:
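y(t) = 0.9 * cos(2π·0·t) + 0.5 * cos(2π·4·t) + 0.1 * cos(2π·8·t) + 0.2 * cos(2π·16·t)

Here is a minimal sketch of that idea in Processing (the amplitude and frequency values are the made-up ones from above, not anything your FFT object produced):

```
// reconstruct a waveform from known amplitudes and frequencies,
// assuming zero phase for every component
float[] amps  = { 0.9, 0.5, 0.1, 0.2 };  // spectrum magnitudes
float[] freqs = { 0, 4, 8, 16 };         // their frequencies in Hz

// evaluate the reconstructed waveform at time t (in seconds)
float waveform(float t) {
  float sum = 0;
  for (int k = 0; k < amps.length; k++) {
    sum += amps[k] * cos(TWO_PI * freqs[k] * t);
  }
  return sum;
}

void setup() {
  size(600, 200);
}

void draw() {
  background(255);
  stroke(0);
  // draw one second of the reconstructed signal across the window
  for (int x = 0; x < width - 1; x++) {
    float t0 = x / float(width);
    float t1 = (x + 1) / float(width);
    line(x, height/2 - waveform(t0) * 50, x + 1, height/2 - waveform(t1) * 50);
  }
}
```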
Thank you for the detailed response! I am still just learning this, so I will probably try exploring these other methods of using audio data before I go ahead and use the FFT array.
And yes, I am using an array of amplitudes, so perhaps I will use that equation and see what results I can get!