ArrayIndexOutOfBoundsException on FFT analyze when bands exceed 2^14

So I know that the number of bands you assign to an FFT needs to be a power of 2, but when I go over 2^14 (16384) to 2^15 (32768) bands, the fft.analyze(spectrum) line gets highlighted with an ArrayIndexOutOfBoundsException. Is this a limitation of hardware, of Processing, or something else? Is there a way around it? There are many more than 16384 frequencies, so this is really inconvenient.

Example code of the problem below.

import processing.sound.*;

SinOsc sin;
FFT fft;
int bands = 16384; // 32768 is out of bounds.
float[] spectrum = new float[bands];

void setup() {
  sin = new SinOsc(this);      // sine oscillator as the FFT input
  fft = new FFT(this, bands);
  fft.input(sin);
}

void draw() {
}

void mouseReleased() {
  sin.stop();
  sin.play(440, 1);        // 440 Hz test tone
  fft.analyze(spectrum);   // highlighted with ArrayIndexOutOfBoundsException at 32768 bands
}

Hi, it seems I could not help you over in Frequencies not adding up here with my explanation about bands, sorry.

Now for this question: yes, for speed the DFT is done on an array with an integer pointer, using integer values (and not double or floating point …), so the band limit on your computer seems reasonable.

Again, Processing is not doing the FFT itself; it just sends a sample array. When you set (using the sound lib)

bands = 512;

it sends batches of 1024 samples, and your FFT shows your 440 Hz as a peak at about band 10.
Here are some tests with your modified code (showing the printed FFT):

// pls set:
int bands = 512;
// and test:
//      Hz -> band
//      50 -> 1
//     440 -> 10
//  10,000 -> 232
//  20,000 -> 464 of 512 *

( * my computer/sound card, or my ears, don't work at that frequency )
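For reference, a minimal version of such a test sketch (my own reconstruction, not the exact code used above) that plays 440 Hz and prints the band with the maximum amplitude:

import processing.sound.*;

SinOsc sin;
FFT fft;
int bands = 512;
float[] spectrum = new float[bands];

void setup() {
  sin = new SinOsc(this);
  sin.play(440, 1);           // 440 Hz test tone
  fft = new FFT(this, bands);
  fft.input(sin);
}

void draw() {
  fft.analyze(spectrum);
  // find the band with the largest amplitude
  int peak = 0;
  for (int i = 1; i < bands; i++) {
    if (spectrum[i] > spectrum[peak]) peak = i;
  }
  println("peak at band " + peak);  // expect about band 10 for 440 Hz
}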

So 512 bands is what you have to use.

Actually, when you complain about the frequency resolution of an FFT, it should be about the low range (< 50 Hz), if that is an issue.


Sorry, why do you recommend 512 bands?

I know 16384 gives 32768 bands, but I’d like to use more. No solution on that end?

Using the sound lib, bands = 16384 means it sends an array that is 32768 samples long to the system to compute the DFT, and it gets back an array of 16384 amplitudes, where [0] is a kind of offset/average of everything below the frequency whose amplitude is in [1].
So… what is that (lowest) frequency at [1] in that case??

If your signal is 44 kHz, it has 44000 samples in one second, so 32768 samples are 0.74 seconds, and [1] holds the amplitude of a 1.3 Hz sine.

And your highest? [16383] is 22 kHz.
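To make that arithmetic concrete, a small sketch of my own (assuming a 44100 Hz sample rate, which the round numbers above only approximate):

float sampleRate = 44100;
int bands = 16384;

// each band spans sampleRate / (2 * bands) Hz,
// so band i sits at roughly this frequency:
float bandFrequency(int i) {
  return i * sampleRate / (2.0f * bands);
}

void setup() {
  println(bandFrequency(1));      // ~1.35 Hz, the lowest bin
  println(bandFrequency(16383));  // ~22049 Hz, just under half the sample rate
}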

Longer arrays do not seem to be possible.

And as I showed you with bands = 512, you can also see frequencies up to 22 kHz, and that is all an FFT can do on a 44 kHz signal (sampling theorem); the number of bands cannot change anything about that.

I think you still mix up bands and frequencies? Or do I misunderstand your question?

Tell us what you are up to, please.


Ok, I think I actually get the difference with regards to the bands now. The number of samples the FFT takes from the original clip is double the number of bands, yes? I’ve been doing the calculation right, but just didn’t seem to get what was going on.

But you’re saying spectrum[0] is not the amp of the lowest frequency, but the amp of the average frequency?

What I’m up to… is something over my head. Trying to figure out how to perfectly recreate sound samples in a format that’s very easy to manipulate.

I guess I should ask: what’s the most basic way to produce a sound from the PC? I thought it was with a sine wave, but I think it’s really just power over time (and the resulting frequencies), though I’m not sure at the moment how you’d play power.

No, it is an offset, but frequencies below [1] are also seen as a partial offset.
Even more difficult: if a sine fits exactly into a band you get one peak, and [0] and all other bands show 0. If the sample batch is out of phase with that sine (which is normal), you get a distribution ([0] is also not 0).

Think of your sample showing a cosine instead of a sine; you think the FFT would not see it? It recreates it by giving you amplitudes for several sines that would add up to that cosine.
(OK, no problem, ignore it.)


Back to your point: you want to create sound easily with the Processing sound lib and use the FFT.

Good.

When you analyze a sound sample with an FFT and want to recreate it, you must use an INVERSE FFT (the sound lib does not offer one, but MINIM might do it:
http://code.compartmental.net/minim/fft_method_inverse.html
). Anyhow, you can do it manually:

signal = fft[0]/2 + fft[1]*sin(w) + fft[2]*sin(2*w) + ....

Is that what you are looking for?
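A minimal sketch of that manual reconstruction (entirely my own illustration; it ignores the phase information a real inverse FFT would use, so it only reconstructs signals built from sines):

// rebuild n samples from an array of FFT amplitudes;
// amps[k] is taken as the amplitude of the k-th harmonic
// of the fundamental angular step w (hypothetical names)
float[] resynthesize(float[] amps, float w, int n) {
  float[] signal = new float[n];
  for (int t = 0; t < n; t++) {
    signal[t] = amps[0] / 2;  // the offset term from above
    for (int k = 1; k < amps.length; k++) {
      signal[t] += amps[k] * sin(k * w * t);
    }
  }
  return signal;
}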


I’ll have to do research on the use case of Inverse FFT.

This link might be useful too, no?

With regards to SinOsc, the way I’ve read it, a single sample doesn’t represent any particular frequency; it’s only when they accumulate over time that they create a frequency and tone. So I was wondering if there’s a more basic way to output a sound than using an oscillator. I’ve actually tried with a sine oscillator and had some success, though the result is limited by the draw framerate and the number of oscillators that can be used at once, so that’s why I’m looking for something more basic. I think the link above might be doing that.

I don’t know which post of the topic you linked you refer to, but I don’t see how it fits FFT.

I have no idea if you can do it live with Processing, but in theory yes, you can make FFTs from a sampled sound, manipulate the amplitudes, and recreate the sound. An example would be a very good FILTER that disables one specific frequency (noise?) instead of a semi-analog filter.
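As a sketch of that filter idea, using MINIM’s FFT class from the link above (my own example; the silent 1024-sample buffer is just a placeholder for real audio data):

import ddf.minim.analysis.*;

float[] buffer = new float[1024];  // placeholder: one batch of audio samples

void setup() {
  FFT fft = new FFT(buffer.length, 44100);
  fft.forward(buffer);   // analyze the batch
  fft.setBand(10, 0);    // zero one band, i.e. remove that frequency (noise?)
  fft.inverse(buffer);   // recreate the samples without it
}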

But for now, just start with making sound by adding sines of different frequencies, as in the sketch below.
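A minimal sketch of that with the sound lib (the frequencies and amplitudes are arbitrary choices of mine):

import processing.sound.*;

SinOsc[] oscs = new SinOsc[4];
float fundamental = 220;  // arbitrary base frequency

void setup() {
  // play four harmonics, each quieter than the last
  for (int i = 0; i < oscs.length; i++) {
    oscs[i] = new SinOsc(this);
    oscs[i].play(fundamental * (i + 1), 0.5 / (i + 1));
  }
}

void draw() {
}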

By “very easy to manipulate” … what do you mean? Easy to change the amplitude, or pitch? Easy to pan in stereo? Easy to reverse / pause / speed / slow?

Digital computers are great at “perfectly recreate.” Unless there is a pseudo-random element, any algorithmic digital sound generation methods are going to produce the same results every time (for one piece of given hardware, of course). Unless I’m not understanding you, and you are talking about something like recording external audio for playback into a lossless encoding format…?


I mean being able to take an audio clip, capture all its individual samples (e.g. 44100 * seconds of clip), and perform any type of modulation on those samples over time. For example, if I captured a light touch on a guitar string, a medium strike, and a full-force heavy strike, I could examine how the samples between them change over time and be able to modulate any amount between those states, for example 2/3rds of the way to a medium strike. It would also be nice if I could pan each sample individually, or each frequency instead, but I’m not sure how possible that is or if it’s super intense on computing. Lots of question marks, you see, which is why I’m trying to do this bit by bit and not overloading the forum with questions, heh.
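Something like this interpolation sketch is what I imagine (spectrumLight and spectrumMedium are hypothetical arrays I would capture first with fft.analyze()):

int bands = 512;
float[] spectrumLight  = new float[bands];  // captured from the light touch
float[] spectrumMedium = new float[bands];  // captured from the medium strike

void setup() {
  float t = 2.0 / 3.0;  // 2/3rds of the way toward the medium strike
  float[] blended = new float[bands];
  for (int i = 0; i < bands; i++) {
    blended[i] = lerp(spectrumLight[i], spectrumMedium[i], t);
  }
}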

While a tone is a sum of frequencies, a sound is much more: ADSR.

https://processing.org/reference/libraries/sound/Env.html
or
http://code.compartmental.net/minim/adsr_class_adsr.html

I have only played with it in PYTHON:
http://kll.engineering-news.org/kllfusion01/articles.php?article_id=80
Please note the graphical way to edit the signal and ADSR with ONE mouse drag!
But you need a real Processing synthesizer specialist here.
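From the Env reference above, a minimal envelope example (the timing numbers are arbitrary):

import processing.sound.*;

SinOsc sin;
Env env;

void setup() {
  sin = new SinOsc(this);
  env = new Env(this);
}

void mousePressed() {
  sin.play(440, 0.5);
  // attackTime, sustainTime, sustainLevel, releaseTime
  env.play(sin, 0.01, 0.2, 0.5, 0.4);
}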
