Frame export with a steady FPS

Hi all, I have quite a bit of experience with Processing and have always loved it. However, I’m running into some trouble.

I’m currently trying to create an audio-reactive (FFT) video clip. As this is a video clip, at the end of every draw() cycle, I’m exporting the frame to the filesystem. Afterwards, I will reassemble the frames into a video and sync it up to the audio.

This means I don’t really care about ‘real time rendering’. What I care about is a steady frame rate.

For example: if I have a sketch that is set to 30 FPS, all goes well as long as the running time of draw() does not exceed 1/30th of a second. However, if the running time exceeds that period, Processing will simply drop the frame rate, resulting in fewer exported frames per second. I would like to change this behaviour.

I don’t care how long it takes to render a single frame. All I care about is that the sketch exports 30 frames per 1 second of audio. This also means that the audio playback speed depends on the time it takes to render a single frame: if a frame takes more than 1/30th of a second to render, the audio needs to be slowed down (compared to real time) in order to get accurate FFT values.

I realize that this isn’t what Processing was built for, but I’m wondering if there is a way… I’m happy to fork or change the core if needed, but I don’t know where to start.

I think that an option to toggle real time rendering would add a lot of value, as it basically eliminates any hardware requirements for complex sketches.

I hope that was clear. Please ask if clarification is needed!

Thank you!


Hi @ssi_karim,

This is an interesting issue :wink:

What first comes to my mind is to save the image in another thread to offload work without blocking the rendering loop.

I found previous threads about this, but I don’t know if they really solve it:


Also the documentation on thread():
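Stripped of the Processing API, the same offloading idea can be sketched in plain Java: hand each frame to a single background writer thread so draw() never blocks on disk I/O. All class and file names below are made up for illustration; one caveat in an actual sketch is that you must copy the frame’s pixels before passing them to another thread, since the renderer reuses the buffer.

```java
import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.File;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class AsyncFrameSaver {
    // A single background thread preserves frame order and keeps draw() unblocked.
    private final ExecutorService writer = Executors.newSingleThreadExecutor();

    public void saveFrameAsync(BufferedImage frame, String path) {
        writer.submit(() -> {
            try {
                ImageIO.write(frame, "png", new File(path));
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
    }

    // Call once at the end so queued frames finish writing before the program exits.
    public void finish() throws InterruptedException {
        writer.shutdown();
        writer.awaitTermination(1, TimeUnit.MINUTES);
    }

    public static void main(String[] args) throws Exception {
        AsyncFrameSaver saver = new AsyncFrameSaver();
        BufferedImage img = new BufferedImage(4, 4, BufferedImage.TYPE_INT_RGB);
        saver.saveFrameAsync(img, "frame-0001.png");
        saver.finish();
        System.out.println(new File("frame-0001.png").exists());
    }
}
```

Note that this only hides the saving cost; if rendering itself is slower than 1/30 s, the original sync problem remains.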


I’m going out on a limb here, but have you considered not using the (inconsistent) FPS mechanism of void draw? You could call noLoop() and use a for loop that runs once per frame needed (based on the length of your audio). That way you’re certain you end up with the right number of frames.
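The bookkeeping for that loop, stripped of the Processing API, might look like the plain-Java sketch below (the 12.5-second track length and the method names are just placeholders): one iteration per output frame, however long each frame takes to render.

```java
// Offline render loop: one iteration per output frame, regardless of render cost.
public class OfflineRender {
    // How many frames a track of the given length needs at the given frame rate.
    public static int frameCountFor(double audioSeconds, int fps) {
        return (int) Math.ceil(audioSeconds * fps);
    }

    public static void main(String[] args) {
        int fps = 30;
        double audioSeconds = 12.5;                          // placeholder track length
        int totalFrames = frameCountFor(audioSeconds, fps);  // 375 frames

        for (int i = 0; i < totalFrames; i++) {
            double t = (double) i / fps;  // the moment in the track this frame shows
            // renderFrame(t); saveFrame(i);  // sketch-specific work would go here
        }
        System.out.println(totalFrames);  // prints 375
    }
}
```

In a real sketch you would call noLoop() in setup() and do this work via redraw() or directly in a loop, but the frame/time arithmetic is the same.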


Good idea! :wink:

It means that you would need to sample the audio at a certain point in time and then run the FFT on it. I don’t know if that’s possible with the sound library…


Hi both! Thanks for your replies!

I actually never thought of using noLoop(). That is quite a neat idea!

@josephh that is indeed probably going to be tricky, as that would require fine control of the playback speed.

Another option I’m thinking of is to pre-analyse the FFT data. If I can get an array with the FFT values for every frame, I could just use that array instead of doing the FFT analysis in the loop itself. However, I’m not sure that is possible either :smiling_face_with_tear:


I think this is an even better idea. :slight_smile:

Anyway, another approach I’ve used to deal with this problem is to store each frame in memory, in a List of PImages, instead of saving them to disk in each draw loop. After all the data has been processed, I save all the stored frames at once. Worked fine for me.


If you time your animations based on the frame count, then it doesn’t matter how long it takes to render or save out each frame. Base your animations on a time variable computed from the frame count, such as

float t = frameCount/30.0;

and then save each frame at the end of draw(). Processing will NOT drop any frames; they will simply lag behind the real-world clock.

External to Processing, you can then use ffmpeg (or whatever you prefer) to combine the frames into an animation along with your audio track.

Just be sure that you are using that same frame-computed time variable for indexing into your audio track. You can’t use computed time for the animation but wall-clock time for the audio and expect them to match up.
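One way to read this advice, sketched in plain Java with assumed constants (30 fps output, 44.1 kHz audio; the class and method names are illustrative): derive both the animation time and the audio read position from the frame counter, never from the wall clock.

```java
public class FrameClock {
    static final int FPS = 30;           // target output frame rate (assumed)
    static final int SAMPLE_RATE = 44100; // audio sample rate (assumed)

    // Seconds of audio/animation that a given frame corresponds to.
    static double frameTime(int frameCount) {
        return (double) frameCount / FPS;
    }

    // Index of the first audio sample belonging to a given frame.
    static int sampleOffset(int frameCount) {
        return frameCount * SAMPLE_RATE / FPS;
    }

    public static void main(String[] args) {
        System.out.println(frameTime(60));     // prints 2.0 — frame 60 is 2 s in
        System.out.println(sampleOffset(60));  // prints 88200
    }
}
```

Both the visuals and the FFT lookup then stay in lockstep no matter how slowly frames render.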


@scudly what exactly do you mean by basing it on a frameRate-derived value?

I usually update variables simply based on the draw() iterations (e.g. x += 1). I think that should work too, though.

Maybe I formulated the question in the wrong way. The problem isn’t so much that the frameRate itself drops. As you say, it won’t drop the frameRate, but it will slow down compared to the real-world clock.

The problem is that the audio playback doesn’t slow down with it. If you overload a sketch with too many objects you’ll notice that frames are rendered slower than usual, but the audio playback stays consistent. The result of this is that there is an offset between frames and the playhead position, causing frames to be rendered with incorrect FFT values.

By the way: I feel that the standard Processing sound and FFT classes are quite basic, so I’m currently looking at the Minim library (see GitHub - ddf/Minim: A Java audio library, designed to be used with Processing.)

@all, your responses have been very helpful, thank you!

A small update:

I’m currently looking into the possibility to ‘pre-process’ the FFT values. If I can get an array with the FFT values for every frame, I can just grab those and don’t need the AudioPlayer at all :).

Any thoughts?
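A rough sketch of that pre-analysis idea in plain Java, using a naive DFT as a stand-in for a real FFT library (a tiny 3 kHz sample rate keeps the demo fast; all names and numbers are illustrative): chop the samples into one chunk per video frame and store one spectrum value per chunk, so the draw loop only ever indexes an array.

```java
public class OfflineAnalysis {
    // Magnitude of frequency bin k for one chunk (naive DFT, fine for offline use).
    static double binMagnitude(double[] chunk, int k) {
        double re = 0, im = 0;
        int n = chunk.length;
        for (int i = 0; i < n; i++) {
            double angle = 2 * Math.PI * k * i / n;
            re += chunk[i] * Math.cos(angle);
            im -= chunk[i] * Math.sin(angle);
        }
        return Math.sqrt(re * re + im * im) / n;
    }

    // One spectrum value per video frame: hop through the samples fps times per second.
    static double[] perFrameBin(double[] samples, int sampleRate, int fps, int bin) {
        int hop = sampleRate / fps;          // samples of audio per video frame
        int frames = samples.length / hop;
        double[] out = new double[frames];
        for (int f = 0; f < frames; f++) {
            double[] chunk = new double[hop];
            System.arraycopy(samples, f * hop, chunk, 0, hop);
            out[f] = binMagnitude(chunk, bin);
        }
        return out;
    }

    public static void main(String[] args) {
        int sampleRate = 3000, fps = 30;            // tiny rates to keep the demo fast
        double[] samples = new double[sampleRate];  // 1 second of a pure 300 Hz tone
        for (int i = 0; i < samples.length; i++)
            samples[i] = Math.sin(2 * Math.PI * 300 * i / sampleRate);

        // With hop = 100 samples, bin k covers k * 30 Hz, so 300 Hz lands in bin 10.
        double[] v = perFrameBin(samples, sampleRate, fps, 10);
        System.out.println(v.length);  // prints 30 — one value per frame per second
        System.out.println(v[0]);      // ~0.5 for a unit-amplitude sine
    }
}
```

With a real FFT library the analysis call changes, but the frame-indexed array is the same, and the AudioPlayer is no longer needed during rendering.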

Yes, you need to detach the FFT audio analysis from real-world playback and treat this entirely as a non-realtime computation. If Processing’s audio library only supports realtime analysis, then your animation will always drift from the music, because you simply can’t analyze the audio, compute an animation, render it, and save out the images all at full speed. The only case where you would need realtime analysis is playing along with live audio, such as in a live performance.

If you can use a better FFT library where you can fully analyze the audio independent of playback, then you can even add anticipatory animations such as a ball dropping that hits the ground on a drum beat or bass drop.

You don’t need to use x += 1 per frame. The built-in variable frameCount already does that for you. My point is just that if the animation you are producing will play back at 30 fps, then you want to analyze your audio in 1/30 second chunks and produce an animation that advances in 1/30 second chunks. Ignore Processing’s frameRate() or millis() and just use frameCount to index how far you are into the audio/animation.


Thanks for your quick reply @scudly!

I think the Minim library I mentioned allows for that. It offers both an audio player and an FFT class with public APIs that look more usable. The AudioPlayer class, for example, has a method to move the playhead around, so I guess I can set it to the right position for every frame and fetch the FFT values.

Regarding the x += 1 example: I didn’t mean that I keep track of the frame count myself by incrementing a custom variable. I just meant that I’m used to updating values (e.g. coordinates or colour values) on every iteration of the draw loop. I mentioned it because I was a bit confused; those updates will work regardless, because the number of iterations won’t change. Hope that makes sense.

Now I have to dive into getting the right buffer size, sample rate, etc. so that I end up with 30 values for each second of audio. I’m not too familiar with this, so that might take a while.
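The arithmetic part of that is just the hop size: samples per frame equals the sample rate divided by the frame rate (assuming it divides evenly, which it does for 44.1 kHz at 25 or 30 fps). A tiny sketch of the numbers, with the usual 44.1 kHz rate assumed:

```java
public class HopSize {
    // Samples of audio consumed per video frame (assumes fps divides the rate evenly).
    static int samplesPerFrame(int sampleRate, int fps) {
        return sampleRate / fps;
    }

    public static void main(String[] args) {
        System.out.println(samplesPerFrame(44100, 30)); // prints 1470
        System.out.println(samplesPerFrame(44100, 25)); // prints 1764
        // Note 1470 is not a power of two; FFT libraries that require one
        // (e.g. a 1024- or 2048-sample window) can still be stepped forward
        // by 1470 samples per frame, analysing a longer, overlapping window.
    }
}
```

So the hop size fixes "values per second", while the FFT window size can be chosen independently.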

If I have something working I will post it here for anyone interested. Open source forever!


Hey! I have been stuck with the same problem for very long.
Have you figured it out yet? Were you able to get the right bufferSize, sampleRate etc to end up with 30 values for each second?

Hey @ssi_karim !
I seem to have solved it. I used the default sound library for this.

import processing.sound.*;

int FPS = 25;
float startPos = 0.0;
float endPos = 10.0; // for 10 seconds of audio
float a = startPos;

// SoundFile, Amplitude, BeatDetector etc. variable declarations
SoundFile soundFile;
Amplitude amplitude;
BeatDetector beatDetector;
float amp;
boolean beat;

void setup() {
  size(1280, 720);
  // setup code here: load the SoundFile and attach the analyzers to it
}

void draw() {
  amp = amplitude.analyze();
  beat = beatDetector.isBeat();

  // use the values of amp and beat to create visuals

  saveFrame("frames/####.png"); // export the current frame

  a += 1.0 / FPS; // advance our own frame-based clock
  if (a >= endPos) exit();
}