So I’m using Processing to generate animations that I’ll composite later for a music video project I’m working on.
The problem is, I can’t get my sketches to render consistently so that I can edit them later to sync them with the music.
I know the tempo. 128 beats per minute.
If my math is right, that’s 468.75 milliseconds per beat.
Or 14.0625 frames per beat (at 30 fps).
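(Checking the arithmetic: 60000 ms ÷ 128 beats = 468.75 ms per beat, and 468.75 ms × 30 frames ÷ 1000 ms = 14.0625 frames per beat.)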
I want my sketch to do something on each beat.
Counting frames is pretty rock solid, except that of course I can’t count partial frames. Processing can’t do something every 14.0625 frames; it has to round. Consequently, the beat drifts quite frequently and I have to make a ton of edits.
On the other hand, using milliseconds is just as easy, offers more precision, and could potentially give me more accurate timing. But I’m realizing that millis() includes the thinking and saving time for each frame I save as the sketch runs. The sketch is doing what I told it to, but as soon as I start saving, it saves out many, many fewer frames than it should.
Are there any ways around this?
Is it possible to stop a timer during the save and restart once the next frame begins to draw? Or would that even solve my problem?
Maybe I’m asking too much from Processing?
It’s an odd problem to have. Let me know if I haven’t explained it clearly enough.
Any advice or suggestions for resources would be greatly appreciated.
Why don’t you save each frame as an image (saveFrame() at the end of the draw() cycle) and then make a video from those images? Each frame is perfectly rendered, and you can set the frame rate as you wish.
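(For example, saveFrame("frames/####.png"); at the end of draw() writes one numbered PNG per frame, with the #### replaced by the frame count, so the sequence imports straight into an editor; the folder and file names here are just placeholders.)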
No no. That’s exactly what I’m doing.
The problem is that events depicted in each frame do not have the correct timing to sync properly with a musical tempo.
(Or even a predictable timing.)
I’d check out the Video Export library (by hamoid, Abe Pazos). I seem to remember it had some frame rate reduction feature…
My intuition is that you can never be sure of the frame rate. Could you run the music and the sketch slower just for recording, then speed things up later for playback?
The frames don’t need to have a predictable timing. Don’t use millis() or any real-world clock for your animation timing. Instead use something like
float t = frameCount/60.;
to have, for instance, 60 frames per second. Time your animation events to match the timing of audio events in the music based on the frame number. You can’t sync it up in real time while Processing is running because it takes too long to encode the images and write them out.
What?
Of course you need predictable timing. For any two things to sync, you need consistent timing between them.
frameCount was my first thought too.
Unfortunately frames don’t have fine enough resolution for music timing.
For example: at 128 beats/minute, one beat is 14.0625 frames.
If you trigger an event every 14 frames, the tempo is now about 128.6 bpm (over half a bpm too fast).
If you trigger an event every 15 frames, the tempo is now 120 bpm. (8 bpm slower!!!)
This is wildly out of sync.
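For reference, those effective tempos are easy to check (still assuming 30 fps):
println( 30.0 * 60 / 14.0625 ); // 128.0 bpm, the target
println( 30.0 * 60 / 14 );      // ~128.57 bpm if you round down to 14 frames
println( 30.0 * 60 / 15 );      // 120.0 bpm if you round up to 15 frames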
Oh wow. That’s really interesting.
You mean going through and storing actual pixel data, right?
Ok. Sounds a bit over my head, but I’ll definitely research that. Thank you!
Ok… so you mean, like… still run the sketch at 30 fps, but use millis() for timing and increase the millisecond totals so it animates more slowly?
Then speed the whole thing up after importing into compositing software.
Hmmm… Not bad. It’s not as elegant a solution as I was hoping for.
It would still be unpredictable. But maybe the sync errors would be much smaller.
That’s a lot of wasted frames too.
You need consistent timing, but you don’t need real-world clock timing at the time you are generating the images. When Pixar makes movies, it can take a day to render an image, yet when it plays back at 24 frames per second, the audio still syncs up. They obviously don’t record their animation in real time using millis(). When you make an animation, you are taking a series of pictures that are discrete samples of what we trick our brain into thinking is a continuous motion. That motion can contain discrete events, but those events do NOT need to occur at exactly the same time as one of the pictures. If we see a ball falling towards the floor in one picture and heading away from the floor in the next, our brain sees it as bouncing. In the animation code, we can compute the exact time the ball should hit the floor and sync it up with the audio, but the frame rate at which we generate and play back that video can be independent of the exact timing of those events.
When you play back the final animation, you will be using some pre-written video player that likely supports only a limited set of framerates. It won’t have a variable frame rate that matches the beat of your music. The frames you save are uniformly generated samples of the motion in your animation, sampled in time, at a rate that will match your playback, presumably 60fps. The events in the animation are independent of that frame rate and will almost certainly fall in between the sampled frames. You can, and want to, time those animation events to perfectly sync up with your audio events. The frames you output, however, will not and do not need to match the exact timing of the animation events.
Analyze your music to get the timing of the animation, but let your Processing program compute the animation and output the frames on its own time, saving out the frames as it goes. Then use ffmpeg or whatever to put the frames and the audio together into a well-synchronized video.
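As a concrete example of that last step (the file names and settings here are only placeholders), something along the lines of ffmpeg -framerate 30 -i frames/frame-%04d.png -i song.mp3 -c:v libx264 -pix_fmt yuv420p -shortest out.mp4 will mux a numbered PNG sequence with the audio at 30 fps.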
Oh ok. I get it now.
You’re not understanding what I am asking.
As previously stated, I’ve already analyzed the timing I need Processing to produce, either with frames or with milliseconds.
I already know exactly when events need to occur. I don’t need to match timing during playback in Processing, I’m combining audio and visuals later.
The problem is that Processing can’t save images and render animation timing consistently. Using frames, it can’t trigger events between frames, and when using milliseconds, precious milliseconds get lost during the saving and drawing process, and not at a predictable rate, so I can’t compensate.
The frame count is an integer value. To get precise timing out of Processing to match a bpm, it would have to draw an event between frames, i.e. at a fractional (floating point) frame number. That can’t happen.
No, you’re still not understanding. You don’t need Processing to generate the frames in real time. It can happily take hours to generate each frame as long as the frames, when put together, match the timing of your audio. millis() uses real time and is the wrong thing to use for a saved animation. frameCount is just the index number of each frame. If you will play back your animation at 60 fps, then each frame occurs at a multiple of 1/60 seconds, indexed by frameCount. Define
float animationTime = frameCount / 60.;
and code all the events using that time to match the events in your audio and it WILL sync up. The events WILL happen in between frames and that doesn’t matter. Every movie or TV show you’ve ever seen does exactly that.
Animation time is mathematically continuous. You can schedule an event to occur at any time you want.
If you make the screen flash white for only an instant and it goes back to black in between frames, then, no you won’t see it. But any effect you’re going to make should have a duration and possibly motion blur that will last across multiple image frames. I can make an animation that matches any timing you give me. The playback will always be discrete because that’s the way the digital world works. Your monitor, your TV, film projectors all have fixed framerates, but events that happen in the videos you see are not tied to that frame rate. The display is just a discrete sampling of what we think of as a continuous signal.
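To make that concrete, here is a minimal sketch along those lines (128 bpm and 30 fps as discussed above; the white flash is just a stand-in effect). It checks whether a beat boundary fell between the previous frame and the current one, and holds the flash for a full frame instead of letting it vanish between samples:
float FPS = 30;
float BPM = 128;

void setup() {
  size( 400, 400 );
  frameRate( FPS );
}

void draw() {
  float beat = frameCount / FPS * BPM / 60;            // animation time in beats
  float prevBeat = (frameCount - 1) / FPS * BPM / 60;  // beat position at the previous frame
  boolean onBeat = floor( beat ) != floor( prevBeat ); // a beat boundary fell between the two frames
  background( onBeat ? 255 : 0 );                      // flash white for one full frame on each beat
  // saveFrame( "frames/####.png" );                   // assemble later at 30 fps
}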
I appreciate the effort.
But at this point you’re simply repeating yourself, and not addressing my problem.
Thank you for participating. I’m going to try some of the other suggestions.
This code pulses the circle every beat at 128 beats per minute, sampling the animation at 30 fps. Add code at the end of draw() to save the frames with whatever name you like. Combine them using your preferred software at 30 fps and it will match your music.
float sn( float x ) { return sin( TAU * x ); }
float cs( float x ) { return cos( TAU * x ); }

void setup() {
  size( 800, 800 );
  noStroke();
  frameRate( 30 );
}

void draw() {
  float t = frameCount / 30.;     // time in seconds
  float beat = t * 128 / 60;      // time in beats
  background(0);
  translate( width/2, height/2 );
  scale( width / 12. );
  float a = beat/16.;             // orbit angle: one revolution every 16 beats
  if( (floor( beat/16 ) & 1) == 1 )
    a = -a;                       // reverse the orbit on alternate 16-beat sections
  // circle orbits at radius 3; its diameter steps 1..4, advancing on every beat
  circle( 3*sn(a), -3*cs(a), 1+(floor(beat)%4) );
  // if( beat < TOTAL_NUMBER_OF_BEATS_IN_SONG )
  //   saveFrame( whatever );
  // else
  //   exit();
}
If you want to change the frame rate to 60, do it both in setup() and in the first line of draw().
Obviously you are very knowledgeable about this. I appreciate it.
I understand that making the movie can be slow, and that when you play the movie back it’s much faster.
But I have a question. Say the OP has a song and a new drum comes in at 3600 ms, and he wants to start a new image or animation there. So in the sketch that is making the movie, he needs to know the time within the song.
The millis.
@leatherbird would have to tell us how he times the events for his song. Programs like Winamp use Fourier analysis to detect changes in the audio at different frequencies. They play the song with a second or two of delay so that they can compute and show video effects in sync with the playing audio. I’ve never done that myself.
In the Processing animation code, t is the number of seconds into the song, so it’s trivial to check when t > 3.6 to change state at 3600 ms. The point is to code everything as a closed function of animation time, using either my “t” or “beat”. You can’t do forward integration of a motion, adding a little more each frame. Instead, every object needs to have a motion function that says something like
if( t < t0 ) pos.set( x0(t), y0(t) );
else if( t < t1 ) pos.set( x1(t), y1(t) );
else if( t < t2 ) pos.set( x2(t), y2(t) );
and so on. You can then sample the animation at any time, or at any frame rate, and it’ll know what to show. The t0, t1, and t2 are based on the exact timing of events in the music and it doesn’t matter if they fall in between frames of the animation.
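A rough sketch of that structure (the event times, positions, and motion functions below are made up purely for illustration):
float t0 = 3.6;  // e.g. a drum entrance measured at 3600 ms into the song
float t1 = 7.6;  // a later (assumed) event time

PVector pos( float t ) {
  if( t < t0 )      return new PVector( 100 + 40*t, 200 );                   // drift right until the drum
  else if( t < t1 ) return new PVector( 244, 200 + 80*sin( TAU*(t-t0)/4 ) ); // oscillate for the next section
  else              return new PVector( 244, 200 );                          // then hold
}

void setup() {
  size( 400, 400 );
}

void draw() {
  background(0);
  float t = frameCount / 30.;  // animation time, never millis()
  PVector p = pos( t );
  circle( p.x, p.y, 20 );
}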
And none of this needs to, or even can, happen in real time. Modern computers run multi-tasking operating systems that interrupt your program hundreds or thousands of times per second to switch between tasks. There is no guarantee that some other task won’t pause Processing and cause a hiccup if you’re trying to time things using millis().
I always record my frames as PNGs which have full lossless colors but good compression and then use ffmpeg to convert to animated GIF or mp4. PNGs are slightly slower to write out, but since I’m not trying to record using real-world wall clock time, the time doesn’t matter.