I was using VideoExport and wanted to make a music video of sorts from an mp3 file. Everything seemed to mostly work: I would start playing the mp3 and call startMovie(), but when I closed the movie it would end up longer or shorter than my mp3, either chopping the audio off or padding it with silence.
The problem turned out to be that I was calling frameRate() with a number in setup(), and then passing PApplet's frameRate variable to VideoExport's setFrameRate(frameRate). Until draw() has executed, PApplet's frameRate still seems to hold its default value, so the two rates differed, and videoExport was creating the video at the wrong rate and getting out of sync.
Fix it by calling frameRate() and setFrameRate() with the same constant or variable set to the value you want. Thought I'd pass it along in case it isn't old news already.
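For example, something like this (just a minimal sketch of the fix; the file name and the drawing are placeholders):

import com.hamoid.*;

final int MOVIE_FPS = 30;  // one value used for both the sketch and the movie
VideoExport videoExport;

void setup() {
  size(640, 360);
  frameRate(MOVIE_FPS);                 // sketch frame rate
  videoExport = new VideoExport(this, "music_video.mp4");
  videoExport.setFrameRate(MOVIE_FPS);  // movie frame rate, same constant
  videoExport.startMovie();
}

void draw() {
  background(0);
  ellipse(frameCount % width, height/2, 20, 20);
  videoExport.saveFrame();
}

void keyPressed() {
  if (key == 'q') {
    videoExport.endMovie();
    exit();
  }
}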
One way to figure out how many frames you need to match an audio file is this: say your song is 3 minutes long and you are going to produce a video at 30 frames per second. That means you need exactly 3 minutes * 60 seconds per minute * 30 frames per second = 5400 frames. It does not matter at what frame rate Processing runs, only the frame rate you specify for the movie. While the program runs it may feel like the visuals go too slowly or too quickly, but that doesn't matter, as long as you have 5400 frames at the end.
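A rough sketch of that idea (the 3 minute duration, the file name and the drawing are made up):

import com.hamoid.*;

final int MOVIE_FPS = 30;
final int SONG_SECONDS = 3 * 60;                     // 3 minute song
final int FRAMES_NEEDED = SONG_SECONDS * MOVIE_FPS;  // 5400 frames
VideoExport videoExport;
int framesSaved = 0;

void setup() {
  size(640, 360);
  // no frameRate() call needed here: only the movie frame rate
  // and the number of saved frames matter for the result
  videoExport = new VideoExport(this, "song_video.mp4");
  videoExport.setFrameRate(MOVIE_FPS);
  videoExport.startMovie();
}

void draw() {
  background(0);
  // placeholder visuals: a progress bar across the song
  rect(0, height/2, map(framesSaved, 0, FRAMES_NEEDED, 0, width), 10);
  videoExport.saveFrame();
  framesSaved++;
  if (framesSaved == FRAMES_NEEDED) {  // exactly 3 minutes of video at 30 fps
    videoExport.endMovie();
    exit();
  }
}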
A trickier aspect is getting the visuals in sync with the music. There is an example included with the library called withAudioViz.
P.S. One more example: what if the audio is 2 minutes and 17 seconds long? That's (2 minutes * 60 seconds per minute + 17 seconds) = 137 seconds * 30 frames per second = 4110 frames.
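Or, as a tiny helper for that arithmetic:

int framesNeeded(int minutes, int seconds, int movieFps) {
  return (minutes * 60 + seconds) * movieFps;
}
// framesNeeded(2, 17, 30) == 4110
// framesNeeded(3,  0, 30) == 5400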
That's a good tip to know, since my computers aren't the fastest; if I were generating something to show in HD, as long as the necessary frames are there, I'm OK.
But if you are using the streaming support in video export, then the frame rates need to match:
if videoExport is using a higher frame rate than Processing's, your output video will be shorter than expected
if videoExport is using a lower frame rate than Processing's, the output video will be longer than expected and padded with silence
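To make that second case concrete with some numbers: if the sketch actually draws at 60 fps but videoExport is set to 30 fps, every real second of audio produces 60 frames that later play back over 2 seconds, so a 3 minute stream ends up as roughly a 6 minute video with about 3 minutes of silence at the end.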
Hi! Ah, now I see what you mean by "streaming support". You're talking about this example:
I haven’t used this feature.
But yes, recording audio with video in real time is harder. I mentioned it in this thread: https://github.com/hamoid/video_export_processing/issues/39. The way that could be solved would be if it were possible to specify timestamps for each video frame. Some environments support this, but not ffmpeg, afaik.
It works well enough for what I'm doing, and the frames seem to match up OK. On my old laptop it was taking too long to render frames in draw() if I was doing a lot with it, but even then the results were mostly OK, just more slow motion than expected; the duration of the mp4 was correct and the audio seemed fine.
My basic setup is to open the stream, play it in real time, and use Minim's IcyListener to detect title changes, then end the movie and start a new one, and so on. Obviously this only works if the stream provides metadata on the songs, but most do.