Export video in real time

Hey! I'm actually using hamoid's Video Export library, but the frame rate still drops. I need some way to record a sketch in real time (I'm moving cameras and triggering other events, so I need perfect sync). What alternatives do I have? What if I build a system to keep track of all the variables, store them in an array, and later "play" them back while recording a video? What do you think? Syphon to an external recorder?

Hello. Can you please post the code you’re using to record your video?

Hi @vjjv :slight_smile: What you suggest may work, but might not be as straightforward as it sounds.

If you want to produce a smooth video with 30 frames per second, you can play your system live while recording a time stamp and your camera positions and orientations. This will produce a file with data, but there will not be 30 timestamps per second. Very likely the time stamps will be irregularly spaced.

In phase two, when producing the video, you need to produce the frames to save. The rendering speed does not matter. Ideally you would interpolate the stored camera positions to figure out where the camera should be 30 times per second.

// saved camera position time-stamps
// A              B            C                  D            E
// *              *            *                  *            *               
// Needed camera positions at these regular time-stamps
// *         *         *         *         *         *         *   
// a         b         c         d         e         f         g

You need to know where the camera should be positioned at times a, b, c, d, e, f and g.

a and g are easy, because they coincide with A and E, but all others probably should be interpolated.
For instance, at time b the camera should be between camera positions saved at times A and B. You interpolate position A towards position B, about 60% (because b is somewhat closer to B than A).
d is about 10% between C and D. e maybe 70% between C and D.
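
Something like this, assuming the recorded data ends up as parallel arrays of timestamps (in milliseconds) and PVector camera positions; the names here are just for illustration:

// Hypothetical recorded data: irregular timestamps (ms) and camera positions
float[] recTimes;   // sorted ascending, loaded from the data file
PVector[] recPos;   // camera position saved at each timestamp

// Interpolated camera position at any time t (ms)
PVector cameraAt(float t) {
  // clamp to the recorded range
  if (t <= recTimes[0]) return recPos[0].copy();
  if (t >= recTimes[recTimes.length - 1]) return recPos[recPos.length - 1].copy();

  // find the two recorded samples surrounding t
  int i = 0;
  while (recTimes[i + 1] < t) i++;

  // how far t sits between those two samples, 0..1
  float amt = (t - recTimes[i]) / (recTimes[i + 1] - recTimes[i]);
  return PVector.lerp(recPos[i], recPos[i + 1], amt);
}

// When rendering, frame n of a 30 fps video corresponds to t = n * 1000.0 / 30.0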

Another thing I could do is to implement what I suggested here:

One more option would be to use an external program to record the screen, like ffmpeg or something else. On Linux there's simplescreenrecorder. On a Mac you can use QuickTime to record the screen. On Windows there's a keyboard shortcut to record the screen while you play video games.

The advantage of exporting frames one by one is that you can produce a smoother video without any dropped frames. The disadvantage is that it’s harder to do.

It's a very large program… In summary, it's a P3D sketch and a few PGraphics objects with shaders. The frame rate is normally around 30-40 fps; when video recording is on, it's about 15.

In that case I think you could try this: set the frame rate to 30 and make sure it runs at 30 fps stable. That means maybe deactivating shaders, reducing polygon count or whatever it takes. Then you could do the “saving data” phase, in which you record your interactions, camera positions, etc. This should produce 30 rows of data per second, saved maybe to a CSV file, json or something like that.
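
As a rough sketch of that first pass (camX and camY here are placeholders; log whatever state your sketch actually needs):

// PASS 1: run live at a locked 30 fps and log one row of state per frame
PrintWriter output;

void setup() {
  size(600, 600, P3D);
  frameRate(30);                       // lock the sketch to the target video rate
  output = createWriter("session.csv");
}

void draw() {
  // keep rendering as light as possible here: no shaders, no video export
  background(0);

  // placeholder state: replace with your real camera position, slider values, etc.
  float camX = mouseX;
  float camY = mouseY;

  output.println(frameCount + "," + camX + "," + camY);
}

void keyPressed() {
  if (key == 'q') {
    output.flush();
    output.close();
    exit();
  }
}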

Then, as you suggested, you do the saving-video phase, in which you load the saved data and render each frame using the camera positions, interaction, etc as stored in the data file. This will probably run slowly, taking twice the expected time (because you said that when saving the video the program runs at 15 fps).
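
And a matching sketch of that second pass, assuming the CSV layout from the first-pass sketch above (I believe setFrameRate() is how you set the output rate in the Video Export library, but check its bundled examples):

// PASS 2: replay the logged data and render each frame, as slowly as needed
import com.hamoid.*;

VideoExport videoExport;
String[] rows;
int row = 0;

void setup() {
  size(600, 600, P3D);
  rows = loadStrings("session.csv");

  videoExport = new VideoExport(this, "session.mp4");
  videoExport.setFrameRate(30);        // output video rate, matching pass 1
  videoExport.startMovie();
}

void draw() {
  if (row >= rows.length) {
    videoExport.endMovie();
    exit();
    return;
  }
  float[] data = float(split(rows[row], ','));
  float camX = data[1];
  float camY = data[2];

  // the heavy rendering (shaders, PGraphics, etc.) goes here, driven by the loaded data
  background(0);
  translate(camX, camY);
  box(100);

  videoExport.saveFrame();
  row++;
}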

Once the program is done, you should have a video with the right number of frames and the right duration.

Do you think this is doable?

Give me a few days to study and implement some code. I need to think about a system, because in the future I want to sync a Processing sketch via MIDI with Ableton Live, so when I save a video I need perfect synchronization with the audio.


I'm on this topic again. Try this:

import com.hamoid.*;

VideoExport videoExport;

// Press 'q' to finish saving the movie and exit.

// In some systems, if you close your sketch by pressing ESC, 
// by closing the window, or by pressing STOP, the resulting 
// movie might be corrupted. If that happens to you, use
// videoExport.endMovie() like you see in this example.

// In some systems pressing ESC produces correct movies
// and .endMovie() is not necessary.

void setup() {
  size(600, 600, P3D);

  videoExport = new VideoExport(this);
  videoExport.startMovie();
}
void draw() {
  background(#224488);
  // draw the same rect 1000 times just to make the sketch artificially heavy
  for (int i = 0; i < 1000; i++) {
    rect(frameCount * frameCount % width, 0, 40, height);
  }
  println(frameRate);
  // toggle this line to compare the frame rate with and without exporting
  //videoExport.saveFrame();
}
void keyPressed() {
  if (key == 'q') {
    videoExport.endMovie();
    exit();
  }
}

Here I'm increasing the rect count to produce a slow sketch. When I comment out videoExport.saveFrame() it runs at around 50-60 fps. When I uncomment it, the fps drops to 30. That's exactly the proportion by which my other sketch slows down, always by half. Maybe something is wrong in the ffmpeg settings? Do you know, @hamoid?

Hi! What goes down to 30 fps, the sketch or the produced video? Are you using a hard drive or an SSD? Which OS?

The sketch; the produced video is OK. But when the frame rate drops, performance becomes difficult. I'm on a hard drive, OS X El Capitan. I'm working on a big sketch sending MIDI data from Ableton to Processing, and they need to be in sync.

Do you mean real time as in you having to be there and move the camera, or do you literally mean you have to record in real time? If it’s the first option, then an alternative may be to have the sketch run at half the frame rate and halve the camera’s speed (or whatever stable speed). Then make the recorded video 2 times faster (or whatever factor).

Thanks! But it's not only the camera. I mean, I move sliders and send data to the program. For example, I send a MIDI note and it's impossible to synchronize it later once I have the video and the audio.

So probably the CPU and/or the hard drive are the bottleneck. The CPU needs to encode the video to h264, and the hard drive needs to store the encoded video to disk.

One option is what I suggested originally: a two-pass approach. First store all the data required to produce the video (this is probably very lightweight data), then in a second pass produce the video. Even if that pass runs very slowly it doesn't matter, because at the end sound and video should be in sync. Did you check out this example?


This should work even if you are sending MIDI notes and control changes. In Ableton you would record the audio in real time, so you end up with a WAV (or MP3) file. The first pass of the Processing program would save all the MIDI data received, to be used in the second pass, where you actually produce the frames. Why is this useful? Because by not rendering anything on the first pass nor saving the video, the program introduces no latency.
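
A rough sketch of that first pass for MIDI input, assuming The MidiBus library (the device indices are placeholders; MidiBus.list() prints what's available):

// PASS 1 for MIDI: log every incoming note / controller change with the frame it arrived on
import themidibus.*;

MidiBus midi;
PrintWriter output;

void setup() {
  size(100, 100);
  frameRate(30);
  MidiBus.list();                      // see which input/output indices exist
  midi = new MidiBus(this, 0, 1);      // placeholder device indices
  output = createWriter("midi_log.csv");
}

void draw() {
  // intentionally empty: no rendering and no video export, so no added latency
}

void noteOn(int channel, int pitch, int velocity) {
  output.println(frameCount + ",noteOn," + channel + "," + pitch + "," + velocity);
}

void controllerChange(int channel, int number, int value) {
  output.println(frameCount + ",cc," + channel + "," + number + "," + value);
}

void keyPressed() {
  if (key == 'q') {
    output.flush();
    output.close();
    exit();
  }
}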

If it needs to happen in real time and your video is not excessively long, you could use a RAM disk and see if there's an improvement. That would help if the bottleneck is the hard drive. At least on Arch Linux /tmp is a RAM disk by default, so saving there can be much faster than saving to another location.
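
If you try that, the Video Export constructor accepts an output path, so the test is just pointing the movie file at a RAM-backed location (the /tmp path is a Linux example; on other systems you'd create the RAM disk first):

// write the movie to a RAM-backed location instead of the spinning disk
videoExport = new VideoExport(this, "/tmp/test.mp4");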

Finally, maybe you can try external screen recorder software. On a Mac I think you can use the included QuickTime program to record the screen. Maybe it helps?


OK, I will do that. So you're saying something like automating all the parameters and saving them first (in a BufferedReader object?) and then "playing" that information back while recording the video?

Yes, the example I posted does FFT analysis and saves the values to disk in a text file using createWriter. That example actually does the FFT faster than real time, which is not what you want. But you can use the same approach: you would just save one row of data each time draw() is called, and avoid any rendering-related code (don't draw, no shaders, etc). Only save the real-time data that you can't calculate later: the data that arrives in real time.
Then the second pass is the rendering pass, which loads the saved text file and renders each frame (it doesn't matter how slowly).

Make sure to set the sketch and video output framerates as you need them (maybe 30fps or 60fps).

By using this approach you could even compensate for possible latency. Imagine you are pressing a MIDI keyboard. The electronics of the keyboard add latency. USB adds latency. Ableton adds latency. Etc. But since you have all the data recorded to a file, you could shift the audio with respect to the video a bit forward or backward to make it perfect.

Last tip: if the video will be played in front of an audience, it may be good to delay the video somewhat with respect to the audio to make it feel in sync. The amount of delay will depend on the distance from the speakers to the audience. You can see at http://www.gbaudio.co.uk/data/time.htm that if the speakers are 15 meters away, that's 43.8 milliseconds, which is 2.62 frames at 60 fps (43.8 ms / 16.6666). This is not important in small rooms or with normal videos, but if you do glitchy, tightly synced graphics, tweaking the latency may improve the result. I've used this approach in sonification performances: I left a parameter to tweak the latency live because I didn't know in advance how large the room would be. Then I adjusted it by standing in the middle of the audience. Programs like the VLC video player let you adjust this latency with key shortcuts for the currently playing video.
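
The arithmetic for that offset is simple enough to keep in the sketch, assuming roughly 343 m/s for the speed of sound:

// how many video frames to delay for a given speaker distance
float speakerDistance = 15;                        // meters
float fps = 60;
float latencyMs = speakerDistance / 343.0 * 1000;  // about 43.7 ms at 15 m
float delayFrames = latencyMs / (1000.0 / fps);    // about 2.6 frames at 60 fps
println(latencyMs, delayFrames);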

:slight_smile:


This! Anything that's not doing an on-GPU copy is going to be slow. GPU-to-CPU readback is fairly slow, and if you're using Processing's own code for this it's really slow.


When building PoShowTube I needed to sync song playback exactly with a Processing sketch whilst recording.

I used hamoid's most excellent VideoExport code initially, but added an array to hold the song's FFT data. That gave me an array index representing each position of my sketch, because my sketch was as long as the song. When recording I didn't play the song, but I could still use the index to drive the sketch position and record each VideoExport frame at the right time. This way it didn't matter how much time it took to process each VideoExport frame: on playback the final video always kept in sync with the song.

https://www.poshowtube.com/home.php
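
In outline, that approach might look something like this (not the actual PoShowTube code, just the idea, and assuming Minim for the offline FFT analysis; file name and sizes are placeholders):

// analyze the whole song up front, then drive the sketch and the export by array index
import ddf.minim.*;
import ddf.minim.analysis.*;
import com.hamoid.*;

Minim minim;
float[][] spectra;                      // one FFT spectrum per chunk of the song
VideoExport videoExport;
int index = 0;

void setup() {
  size(600, 600);
  minim = new Minim(this);

  // offline analysis: read the raw samples and FFT them chunk by chunk
  // (for exact sync, match the chunk hop to sampleRate / videoFrameRate)
  AudioSample song = minim.loadSample("song.mp3", 1024);
  float[] samples = song.getChannel(AudioSample.LEFT);
  FFT fft = new FFT(1024, song.sampleRate());
  int chunks = samples.length / 1024;
  spectra = new float[chunks][fft.specSize()];
  float[] chunk = new float[1024];
  for (int i = 0; i < chunks; i++) {
    System.arraycopy(samples, i * 1024, chunk, 0, 1024);
    fft.forward(chunk);
    for (int b = 0; b < fft.specSize(); b++) spectra[i][b] = fft.getBand(b);
  }
  song.close();

  videoExport = new VideoExport(this, "fftdriven.mp4");
  videoExport.startMovie();
}

void draw() {
  if (index >= spectra.length) {
    videoExport.endMovie();
    exit();
    return;
  }
  // draw this frame from spectra[index] (the song is not playing), then save it
  background(0);
  stroke(255);
  for (int b = 0; b < spectra[index].length && b < width; b++) {
    line(b, height, b, height - spectra[index][b] * 10);
  }
  videoExport.saveFrame();
  index++;
}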


Could you share your code? Thanks! :slight_smile:

I've added some new sample code to this which shows the basics I used to load a song and record it as video…

Hi @poboyd :slight_smile: Nice that you found the Video Export Library useful and included it in your repo.

I released the library with the GPL license as you can see in the header here or in the repo. It’s not good to delete the license from the file.

The header is not complete though, that’s an issue of the build approach of the Library template. The exported library does include copyright data. You can find it in your sketch folder under libraries/VideoExport/src/com/hamoid/VideoExport.java and should look something like

/**
 * Video Export
 * Simple video file exporter.
 * https://funprogramming.org/VideoExport-for-Processing
 * <p>
 * Copyright (c) 2017 Abe Pazos http://hamoid.com
 * <p>
 * This library is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Lesser General Public
 * License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option) any later version.
 * <p>
 * This library is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
 * Lesser General Public License for more details.
 * <p>
 * You should have received a copy of the GNU Lesser General
 * Public License along with this library; if not, write to the
 * Free Software Foundation, Inc., 59 Temple Place, Suite 330,
 * Boston, MA 02111-1307 USA
 *
 * @author Abe Pazos http://hamoid.com
 * @modified 02/28/2018
 * @version 0.2.3 (23)
 */

Could you please add it back to your copy? :slight_smile: Thanks!

OK, thanks for that, and yes it is an excellent library. I'll add it back in.