Sketch to video efficiently with ffmpeg

Hi all,

I’ve got a project where I’m drawing a sketch and saving the frames as a video using ffmpeg. My project is an ElectronJS desktop app, so I have access to both the browser and the operating system. The problem I’m facing is that performance is very slow due to the way I’m capturing each canvas frame. Here’s my current workflow in a nutshell:

  • I create an image stream (new PassThrough) that is piped to my ffmpeg process
  • After drawing each frame, I convert it to a blob with canvas.toBlob()
  • I convert the blob to arrayBuffer
  • And get a buffer with Buffer.from()
  • I write the buffer to the image stream with stream.write()

This works well, but canvas.toBlob() is incredibly slow compared with everything else. Would anyone know how to optimise this?
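In code, the per-frame part looks roughly like this (simplified, no error handling; imagesStream is the PassThrough mentioned above):

// After p5 finishes drawing a frame
canvas.toBlob(async (blob) => {
  const arrayBuffer = await blob.arrayBuffer(); // Blob -> ArrayBuffer
  const buffer = Buffer.from(arrayBuffer);      // ArrayBuffer -> Node Buffer
  imagesStream.write(buffer);                   // goes to the ffmpeg process
}, 'image/png');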

I’m guessing there is no way p5 could draw to anything other than a canvas (something that could be converted to a blob or buffer more quickly), is there? I think even createGraphics uses a hidden canvas. And I don’t know if we can draw to an OffscreenCanvas (which may also not be faster to save).

I also want to experiment with canvas.captureStream() + MediaRecorder, but I doubt I can pipe that to ffmpeg directly.
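Something like this is what I have in mind (untested sketch; ffmpeg would also need a different input format than image2pipe, hence my doubts):

const stream = canvas.captureStream(30); // capture at 30 fps
const recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });

recorder.ondataavailable = async (event) => {
  // Each chunk is a Blob containing a piece of the WebM stream
  const buffer = Buffer.from(await event.data.arrayBuffer());
  imagesStream.write(buffer); // ffmpeg would have to read it with a plain `-i -`
};

recorder.start(1000); // emit a chunk every second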

I think the built-in saveFrames() would not help either, as it tries to download the files directly. And it may use toBlob() under the hood anyway. Not sure.

Any ideas are appreciated.

What version of Electron are you using?

It sounds like Chromium < 83 had a toBlob() performance bug.


Thanks. That’s a great catch!

I updated Electron to a version with Chromium 87 (I was on 80) and unfortunately the issue is not resolved directly, as the bug affected OffscreenCanvas and not HTMLCanvasElement. However, I’m using OffscreenCanvas in a different part of my app, and I do see a noticeable improvement there.

I think I’ll try to copy the canvas to a worker, reproduce it as an OffscreenCanvas, create the blob there, and see what happens. Thanks again.


Doing the canvas-to-blob conversion in a worker, despite having to send messages back and forth and recreate the canvas, seems about 22% faster. Not groundbreaking, but an improvement.

I also tried reading the pixels array from the p5 instance with loadPixels(), converting it to a PNG buffer with jimp and saving that, but the process was orders of magnitude slower.

My best workflow so far goes like this:

  • Draw to the HTML canvas with p5
  • Convert the canvas to an ImageBitmap with createImageBitmap()
  • Send the bitmap to a web worker (transferring its ownership)
  • Copy the bitmap to an OffscreenCanvas in the worker with new OffscreenCanvas(width, height).getContext('bitmaprenderer').transferFromImageBitmap(bitmap)
  • Convert the OffscreenCanvas to a blob with convertToBlob()
  • Get an ArrayBuffer from the blob with arrayBuffer()
  • Send the ArrayBuffer back to the main thread (transferring ownership)
  • In the main thread, convert the ArrayBuffer to a Node Buffer with Buffer.from()
  • Pipe to ffmpeg with stream.write()

I suppose there are better ways of achieving this. Suggestions are welcome.


Hi Juan,

I’m trying to create an Electron app with similar functionality, but I couldn’t get streaming to FFmpeg working at all.

Do you by any chance have some code snippets I could go by? Could you maybe point me to a suitable tutorial for web workers and offscreen canvases? I couldn’t find anything useful in my searches.

Thanks

Hi @UriShX ,

My implementation is hard to decipher because it includes many details that only apply to my case. I think your best bet is to google each of the functions I mention in my previous post; that’s how I found out about them. Some details that may help you, though:

This is what I’m using for the image stream:

const { PassThrough } = require('stream');

imagesStream = new PassThrough();

This is the input I’m specifying to ffmpeg:

-f image2pipe -i -

The dash at the end is necessary; it tells ffmpeg to read its input from stdin.
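For context, a complete args array could look something like this (just an example, not exactly what I use):

const ffmpeg_args = [
  '-y',                  // overwrite the output file if it exists
  '-f', 'image2pipe',    // read a sequence of images from the pipe
  '-framerate', '30',    // input frame rate
  '-i', '-',             // the dash: read from stdin
  '-c:v', 'libx264',     // encode with H.264
  '-pix_fmt', 'yuv420p', // widely supported pixel format
  'output.mp4'
];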

I execute ffmpeg with something like

const { execFile } = require('child_process');

const child = execFile('path/to/ffmpeg', ffmpeg_args, () => {
  // Runs when ffmpeg exits
});

// And then pipe the image stream into ffmpeg's stdin
imagesStream.pipe(child.stdin);

This is mostly what I do to write each frame (some of the workflow and error handling is missing):

// Tell the worker the size of the canvas when sending the first frame
if (frame === 0) {
  saveWorker.postMessage({
    action: 'setSize',
    payload: { size: [canvas.width, canvas.height] }
  });
}

// Convert canvas to bitmap
const bitmap = await createImageBitmap(canvas);

// Prepare for when the worker returns the data
saveWorker.onmessage = ({ data }) => {
  // data.payload is the ArrayBuffer transferred back by the worker
  const buf = Buffer.from(data.payload);
  // write() returns false if the stream needs us to wait for 'drain'
  const ok = imagesStream.write(buf, () => {
    // If we know it's the last frame, we close the stream
    if (imagesStream && lastFrame) imagesStream.end();
  });
  if (ok) {
    // Go on with your program, draw the next frame, etc.
  } else if (imagesStream) {
    imagesStream.once('drain', () => {
      // The stream was not ready; this runs once it is, so you can continue with more frames
    });
  }
};

// We send the bitmap to the worker
saveWorker.postMessage(
  {
    action: 'saveFrame',
    payload: { bitmap, stream: true }
  },
  [bitmap]
);

And this is my web worker file:

const canvas = new OffscreenCanvas(300, 300);
const ctx = canvas.getContext('bitmaprenderer');

const saveFrame = async ({ bitmap }) => {
  // Paint the received bitmap onto the offscreen canvas
  ctx.transferFromImageBitmap(bitmap);
  const blob = await canvas.convertToBlob();
  const arrBuf = await blob.arrayBuffer();
  // Transfer the buffer back to the main thread (no copy)
  self.postMessage({ action: 'success', payload: arrBuf }, [arrBuf]);
};

self.onmessage = ({ data }) => {
  const { action, payload } = data;
  if (action === 'setSize') {
    const { size } = payload;
    canvas.width = size[0];
    canvas.height = size[1];
  } else if (action === 'saveFrame') {
    // Encode the frame and post it back to the main thread
    saveFrame(payload);
  }
};

Hi Juan,

Thanks mate, that really helps me a lot.

Still looking for a more efficient solution. I have tried everything that I was able to code.

I wonder if there is a way to tell ffmpeg where the canvas pixels are in memory so it can read them from there directly, instead of doing format conversions. Maybe studying how createImageBitmap works internally is a step in that direction… Not sure.
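One idea in that direction would be to skip image encoding entirely and send raw RGBA pixels, telling ffmpeg the exact format up front (untested sketch; getImageData() has a cost of its own):

// ffmpeg would need the raw format as input options, e.g.:
// -f rawvideo -pixel_format rgba -video_size 300x300 -framerate 30 -i -
const ctx = canvas.getContext('2d');
const { data } = ctx.getImageData(0, 0, canvas.width, canvas.height);
imagesStream.write(Buffer.from(data.buffer)); // one uncompressed frame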

I created a kind of minimal project that compares the render times of different approaches: GitHub - JuanIrache/canvas-stream-test (Try to improve transferring canvas frames to ffmpeg)

There, running ffmpeg in a web worker is much faster than the other ideas, but in real-world projects with more video streams and complex calculations within the sketch, the original “worker” approach described in this thread is still faster. I don’t know why.