Sketch to video efficiently with ffmpeg

Hi all,

I’ve got a project where I draw a sketch and save the frames as a video using ffmpeg. My project is an ElectronJS desktop app, so I have access to both the browser and the operating system. The problem I’m facing is that performance is very slow because of the way I’m capturing each canvas frame. Here’s my current workflow in a nutshell:

  • I create an image stream (new PassThrough) that is piped to my ffmpeg process
  • After drawing each frame, I convert it to a blob with canvas.toBlob()
  • I convert the blob to arrayBuffer
  • And get a buffer with Buffer.from()
  • I write the buffer to the image stream with stream.write()

This works well, but canvas.toBlob() is incredibly slow compared with everything else. Does anyone know how to optimise this?

I’m guessing there is no way p5 could draw to anything other than a canvas, something that could be converted to a blob or buffer more quickly, is there? I think even createGraphics uses a hidden canvas. And I don’t know whether we can draw to an OffscreenCanvas (which may not be any faster to save, either).

I also want to experiment with canvas.captureStream() + MediaRecorder, but I doubt I can pipe that to ffmpeg directly.
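That experiment might look something like the sketch below (function and stream names are my assumptions; one caveat is that MediaRecorder captures in real time, so it can't encode faster than the sketch actually plays). The chunks it emits are already-encoded webm, so ffmpeg would be started with `-f webm -i -` rather than `image2pipe`:

```javascript
// Hypothetical sketch: canvas.captureStream() + MediaRecorder,
// piping the encoded webm chunks into ffmpeg's stdin for remuxing.
function recordCanvas(canvas, frameStream, fps = 30) {
  const stream = canvas.captureStream(fps);
  const recorder = new MediaRecorder(stream, {
    mimeType: 'video/webm;codecs=vp9',
  });
  recorder.ondataavailable = async (event) => {
    if (event.data.size > 0) {
      // Each chunk is a piece of an encoded webm stream.
      frameStream.write(Buffer.from(await event.data.arrayBuffer()));
    }
  };
  recorder.start(1000); // emit a chunk roughly every second
  return recorder;
}
```

Calling `recorder.stop()` flushes the last chunk, after which the stream can be ended.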

I think the built-in saveFrame would not help either, as it tries to download the file directly. And it may use toBlob under the hood anyway. Not sure.

Any ideas are appreciated.

What version of Electron are you using?

It sounds like Chromium < 83 had a toBlob() performance bug:


Thanks. That’s a great catch!

I updated Electron to a version with Chromium 87 (I was on 80), but unfortunately that didn’t resolve the issue directly, as the bug affected OffscreenCanvas rather than HTMLCanvasElement. However, I’m using OffscreenCanvas in a different part of my app, and I see a noticeable improvement there.

I think I’ll try to copy the canvas to a worker, reproduce it as an OffscreenCanvas, create the blob there, and see what happens. Thanks again!


Doing the canvas-to-blob conversion in a worker, despite having to send messages back and forth and recreate a canvas there, is about 22% faster. Not groundbreaking, but an improvement.

I also tried to read the pixels array from the p5 instance with loadPixels(), convert it to a PNG buffer with jimp and save that, but the process was orders of magnitude slower.

My best workflow so far goes like this:

  • Draw to HTML Canvas with p5
  • Convert canvas to ImageBitmap with createImageBitmap()
  • Send bitmap to web worker (and pass its ownership)
  • Copy bitmap to OffscreenCanvas in worker with new OffscreenCanvas(width, height).getContext('bitmaprenderer').transferFromImageBitmap(bitmap)
  • Convert OffscreenCanvas to blob with convertToBlob()
  • Get Array Buffer from blob with arrayBuffer()
  • Send back Array Buffer to main thread (and ownership)
  • In main thread, convert Array Buffer to buffer with Buffer.from()
  • Pipe to ffmpeg with stream.write()
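The steps above could be sketched like this (the worker file name frame-worker.js and the message shapes are my assumptions; the worker side is shown in comments since it lives in its own file):

```javascript
// ---- main thread (renderer) ----
function setupWorker(frameStream) {
  const worker = new Worker('frame-worker.js');
  worker.onmessage = ({ data: arrayBuffer }) => {
    // Back on the main thread: wrap the ArrayBuffer and pipe it to ffmpeg.
    frameStream.write(Buffer.from(arrayBuffer));
  };
  return worker;
}

async function sendFrame(canvas, worker) {
  // createImageBitmap is fast, and the bitmap is transferable,
  // so posting it to the worker moves ownership instead of copying pixels.
  const bitmap = await createImageBitmap(canvas);
  worker.postMessage(bitmap, [bitmap]);
}

// ---- frame-worker.js ----
// self.onmessage = async ({ data: bitmap }) => {
//   const canvas = new OffscreenCanvas(bitmap.width, bitmap.height);
//   canvas.getContext('bitmaprenderer').transferFromImageBitmap(bitmap);
//   const blob = await canvas.convertToBlob({ type: 'image/png' });
//   const arrayBuffer = await blob.arrayBuffer();
//   self.postMessage(arrayBuffer, [arrayBuffer]); // transfer ownership back
// };
```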

I suppose there are better ways of achieving this. Suggestions are welcome.
