"Offline" Rendering?

I have a sketch which responds to some audio, and I am happy with how it works. If I feed it a soundfile, that also works as expected. My problem occurs when I want to save the resulting frames to disk so that I can make a movie later: each frame takes longer than the frame period to render, which means that (a) I get fewer frames than expected, and (b) a movie made from those frames quickly drifts out of sync with the audio.

Is there a way of running the sketch in an ‘offline’ mode, where the audio playback is synchronised to the frame rate? I don’t need to listen whilst it’s rendering, so smooth audio playback at render time doesn’t matter. I tried calling cue() and play() in the draw() function, but that seemed to mess up the FFT I was using…!
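
Roughly what I tried looked like this (a cut-down reconstruction of the idea rather than my exact code):

import processing.sound.*;

SoundFile file;
FFT fft;
float[] levels = new float[1024];

void setup()
{
  size(960, 540);
  frameRate(25);
  file = new SoundFile(this, "14.wav");
  fft = new FFT(this, 1024);
  fft.input(file);
}

void draw()
{
  // try to pin the playback position to the current frame
  float t = frameCount / 25.0;
  file.cue(t);           // cue() moves the playback position (and pauses)...
  file.play();           // ...so play() is called to start from there
  fft.analyze(levels);   // this is where things seemed to go wrong for me
  background(0);
  // ... drawing based on levels[] ...
}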

Simplified code is here (the real code does some mangling of the PGraphics object, and makes the frame-render take longer still):

import processing.sound.*;

final int DISCS = 64;

PGraphics pg;
Amplitude amp;
FFT fft;

SoundFile file;

void setup()
{
  size(960, 540, P3D);
  frameRate(25);
  pg = createGraphics(960, 540, P3D);

  // Load a soundfile from the /data folder of the sketch and play it back
  file = new SoundFile(this, "14.wav");
  file.play();
  fft = new FFT(this, DISCS*16);
  fft.input(file);
}

float r1 = 0;
float r2 = 0;
float r3 = 0;
float o = 0;

final int SPACING = 30;
final int DIAMETER = 600;

float[] levels = new float[DISCS*16];
int d=0;

void draw()
{
  clear();
  fft.analyze(levels);

  pg.beginDraw();
  pg.background(0);  // clear the offscreen buffer each frame
  pg.noFill();
  pg.lights();
  pg.translate(width/2, height/2, 0);

  r1 += noise(o)*PI;
  r2 += noise(o)*PI;
  r3 += noise(o)*PI;
  
  pg.rotateX(r1);
  pg.rotateY(r2);
  pg.rotateZ(r3);

  pg.pushMatrix();
  pg.translate(0, 0, -DISCS*levels[2]*SPACING/2);
  for (int i = 0; i < DISCS; i++)
  {
    pg.translate(0, 0, levels[2]*SPACING);
    pg.stroke(255*noise(i+o), 255*noise(1+i+o), 255*noise(2+i+o));

    pg.ellipse(0, 0, DIAMETER*levels[(d+i)%DISCS], DIAMETER*levels[(d+i)%DISCS]);
  }
  pg.popMatrix();
  pg.endDraw();
  o+=0.01;
    
  image(pg, 0, 0);
//  saveFrame("f-#######.png");
}

In this example https://github.com/hamoid/video_export_processing/blob/master/examples/withAudioViz/withAudioViz.pde I tried to work around this getting-out-of-sync problem by splitting the program in two:

  1. Analyze the sound and write the analysis data out to a file (no complex rendering or image saving to slow the program down).
  2. Once you have the sound analysis data, run a second program that slowly creates each frame from that data; even if each frame takes a minute to render, it does not matter. (Rough sketches of both steps are below.)
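
Very roughly, the split could look like the two sketches below (just an illustration of the idea, not the actual withAudioViz code; it assumes the Processing Sound library and a plain text file, analysis.txt, as the intermediate format).

// Sketch 1: analyze the sound at the target frame rate and write the band levels to a file.
import processing.sound.*;

final int BANDS = 1024;

SoundFile file;
FFT fft;
float[] levels = new float[BANDS];
PrintWriter out;

void setup()
{
  size(100, 100);
  frameRate(25);
  file = new SoundFile(this, "14.wav");
  file.play();
  fft = new FFT(this, BANDS);
  fft.input(file);
  out = createWriter("analysis.txt");
}

void draw()
{
  fft.analyze(levels);
  // one comma-separated line of band levels per frame
  out.println(join(nf(levels, 0, 6), ","));
  if (!file.isPlaying())
  {
    out.flush();
    out.close();
    exit();
  }
}

The second sketch then never touches the audio at all; it just reads one line per frame, draws, and saves, however long each frame takes:

// Sketch 2: read the pre-computed levels and render/save one frame per line, as slowly as needed.
String[] lines;
float[] levels;

void setup()
{
  size(960, 540, P3D);
  lines = loadStrings("analysis.txt");
}

void draw()
{
  if (frameCount > lines.length)
  {
    exit();  // every analyzed frame has been rendered
    return;
  }
  levels = float(split(lines[frameCount - 1], ','));
  background(0);
  // ... do the slow drawing here using levels[] ...
  saveFrame("f-#######.png");
}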

Maybe that works for you too?


Ah, great - I will give that a try. Thank you very much 🙂

Hi hamoid…

That was awesome; thank you so much — perfectly commented and explained, very easy to adapt to what I am trying to do. Now I just need to investigate which are the ‘key’ frequencies in my audio!
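
For the ‘key’ frequencies I’m thinking of something crude like the snippet below, which just sums each band over the whole analysis file and prints the totals (assuming one comma-separated line of band levels per frame, as in the sketches above):

void setup()
{
  // sum each band's level across every analyzed frame
  String[] lines = loadStrings("analysis.txt");
  int bands = split(lines[0], ',').length;
  float[] total = new float[bands];

  for (String line : lines)
  {
    float[] levels = float(split(line, ','));
    for (int b = 0; b < bands; b++)
    {
      total[b] += levels[b];
    }
  }

  // the bands with the largest totals are the 'key' ones to react to
  for (int b = 0; b < bands; b++)
  {
    println("band " + b + ": " + total[b]);
  }
  exit();
}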

Thanks again 😄


I found this thread while searching for help with a similar problem.

In case it helps anyone, I’ve made this all-in-one framework that lets you analyse audio offline at a set frame rate and then do your animation in a single pass.

Demo video and code link: https://youtu.be/w7EEKCzTM_M
