OSC message to Run Processing Sketch

Hi,

Is it possible to send an OSC message to Processing that will initiate the Run command?

I am using SuperCollider to send OSC messages to trigger events in Processing, and I need to add another message command to my OSC event function. I need everything to start at once so that the frameCount is accurate. Here are code extracts from a larger patch:

  // inside draw():
  if (saveOneFrame) {
    saveFrame("frames/####.tga");
    if (frameCount > 120) {
      saveOneFrame = false;
      exit();
    }
  }

  // OSC handler (oscP5):
  void oscEvent(OscMessage msg) {
    if (msg.checkAddrPattern("saveFile")) {
      float saveFileValue = msg.get(0).floatValue();
      if (saveFileValue == 1.0) {
        saveOneFrame = true;
        println(saveFileValue);
      }
    }
  }

Thanks,

I don’t think you can remotely control the Processing IDE. I would keep a separate variable that stores the frame count since the OSC message arrived, and simply skip the draw logic (and the incrementing of that variable) until the message is received.
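
A minimal sketch of that idea, assuming the oscP5 library and a “/start” address pattern (both stand-ins for whatever your setup actually uses):

  import oscP5.*;

  OscP5 oscP5;
  boolean started = false; // flips when the start message arrives
  int myFrameCount = 0;    // our own counter, independent of frameCount

  void setup() {
    size(640, 360);
    oscP5 = new OscP5(this, 12000); // listening port is an assumption
  }

  void draw() {
    if (!started) return; // skip drawing (and counting) until triggered
    myFrameCount++;
    // ... draw and save frames using myFrameCount ...
  }

  void oscEvent(OscMessage msg) {
    if (msg.checkAddrPattern("/start")) { // hypothetical address pattern
      started = true;
    }
  }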

But a more fundamental question: if you need accurate timing, doesn’t it conflict with saveFrame()? With saveFrame() the frame rate drops significantly and you won’t get 60 fps.

Maybe launch it from the command line?

https://www.dsfcode.com/using-processing-via-the-command-line/
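
For reference, an invocation with processing-java (the command-line tool that ships with the Processing IDE) might look like this; the sketch path is a placeholder:

  processing-java --sketch=/path/to/MySketch --run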

But if accuracy is important, I would not start the program when the sound starts, as there’s an initial delay.

Instead, I would first start both programs (maybe one program launches the second), BUT the “show” does not start until you trigger some action, for example evaluating a certain line in SuperCollider or pressing a key in Processing. The difference is that both programs are already running, ready and waiting just for a signal.

Also, using frameCount is not a good approach when synchronizing programs, as the frame rate is just a target frame rate: it can go faster or slower depending on how much the program is asked to do.

You can use millis() instead, and subtract the millis() value of the moment you sent the “start” signal. That way, if you start after 5 seconds, it would subtract 5000 from millis() so the time starts at zero.
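
A rough sketch of that subtraction, again assuming oscP5 and a hypothetical “/start” address:

  import oscP5.*;

  OscP5 oscP5;
  boolean started = false;
  int startMillis = 0;

  void setup() {
    size(640, 360);
    oscP5 = new OscP5(this, 12000); // listening port is an assumption
  }

  void draw() {
    if (!started) return;
    float t = (millis() - startMillis) / 1000.0; // seconds since the trigger
    // ... animate as a function of t instead of frameCount ...
  }

  void oscEvent(OscMessage msg) {
    if (msg.checkAddrPattern("/start")) { // hypothetical address
      startMillis = millis(); // zero the clock at the trigger moment
      started = true;
    }
  }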

Another thing you can do is constantly send the time via OSC from one program to the other. But it’s hard to suggest the ideal option without knowing how your system works.
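
The receiving side of that could be as small as this fragment, assuming a hypothetical “/time” address carrying seconds as a float:

  float scTime = 0; // latest clock value received from the other program

  void oscEvent(OscMessage msg) {
    if (msg.checkAddrPattern("/time")) { // hypothetical address
      scTime = msg.get(0).floatValue();  // drive the visuals from this clock
    }
  }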

But a more fundamental question: if you need accurate timing, doesn’t it conflict with saveFrame()? With saveFrame() the frame rate drops significantly and you won’t get 60 fps.

It does conflict, massively. This is my first step into trying to render HQ synchronized audio-visual content. I tried filming with the OBS screen recorder but it’s not great (understatement). Do you have any suggestions?

This is what I needed to understand too - I was looking into using the internal clock as a workaround.

My current setup is as you suggested here.

Yes, I’ve found that I’m losing a lot of information…

@hamoid @micuat
Thank you for your replies.

You’ve both touched on the core issue: what I’m asking for is accurate capture, and the method I’m using is not going to achieve the result I desire.

I have a question…

I can create an OSC file, called a “score”, from SuperCollider. This details all the events that are going to happen during a specified time frame.

Do you think it would be possible for Processing to read this file, and then export a visual render, without running the program in real time? In SuperCollider this is called “offline” processing or Non-RealTime synthesis.

And if not, do either of you have any suggestions for capturing synced audio-visual content in HQ, within the limits of Processing?

Rendering from a score is indeed the precise way to do it. If you have a sound that is one minute long, at 30 fps you would need 1800 frames to match that length. So you can export a movie from Processing with those 1800 frames and later attach the sound. The nice thing is that it doesn’t matter how long it takes to create the 1800 frames, or if some frames take longer to create than others: in the end it will play at 30 fps.
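
A minimal sketch of that frame-counting approach, using the one-minute, 30 fps numbers from above (the image sequence can later be assembled and muxed with the sound):

  int totalFrames = 30 * 60; // 30 fps × 60 s = 1800 frames

  void setup() {
    size(1280, 720);
  }

  void draw() {
    float t = frameCount / 30.0; // time in seconds at the target frame rate
    background(0);
    // ... draw the scene as a function of t ...
    saveFrame("frames/####.png");
    if (frameCount >= totalFrames) exit();
  }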

I created the VideoExport library, which should help you put together the audio with the video frames as an mp4 file.
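
From memory, a basic VideoExport loop looks roughly like this (the file name is a placeholder; the library’s bundled examples also show how to attach an audio file):

  import com.hamoid.*;

  VideoExport videoExport;

  void setup() {
    size(1280, 720);
    videoExport = new VideoExport(this, "render.mp4");
    videoExport.startMovie();
  }

  void draw() {
    background(0);
    // ... draw the scene ...
    videoExport.saveFrame(); // append the current frame to the movie
    if (frameCount >= 1800) { // one minute at 30 fps
      videoExport.endMovie();
      exit();
    }
  }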

Yes, I’ve used your library! (Thought I recognized your username :) )

What you described there works very well, but I rely on OSC-reactive events between the audio and the visual content, so I can’t export the audio and visual content independently.

I messed around with OBS (the screen recorder) again and managed to get a capture that appears equal in quality to my render.

To clarify my last post: SuperCollider can create an OSC file of the production one is working on. It would be great to read that file into Processing, and for Processing to basically render “offline” - i.e. you don’t click “run”, so the render doesn’t happen in real time. It’s like exporting a file in a commercial digital audio workstation: you don’t have to record it in real time, you click export and it does all the computation offline.
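
As a rough illustration of that idea (the score format, file name, and event handler are all invented for the example; a real SuperCollider score file would need its own parser), the sketch can advance a simulated clock one frame step at a time, so it doesn’t matter how fast it actually runs:

  // each line of the hypothetical score: "<timeInSeconds> <eventName>"
  String[] lines;
  int eventIndex = 0;
  float fps = 30;

  void setup() {
    size(1280, 720);
    lines = loadStrings("score.txt"); // hypothetical score file
  }

  void draw() {
    float simTime = frameCount / fps; // simulated clock, not millis()
    while (eventIndex < lines.length) {
      String[] parts = splitTokens(lines[eventIndex]);
      if (float(parts[0]) > simTime) break; // event still in the future
      triggerEvent(parts[1]);               // fire everything due by now
      eventIndex++;
    }
    background(0);
    // ... draw the scene as a function of simTime ...
    saveFrame("frames/####.png");
    if (simTime >= 60) exit(); // stop after the score’s length
  }

  void triggerEvent(String name) {
    // placeholder: react to a score event
  }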

But I’ll leave that for another time and another post. For now, my schedule means I have to go with what I can achieve most easily (Occam’s Razor and all).

Thanks for your input!