Sharing PImage or PGraphics between nested Applets

I’m attempting to write an application that streams a single incoming video feed to both eyes of an Oculus Rift DK2. In Linux, I’ve configured the Rift as a second monitor, and thanks to another topic on here, I’m able to create a second full-screen window in the Rift’s viewport. I would also like a control window on my main monitor for setting various options and showing a preview of the headset feed.

I’ve run into some issues, however. I intend to render a 3D scene that incorporates a video feed, and I’d like to render this scene once and display it to both eyes. I figured the best way to do that was to render to a single PGraphics framebuffer, then draw it twice, once per eye. To show the user what the headset is displaying, I’d like to pass the PGraphics object from one applet to the other, or a PImage rendered from it. However, while I’m able to reference simple variables in the child from the parent (and vice versa), I can’t seem to pass either of these visual data types and have them render: either I get no image and no error, or I get an error stating that I’m drawing outside of when OpenGL would prefer I do so.

Is there some sort of data escrow account I could use to asynchronously pass images from one applet to another? Or is there a simpler or more correct way of accomplishing what I’m trying to do? I’d like to bring GStreamer into this application to insert the video feed into the 3D scene, and if I can’t get images between applets, I worry that I won’t be able to access the GStreamer images from both of them when only one is bound to the incoming port. Because of this, I’d really prefer to have this data be “owned” by either the parent or the child, and then passed out to the other. Since it’s just a heads-up of what the operator is seeing, I’m not too concerned if the monitor’s copy runs at a lower frame rate, or even lags slightly.
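
For what it’s worth, here is roughly the hand-off I’m imagining; it’s only a sketch of the idea, and the names (FrameEscrow, deposit(), withdraw()) are mine. The thought is that the child deposits a CPU-side copy of its frame (calling get() on a PGraphics returns a detached PImage), so the parent never touches the live OpenGL surface:

class FrameEscrow {
  private PImage latest;

  // Called by the producing applet at the end of its draw().
  synchronized void deposit(PImage img) {
    latest = img;
  }

  // Called by the consuming applet whenever it redraws.
  // Returns null until the first frame has been deposited.
  synchronized PImage withdraw() {
    return latest;
  }
}

The Rift applet would then call something like escrow.deposit(buffer.get()); right after buffer.endDraw(), and the control window would draw escrow.withdraw() whenever it isn’t null.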

Furthermore, if I render the scene in the child applet, I’m forced to declare that applet’s renderer as P2D or P3D, which results in a ThreadDeath error when closing the application. If I render the scene in the parent applet instead, I don’t get this error.
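
The only lead I’ve found on the ThreadDeath side is overriding exitActual() in the secondary applet, since that’s the method that ultimately calls System.exit() and tears every window down at once. I haven’t verified this on my setup, so treat it as an assumption:

public class Rift extends PApplet {
  // ...

  // Unverified workaround: keep closing this window from calling
  // System.exit(), leaving JVM shutdown to the parent sketch.
  @Override
  public void exitActual() {
    // intentionally empty
  }
}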

It’s currently using the same buffer for each eye, but I might add a true stereo effect later. In that case, I’d really like to avoid rendering everything a third time!

Rift rift;

void settings() {
  size(480, 515, P2D);
}

void setup() {
  rift = new Rift(this);
  // Launch the Rift view as a second sketch window (on its own thread).
  String[] args = {"Rift"};
  PApplet.runSketch(args, rift);
  fill(255, 0, 255);
  rectMode(CENTER);
}

void draw() {
  background(0);
  rect(mouseX, mouseY, 25, 25);
  // I would like to call this here
  // image(rift.buffer,0,0,200,200);
}

public class Rift extends PApplet {

  riftStreamer parent;  // the control-window sketch
  PGraphics buffer;     // scene rendered once, drawn once per eye

  Rift(riftStreamer myParent) {
    // Keep a reference to the parent so this window can read its state.
    parent = myParent;
  }

  void settings() {
    fullScreen(P2D, 2);  // full screen on display 2 (the Rift)
  }

  void setup() {
    // Off-screen buffer holding the scene, sized for a single eye.
    buffer = createGraphics(1030, 960, P3D);
    println(width, height);
    buffer.rectMode(CENTER);
  }

  void draw() {
    background(50);

    // Render the scene once into the off-screen buffer. Note the
    // swapped axes: the Rift display is rotated relative to the parent.
    buffer.beginDraw();
    buffer.background(0);
    buffer.rect(parent.mouseY * 2, width - parent.mouseX * 2, 50, 50);
    buffer.endDraw();

    // Draw the same buffer once for each eye.
    image(buffer, 0, 0);
    image(buffer, 0, height / 2);
  }
}

I haven’t gotten into the substance of your question, but just wanted to mention in passing that browsing the source of previous Oculus Rift and Google Cardboard libraries might be interesting, if you haven’t already.

No worries about not getting to the substance; these libraries look like excellent resources, and I feel our project will be better for them. Thank you!

This part of your question was confusing to me. Just to be sure: you mean that there will be no stereo depth in the Rift display, and the two eyes will literally see, pixel for pixel, the same image?

That’s our original, bare-minimum solution, yes, and roughly what I have working at present. The more I prototype, the more I see that even without distinct data going to each eye, drawing the scene twice allows for other tuning of the system.

The primary use of the display is to look around a scene captured elsewhere on the local network by a cluster of cameras; we’re building an omnidirectional, locally streamed video system as part of a group project. Since it’s captured video, the feed is the same for both eyes, because that’s how we’ve chosen to design the camera system. As soon as I added HUD elements, though, I realized it would be beneficial to manage two feeds and simply display the same video within both.

My plan is to take the incoming fisheye video streams, map them onto a skybox, and put a camera() in the middle of it. I’ve played with Processing for over a decade now, but video streaming and interacting systems are uncharted waters for me.
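
Roughly, I’m picturing something like this for that stage. The sphere stands in for the skybox, testFrame.jpg is a placeholder for a decoded video frame, and the mouse stands in for head tracking:

PShape dome;
PImage feed;

void settings() {
  size(960, 1080, P3D);
}

void setup() {
  feed = loadImage("testFrame.jpg");  // placeholder for a video frame
  noStroke();
  dome = createShape(SPHERE, 500);
  dome.setTexture(feed);
}

void draw() {
  background(0);
  // Sit at the centre of the textured sphere and look outward,
  // steering the view direction with the mouse for now.
  float yaw = map(mouseX, 0, width, -PI, PI);
  camera(0, 0, 0, cos(yaw), 0, sin(yaw), 0, 1, 0);
  shape(dome);
}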

Since posting yesterday, I still haven’t succeeded in sharing PGraphics buffers between the two sketch windows, but I have been able to call functions that populate a PGraphics buffer native to each applet, drawing the same scene once per window: https://www.youtube.com/watch?v=HNmhI9e37NY
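
In case it helps anyone searching later, the working per-applet version boils down to a shared scene routine that each window calls on its own buffer (names simplified from our code):

// Lives in the parent sketch; the child calls parent.drawScene(...).
void drawScene(PGraphics g, int mx, int my) {
  g.beginDraw();
  g.background(0);
  g.rectMode(CENTER);
  g.rect(mx, my, 50, 50);
  g.endDraw();
}

Each applet creates its PGraphics with its own renderer and hands it to this routine, so nothing ever crosses between OpenGL contexts; only the scene description is shared.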

Side note: this is something we’re working on for a university class, but I felt the “homework” tag wasn’t applicable, since it’s an open-ended project that forms the last portion of our degree.

I looked at this topic and saw two separate approaches – one, a method of creating windows with @quark’s G4P library, and two, the idea of using two PApplets and passing things to them. I’m assuming that you are using the PApplet approach.

Are you creating PApplet firstApplet, then passing a reference to firstApplet.g to the second PApplet so that it can image(g, 0, 0) to copy it?

Keep in mind that each PApplet expects single-threaded access to its renderer, so locking / synchronization could be a key issue here. (I am not an expert in multi-threading.) @GoToLoop might have some thoughts.
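
If you do end up sharing pixels across the two sketch threads, I’d at least guard the hand-off with a common lock. Something along these lines, where the field and method names are just for illustration:

final Object frameLock = new Object();
PImage shared;

// Producer applet, at the end of its draw():
void publishFrame(PGraphics g) {
  synchronized (frameLock) {
    shared = g.get();  // hand over a copy, never the live surface
  }
}

// Consumer applet, somewhere in its draw():
void showLatest() {
  synchronized (frameLock) {
    if (shared != null) image(shared, 0, 0);
  }
}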