Hi all, can anyone offer insight into how the PGraphics3D “projection” and “modelview” (or combined “projmodelview”) matrices are used to convert a 3D vector in the scene to a final 2D vector on the rasterized screen? I have been able to use the “modelview” matrix on its own to project a point successfully, but only in orthographic projection mode using “ortho()”. Here’s an example with PeasyCam: the green X is drawn to the screen in 2D using PeasyCam’s HUD drawing functionality, and it follows its red 3D counterpart. This is exactly the behaviour I want, although I cannot get it to work once a projection with perspective is added to the mix.
import peasy.*;

PGraphics3D p3d;
PeasyCam camera;
PGraphics overlay;
PVector posScene;
PVector posScreen;
float markerSize = 12.;

void setup() {
  size(600, 600, P3D);
  camera = new PeasyCam(this, 500);
  p3d = (PGraphics3D)g;
  rectMode(CENTER);
  posScene = new PVector(0., 0., 0.);
  posScreen = new PVector(0., 0., 0.);
  // draw in order
  hint(DISABLE_DEPTH_TEST);
  ortho();
}

void draw() {
  background(100);
  posScene = new PVector(125, 125, 0);
  PMatrix trans = p3d.projmodelview.get();
  //trans.invert();
  trans.mult(posScene, posScreen);
  //print(posScreen.toString() + '\n');

  // draw rectangle in the 3D scene
  stroke(0);
  strokeWeight(1);
  rect(0, 0, 250, 250);

  // draw red point in the 3D scene
  stroke(255, 0, 0);
  strokeWeight(10);
  point(posScene.x, posScene.y, posScene.z);

  // draw green X in the 2D HUD overlay
  camera.beginHUD();
  pushMatrix();
  translate(width/2, height/2);
  stroke(0, 255, 0);
  strokeWeight(2);
  PVector pos = new PVector(posScreen.x*width/2, -posScreen.y*height/2);
  // X
  line(pos.x-markerSize, pos.y-markerSize, pos.x+markerSize, pos.y+markerSize);
  line(pos.x-markerSize, pos.y+markerSize, pos.x+markerSize, pos.y-markerSize);
  popMatrix();  // pop before endHUD() so the push/pop pair nests inside the HUD block
  camera.endHUD();
}
If “ortho()” is removed, the 2D vector outputs become nonsensical. I believe I am applying the matrices correctly, but the fact that it breaks when introducing perspective implies the problem lies with the “projection” matrix rather than the “modelview” matrix. I believe there must be another transformation later in the pipeline (this would make sense, as OpenGL draws in a range from -1 to +1 and Processing draws in pixel coordinates), probably within OpenGL itself, but I have been unable to track it down so far.
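To make my suspicion concrete, here is a minimal plain-Java sketch (not Processing’s actual implementation; the matrix values and helper names are my own) of what I think the full pipeline does after the projmodelview multiply: divide by the w component, then map the resulting -1..+1 NDC range to pixel coordinates. Under ortho() the w stays 1, which would explain why my code only works there:

```java
// Hypothetical stand-in for the steps I think follow projmodelview:
// clip coords -> divide by w -> viewport transform to pixels.
public class ProjectPoint {
    // Multiply a 4x4 row-major matrix by the column vector (x, y, z, 1);
    // returns {x', y', z', w'} in clip coordinates.
    static double[] mult4(double[] m, double x, double y, double z) {
        return new double[] {
            m[0]*x  + m[1]*y  + m[2]*z  + m[3],
            m[4]*x  + m[5]*y  + m[6]*z  + m[7],
            m[8]*x  + m[9]*y  + m[10]*z + m[11],
            m[12]*x + m[13]*y + m[14]*z + m[15]
        };
    }

    // Perspective divide, then map NDC (-1..+1) to pixel coordinates.
    // Screen y is flipped because pixel y grows downward.
    static double[] toScreen(double[] clip, int width, int height) {
        double ndcX = clip[0] / clip[3];
        double ndcY = clip[1] / clip[3];
        return new double[] {
            (ndcX + 1) * 0.5 * width,
            (1 - ndcY) * 0.5 * height
        };
    }

    public static void main(String[] args) {
        // Example perspective matrix: fov = 90 deg, aspect = 1, near = 1, far = 100.
        double n = 1, f = 100;
        double[] proj = {
            1, 0, 0,            0,
            0, 1, 0,            0,
            0, 0, -(f+n)/(f-n), -2*f*n/(f-n),
            0, 0, -1,           0
        };
        // A point 10 units in front of the camera, offset 5 to the right.
        double[] clip = mult4(proj, 5, 0, -10);
        double[] screen = toScreen(clip, 600, 600);
        System.out.println(screen[0] + ", " + screen[1]);  // prints 450.0, 300.0
    }
}
```

If this is roughly right, then dividing posScreen by the w that PMatrix.mult() discards is the missing step; I just haven’t found where Processing or OpenGL does it.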
I will continue looking through the Processing codebase, but can anyone point me in the right direction or offer any insight? I have looked through PeasyCam’s code and don’t think it is ultimately having any ill effects on the transforms (other than manipulating a standard camera), so I think it’s more of a general issue/question about how Processing interacts with OpenGL.
I’ve found a few ultimately unanswered questions on this as a concept, so it could be very helpful to figure out. Cheers!