Hi, I’m new to the forum. I’ve attached a JPEG to illustrate my question – sorry for its amateurish appearance. My question: in the given setup, how do I calculate the equation of the image plane from 3 points – pp1, pp2, pp3? I understand how to solve for the 3D plane from the real-world points that lie behind the projected points, but not from the projected points themselves.
Planar equation from camera image
I can’t tell from your question what you are trying to solve, but I’m guessing that you either want to use screenX()
or modelX()
(and Y, and Z).
https://processing.org/reference/screenX_.html
https://processing.org/reference/modelX_.html
If you could provide a very simple MCVE in which you draw your plane in P3D then it would be easier to give you more advice.
If this is actually an object-based problem, there are also approaches like the Picking library. Or, if for some reason you can’t use screenX(), you could implement something like ray tracing –
or you could try to just compute a simple line-plane intersection. This is untested,
but you could follow this Java example approach using PVector.dot(). In order to project the line you need to know the camera location:
https://processing.org/reference/camera_.html
The key fact there is that the default eyeZ is (height/2.0) / tan(PI*30.0 / 180.0)
– so your line works like this:
PVector ray1 = new PVector(width/2.0, height/2.0, (height/2.0) / tan(PI*30.0 / 180.0));
PVector ray2 = new PVector(pp1x, pp1y, 0); // e.g. mouseX, mouseY
…and then you solve for p1 using line-plane intersection, as per https://stackoverflow.com/a/52711312/7207622
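A minimal sketch of that intersection in plain Java (PVector swapped for bare arrays so it runs outside Processing; the eye position and the model plane below are made-up example values):

```java
public class LinePlane {
    // Intersect the line through a and b with the plane through p0 with normal n.
    // Returns null when the line is parallel to the plane.
    static double[] intersect(double[] a, double[] b, double[] p0, double[] n) {
        double[] d = { b[0] - a[0], b[1] - a[1], b[2] - a[2] }; // line direction
        double denom = dot(d, n);
        if (Math.abs(denom) < 1e-9) return null; // parallel: no single intersection
        // t such that a + t*d lies on the plane: dot(p0 - a, n) / dot(d, n)
        double t = dot(new double[] { p0[0] - a[0], p0[1] - a[1], p0[2] - a[2] }, n) / denom;
        return new double[] { a[0] + t * d[0], a[1] + t * d[1], a[2] + t * d[2] };
    }

    static double dot(double[] u, double[] v) {
        return u[0] * v[0] + u[1] * v[1] + u[2] * v[2];
    }

    public static void main(String[] args) {
        // Eye at roughly the default Processing camera for a 640x360 sketch,
        // i.e. (width/2, height/2, (height/2)/tan(PI/6)); ray through pixel (200, 64).
        double[] eye = { 320, 180, 311.77 };
        double[] pix = { 200, 64, 0 };
        // Hypothetical model plane facing the camera at z = -100
        double[] hit = intersect(eye, pix, new double[] { 0, 0, -100 }, new double[] { 0, 0, 1 });
        System.out.printf("%.2f %.2f %.2f%n", hit[0], hit[1], hit[2]);
    }
}
```

In a real sketch you would build the ray from the camera parameters you actually set, and the plane from your tracked object.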
You might also be interested in this recent related post:
Thank you for replying. I am working on the MCVE (which might remain the same) and a P3D depiction of the planar issue, since you said that would be most helpful. In the meantime I wanted to clarify the need from your response: to describe the equation of a 3D plane in 3D real-world coordinates from its 2D projections on the camera image. Unfortunately I am stuck with the model plane’s 2D point projections on the camera image. The plane I am using is a real model.
By “real model” do you mean an object in the world that you point a camera at? If it is a virtual model, like a .obj, then you could record the 3D values at draw time. That is much easier! Otherwise, if you really can’t do that, then you must treat it like a computer vision problem.
Given that you know a fixed length / height, it sounds like you are interested in affine transform / keystone correction type algorithms. The word “moving” in your illustration suggests that you might be trying to do something like QR code detection with a (virtual) camera – and/or mapping AR content onto a fiducial – and in addition to using OpenCV there might already be specific libraries for that. These applications can be fast and optimized for rectangles. But there are many versions of this problem, and it is hard to know without more details on your application.
By “real model” do you mean an object in the world that you point a camera at?
Yes. I know its width and height in meters as a plane at all times.
Please forgive some redundant material already addressed in the last question. I’ve been holed up studying the language (it’s new to me) and lost track of where I was in the question.
This is a real-world rigid planar object with augmented features for tracking. I’ve tried the MCVE and have uploaded it. I got sucked into Processing.org poring over the basics and could still only provide a very minimal example, but it’s a cool language : ) Referencing the sketch, you will notice that the scene is entirely working except for the final step, which is what to do with the 4 blob points (from a light source reflected on some pixels). The target points are circles, the light source points “x”s. If I could convert the dots in the camera back to real-world coordinates, I could use them in the equation of a 3D plane and find the intersection point. But first and foremost is the plane equation. Some of it got a little sloppy, but the “x”s belong to the corners of the plane and the points (p1–p4) belong to the pixels.
Thanks for this reply.
Is your sketch linked from somewhere? You refer to it, but I’m not seeing it in this thread.
I may have misunderstood some key elements here. I have a JPEG of the sketch, should be right above this post.
Ah, I see. Sorry, I was unclear by saying MCVE.
https://www.google.com/search?q=mcve
At this point I think I understand your problem by your description and screenshot, but an MCVE is a minimal sketch that actually runs. You paste the code into the forum and format it with the </>
button on the forum editor.
Questions:

can you use a marker on your object, e.g. an ArUco marker? https://docs.opencv.org/3.4.0/d5/dae/tutorial_aruco_detection.html If so, then built-in libraries – e.g. for OpenCV – can retrieve your object directly: https://stackoverflow.com/a/46370215/7207622
… see also http://answers.opencv.org/question/174902/arucomarkerswithopencvgetthe3dcornercoordinates/ 
can you use two cameras, rather than one? If so, then you could use OpenCV functions to estimate depth from stereo images, e.g. https://stackoverflow.com/a/15594544/7207622

You could try estimating using OpenCV’s solvePnP, which takes the known dimensions of your object as a set of points plus the pixel observations of those points, and returns the position and orientation – see (untested):
https://stackoverflow.com/questions/35051060/opencv310findingthe3dpositionofaflatplanewithknowndimensions
…in general, see related discussion of Pose Estimation
 https://stackoverflow.com/questions/46262307/howtogetthe3dpositionfroma2dwithopencv
 https://opencvpythontutroals.readthedocs.io/en/latest/py_tutorials/py_calib3d/py_pose/py_pose.html
 https://stackoverflow.com/questions/8927771/computingcameraposewithhomographymatrixbasedon4coplanarpoints/10781165#10781165
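Looping back to the original question – once solvePnP (or any of the above) yields the corners in world coordinates, the plane equation itself is just a cross product of two edge vectors. A minimal plain-Java sketch (the sample points are made up):

```java
public class PlaneFromPoints {
    // Returns {a, b, c, d} for the plane a*x + b*y + c*z + d = 0
    // through three non-collinear points p1, p2, p3.
    static double[] plane(double[] p1, double[] p2, double[] p3) {
        double[] u = { p2[0] - p1[0], p2[1] - p1[1], p2[2] - p1[2] };
        double[] v = { p3[0] - p1[0], p3[1] - p1[1], p3[2] - p1[2] };
        // normal = u x v
        double a = u[1] * v[2] - u[2] * v[1];
        double b = u[2] * v[0] - u[0] * v[2];
        double c = u[0] * v[1] - u[1] * v[0];
        // d chosen so p1 satisfies the equation
        double d = -(a * p1[0] + b * p1[1] + c * p1[2]);
        return new double[] { a, b, c, d };
    }

    public static void main(String[] args) {
        // Three corners of a hypothetical tilted rectangle
        double[] eq = plane(new double[] { 0, 0, 0 },
                            new double[] { 1, 0, 0 },
                            new double[] { 0, 1, 1 });
        System.out.printf("%.1fx + %.1fy + %.1fz + %.1f = 0%n", eq[0], eq[1], eq[2], eq[3]);
    }
}
```

With the plane in this form, the line-plane intersection above drops straight in: the normal is (a, b, c) and any recovered corner serves as the point on the plane.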
Thank you Jeremy – I already have a downhill march on this one, since my base system is OpenCV. It’s great. I will have to give up my quest for the ultimate lightweight, high-powered code and work towards it over time, which works.
You can use the solvePnP function to determine the quadrilateral’s position and orientation with respect to the camera. I am going to have to sift through the potential solutions for high performance. Right now I have 64 FPS with low latency.
All you need to have is
 3D coordinates of quadrilateral’s corners in world frame
 corresponding pixel coordinates
 Camera’s intrinsic parameters
You can directly use the solvePnP() function of OpenCV.
I’ve got these three…
 3D coordinates of quadrilateral’s corners in world frame
 corresponding pixel coordinates
 Camera’s intrinsic parameters
size(640, 360, P3D);
background(100);
translate(width/2, height/2);
rotateY(PI/8);
stroke(255);
noFill();
int obj1_vt1_x = -100, obj1_vt1_y = -100, obj1_vt1_z = 0, obj1_vt2_x = 100, obj1_vt2_y = -100, obj1_vt2_z = 0; // obj 1 = camera image rectangle
int obj1_vt3_x = 100, obj1_vt3_y = 100, obj1_vt3_z = 0, obj1_vt4_x = -100, obj1_vt4_y = 100, obj1_vt4_z = 0;
int obj2_vt1_x = -50, obj2_vt1_y = -50, obj2_vt1_z = 0, obj2_vt2_x = 50, obj2_vt2_y = -50, obj2_vt2_z = 0; // obj 2 = model plane
int obj2_vt3_x = 50, obj2_vt3_y = 50, obj2_vt3_z = 0, obj2_vt4_x = -50, obj2_vt4_y = 50, obj2_vt4_z = 0;
beginShape();
vertex(obj1_vt1_x, obj1_vt1_y, obj1_vt1_z);
vertex(obj1_vt2_x, obj1_vt2_y, obj1_vt2_z);
vertex(obj1_vt3_x, obj1_vt3_y, obj1_vt3_z);
vertex(obj1_vt4_x, obj1_vt4_y, obj1_vt4_z);
endShape(CLOSE);
textSize(22);
fill(255,0,0);
text("Camera Image x, y", 58, 100, 50); // z value specified
translate(100, 0,30);
fill(51);
beginShape(); //obj 2 = small model/rectangle
vertex(obj2_vt1_x, obj2_vt1_y, obj2_vt1_z);
vertex(obj2_vt2_x, obj2_vt2_y, obj2_vt2_z);
vertex(obj2_vt3_x, obj2_vt3_y, obj2_vt3_z);
vertex(obj2_vt4_x, obj2_vt4_y, obj2_vt4_z);
endShape(CLOSE);
textSize(22);
fill(0, 102, 153, 204);
stroke(255,0,0);
line(200, 64, 40, 50);
line(200, 66, 45, 50);
line(60, 74, 50, 50);
line(60, 76, 55, 50);
textSize(18);
fill(255,0,0);
//text("word", 12, 45, 30); // Specify a zaxis value
text("o", 200, 64); // Default depth, no z value specified
text("o", 200, 66); // Default depth, no z value specified
text("o", 60, 74); // Default depth, no z value specified
text("o", 60, 76); // Default depth, no z value specified
//line(obj2_vt4_x,obj2_vt4_y, obj2_vt4_z);
text("x", 40, 50);
text("x", 45, 50);
text("x", 50, 50);
text("x", 55, 50);
fill(0,0,0);
text("p1(r1,c1)", 220, 75); // Default depth, no zvalue specified
text("p2(r2,c2)", 220, 76); // Default depth, no zvalue specified
text("p3(r3,c3)", 90, 75); // Default depth, no zvalue specified
text("p4(r4,c4)", 90, 76, 40); // z value specified
fill(255,255,255);
textSize(15);
text("Plane Model", 40, 5, 20);
 3D coordinates of quadrilateral’s corners in world frame
Do I need to know the z coordinate for each of the quadrilateral points? I could set the z coordinates to 0 and the corner points to (right upper x, right upper y, 0) & (right lower x, right lower y, 0), etc. I know each coordinate from blob detection and each distance in between.
So how do I satisfy the quadrilateral point requirements? The model is exact, but how do I set up z? Setting z = the focal point?
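For what it’s worth, the z = 0 setup described above is the usual one for solvePnP: the planar model defines its own coordinate frame, so every corner sits at z = 0 in that frame, and solvePnP recovers the transform from the model frame to the camera. A plain-Java sketch of those object points (the 0.30 x 0.20 m dimensions are made up):

```java
public class ModelPoints {
    // Corner coordinates of a w x h planar model in its own frame.
    // The plane defines the frame, so z = 0 for every corner; no focal
    // length is involved here - that belongs in the camera intrinsics.
    static double[][] corners(double w, double h) {
        return new double[][] {
            { 0, 0, 0 },   // upper left
            { w, 0, 0 },   // upper right
            { w, h, 0 },   // lower right
            { 0, h, 0 }    // lower left
        };
    }

    public static void main(String[] args) {
        for (double[] c : corners(0.30, 0.20)) // e.g. a 30 x 20 cm plane
            System.out.printf("%.2f %.2f %.2f%n", c[0], c[1], c[2]);
    }
}
```

These four points, paired in the same order with the four blob pixel coordinates and the camera intrinsics, are exactly the three inputs listed earlier.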