Direction of Up vector with Camera, P3D

Hey everyone! Just looking at P3D here, and I noticed something that’s got me slightly confused. I am guessing it’s something simple, so I hope for some clarification.

I have posted a quick example at the bottom. In Processing, I know the viewport has the Y pointing down, and so the ortho/perspective projection calls invert the Y to bring things to a more common frame for people.

Now, what confuses me is why the camera up vector seems to be inverted. Let me explain:
If I set up to some direction, I expect that to be up in my camera-space, since the camera function call makes a basis.

So, if I set my up to be <0, 1, 0>, I expect the positive Y axis to match my camera up, pointing upwards in the scene.

However, in Processing, it’s the opposite:
If I set up to <0, 1, 0>, then the Y axis goes down the screen.
If I set up to <0, 0, 1>, then the Z axis goes down the screen, instead of up.

So the way it’s implemented, it’s almost as if we should call it the “down” vector.

See the code below, where the Y axis is clearly pointing down. I set the camera on the positive Z axis, looking at the origin. Y is down, X is right, so this gives me a left-handed system.

What am I missing?

void setup() {
  size(1200, 1200, P3D);
  colorMode(RGB, 1.0f);
  perspective(PI/3.0, 1, 2, 150);
  camera(0, 0, 5,
         0, 0, 0,
         0, 1, 0);
}

void draw() {
  background(1, 1, 1);
  stroke(0, 0, 0);
  fill(0, 0, 1);
  beginShape(QUADS);
  vertex(0, 0, 0);
  vertex(0, 1, 0);
  vertex(1, 1, 0);
  vertex(1, 0, 0);
  endShape();
}

A good night of sleep helped out; I think I see it now.

When peeking at the Processing code, I saw the Y inversions and thought they were correcting for the viewport. But it’s the opposite: they are what produce the y-down mapping. So I end up with a left-handed system, with Y going down.

That’s why, when I have an up vector of <0, 1, 0>, the positive Y axis goes down the screen: moving positively along the camera’s Y vector moves you down the screen.

If I want increases in camera Y to go up the screen, the up vector’s Y needs to go into negative territory.
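To convince myself of this, I wrote a small check in plain Java (not a Processing sketch). It rebuilds a gluLookAt-style basis (x = up cross z, with the backward axis z = eye - center) and assumes the one Processing-specific detail discussed here: the projection negates y to match the y-down viewport. The eye position and test point are made-up values for illustration.

```java
// Why up = <0, 1, 0> reads as "down": the camera basis honors the up vector,
// but a y-negating projection term (assumed here, matching Processing's
// y-down viewport) flips it on screen.
public class UpVectorDemo {
    // Camera-space y of a world point, for an eye at (0, 0, 5) looking at
    // the origin, so the backward axis is fixed at z = (0, 0, 1).
    static double cameraY(double upX, double upY, double upZ,
                          double px, double py, double pz) {
        double zx = 0, zy = 0, zz = 1;
        // Right axis x = up cross z.
        double xx = upY * zz - upZ * zy;
        double xy = upZ * zx - upX * zz;
        double xz = upX * zy - upY * zx;
        // Up axis y = z cross x.
        double yx = zy * xz - zz * xy;
        double yy = zz * xx - zx * xz;
        double yz = zx * xy - zy * xx;
        // Eye-space coordinate along the camera's y axis.
        double rx = px - 0, ry = py - 0, rz = pz - 5;
        return yx * rx + yy * ry + yz * rz;
    }

    public static void main(String[] args) {
        double py = 1.0; // a point one unit above the look target
        // The projection negates y, so only the sign of -camY matters here.
        double ndcY1 = -cameraY(0, 1, 0, 0, py, 0);
        double ndcY2 = -cameraY(0, -1, 0, 0, py, 0);
        // up = <0,1,0>: world +Y lands at negative NDC y, i.e. DOWN the screen.
        System.out.println(ndcY1 < 0); // prints true
        // up = <0,-1,0>: world +Y lands at positive NDC y, i.e. UP the screen.
        System.out.println(ndcY2 > 0); // prints true
    }
}
```

So with everything else held fixed, negating the up vector's Y is exactly what sends increasing camera Y up the screen.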


The last three of the nine parameters are the UP vector.

The way I see it, it has nothing to do with the coordinate system or with the movement of the camera.

Instead, it just determines whether the camera is upright, lying on its side, upside down, etc.
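A quick plain-Java check of that reading, using one common look-at construction (x = up cross z) with the eye on the +Z axis looking at the origin; the setup values are assumptions for illustration, not from the thread. Tilting the up vector only rolls the camera's basis; it never moves the eye or the look target.

```java
// The up vector controls roll only: the view axis comes from eye and
// center, and tilting up just rotates the right/up axes around it.
public class RollDemo {
    // Right axis x = up cross z, for a backward axis fixed at z = (0, 0, 1)
    // (eye on +Z looking at the origin; eye and center alone determine z).
    static double[] rightAxis(double ux, double uy, double uz) {
        return new double[] { uy * 1 - uz * 0, uz * 0 - ux * 1, ux * 0 - uy * 0 };
    }

    public static void main(String[] args) {
        // Tilt the up vector 45 degrees about the view axis: up = (sin t, cos t, 0).
        double t = Math.PI / 4;
        double[] x = rightAxis(Math.sin(t), Math.cos(t), 0);
        // The right axis rolls by the same 45 degrees, to (cos t, -sin t, 0)...
        System.out.println(Math.abs(x[0] - Math.cos(t)) < 1e-9); // prints true
        System.out.println(Math.abs(x[1] + Math.sin(t)) < 1e-9); // prints true
        // ...while the view axis, eye, and center are untouched.
    }
}
```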

Hi @jimyoung,

My tinkering for a left-handed y-up camera is below. Because the rest of the Processing library assumes y-down, though, the impact of a y-up camera ripples out to the appearance of text, images, and lights. Sounds like you’ve already looked at the source code, but for comparison, the original camera is here, and frustum, which is called by perspective, is here.


PGraphics3D rndr;

void settings() {
  size(512, 512, P3D);
}

void setup() {
  rndr = (PGraphics3D)getGraphics();
}

void draw() {
  float theta = TAU * mouseX / (float)width - PI;

  perspective(rndr);
  camera(rndr,
    sin(theta) * height, 0.0, cos(theta) * height,
    0.0, 0.0, 0.0,
    0.0, 1.0, 0.0);

  background(255.0);
  directionalLight(
    255.0, 245.0, 215.0,
    0.0, -0.6, 0.8);

  line(0.0, 0.0, 0.0, 100.0, 0.0, 0.0);
  line(0.0, 0.0, 0.0, 0.0, 100.0, 0.0);
  line(0.0, 0.0, 0.0, 0.0, 0.0, 100.0);

  rotateY(frameCount * 0.01);

  fill(0.0);
  text("Upside down", 150.0, 0.0);
}

void perspective(PGraphics3D rndr) {
  perspective(rndr, THIRD_PI, rndr.width / (float)rndr.height,
    0.01, 1000.0);
}

void perspective(
  PGraphics3D rndr,
  float fov,
  float aspect,
  float near,
  float far ) {

  rndr.cameraFOV = fov;
  rndr.cameraAspect = aspect;
  rndr.cameraNear = near;
  rndr.cameraFar = far;

  float cotfov = 1.0 / tan(fov * 0.5f);
  float d = 1.0 / (far - near);
  // Note the positive y term, where Processing's default flips y.
  rndr.projection.set(
    cotfov / aspect, 0.0, 0.0, 0.0,
    0.0, cotfov, 0.0, 0.0,
    0.0, 0.0, (far + near) * -d, (near + near) * far * -d,
    0.0, 0.0, -1.0, 0.0);
}

void camera(
  PGraphics3D rndr,
  float xEye, float yEye, float zEye,
  float xCenter, float yCenter, float zCenter) {

  camera(rndr,
    xEye, yEye, zEye,
    xCenter, yCenter, zCenter,
    0.0, 1.0, 0.0);
}

void camera(
  PGraphics3D rndr,
  float xEye, float yEye, float zEye,
  float xCenter, float yCenter, float zCenter,
  float xUp, float yUp, float zUp) {

  // Subtract center from eye to get forward.
  // Camera orbits target.
  float kx = xEye - xCenter;
  float ky = yEye - yCenter;
  float kz = zEye - zCenter;

  // Normalize forward. mag(v) = sqrt(dot(v, v)).
  float kmagsq = kx * kx + ky * ky + kz * kz;
  float kmag = sqrt(kmagsq);

  kx /= kmag;
  ky /= kmag;
  kz /= kmag;

  // Avoid case where forward parallel to reference up.
  float dotp = kx * xUp + ky * yUp + kz * zUp;
  if (dotp < -0.9999 || dotp > 0.9999) {
    // Degenerate case; bail out rather than divide by zero below.
    return;
  }

  // Cross forward with reference up.
  float ix = ky * zUp - kz * yUp;
  float iy = kz * xUp - kx * zUp;
  float iz = kx * yUp - ky * xUp;

  // Normalize right.
  float imagsq = ix * ix + iy * iy + iz * iz;
  float imag = sqrt(imagsq);

  ix /= imag;
  iy /= imag;
  iz /= imag;

  // Cross right with forward.
  float jx = iy * kz - iz * ky;
  float jy = iz * kx - ix * kz;
  float jz = ix * ky - iy * kx;

  // Normalize up.
  float jmagsq = jx * jx + jy * jy + jz * jz;
  float jmag = sqrt(jmagsq);

  jx /= jmag;
  jy /= jmag;
  jz /= jmag;

  // Set loose renderer camera variables.
  rndr.cameraX = xEye;
  rndr.cameraY = yEye;
  rndr.cameraZ = zEye;
  //rndr.eyeDist = kmag; // eyeDist not visible.

  // Set renderer camera inverse: basis as columns, plus eye translation.
  rndr.cameraInv.set(
    ix, jx, kx, xEye,
    iy, jy, ky, yEye,
    iz, jz, kz, zEye,
    0.0, 0.0, 0.0, 1.0);

  // Set the camera: basis as rows, plus -eye translation.
  rndr.camera.set(
    ix, iy, iz, -xEye * ix - yEye * iy - zEye * iz,
    jx, jy, jz, -xEye * jx - yEye * jy - zEye * jz,
    kx, ky, kz, -xEye * kx - yEye * ky - zEye * kz,
    0.0, 0.0, 0.0, 1.0);

  // Update the combined projection model view.
  rndr.modelview.set(rndr.camera);
  rndr.modelviewInv.set(rndr.cameraInv);
  rndr.projmodelview.set(rndr.projection);
  rndr.projmodelview.apply(rndr.modelview);
}
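As a sanity check on the camera and camera-inverse matrices, here is a standalone plain-Java version of the same cross-product construction (the eye, center, and up values are arbitrary test inputs, not from the thread). For any orthonormal basis, the two matrices should multiply to the identity, regardless of handedness.

```java
// Verifies that the look-at "camera" (basis rows, -eye translation) and
// "cameraInv" (basis columns, +eye translation) are true inverses.
public class LookAtCheck {
    // Largest absolute deviation of (camera * cameraInv) from the identity.
    static double maxIdentityError() {
        double xEye = 3, yEye = 2, zEye = 5;  // arbitrary eye...
        double xC = 0.5, yC = -1, zC = 0;     // ...and center
        double xUp = 0, yUp = 1, zUp = 0;

        // Forward (backward, really): k = eye - center, normalized.
        double kx = xEye - xC, ky = yEye - yC, kz = zEye - zC;
        double km = Math.sqrt(kx * kx + ky * ky + kz * kz);
        kx /= km; ky /= km; kz /= km;

        // Right: i = k cross up, normalized.
        double ix = ky * zUp - kz * yUp;
        double iy = kz * xUp - kx * zUp;
        double iz = kx * yUp - ky * xUp;
        double im = Math.sqrt(ix * ix + iy * iy + iz * iz);
        ix /= im; iy /= im; iz /= im;

        // Up: j = i cross k (already unit length, since i and k are).
        double jx = iy * kz - iz * ky;
        double jy = iz * kx - ix * kz;
        double jz = ix * ky - iy * kx;

        double[] cam = {
            ix, iy, iz, -xEye * ix - yEye * iy - zEye * iz,
            jx, jy, jz, -xEye * jx - yEye * jy - zEye * jz,
            kx, ky, kz, -xEye * kx - yEye * ky - zEye * kz,
            0, 0, 0, 1 };
        double[] inv = {
            ix, jx, kx, xEye,
            iy, jy, ky, yEye,
            iz, jz, kz, zEye,
            0, 0, 0, 1 };

        // Row-major 4x4 product cam * inv, compared against the identity.
        double maxErr = 0;
        for (int r = 0; r < 4; ++r) {
            for (int c = 0; c < 4; ++c) {
                double s = 0;
                for (int k = 0; k < 4; ++k) s += cam[4 * r + k] * inv[4 * k + c];
                maxErr = Math.max(maxErr, Math.abs(s - (r == c ? 1 : 0)));
            }
        }
        return maxErr;
    }

    public static void main(String[] args) {
        System.out.println(maxIdentityError() < 1e-12); // prints true
    }
}
```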

Thanks, Jeremy, for the help.
I ended up solving my problem by just treating the UP vector as a down vector, and being aware of my coordinate system.

The good thing is that, during my confusion, I was worried Processing had an inconsistency. It doesn’t; it’s just that UP meaning down on the screen is slightly confusing.

I understand that Processing uses a downward Y, and that isn’t going to change. I also think my uses of it are niche, and I’m not the target audience.

The long-term solution I came up with would be a way to expose the viewport matrix, so we could change the mapping there and everything else would follow suit. I haven’t looked at the code to see whether this is possible, or even consistent across platforms, but if the platform issues have been wrapped to provide a consistent viewport, then adding a simple transform at that level (the lowest in the Processing pipeline we can reach) would, I think, be a solution.
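The matrix side of that idea can be sketched in plain Java: flipping y at the lowest level amounts to pre-multiplying the pipeline by diag(1, -1, 1, 1), which negates exactly one row and leaves everything else untouched, so everything downstream inherits the remapping. Nothing here touches Processing's API; whether a hook for it exists is the open question. The sample matrix values are arbitrary.

```java
// Pre-multiplying a projection-style matrix P by F = diag(1, -1, 1, 1)
// yields P with its y row negated: a single flip at the front of the
// pipeline re-maps y for everything after it.
public class YFlip {
    // Row-major 4x4 matrix product a * b.
    static double[] mul(double[] a, double[] b) {
        double[] out = new double[16];
        for (int r = 0; r < 4; ++r)
            for (int c = 0; c < 4; ++c) {
                double s = 0;
                for (int k = 0; k < 4; ++k) s += a[4 * r + k] * b[4 * k + c];
                out[4 * r + c] = s;
            }
        return out;
    }

    public static void main(String[] args) {
        double[] flip = {
            1, 0, 0, 0,
            0, -1, 0, 0,
            0, 0, 1, 0,
            0, 0, 0, 1 };
        double[] p = { // an arbitrary perspective-style matrix
            1.5, 0, 0, 0,
            0, 1.5, 0, 0,
            0, 0, -1.2, -2.2,
            0, 0, -1, 0 };
        double[] fp = mul(flip, p);
        boolean ok = true;
        for (int c = 0; c < 16; ++c) {
            double expect = (c / 4 == 1) ? -p[c] : p[c];
            ok &= fp[c] == expect;
        }
        System.out.println(ok); // prints true: F*P is P with the y row negated
    }
}
```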

I can’t see most of the Processing community caring about this, though. So I may look at doing it, but keep it as something in PGraphics for expert use, not something exposed to the average user.