Head-tracking gradient brushes

Hey everybody,

I'm quite new to Processing. I've done one very simple project before involving Processing and a Kinect, where I only used an image sequence that changed with the user's distance.

The problem I have now is with this project, where I'm trying to move a random gradient brush via head tracking on the Kinect. I've written a piece of code that follows mouseX and mouseY, but how do I change these coordinates so that they track the user's head on the Kinect instead?

The code:

// All brushes that get painted every frame
ArrayList<Brush> brushes;

void setup() {
  size(displayWidth, displayHeight);

  brushes = new ArrayList<Brush>();
  // Create ten brushes at random positions with random colours
  for (int i = 0; i < 10; i++) {
    brushes.add(new Brush());
  }
}

void draw() {
  // The font is re-created every frame here; createFont() could also be done once in setup()
  PFont font;
  font = createFont("Montserrat-Regular.otf", 12);
  textFont(font);
  fill(0);
  text("SS20", 1250, 800);

// PImage img;
// img = loadImage("aura.jpg");
// image(img, 0, 0);
//  tint(255,20); 

  // Semi-transparent white rectangle that slowly fades out earlier frames
  fill(255, 7);
  rect(0, 0, width, height);
  for (Brush brush : brushes) {
    brush.paint();
  }
}

void mouseMoved() {
  // Re-running setup() creates a fresh set of brushes whenever the mouse moves
  setup();
}

class Brush {
  float angle;
  int components[];
  float x, y;
  color clr;

  Brush() {
    angle = random(TWO_PI);
    x = random(width);
    y = random(height);
    clr = color(random(255), random(255), random(255), 15);
    components = new int[10];
    for (int i = 0; i < 10; i++) {
      // Note: random(1, 1) always returns 1, so every component ends up as 1
      components[i] = int(random(1, 1));
    }
  }

  void paint() {
    float a = 0;
    float r = 0;
    float x1 = x;
    float y1 = y;
    float u = random(0.5, 1);

    fill(clr);
    noStroke();    

    beginShape();
    while (a < TWO_PI) {
      vertex(x1, y1); 
      float v = random(0.95, 1);
      // The blob is centred on the mouse; this is where a tracked
      // head position would have to replace mouseX / mouseY
      x1 = mouseX + r * cos(angle + a) * u * v;
      y1 = mouseY + r * sin(angle + a) * u * v;
      a += PI / 150;
      for (int i = 0; i < 2; i++) {
        r += sin(a * components[i]);
      }
    }
    endShape(CLOSE);

    // Turn 90 degrees if the brush centre has wandered off-screen
    if (x < 0 || x > width || y < 0 || y > height) {
      angle += HALF_PI;
    }

    // Multiplied by 0, so the brush centre never actually drifts on its own
    x += 0 * cos(angle);
    y += 0 * sin(angle);
    angle += random(-0.01, 0.01);
  }
}

The way I would approach this is to use OpenCV for Processing. Take a look at the face-detection example; I think the LiveCamTest example also uses face detection. It gives you rectangles, so you can reduce each one to a single point by taking the centre of the rectangle, i.e. x plus half the width and y plus half the height.
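A rough sketch of that idea, assuming the OpenCV for Processing and Video libraries; headX/headY are just placeholder names your brush code could read instead of mouseX/mouseY:

import gab.opencv.*;
import processing.video.*;
import java.awt.Rectangle;

Capture video;
OpenCV opencv;
float headX, headY;   // placeholder head position

void setup() {
  size(640, 480);
  video = new Capture(this, 640, 480);
  opencv = new OpenCV(this, 640, 480);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  video.start();
}

void draw() {
  if (video.available()) {
    video.read();
  }
  opencv.loadImage(video);
  Rectangle[] faces = opencv.detect();
  if (faces.length > 0) {
    // Centre of the first detected face rectangle
    headX = faces[0].x + faces[0].width / 2.0;
    headY = faces[0].y + faces[0].height / 2.0;
  }
  image(video, 0, 0);
  noFill();
  stroke(0, 255, 0);
  ellipse(headX, headY, 20, 20);
}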


But why am I not able to do this with the Kinect?

Sorry it took me a while to follow up. If you are using Windows and the KinectPV2 library, then there are a number of ways to approach head tracking. You can use the skeleton tracking or the face tracking examples as a guide for where to go.
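For example, here is a rough, untested sketch based on the KinectPV2 skeleton examples; again, headX/headY are just placeholder names the brush code could read instead of mouseX/mouseY:

import KinectPV2.*;

KinectPV2 kinect;
float headX, headY;   // placeholder head position in colour-image coordinates

void setup() {
  size(1920, 1080);
  kinect = new KinectPV2(this);
  kinect.enableColorImg(true);
  kinect.enableSkeletonColorMap(true);
  kinect.init();
}

void draw() {
  image(kinect.getColorImage(), 0, 0);

  ArrayList<KSkeleton> skeletons = kinect.getSkeletonColorMap();
  for (KSkeleton skeleton : skeletons) {
    if (skeleton.isTracked()) {
      KJoint[] joints = skeleton.getJoints();
      // Head joint mapped into the colour image
      headX = joints[KinectPV2.JointType_Head].getX();
      headY = joints[KinectPV2.JointType_Head].getY();
    }
  }

  fill(255, 0, 0);
  noStroke();
  ellipse(headX, headY, 30, 30);
}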

As far as I know this kind of thing isn't available in Open Kinect for Processing, because it relies on the Windows SDK. You could still try feeding the Kinect image into OpenCV, but I don't know how well OpenCV's face detection would handle it; it might have trouble with a grayscale depth image.
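If you go that route, feeding the RGB camera image (rather than the depth image) into OpenCV is probably the safer bet. An untested sketch of what that could look like, assuming a Kinect v1 with Open Kinect for Processing plus OpenCV for Processing:

import org.openkinect.processing.*;
import gab.opencv.*;
import java.awt.Rectangle;

Kinect kinect;    // Kinect v1 via Open Kinect for Processing
OpenCV opencv;
float headX, headY;

void setup() {
  size(640, 480);
  kinect = new Kinect(this);
  kinect.initVideo();                        // RGB camera, not the depth stream
  opencv = new OpenCV(this, 640, 480);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
}

void draw() {
  PImage rgb = kinect.getVideoImage();
  image(rgb, 0, 0);

  opencv.loadImage(rgb);
  Rectangle[] faces = opencv.detect();       // face detection on the RGB frame
  if (faces.length > 0) {
    headX = faces[0].x + faces[0].width / 2.0;
    headY = faces[0].y + faces[0].height / 2.0;
  }

  noFill();
  stroke(0, 255, 0);
  ellipse(headX, headY, 20, 20);
}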
