Kinect camera - MIRROR using Type!


I am at a block with my code, and I don’t know where to go from here. I am creating a mirror out of type for a current project, and I managed to do so following the Coding Train’s point cloud tutorial and substituting the points for letters.

I am trying to take this concept further.

I would like to transform different properties of the text depending on variables the Kinect can access. One of the issues I am having is that I can't find a reference for everything you can do with the Kinect. For example, is there a way to use the speed of the user's movement as a variable? That would let me change the font or size based on how fast the user is moving through the Kinect's field of view. That's just one example of the sort of step I want to take with this project.
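Something like this is what I'm imagining (a minimal sketch of the mapping step only; `speed` is a placeholder for whatever movement value I can eventually get out of the Kinect). `mapRange()` and `clamp()` just mirror Processing's built-in `map()` and `constrain()`, so in a sketch you'd call those directly:

```java
// Hypothetical helper: turn a movement speed (meters/second) into a text size.
// In a Processing sketch this would just be:
//   textSize(constrain(map(speed, 0, 2, 12, 96), 12, 96));
public class SpeedToSize {
  // Linear re-mapping, same formula as Processing's map()
  static float mapRange(float v, float inLo, float inHi, float outLo, float outHi) {
    return outLo + (outHi - outLo) * ((v - inLo) / (inHi - inLo));
  }

  // Clamp to a range, same as Processing's constrain()
  static float clamp(float v, float lo, float hi) {
    return Math.max(lo, Math.min(hi, v));
  }

  // 0 m/s -> 12 px, 2 m/s (a fast walk, an assumed ceiling) -> 96 px;
  // anything outside that range is clamped
  static float sizeForSpeed(float speed) {
    return clamp(mapRange(speed, 0, 2, 12, 96), 12, 96);
  }

  public static void main(String[] args) {
    System.out.println(sizeForSpeed(0)); // prints 12.0
    System.out.println(sizeForSpeed(1)); // prints 54.0
    System.out.println(sizeForSpeed(5)); // prints 96.0 (clamped)
  }
}
```

The open question for me is where `speed` itself comes from.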

Here is my code:

import org.openkinect.freenect.*;
import org.openkinect.processing.*;

// Kinect Library object
Kinect kinect;

// Angle for rotation
float a = 0;

// We'll use a lookup table so that we don't have to repeat the math over and over
float[] depthLookUp = new float[2048];

void setup() {
  // Rendering in P3D
  size(960, 720, P3D);
  kinect = new Kinect(this);

  // Lookup table for all possible depth values (0 - 2047)
  for (int i = 0; i < depthLookUp.length; i++) {
    depthLookUp[i] = rawDepthToMeters(i);
  }
}

void draw() {
  // Clear the frame each time so old text doesn't smear
  background(0);
  // Get the raw depth as array of integers
  int[] depth = kinect.getRawDepth();

  // We're only going to calculate and draw every 15th pixel
  int skip = 15;

  // Translate and rotate
  translate(width/2, height/2, -50);
  rotateY(a);
  // Nested for loop that steps through the x and y pixels and, for those less
  // than the maximum threshold and at every skipping point, the offset is
  // calculated to map them onto a plane instead of just a line
  for (int x = 0; x < kinect.width; x += skip) {
    for (int y = 0; y < kinect.height; y += skip) {
      int offset = x + y*kinect.width;

      // Convert kinect data to world xyz coordinate
      int rawDepth = depth[offset];
      PVector v = depthToWorld(x, y, rawDepth);

      // Scale up by 1000
      float factor = 1000;
      // Isolate each point's transform so translations don't accumulate
      pushMatrix();
      translate(v.x*factor, v.y*factor, factor-v.z*factor);
      // Draw a character instead of a point
      text("[]", 0, 0);
      //point(0, 0);
      popMatrix();
    }
  }

  // Rotate
  a += 0.015f;
}

// These functions come from:
float rawDepthToMeters(int depthValue) {
  if (depthValue < 2047) {
    return (float)(1.0 / ((double)(depthValue) * -0.0030711016 + 3.3309495161));
  }
  return 0.0f;
}

// Only needed to make sense of the output depth values from the Kinect
PVector depthToWorld(int x, int y, int depthValue) {

  final double fx_d = 1.0 / 5.9421434211923247e+02;
  final double fy_d = 1.0 / 5.9104053696870778e+02;
  final double cx_d = 3.3930780975300314e+02;
  final double cy_d = 2.4273913761751615e+02;

// Build the result vector to give each point its position in three-dimensional space
  PVector result = new PVector();
  double depth =  depthLookUp[depthValue];//rawDepthToMeters(depthValue);
  result.x = (float)((x - cx_d) * depth * fx_d);
  result.y = (float)((y - cy_d) * depth * fy_d);
  result.z = (float)(depth);
  return result;
}
// Daniel Shiffman
// Kinect Point Cloud example


I am using the 1473 model of the Kinect.


If you are using this Kinect object from this library:

…then, browsing through the library source, it doesn’t appear to have any such advanced functions – just ways of accessing the underlying freenect object contained inside the Kinect object. I believe that the main way to get such things is to create a point cloud, then derive your data from that cloud.
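As a rough sketch of deriving one such statistic from the cloud: track the centroid of all "near" pixels in the raw depth frame, and measure how far that centroid moves between frames. This is plain Java so it could drop into a sketch as a helper class; the depth threshold and the 2047 "no reading" sentinel are assumptions you'd tune for your setup.

```java
// Sketch of a "user speed" derived from the raw depth frame: centroid of all
// pixels closer than a raw-depth threshold, plus how far it moved since the
// previous frame. Threshold (e.g. 700) is an assumption to tune.
public class CentroidSpeed {
  final int w, h;
  float lastX = -1, lastY = -1;   // centroid from the previous frame

  CentroidSpeed(int w, int h) { this.w = w; this.h = h; }

  // Centroid (in pixel coordinates) of all raw depths below `threshold`;
  // returns null when nothing is close enough. 2047 means "no reading".
  float[] centroid(int[] depth, int threshold) {
    long sx = 0, sy = 0;
    int n = 0;
    for (int y = 0; y < h; y++) {
      for (int x = 0; x < w; x++) {
        int d = depth[x + y * w];
        if (d > 0 && d < threshold) { sx += x; sy += y; n++; }
      }
    }
    return n == 0 ? null : new float[] { (float) sx / n, (float) sy / n };
  }

  // Pixels the centroid moved since the previous call (a per-frame speed).
  // Call once per draw() with the array from kinect.getRawDepth().
  float update(int[] depth, int threshold) {
    float[] c = centroid(depth, threshold);
    if (c == null) return 0;
    float speed = 0;
    if (lastX >= 0) speed = (float) Math.hypot(c[0] - lastX, c[1] - lastY);
    lastX = c[0];
    lastY = c[1];
    return speed;
  }
}
```

It's crude (a second person in frame will drag the centroid around), but it gives you a single speed number per frame to feed into text size, font choice, and so on.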

Or, get a skeleton from SimpleOpenNI and calculate your derived statistics from the skeleton data.
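Whichever source the position comes from (a cloud centroid or a skeleton joint), raw Kinect positions jitter frame to frame, so you'd usually smooth the derived speed before mapping it to anything visual. A minimal sketch using an exponential moving average (the class and its names are illustrative, not part of any library):

```java
// Hypothetical helper: per-frame speed of one tracked point (e.g. a hand
// joint position), smoothed with an exponential moving average because raw
// Kinect positions jitter.
public class JointSpeed {
  float px, py, pz;        // previous position
  boolean hasPrev = false;
  float smoothed = 0;
  final float alpha;       // 0..1; higher = less smoothing, more responsive

  JointSpeed(float alpha) { this.alpha = alpha; }

  // Feed one position per frame; dt is the frame time in seconds.
  // Returns the smoothed speed in (position units) per second.
  float update(float x, float y, float z, float dt) {
    if (hasPrev && dt > 0) {
      float dx = x - px, dy = y - py, dz = z - pz;
      float raw = (float) Math.sqrt(dx * dx + dy * dy + dz * dz) / dt;
      smoothed += alpha * (raw - smoothed);
    }
    px = x; py = y; pz = z;
    hasPrev = true;
    return smoothed;
  }
}
```

In a sketch you'd construct it once, then call `update(...)` every frame with the joint position and roughly `1.0 / frameRate` as `dt`.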

Previous discussions:

…and for a different option see also the RealSense hardware / library: