Use Sensors to Play a Video Using Gestures on macOS and Windows

We are a few beginners at a school, and we want to use a Kinect to tell the computer to play a video or a bit of animation based on the movements of the person in front of it. We are using the Open Kinect for Processing library on macOS, trying to combine Daniel Shiffman’s AveragePointTracking example with code that plays a video. We have a Kinect sensor and an Intel RealSense sensor available to us. We could do this under Windows, but would prefer to do it under macOS.

We are currently stuck on the part where we tell the sketch to play a video. Can anyone give us an idea of how to combine the video-playing code with the average point tracking?

Any alternative suggestions would be appreciated. Thank you.

Hi, welcome to the forum!

Do you have your code? Even if it doesn’t do what you want yet, it helps to give a concrete idea of where you are.

Yes, thank you for your response; here is our code.
We are confused about how to combine the two sketches, that is, how to make the sketch play the video when the tracker detects something.

import org.openkinect.freenect.*;
import org.openkinect.freenect2.*;
import org.openkinect.processing.*;
import org.openkinect.tests.*;

class KinectTracker {

  // Depth threshold
  int threshold = 745;

  // Raw location
  PVector loc;

  // Interpolated location
  PVector lerpedLoc;

  // Depth data
  int[] depth;

  // What we'll show the user
  PImage display;
  //Kinect2 class
  Kinect2 kinect2;
  KinectTracker(PApplet pa) {
    // Enable Kinect2
    kinect2 = new Kinect2(pa);
    kinect2.initDepth();
    kinect2.initDevice();
    // Make a blank image
    display = createImage(kinect2.depthWidth, kinect2.depthHeight, RGB);
    // Set up the vectors
    loc = new PVector(0, 0);
    lerpedLoc = new PVector(0, 0);
  }

  void track() {
    // Get the raw depth as array of integers
    depth = kinect2.getRawDepth();

    // Being overly cautious here
    if (depth == null) return;

    float sumX = 0;
    float sumY = 0;
    float count = 0;

    for (int x = 0; x < kinect2.depthWidth; x++) {
      for (int y = 0; y < kinect2.depthHeight; y++) {
        // Mirroring the image
        int offset = kinect2.depthWidth - x - 1 + y * kinect2.depthWidth;
        // Grabbing the raw depth
        int rawDepth = depth[offset];

        // Testing against threshold
        // Testing against threshold
        if (rawDepth > 0 && rawDepth < threshold) {
          sumX += x;
          sumY += y;
          count++;
        }
      }
    }

    // As long as we found something
    if (count != 0) {
      loc = new PVector(sumX/count, sumY/count);
    }

    // Interpolating the location, doing it arbitrarily for now
    lerpedLoc.x = PApplet.lerp(lerpedLoc.x, loc.x, 0.3f);
    lerpedLoc.y = PApplet.lerp(lerpedLoc.y, loc.y, 0.3f);
  }

  PVector getLerpedPos() {
    return lerpedLoc;
  }

  PVector getPos() {
    return loc;
  }

  void display() {
    PImage img = kinect2.getDepthImage();

    // Being overly cautious here
    if (depth == null || img == null) return;

    // Going to rewrite the depth image to show which pixels are in threshold
    // A lot of this is redundant, but this is just for demonstration purposes
    display.loadPixels();
    for (int x = 0; x < kinect2.depthWidth; x++) {
      for (int y = 0; y < kinect2.depthHeight; y++) {
        // Mirroring the image
        int offset = (kinect2.depthWidth - x - 1) + y * kinect2.depthWidth;
        // Raw depth
        int rawDepth = depth[offset];
        int pix = x + y*display.width;
        if (rawDepth > 0 && rawDepth < threshold) {
          // A red color instead
          display.pixels[pix] = color(150, 50, 50);
        } else {
          display.pixels[pix] = img.pixels[offset];
        }
      }
    }
    display.updatePixels();

    // Draw the image
    image(display, 0, 0);
  }


  int getThreshold() {
    return threshold;
  }

  void setThreshold(int t) {
    threshold = t;
  }
}

KinectTracker tracker;

void setup() {
  size(640, 520);

  tracker = new KinectTracker(this);
}

void draw() {
  background(255);

  // Run the tracking analysis
  tracker.track();
  // Show the image
  tracker.display();

  // Let's draw the raw location
  PVector v1 = tracker.getPos();
  fill(50, 100, 250, 200);
  ellipse(v1.x, v1.y, 20, 20);

  // Let's draw the "lerped" location
  PVector v2 = tracker.getLerpedPos();
  fill(100, 250, 50, 200);
  ellipse(v2.x, v2.y, 20, 20);

  // Display some info
  fill(0);
  int t = tracker.getThreshold();
  text("threshold: " + t + "    " + "framerate: " + int(frameRate) + "    " +
    "UP increase threshold, DOWN decrease threshold", 10, 500);
}

// Adjust the threshold with key presses
void keyPressed() {
  int t = tracker.getThreshold();
  if (key == CODED) {
    if (keyCode == UP) {
      t += 5;
    } else if (keyCode == DOWN) {
      t -= 5;
    }
    tracker.setThreshold(t);
  }
}

And the video code:


import processing.video.*;

Movie video;

void setup() {
  size(640, 360);
  video = new Movie(this, "");
  video.loop();
}

// Called whenever a new frame is available
void movieEvent(Movie video) {
  video.read();
}

void draw() {
  image(video, 0, 0);
}
Sorry, the formatting is weird; it is my first time posting on the forum. I apologize.

Could you edit the post and use the </> button in the editor to format the code?

While waiting for the formatting: I see that you currently have two separate sketches. Let's start with an easier task. Instead of the Kinect, can you make the video play in reaction to the mouse position? It may sound silly, but once you can do that, it is easy to replace the mouse position with the Kinect's tracked position.

Yes, we have learned how to edit the post like that now. As we said, we are just a few beginners; both pieces of code were found on open-source platforms. We tried to use the mouse to control it, but we failed. If you can, sir, could you send us an example of the mouse-control code? That would be really appreciated.

I don’t know whether this is homework or a personal project, but I don’t think we can code something “for you” almost from scratch (maybe someone will :slight_smile: but I personally think it’s not a good idea, because if we did, people would abuse it by asking “can you make this for me?”). You can always learn Processing by yourself. And it’s fine if you “fail”; you just have to ask questions here again, with your code and a description of what you tried and how it failed.

Okay, mister, that is totally fine; we will keep trying to figure it out ourselves. Thank you for your reply.
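
For anyone who finds this thread later, here is one possible minimal sketch of the mouse-based version suggested above; it is only a sketch, not the original poster's solution. It assumes the Processing Video library is installed and that a clip named `myVideo.mp4` (a placeholder name; use your own file) sits in the sketch's `data` folder. The movie plays while the mouse is in the left half of the window and pauses otherwise. Once this works, swapping `mouseX` for the x coordinate returned by the tracker's `getLerpedPos()` should give the Kinect-driven behaviour.

```
import processing.video.*;

Movie video;

void setup() {
  size(640, 360);
  // "myVideo.mp4" is a placeholder; put your own clip in the sketch's data folder
  video = new Movie(this, "myVideo.mp4");
  video.loop();
}

// Called by the Video library whenever a new frame is available
void movieEvent(Movie m) {
  m.read();
}

void draw() {
  background(0);
  image(video, 0, 0);

  // Play while the mouse is in the left half of the window, pause otherwise.
  // To drive this from the Kinect instead, replace mouseX with something like
  // tracker.getLerpedPos().x from the KinectTracker sketch above.
  if (mouseX < width / 2) {
    video.play();
  } else {
    video.pause();
  }
}
```

This follows the same pause/play pattern as the Video library's own examples; the condition on `mouseX` is the only part you need to change to react to a sensor instead of the mouse.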