Kinect and Processing 3 thesis project

Hi! I'm attempting to produce a fairly simple art piece for my senior thesis as a visual artist. What I basically want to do is project a video onto the wall and use the Kinect as a proximity sensor (it's all I have available at the moment), so that the audience's proximity alters the speed, and possibly the colors, of the video. The Kinect I have is a V1 model 1414, and I got it working in Processing, including editing the minimum and maximum depth thresholds.
I was also able to play a video from the laptop and alter its speed (I didn't get to color yet), but I am finding it difficult to connect the two elements: if the Kinect sees a depth reading within the threshold, the program should speed up the video, and if there are no objects in range, the video should continue at normal speed.
What is happening now is that the video plays at normal speed when I am closer than the given threshold; otherwise it doesn't even play.
I am a beginner in Processing; in fact, the only coding experience I have was with C++ in middle school (safe to say I don't remember much).
If anyone can help I would be eternally grateful.
I apologize for the messy code; as I mentioned, I have no experience with this and am trying to learn on the job…

import processing.video.*;
import org.openkinect.processing.*;

Movie movie;
Kinect kinect;

float minThresh = 480;
float maxThresh = 830;

PImage img;

void setup() {
  size(720, 480);
  frameRate(24);
  movie = new Movie(this, "vid.MP4");
  movie.loop();
  movie.speed(1.0);

  kinect = new Kinect(this);
  kinect.initDepth();
}

void draw() {
  background(0);
  if (movie.available()) {
    movie.read();

    int[] depth = kinect.getRawDepth();

    for (int x = 0; x < kinect.width; x++) {
      for (int y = 0; y < kinect.height; y++) {
        int offset = x + y * kinect.width;
        int d = depth[offset];

        if (d > minThresh && d < maxThresh) {
          movie.speed(10.0);
        } else {
          movie.speed(1.0);
        }
      }
    }
    image(movie, 0, 0);
  }
}

Hi @nourabdelbaky,
I know the Kinect is what you have at the moment, but I don't think it is the right tool for this project. Of course you can do it with the Kinect, but it seems more complicated than you need.
With a couple of cheap ultrasonic sensors and an Arduino (or Arduino-compatible board) you would get the same result, as far as I understand your idea.

See this (as a silly example)

I think the Kinect will work fine (no experience with the Kinect myself). You could try an Arduino later, but in the end the Kinect will give you richer depth data, and I would bet (without looking at the specs) that the Kinect has a better spatial range.

The problem with the code above is that you are adjusting the speed of the video once per depth pixel, so hundreds of thousands of times every draw cycle. This is inefficient, and the speed that finally sticks is simply whichever value the last pixel happened to set. There are two improvements I will suggest. I am providing only the draw() function:

1. Extract the closest detected object from the depth map first, and only then change the video speed.

You need to handle cases where this value is not found, for instance when there is no subject within the range of the Kinect unit. What value does the Kinect return then? You need to consult the specs of your unit and adjust the code accordingly; it is not important for this demo (a variant that skips invalid readings is sketched after the code below).

final float MINTHRESH = 480;
final float MAXTHRESH = 830;

float memoryMinDista = PApplet.MAX_FLOAT;
float movieSpeed = 1.0;

void draw() {
  background(0);

  if (movie.available()) {
    movie.read();

    float dista = proximityDetector();

    if (dista > MINTHRESH && dista < MAXTHRESH) {
      memoryMinDista = dista;
      movieSpeed = 10.0;
      //IDEA: movieSpeed = map(dista, min_dista, max_dista, min_video_speed, max_video_speed);
      //NOTE: If depth distance is in centimeters and your min/max distance is 30/300 then
      //EXAMPLE: movieSpeed = map(dista, 30, 300, 1.0, 10.0);
    } else {
      movieSpeed = 1.0;  //No subject in range: fall back to normal speed
    }
    movie.speed(movieSpeed);

    image(movie, 0, 0);
  }
}

/*
 * Returns the closest detected value in the depth array
 * @output Minimum value in native depth units
 */
float proximityDetector() {
  int[] depth = kinect.getRawDepth();

  //https://github.com/processing/processing/blob/master/core/src/processing/core/PConstants.java#L100
  float minDepth = PApplet.MAX_FLOAT;

  for (int x = 0; x < kinect.width; x++) {
    for (int y = 0; y < kinect.height; y++) {
      int offset = x + y * kinect.width;

      if (depth[offset] < minDepth) {
        minDepth = depth[offset];
      }
    }
  }

  return minDepth;
}
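
As a side note on the "value not found" case mentioned above: the sketch below is a minimal variant of proximityDetector(), under the assumption (verify against your unit's specs) that the Kinect v1 reports pixels with no valid reading as raw values of 0 or 2047. It skips those samples, so when nothing valid is in view it returns PApplet.MAX_FLOAT and the threshold test in draw() falls through to normal speed.

/*
 * Like proximityDetector(), but skips samples assumed to be invalid.
 * ASSUMPTION: Kinect v1 raw depth is 11 bits (0..2047) and pixels with
 * no valid reading come back as 0 or 2047 -- verify against your unit.
 */
float proximityDetectorSafe() {
  int[] depth = kinect.getRawDepth();
  float minDepth = PApplet.MAX_FLOAT;

  //A flat scan is enough here, since only the minimum is needed
  for (int i = 0; i < depth.length; i++) {
    int d = depth[i];
    if (d > 0 && d < 2047 && d < minDepth) {  //ignore assumed "no reading" values
      minDepth = d;
    }
  }

  //Still MAX_FLOAT here means no valid subject was detected at all
  return minDepth;
}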

2. Decouple your testing from the Kinect unit.

You should first figure out the video speed and the color using a simple scheme, then understand the Kinect data, and finally integrate both concepts. Working this way makes it easier to debug. Instead of using the Kinect, you can use the mouse pointer to simulate closeness: if the mouse is near the left edge of the sketch window, it simulates a close subject; near the right edge, the furthest one. Demo code (not tested) below.

Kf

final float MIN_KIN_DISTA = 30;  //in cm
final float MAX_KIN_DISTA = 300; //in cm

float memoryMinDista = PApplet.MAX_FLOAT;
float movieSpeed = 1.0;

void draw() {
  background(0);

  if (movie.available()) {
    movie.read();

    float dista = proximityDetectorSimulator();

    if (dista > MIN_KIN_DISTA && dista < MAX_KIN_DISTA) {
      memoryMinDista = dista;
      movieSpeed = map(dista, MIN_KIN_DISTA, MAX_KIN_DISTA, 1.0, 10.0);
    } else {
      movieSpeed = 1.0;  //Nothing in range: normal speed
    }
    movie.speed(movieSpeed);

    image(movie, 0, 0);
  }
}

/*
 * Returns a depth value based on the mouseX position within the sketch window
 * Assumes the Kinect provides a value between 30 cm (min) and 300 cm (max)
 * Leftmost position in the sketch window gives the lowest value, increasing towards the right
 * @output Depth value simulated by the mouseX value
 */
float proximityDetectorSimulator() {
  return map(mouseX, 0, width, MIN_KIN_DISTA, MAX_KIN_DISTA);
}
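
For the color part of the idea, the same mapping trick works with Processing's tint(), which colorizes image() calls made after it. A minimal sketch, with the red/blue channel ranges chosen purely for illustration (swap the simulator for the real detector later):

//Map proximity to color as well as speed: a closer subject shifts the
//video towards red, a farther one towards blue. Call before image().
float dista = proximityDetectorSimulator();
float r = map(dista, MIN_KIN_DISTA, MAX_KIN_DISTA, 255, 0);
float b = map(dista, MIN_KIN_DISTA, MAX_KIN_DISTA, 0, 255);
tint(r, 255, b);    //tint() affects subsequent image() calls
image(movie, 0, 0);
noTint();           //so any other drawing is not tinted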

In fact, you are right. The Kinect has an obviously wider field of vision than the ultrasonic sensors (which work in a very directional way). In terms of working range, both devices cover more or less the same distance; neither sees beyond 3 meters (9.8 ft). The ultrasonic sensor sees nothing closer than 4 centimeters (1.6 in), and the Kinect nothing closer than 20 centimeters (7.9 in).

In addition, under the right light conditions the Kinect operates much more stably, while the cheap ultrasonic sensors can be a headache, with lots of errors and phantom readings.

Now that I think about it, forget my earlier comment. Use the Kinect, as @kfrajer says.