Hi all, I am working on a project with a friend: we want to fill a person's silhouette with heatmap colors when they walk in front of the Kinect camera. I am able to enable users, but I am not sure what to do from there. I have seen that you can put letters inside a person's silhouette, but I am new to this and not sure how to put multiple colors into a silhouette. Please let me know if there is any way to do this. This is the code I currently have. Thank you so much for your help!
// Only SimpleOpenNI is used below; the processing.opengl and
// org.openkinect imports were unused and have been removed.
import SimpleOpenNI.*;
SimpleOpenNI kinect;
int closestValue;
int closestX;
int closestY;
PImage cam;
void setup()
{
  size(1450, 1350);
  frameRate(25);
  background(0);
  surface.setResizable(true);
  noStroke();
  fill(102);
  kinect = new SimpleOpenNI(this);
  kinect.setMirror(true);
  kinect.enableDepth();
  kinect.enableUser();
  colorMode(HSB);
}
void draw()
{
  background(0);
  kinect.update();

  // userImage() returns the scene with tracked users highlighted;
  // depthMap() returns the raw depth readings in millimetres.
  // (userMap() returns per-pixel user labels, not depth values.)
  cam = kinect.userImage().get();
  int[] depthValues = kinect.depthMap();

  // Black out everything nearer than 610 mm or farther than 1525 mm
  // before drawing, so only the in-range silhouette remains.
  cam.loadPixels();
  for (int y = 0; y < 480; y++) {
    for (int x = 0; x < 640; x++) {
      int pixel = x + y * 640;
      int currentDepthValue = depthValues[pixel];
      if (currentDepthValue < 610 || currentDepthValue > 1525) {
        cam.pixels[pixel] = color(0, 0, 0);
      }
    }
  }
  cam.updatePixels();
  image(cam, 0, 0, width, height);

  // Label each tracked user at their center of mass.
  IntVector userList = new IntVector();
  kinect.getUsers(userList);
  for (int i = 0; i < userList.size(); i++) {
    int userId = userList.get(i);
    PVector position = new PVector();
    kinect.getCoM(userId, position);
    kinect.convertRealWorldToProjective(position, position);
    fill(255, 255, 255);
    textSize(40);
    // Scale the 640x480 projective coordinates up to the window size.
    text("you're hot", position.x * width / 640.0, position.y * height / 480.0);
  }
}
For people who do not have a Kinect on hand to test: what do you have working, and what specifically do you need to change? Can you see the silhouettes, but only in black-and-white? When you say heatmap, do you have a specific color palette in mind?
For a recent discussion of mapping linear grayscale pixel brightness values to a color map, see this post:
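In case that post is hard to find, here is a minimal sketch of the brightness-to-colormap idea in plain Java (Processing's host language), using `java.awt.Color.HSBtoRGB` in place of Processing's `color()` so it runs standalone. The 240°-to-0° hue sweep is just one common "cold blue to hot red" convention, not the only choice:

```java
import java.awt.Color;

public class Heatmap {
    // Map a brightness value (0-255) to a packed ARGB heat-map color:
    // 0 maps to blue (cold), 255 to red (hot), by sweeping the HSB hue.
    static int heatColor(int brightness) {
        float t = brightness / 255.0f;              // normalize to 0..1
        float hue = (1.0f - t) * (240.0f / 360.0f); // 240 deg (blue) down to 0 deg (red)
        return Color.HSBtoRGB(hue, 1.0f, 1.0f);
    }

    public static void main(String[] args) {
        System.out.printf("%08X%n", heatColor(255)); // hottest: opaque red
        System.out.printf("%08X%n", heatColor(0));   // coldest: opaque blue
    }
}
```

Inside a Processing sketch you would do the same thing with `colorMode(HSB)` and `color()`, feeding in each pixel's brightness.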
@jeremydouglass I am able to see the silhouette of the person with the code I provided. Now I want to change the color of the silhouette to a randomly chosen heatmap gradient, so that every time a person is detected their silhouette is shown in a random gradient of heatmap colors.
Ok, so I think there are at least two challenges here. First, assign a different single color to each silhouette. Make sure you can do this part before you try to assign a gradient (a gradient heatmap, as I understand it) to them. Once the first task works, focus on the second one. About the gradient: would each user have a different gradient? Does the gradient change over time? What should the gradient (heatmap) look like?
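For that first task, here is a minimal sketch of per-user color assignment in plain Java. `colorForUser` is a hypothetical helper (not SimpleOpenNI API), and the golden-ratio hue spacing is just one trick to keep consecutive ids visually distinct:

```java
import java.awt.Color;

public class UserColors {
    // Deterministic color per user id: the same user always gets the
    // same color, and golden-ratio hue spacing pushes consecutive ids
    // far apart on the color wheel.
    static int colorForUser(int userId) {
        float hue = (userId * 0.618034f) % 1.0f; // fractional golden ratio
        return Color.HSBtoRGB(hue, 1.0f, 1.0f);
    }

    public static void main(String[] args) {
        for (int id = 1; id <= 3; id++) {
            System.out.printf("user %d -> %08X%n", id, colorForUser(id));
        }
    }
}
```

If your SimpleOpenNI version's `userMap()` returns a per-pixel user label, you could color the silhouettes by setting `cam.pixels[i] = colorForUser(label)` wherever the label is non-zero.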
I don’t know how to assign a different color to each silhouette. The code I have changes a person’s silhouette color by distance, since it enables users through the SimpleOpenNI library. Ideally I would want it to look like the picture shown below, like the guy on the treadmill. But if that’s too complex, then just showing random heatmap colors every time a person appears would be fine. The gradient colors would change over time, and also every time a new person is detected. I really appreciate your help on this!
Let’s talk about those heat maps first. The one of the guy on the treadmill seems to come from an IR camera that can detect heat sources. Implementing that in Processing with an algorithm instead of an IR camera is a big challenge. If you stick to a randomly generated heatmap, then it is doable, and you can make your random generation depend on time, base colors, distance, and user (potentially, if you solve the second task).
The second task, detecting different users, could be challenging. I do not own any of these depth sensors, but I have seen posts dealing with them, and I am aware that at least one sensor can identify different users and address them by an ID number. Unfortunately that post may be buried in the Kinect section of the older forum, and I don’t know whether the technology you are using is the same as in those earlier examples. A better suggestion: have a look at your device’s documentation. Does it provide a tag for each user?
How many users are you looking to deal with?
Also, can you associate depth pixel info with the RGB pixels from the camera? If you can change the colors of the image based on depth, i.e. you are working with a detected shape, then that is a good start. You could create a heatmap texture and apply it to the shape. An easier approach would be random color assignment, but that wouldn’t look like a heat map; Perlin noise might work better. It is just a matter of writing some code and giving it a try.
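One way to sketch that heatmap-texture idea in plain Java: precompute a 256-entry color lookup table once per frame, then index it by whatever drives the gradient (depth, brightness, Perlin noise). `buildLut` and its `shift` parameter are assumptions for illustration, not SimpleOpenNI or Processing API:

```java
import java.awt.Color;

public class HeatLut {
    // Build a 256-entry heat-map lookup table; `shift` (0..1) rotates
    // the palette, so passing a slowly increasing value each frame
    // makes the gradient drift over time.
    static int[] buildLut(float shift) {
        int[] lut = new int[256];
        for (int i = 0; i < 256; i++) {
            float t = (i / 256.0f + shift) % 1.0f;
            lut[i] = Color.HSBtoRGB((1.0f - t) * 0.66f, 1.0f, 1.0f); // blue..red
        }
        return lut;
    }

    public static void main(String[] args) {
        int[] lut = buildLut(0.0f);
        System.out.printf("%08X .. %08X%n", lut[0], lut[255]);
    }
}
```

In Processing you could pass something like `frameCount * 0.005` as the shift, and index the table with `int(noise(x * 0.01, y * 0.01, frameCount * 0.01) * 255)` for a Perlin-noise-driven heatmap.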
That’s right – if you do topographic coloring based on a Kinect depth camera then the face won’t be red – it will be the same color as other parts of the body that are the same distance away.
Is the goal here for each silhouette to have a different color palette, or for them to each be colored in the same way?
Yes, that is an IR camera, but I just used the picture to show an example of the colors I want when a user is detected. The goal is for each silhouette to have a different color palette. (It could also change depending on how far away the user is.)
I am also able to call kinect.enableColorDepth(true);, but when I do that I don’t know how to set the background to black. Every time I call background(0); it doesn’t work…
Your kinect “background” isn’t empty space – it is a height map of the walls (green and blue). Flood filling with background() before drawing the kinect image doesn’t fill in a color around the figure because that kinect depth map isn’t a partly transparent image – there is stuff in there (specifically, the shape of the walls).
So, if you want to do background subtraction, you need to iterate over the Kinect depth map and black out all values greater than some cutoff x (things that are far away).
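That subtraction step can be sketched in plain Java; `maskByDepth` is a hypothetical helper, and it assumes the pixel buffer and the depth buffer are the same resolution (e.g. both 640x480):

```java
public class DepthFilter {
    // Copy the pixel buffer, blacking out every pixel whose depth
    // reading (in mm) falls outside [nearMm, farMm]. Both arrays must
    // have the same length and pixel order.
    static int[] maskByDepth(int[] pixels, int[] depthMm, int nearMm, int farMm) {
        int[] out = pixels.clone();
        for (int i = 0; i < out.length; i++) {
            if (depthMm[i] < nearMm || depthMm[i] > farMm) {
                out[i] = 0xFF000000; // opaque black "background"
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // A 2-pixel example: the first pixel is too near (500 mm) and is
        // blacked out; the second (1000 mm) is in range and kept.
        int[] masked = maskByDepth(new int[] {0xFFFFFFFF, 0xFFFFFFFF},
                                   new int[] {500, 1000}, 610, 1525);
        System.out.printf("%08X %08X%n", masked[0], masked[1]);
    }
}
```

After masking, the surviving in-range pixels are exactly the silhouette, so that is also the natural place to apply whatever depth-to-color mapping you choose before drawing.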