Hi, I’m working on a project where a top-down camera looks over a space, and I need to get each person’s position as a single x and y value based on their movement. People will be the only things moving in the camera’s frame. I’m using these points of movement to control the gain of various sounds, based on the distance between each point and the area I designated for each sound. I adapted code from the 11.6: Computer Vision: Motion Detection - Processing Tutorial and got motion detection for a single point working well with the rest of the program, but I would like to track two or more points at a time.
From the examples I’m seeing around the internet, it looks like blob detection is the way to go, but I don’t need the whole outlined areas, and those examples look for specific colors. Is there a way to use blobs to find the centers of movement without tracking specific colors? And can I get a separate variable for each blob to use in other parts of the code?
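For context, the distance-to-gain mapping I mentioned works roughly like this (a minimal plain-Java sketch, not my actual sound code; the 400-pixel falloff distance and the linear curve are just placeholder choices):

```java
// Sketch: a sound's gain falls off linearly from 1 at its designated
// zone to 0 at maxDist pixels away. maxDist and the linear falloff are
// placeholder assumptions; any falloff curve would work the same way.
public class GainFromDistance {
    public static float gain(float px, float py,
                             float zoneX, float zoneY, float maxDist) {
        float d = (float) Math.hypot(px - zoneX, py - zoneY);
        float g = 1 - d / maxDist;                 // linear falloff
        return Math.max(0, Math.min(1, g));        // clamp to [0, 1]
    }

    public static void main(String[] args) {
        // A tracked point sitting on the zone gets full gain...
        System.out.println(gain(960, 540, 960, 540, 400)); // 1.0
        // ...and one maxDist away gets silence.
        System.out.println(gain(960, 140, 960, 540, 400)); // 0.0
    }
}
```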
This is the part of the code dealing with motion detection from one source:
loadPixels();
// Walk through every pixel
for (int x = 0; x < video.width; x++) {
  for (int y = 0; y < video.height; y++) {
    int loc = x + y * video.width;
    // Current color at this pixel
    color currentColor = video.pixels[loc];
    float r1 = red(currentColor);
    float g1 = green(currentColor);
    float b1 = blue(currentColor);
    // Color at the same pixel in the previous frame
    color prevColor = prev.pixels[loc];
    float r2 = red(prevColor);
    float g2 = green(prevColor);
    float b2 = blue(prevColor);
    // Squared color distance between the two frames
    float d = distSq(r1, g1, b1, r2, g2, b2);
    if (d > threshold*threshold) {
      // This pixel changed enough to count as motion
      //stroke(255);
      //strokeWeight(1);
      //point(x, y);
      avgX += x;
      avgY += y;
      count++;
      //pixels[loc] = color(255);
    } else {
      //pixels[loc] = color(0);
    }
  }
}
updatePixels();
// Only treat it as motion if enough pixels changed; the 200 threshold is
// arbitrary and filters out noise -- adjust it for your space and lighting.
if (count > 200) {
  // motionX/motionY is the centroid of all changed pixels
  motionX = avgX / count;
  motionY = avgY / count;
}
lerpX = lerp(lerpX, motionX, 0.1);
lerpY = lerp(lerpY, motionY, 0.1);
// Map motion from the 320x240 video onto a 1920x1080 screen
// (320 * 6 = 1920, 240 * 4.5 = 1080); the x axis is flipped (width - lerpX*6).
// pos1X and pos1Y set the gain of sound loops based on distance from designated coordinates.
pos1X = width - lerpX*6;
pos1Y = lerpY*4.5;
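To make the question concrete, here is the kind of color-free clustering I’m imagining, written as a standalone plain-Java sketch rather than working Processing code: instead of averaging every changed pixel into one point, each changed (x, y) joins the nearest existing cluster if it’s close enough, or starts a new one. The 50-pixel joining distance and the synthetic points are placeholders I would tune for a real 320x240 feed.

```java
import java.util.ArrayList;
import java.util.List;

// Group "motion pixels" (pixels whose frame-to-frame color distance
// exceeded the threshold) into clusters by proximity, then keep one
// centroid per cluster -- no color tracking involved.
public class MotionClusters {

    public static class Blob {
        float sumX = 0, sumY = 0;
        int count = 0;
        float cx, cy; // running centroid

        void add(int x, int y) {
            sumX += x; sumY += y; count++;
            cx = sumX / count;
            cy = sumY / count;
        }
    }

    // Assign each motion pixel to the nearest blob whose centroid is
    // within joinDist; otherwise start a new blob for it.
    public static List<Blob> cluster(int[][] motionPixels, float joinDist) {
        List<Blob> blobs = new ArrayList<>();
        for (int[] p : motionPixels) {
            Blob nearest = null;
            float best = joinDist * joinDist; // compare squared distances
            for (Blob b : blobs) {
                float dx = p[0] - b.cx, dy = p[1] - b.cy;
                float d2 = dx * dx + dy * dy;
                if (d2 < best) { best = d2; nearest = b; }
            }
            if (nearest != null) {
                nearest.add(p[0], p[1]);
            } else {
                Blob b = new Blob();
                b.add(p[0], p[1]);
                blobs.add(b);
            }
        }
        return blobs;
    }

    public static void main(String[] args) {
        // Two synthetic groups of motion pixels, far apart.
        int[][] pts = {
            {10, 10}, {12, 11}, {11, 13},      // person 1
            {200, 100}, {202, 101}, {201, 99}  // person 2
        };
        for (Blob b : cluster(pts, 50)) {
            System.out.println(b.cx + " " + b.cy + " (" + b.count + " px)");
        }
    }
}
```

In the real sketch I would feed `cluster()` the (x, y) pairs collected inside the motion loop above, drop blobs below a minimum pixel count (like the existing `count > 200` check), and then each surviving blob’s `cx`/`cy` could drive one sound the way `pos1X`/`pos1Y` do now.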
I apologize if this question is too vague. I feel like I’m in way over my head and I’m not sure I’m looking in the right direction. Thanks!