This pertains to scanning the pixels[] array of an image, movie frame, or capture frame.
In the Coding Train examples Dan uses nested FOR loops to check the RGB values of each pixel. On every iteration the x/y coordinates are converted to a pixels[] index and then the color is tested.
Could you folks have a look at this algorithm? I'm using a single FOR loop. It tests the pixels[] values, and only if the test is TRUE does it calculate the x/y coordinates. I think it's more streamlined/efficient than the nested FOR loops, which convert the x/y coordinates to a pixels[] index on every iteration. What do you think? I modified the "sketch_11_5_AveragePixelColorTracking" sketch with the code below. I'm new to Processing 3, but not new to coding.
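For reference, here is roughly what that nested-loop scan looks like (reconstructed from memory of the video, so variable names may differ slightly):

// Nested-loop version (as I recall it from the video)
for (int x = 0; x < video.width; x++) {
  for (int y = 0; y < video.height; y++) {
    int loc = x + y * video.width;            // convert x/y to a pixels[] index every iteration
    color currentColor = video.pixels[loc];
    float r1 = red(currentColor);
    float g1 = green(currentColor);
    float b1 = blue(currentColor);
    float d = distSq(r1, g1, b1, red(trackColor), green(trackColor), blue(trackColor));
    if (d < threshold*threshold) {
      // ... track this pixel
    }
  }
}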
Thank you all for the knowledge you’ve given me.
// Begin loop to walk through every pixel (single FOR loop, no nesting)
for (int pix = 0; pix < video.pixels.length; pix++) {   // < length, not length - 1, so the last pixel is included
  // What is the current color?
  color currentColor = video.pixels[pix];
  float r1 = red(currentColor);
  float g1 = green(currentColor);
  float b1 = blue(currentColor);
  float r2 = red(trackColor);
  float g2 = green(trackColor);
  float b2 = blue(trackColor);
  float d = distSq(r1, g1, b1, r2, g2, b2);
  // x and y are only calculated when needed
  if (d < threshold*threshold) {
    int y = pix / video.width;          // integer division already floors
    int x = pix - (video.width * y);    // same as pix % video.width
    stroke(255);
    strokeWeight(1);
    point(x, y);
    avgX += x;
    avgY += y;
    count++;
  }
}
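
For completeness, distSq() is the helper as I have it in my copy of the sketch; it returns the squared distance (no sqrt call), which is why the comparison is against threshold*threshold:

float distSq(float x1, float y1, float z1, float x2, float y2, float z2) {
  // squared Euclidean distance between two RGB colors treated as 3D points
  return (x2-x1)*(x2-x1) + (y2-y1)*(y2-y1) + (z2-z1)*(z2-z1);
}

After the loop the sketch still divides avgX and avgY by count (when count > 0) to draw the average position, same as in Dan's original; my change only affects how x and y are found.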