Realtime image processing using thread(); or class extending thread

Hi forum, I was able to do realtime image processing (color recognition) on video streamed from my laptop camera, but I did not succeed in using thread() or a class extending Thread. I need help, thank you in advance.
Below is the code without a thread, which works as needed:

import processing.video.*;

// Variable for capture device
Capture video;

// A variable for the color we are searching for.
color trackColor; 
float threshold = 80;

void setup() {
  //size(320, 240);
  size(640, 360);
  video = new Capture(this, width, height);
  video.start();
  // Start off tracking for red
  trackColor = color(255, 0, 0);
}

void captureEvent(Capture video) {
  // Read image from the camera
  video.read();
}

void draw() {
  video.loadPixels();
  image(video, 0, 0);
  
  threshold = map(mouseX, 0, width, 0, 100);
 
  float avgX = 0;
  float avgY = 0;
  int count = 0;

  // Begin loop to walk through every pixel
  for (int x = 0; x < video.width; x++ ) {
    for (int y = 0; y < video.height; y++ ) {
      int loc = x + y*video.width;
      // What is current color
      color currentColor = video.pixels[loc];
      float r1 = red(currentColor);
      float g1 = green(currentColor);
      float b1 = blue(currentColor);
      float r2 = red(trackColor);
      float g2 = green(trackColor);
      float b2 = blue(trackColor);
      float d = dist(r1, g1, b1, r2, g2, b2); 

      if (d < threshold) {
        stroke(255);
        strokeWeight(1);
        point(x,y);
        avgX += x;
        avgY += y;
        count++;
      }
    }
  }

  if (count > 10) { 
    avgX = avgX / count;
    avgY = avgY / count;
    
    // Draw a circle at the tracked pixel
    fill(trackColor);
    strokeWeight(2.0);
    stroke(0);
    ellipse (avgX, avgY, 16, 16);
    println(frameRate);
    println(count);
  }
}

void mousePressed() {
  // Save color where the mouse is clicked in trackColor variable
  int loc = mouseX + mouseY*video.width;
  trackColor = video.pixels[loc];
}

And the code below is my trial and error:


import processing.video.*;

// Variable for capture device
Capture video;
XCO xco = new XCO();
// A variable for the color we are searching for.
color trackColor; 
float threshold = 80;

void setup() {
  //size(320, 240);
  size(640, 360);

  video = new Capture(this, width, height);
  video.start();
  // Start off tracking for red
  trackColor = color(255, 0, 0);
  xco.start();
}

void captureEvent(Capture video) {
  // Read image from the camera
  video.read();
}

void draw() {
  video.loadPixels();
  image(video, 0, 0);
  
  
}

void mousePressed() {
  // Save color where the mouse is clicked in trackColor variable
  int loc = mouseX + mouseY*video.width;
  trackColor = video.pixels[loc];
}

class XCO extends Thread {

  void run() {
    calc();
  }

  void calc() {
    float avgX = 0;
    float avgY = 0;
    int count = 0;
    threshold = map(mouseX, 0, width, 0, 100);
    // Begin loop to walk through every pixel
    for (int x = 0; x < video.width; x++) {
      for (int y = 0; y < video.height; y++) {
        int loc = x + y*video.width;
        // What is current color
        color currentColor = video.pixels[loc];
        float r1 = red(currentColor);
        float g1 = green(currentColor);
        float b1 = blue(currentColor);
        float r2 = red(trackColor);
        float g2 = green(trackColor);
        float b2 = blue(trackColor);
        // We are using the dist() function to compare the current color with the color we are tracking.
        float d = dist(r1, g1, b1, r2, g2, b2);

        if (d < threshold) {
          stroke(255);
          strokeWeight(1);
          point(x, y);
          avgX += x;
          avgY += y;
          count++;
        }
      }
    }
    if (count > 10) {
      avgX = avgX / count;
      avgY = avgY / count;

      // Draw a circle at the tracked pixel
      fill(trackColor);
      strokeWeight(2.0);
      stroke(0);
      ellipse(avgX, avgY, 16, 16);
      println(frameRate);
      println(count);
    }
  }
}

Yes, because you don’t want a thread for this. At least not like this. It cannot work. You would need to copy all the pixel data to pass in to another thread, and then pass the computed data back for drawing. You can’t draw in another thread.

What’s your motivation for using threads?

The only way that threads makes sense with pixel calculations is if you were to use something like the Java Executor and Future interfaces, pass sections of the pixel array to multiple different threads for analysis, and wait for them to complete, each frame. Basically a fork-join model.
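To make that concrete, here is a rough sketch of the fork-join idea in plain Java, with the Processing `Capture` and drawing parts stripped out so it stands alone. The class `ParallelPixelCount` and its methods are made-up names for illustration, not anyone's actual code; it performs only the analysis step (counting matching pixels per section) and leaves all drawing to the caller, as described above.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical fork-join sketch: split the pixel array into chunks,
// analyze each chunk on a worker thread, combine results on the caller.
public class ParallelPixelCount {

    // Count pixels whose RGB distance to target is below threshold.
    static long countMatches(int[] pixels, int target, double threshold) {
        int nThreads = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(nThreads);
        int chunk = (pixels.length + nThreads - 1) / nThreads;
        List<Future<Long>> parts = new ArrayList<>();
        for (int t = 0; t < nThreads; t++) {
            final int start = t * chunk;
            final int end = Math.min(pixels.length, start + chunk);
            parts.add(pool.submit(() -> {
                long count = 0;
                for (int i = start; i < end; i++) {
                    if (dist(pixels[i], target) < threshold) count++;
                }
                return count;
            }));
        }
        long total = 0;
        for (Future<Long> f : parts) {
            try {
                total += f.get(); // block until this worker finishes
            } catch (InterruptedException | ExecutionException e) {
                throw new RuntimeException(e);
            }
        }
        pool.shutdown();
        return total;
    }

    // Euclidean distance between the RGB components of two packed ARGB ints.
    static double dist(int a, int b) {
        int dr = ((a >> 16) & 0xFF) - ((b >> 16) & 0xFF);
        int dg = ((a >> 8) & 0xFF) - ((b >> 8) & 0xFF);
        int db = (a & 0xFF) - (b & 0xFF);
        return Math.sqrt(dr * dr + dg * dg + db * db);
    }
}
```

Note that for a 640×360 frame this may well end up slower than the single-threaded loop once thread-pool overhead is counted, which is exactly the point being made above.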


To increase FPS and make the computation faster.

Well, don’t assume threading will automatically do this - parallelizing has overhead, and it takes care to keep the results correct.

There’s a bunch of things you could do to optimize your existing single-threaded approach first -

  • Work with raw pixel data (e.g. int red = pixels[loc] >> 16 & 0xFF;) rather than float.
  • Use raw pixels rather than point(x,y).
  • Swap the loops around - always loop y then x so you read the pixels in the order they’re laid out in memory.
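As a quick sanity check on the first bullet, the shift-and-mask pattern can be exercised outside Processing. `PixelUnpack` is a hypothetical helper (not Processing API), and 0xFF3C7890 below is just an arbitrary ARGB value:

```java
// Unpack a Processing-style 32-bit ARGB pixel with bit operations,
// avoiding the float-returning red()/green()/blue() helper functions.
public class PixelUnpack {
    static int alpha(int c) { return c >> 24 & 0xFF; } // bits 24-31
    static int red(int c)   { return c >> 16 & 0xFF; } // bits 16-23
    static int green(int c) { return c >> 8 & 0xFF; }  // bits 8-15
    static int blue(int c)  { return c & 0xFF; }       // bits 0-7
}
```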

If you still think you really need to parallelize, then expect a steep learning curve and start looking into some of the JDK library support for this.


What does “raw pixels” mean?

When you need to draw a white pixel, use pixels[loc] = 0xFFFFFFFF;. That writes to the main canvas pixels, so you might need to call loadPixels() and updatePixels(). Whether it’s worth it really depends on how many points you have. It won’t look exactly the same - it might suit you, and it should be faster.

EDIT: actually, looking back at your first code, you could probably do this on the video pixels too, and draw it to the canvas after the calculations.

  for (int x = 0; x < video.width; x++ ) {
    for (int y = 0; y < video.height; y++ ) {
      int loc = x + y*video.width;
      // What is current color
      //color currentColor = video.pixels[loc];
      //float r1 = red(currentColor);
      //float g1 = green(currentColor);
      //float b1 = blue(currentColor);
      int r1 = pixels[loc] >> 16 & 0xFF;
      int g1 = pixels[loc] >> 16 & 0x00FF;
      int b1 = pixels[loc] >> 16 & 0x0000FF;
      float r2 = red(trackColor);
      float g2 = green(trackColor);
      float b2 = blue(trackColor);

      // Using euclidean distance to compare colors
      float d = dist(r1, g1, b1, r2, g2, b2);

I tried it as you said, but it gives a NullPointerException for r1, g1, b1.

I did it with this and it works, but it’s not much faster:

 int r1 = video.pixels[loc] >> 16 & 0xFF;

That would be -

int r1 = video.pixels[loc] >> 16 & 0xFF;
int g1 = video.pixels[loc] >> 8 & 0xFF;
int b1 = video.pixels[loc] & 0xFF;

And note that’s the raw 0-255 values, so you’ll have to change all your other calculations too.
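One more micro-optimization along the same lines (my suggestion, not from the posts above): since d is only ever compared against threshold, you can compare squared distances and skip the sqrt inside dist() entirely. `ColorDist` is a hypothetical helper illustrating the idea:

```java
// Compare squared color distance against a squared threshold: gives the
// same boolean result as dist(r1,g1,b1, r2,g2,b2) < threshold, but
// avoids computing a square root for every pixel.
public class ColorDist {
    static boolean withinThreshold(int c1, int c2, float threshold) {
        int dr = (c1 >> 16 & 0xFF) - (c2 >> 16 & 0xFF);
        int dg = (c1 >> 8 & 0xFF) - (c2 >> 8 & 0xFF);
        int db = (c1 & 0xFF) - (c2 & 0xFF);
        return dr * dr + dg * dg + db * db < threshold * threshold;
    }
}
```

In the sketch this would replace the dist(...) < threshold test; threshold can stay in the same 0-100 range because both sides of the comparison are squared.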

With this code it can handle about 400 more points at the same FPS:

     int r1 = video.pixels[loc] >> 16 & 0xFF;
      int g1 = video.pixels[loc] >> 8 & 0xFF;
      int b1 = video.pixels[loc] & 0xFF;
      float r2 = red(trackColor);
      float g2 = green(trackColor);
      float b2 = blue(trackColor);

great progress

What about this? I couldn’t understand it.

This is my current code:


import processing.video.*;

Capture video;
color trackColor; 
float threshold = 80;

void setup() {
  //size(320, 240);
  size(640, 360);
  video = new Capture(this, width, height);
  video.start();
  // Start off tracking for red
  trackColor = color(255, 0, 0);
   //trackColor = pixels[] >> 16 & 0xFF;
}

void captureEvent(Capture video) {
  // Read image from the camera
  video.read();
}

void draw() {
  video.loadPixels();
  image(video, 0, 0);
  
  threshold = map(mouseX,0,width,0,100);

  float avgX = 0;
  float avgY = 0;
  int count = 0;

  // Begin loop to walk through every pixel
  for (int y = 0; y < video.height; y++ ) {
    for (int x = 0; x < video.width; x++ ) {
      int loc = x + y*video.width;
      // What is current color
     //color currentColor = video.pixels[loc];
     // float r1 = red(currentColor);
     // float g1 = green(currentColor);
     // float b1 = blue(currentColor);
      int r1 = video.pixels[loc] >> 16 & 0xFF;
      int g1 = video.pixels[loc] >> 8 & 0xFF;
      int b1 = video.pixels[loc] & 0xFF;
      float r2 = red(trackColor);
      float g2 = green(trackColor);
      float b2 = blue(trackColor);

      float d = dist(r1, g1, b1, r2, g2, b2); 

      if (d < threshold) {
        stroke(255);
       
        //stroke(video.pixels[loc] = 0xFF);
       
        strokeWeight(1);
        point(x,y);
        avgX += x;
        avgY += y;
        count++;
      }
    }
  }

  if (count > 10) { 
    avgX = avgX / count;
    avgY = avgY / count;
    
    // Draw a circle at the tracked pixel
    fill(trackColor);
    strokeWeight(2.0);
    stroke(0);
    ellipse (avgX, avgY, 16, 16);
    println(frameRate);
    println(count);
  }
}

void mousePressed() {
  // Save color where the mouse is clicked in trackColor variable
  int loc = mouseX + mouseY*video.width;
  trackColor = video.pixels[loc];
}

If there is another part of my code you suggest I change, especially related to raw pixels, let me know.

You should be able to get rid of the call to image(video, 0, 0); and try something like (not tested!)

void draw() {
  loadPixels();
  video.loadPixels();
  ...
  for ( ... {
   ...  
    if (d < threshold) {
      pixels[loc] = 0xFFFFFFFF;
      ....
    } else {
      pixels[loc] = video.pixels[loc];
    }
  }
 updatePixels();
}

If you want to do image processing fast, shaders are usually the way to go. Fragment shaders are programs that operate on every pixel on the screen concurrently. This is very applicable for finding if pixels in an image are within a certain range. However, they are less applicable for finding the average position, but it is still possible to use shaders to speed up the process. I wrote a couple of shaders and modified your program to use them.
I’ll start with the shader to find pixels if they are within a threshold:
filename: colorDetect.glsl

#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif

uniform sampler2D texture;

varying vec4 vertColor;
varying vec4 vertTexCoord;

uniform vec4 targetColor;
uniform float threshold; //between 0 and 1

void main() {
  // get pixel color
  vec4 texColor = texture2D(texture, vertTexCoord.st) * vertColor;
  vec3 a = texColor.xyz;
  vec3 b = targetColor.xyz;
  // compute "distance" between colors rgb components
  float dist = sqrt((b.x - a.x) * (b.x - a.x) + (b.y - a.y) * (b.y - a.y) + (b.z - a.z) * (b.z - a.z));
  // colors are from 0-1, so the max distance between colors is sqrt(3)
  if(dist < threshold * sqrt(3.0)){
    // display inverse color where pixels are within the threshold
    texColor = vec4(1) - texColor;
  }
  // force alpha to be 1 since inverting the color makes the alpha zero
  gl_FragColor = vec4(texColor.xyz, 1);
}

While the syntax of shaders is different than Java, it is not difficult to learn. The key concepts to take away from this shader are vectors, the way colors are computed in shaders, and the difference between varying and uniform variables. In shaders, colors are stored with 4 component vectors with each component ranging from 0 to 1. The components are usually marked x, y, z, and w. You can get individual components by doing vector.x, or you could get more components at once and in any order by using something like vector.xyz or vector.zx. Vectors can have 2 to 4 components as a vec2, vec3, or vec4, with colors being a vec4.
Varying variables are different for every pixel the shader computes. Uniform variables are constants in the shader, but can be modified by the external program that calls the shader, in this case the processing sketch. A more comprehensive introduction to shaders can be found in this Processing Tutorial by Andres Colubri.

But, how would you use shaders to compute the average position of the pixels in the threshold? The way it was done in the original was by adding the screen positions of all detected pixels then divide by the number of detected pixels. While the sum and division parts of the calculation cannot be done in a shader, finding the positions of all detected pixels can. This is what this next shader does:
filename: colorPos.glsl

#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif

uniform sampler2D texture;

varying vec4 vertColor;
varying vec4 vertTexCoord;

uniform vec4 targetColor;
uniform float threshold; //between 0 and 1

void main() {
  vec4 texColor = texture2D(texture, vertTexCoord.st) * vertColor;
  vec3 a = texColor.xyz;
  vec3 b = targetColor.xyz;
  float dist = sqrt((b.x - a.x) * (b.x - a.x) + (b.y - a.y) * (b.y - a.y) + (b.z - a.z) * (b.z - a.z));
  bool cond = dist < threshold * sqrt(3.0);
  // if color is within threshold, encode the pixel's position into red and green components
  // and use blue component as a marker that the pixel was in range
  // vertTexCoord is from 0 to 1, so after computing average, multiply by width and height to get screen position
  gl_FragColor = cond ? vec4(vertTexCoord.x, vertTexCoord.y, 1, 1) : vec4(0, 0, 0, 1);
}

You may have noticed the varying variable vertTexCoord in these shaders. It represents the uv coordinates of the texture. Most simply, uv coordinates are a 2D vector that ranges from 0-1 in each component, where a uv of (0, 0) means you are at the top-left corner of the texture and (1, 1) the bottom right. To get the screen position, you multiply the uv by the screen width and height. For the purposes of these shaders, vertTexCoord is just the uv coordinate of the current pixel on the screen. Since shaders output color, I can output the uv coordinate by making it part of the output color. Since colors have 4 components, I can use one to say whether that pixel was within the threshold. Then, in the main program, I loop over every pixel from this other shader. For each pixel with a blue value > 0, use its red and green values to add to a total vector, then divide the number by the number of additions.
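The decoding step described in that last sentence can be isolated into plain Java for clarity. `PosDecode` is a hypothetical helper operating on the packed ARGB ints that posBuffer.pixels would contain (blue > 0 marks a detection; red and green carry the uv coordinate scaled to 0-255):

```java
// Decode the average encoded position from "position buffer" pixels.
public class PosDecode {
    // Returns {u, v} in 0-1 (multiply by width/height for screen
    // position), or null when no pixel was within the threshold.
    static float[] averageUV(int[] pixels) {
        float sx = 0, sy = 0;
        int count = 0;
        for (int c : pixels) {
            if ((c & 0xFF) > 0) {                  // blue component: marker
                sx += (c >> 16 & 0xFF) / 255.0f;   // red component -> u
                sy += (c >> 8 & 0xFF) / 255.0f;    // green component -> v
                count++;
            }
        }
        if (count == 0) return null;
        return new float[] { sx / count, sy / count };
    }
}
```

Returning null here corresponds to the sketch's "appear offscreen" branch when count is zero.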
Processing sketch:

import processing.video.*;
PShader colorFinder, colorPosShader;
PGraphics overlay, posBuffer;
// Variable for capture device
Capture video;

// A variable for the color we are searching for.
color trackColor; 
float threshold = 0.1;

void setup() {
  //size(320, 240);
  size(640, 480, P2D);
  overlay = createGraphics(width, height, P2D);
  posBuffer = createGraphics(width, height, P2D);
  colorFinder = loadShader("colorDetect.glsl");
  colorPosShader = loadShader("colorPos.glsl");
  printArray(Capture.list());
  video = new Capture(this, width, height);
  video.start();
  video.loadPixels();
  // Start off tracking for red
  trackColor = color(255, 0, 0);
}

void captureEvent(Capture video) {
  // Read image from the camera
  video.read();
}

void draw() {
  colorFinder.set("threshold", threshold);
  colorFinder.set("targetColor", red(trackColor) / 255.0, green(trackColor) / 255.0, blue(trackColor) / 255.0, 1.0);
  colorPosShader.set("threshold", threshold);
  colorPosShader.set("targetColor", red(trackColor) / 255.0, green(trackColor) / 255.0, blue(trackColor) / 255.0, 1.0);
  overlay.beginDraw();
  overlay.shader(colorFinder);
  overlay.image(video, 0, 0);
  overlay.endDraw();
  posBuffer.beginDraw();
  posBuffer.shader(colorPosShader);
  posBuffer.image(video, 0, 0);
  posBuffer.endDraw();
  //compute average position by looking at pixels from position buffer
  posBuffer.loadPixels();
  PVector avg = new PVector(0, 0);
  int count = 0;
  for(int i = 0; i < posBuffer.pixels.length; i++){
    // encoded so blue is > 0 if a pixel is within threshold
    if(blue(posBuffer.pixels[i]) > 0){
      count++;
      // processing takes 0-1 (float) color values from shader to 0-255 (int) values for color
      // to decode, we need to divide the color by 255 to get the original value
      avg.add(red(posBuffer.pixels[i]) / 255.0, green(posBuffer.pixels[i]) / 255.0);
    }
  }
  if(count > 0){
    // we have the sum of positions, so divide by the number of additions
    avg.div((float) count);
    // convert 0-1 position to screen position
    avg.x *= width;
    avg.y *= height;
  } else {
    // appear offscreen
    avg = new PVector(-100, -100);
  }
  image(overlay, 0, 0);
  fill(trackColor);
  stroke(0);
  circle(avg.x, avg.y, 16);
  fill(0, 50);
  noStroke();
  rect(0, 0, 150, 30);
  fill(150);
  text("Framerate: " + frameRate, 0, 11);
  text("Threshold: " + threshold, 0, 22);
}

void mousePressed() {
  // Save color where the mouse is clicked in trackColor variable
  video.loadPixels();
  int loc = mouseX + mouseY*video.width;
  trackColor = video.pixels[loc];
}

void mouseWheel(MouseEvent e){
  threshold -= e.getCount() * 0.01;
  threshold = constrain(threshold, 0, 1);
}

Hi brother, I directly copied and pasted your code, and the error is “The file “colorDetect.glsl” is missing or inaccessible, make sure the URL is valid or that the file has been added to your sketch and is readable”. Is there a library I need to import or a class to add? Thank you.

You need to save the .glsl files into the same folder as the sketch file as separate files. You can do this by making a .txt file, pasting the code, saving, then renaming the file “filename.glsl”, ignoring the warning about changing file extensions.

Did you mean that I need to paste the code you wrote into a .txt file, then rename the .txt file to .glsl after saving it?

You can download the Shader Mode for Processing. Then you can click on “New Tab” and type in the name of the shader plus “.glsl”.

As far as I know, the Shader Mode is the normal Java Processing mode but with this one extra feature.

Yes, the .glsl files can be created by renaming .txt files. The shader mode is also a good option.

I already followed your guide; the only thing remaining is how to change colorPos.txt to colorPos.glsl, i.e. how to change the file name extension.