How to use SPOUT input with 11.10: Computer Vision: Adding Lifespan to Blobs - Processing Tutorial

Hi, I am relatively new to Processing.
I am trying to make the BlobTrack Persistence example (as seen in this video) work with input from a Spout receiver instead of the webcam input.

Does anybody have any advice for me?

Many thanks in advance,

Marijn

Never heard of it, but if you go to Tools in Processing, click Add Tool, then click Libraries, and search for the Spout library to install it. Then check out the link below; it may help you: Pixel Flow - Spout - Processing 2.x and 3.x Forum


Hi,
If my understanding of your question is correct, some information here may help.

Hello, Thanks for this. It is really interesting.

Although it does not really answer my problem.

To be more specific:
The Blobtracking example relies on Capture() functions such as loadPixels(), video.start(), and others.

I am looking to replace this hardware input with Spout. The problem is that Spout does not have any of these functions, so I need to somehow “convert” it into a live video feed.

I tried to use a virtual camera for this (OBS VirtualCam), but it does not work.
So the most elegant solution would be to receive Spout directly into Processing, and do the blob tracking from there.

Does anyone have tips on how to do this “conversion”?

Hi there,

To give you some more information:
I am sending a video signal from TouchDesigner into Processing to do blob tracking,
using Shiffman’s blob tracking persistence example 11_10.
I am trying to use a Spout receiver instead of a hardware camera,
but I don’t know how to convert the Spout texture input to a video format like Capture(),
so that it can be used with loadPixels() and the other functions used in the blob tracking example.

Greetings, marijn

https://github.com/CodingTrain/website-archive/tree/main/Tutorials/Processing/11_video/sketch_11_10_BlobTracking_lifespan

Here’s the blob tracking code.
The main question is: how do I make this work with Spout input?

Hello @marijn,

I thought I would take this out for a spin!

Steps:

  • Installed the latest TouchDesigner, used the default example, and changed the output to TDSSyphonSpoutOut
  • Modified the SpoutDataReceiver example that comes with the Spout library for Processing. Partial code below… I only included setup() and draw().
// GLV Edit of SpoutDataReceiver in Examples
// 2023-01-29

//
//            SpoutDataReceiver
//
//      Receive texture and data from a Spout sender
//
//            Spout 2.007
//       https://spout.zeal.co
//

// IMPORT THE SPOUT LIBRARY
import spout.*;

PImage pgr, pgr1; // Canvas to receive a texture

// DECLARE A SPOUT OBJECT
Spout spout;

void setup()
  {
  // Initial window size
  size(640, 360, P3D);
  
  pgr1 = createImage(width, height, RGB);
  
  // Screen text size
  textSize(16);
  
  // CREATE A NEW SPOUT OBJECT
  spout = new Spout(this);

  frameRate(30.0);
  } 

int count = 0;

void draw() 
  { 
  background(0);  
  //
  // RECEIVE FROM A SENDER
  //
  
  if(spout.receiveTexture()) 
    {        
    pgr = spout.receiveTexture(pgr);
    
    if(pgr != null)
      {
      //image(pgr, 0, 0);  
      imageProcess1();     
      
      showInfo();
      count++;
      }   
    }
  println(frameCount, count);  //Check to see if synched 
  }
  
void imageProcess1()
  {
  // Works!  
  //image(pgr, 0, 0);
  //pgr1 = get(); // copy sketch window to pgr1
  
  // Works
  image(pgr, 0, 0);
  loadPixels();     //Required!
  arrayCopy(pixels, pgr1.pixels); // Makes a copy to work with
  
  // Fast and efficient for accessing x, y locations in pixel array
  for(int y=0; y<pgr1.height; y++)
    {
    int tmp = y*pgr1.width;  
    for(int x=0; x<pgr1.width; x++)  
      {
      int loc = x + tmp;
      if (y>height/2)
        {
        if (brightness(pgr1.pixels[loc]) < 128)
          pgr1.pixels[loc] = 0xFFFFFFFF;
        }  
      }
    }
    
  // Efficient for pixel array access
  //for(int i=0; i<pgr1.pixels.length; i++)
  //  {
  //  if (brightness(pgr1.pixels[i]) < 128)
  //      pgr1.pixels[i] = 0xFFFFFFFF;
  //  }  
  
  pgr1.updatePixels();  // Required! 
  image(pgr1, width/2, 0);
  }

And voila!

Left is spout data received.
Right is image processed… see code for details.
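The core trick in imageProcess1() is treating the 1D pixels array as a row-major grid, where pixel (x, y) of a w-wide image lives at index x + y*w, and overwriting selected entries with white (0xFFFFFFFF). That arithmetic can be checked in plain Java outside Processing; the 4×4 grid below is an illustrative assumption, not a value from the sketch:

```java
public class PixelIndexDemo {
    // Row-major index: pixel (x, y) in a w-wide image lives at x + y*w.
    public static int loc(int x, int y, int w) {
        return x + y * w;
    }

    public static void main(String[] args) {
        int w = 4, h = 4;
        int[] pixels = new int[w * h]; // all "black" (0)

        // Whiten pixels below the vertical midpoint, mirroring the
        // y > height/2 branch in imageProcess1().
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                if (y > h / 2) {
                    pixels[loc(x, y, w)] = 0xFFFFFFFF;
                }
            }
        }

        System.out.println(pixels[loc(0, 2, w)]); // row above midpoint untouched: prints 0
        System.out.println(pixels[loc(3, 3, w)]); // bottom row whitened: prints -1 (0xFFFFFFFF)
    }
}
```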


Thanks for this topic! I had fun with this.

References:
https://processing.org/tutorials/pixels

:)


Wow! Thank you so much for taking the time to tackle this problem. This is extremely helpful.

I am trying to integrate your example into my blob tracking sketch:

I do get the Spout input in! Everything seems to be working, except that somehow the video does not end up being blob tracked.

What am I missing here?

// Daniel Shiffman
// http://codingtra.in
// http://patreon.com/codingtrain
// Code for: https://youtu.be/o1Ob28sF0N8

import processing.video.*;
import spout.*;
PImage pgr, video; 


// DECLARE A SPOUT OBJECT
Spout spout;


int blobCounter = 0;

int maxLife = 50;

color trackColor; 
float threshold = 40;
float distThreshold = 50;

ArrayList<Blob> blobs = new ArrayList<Blob>();

void setup() {
  size(640, 360, P3D);
  //String[] cameras = Capture.list();
 // printArray(cameras);
 
  video = createImage(width, height, RGB);
  spout = new Spout(this);
 
  
  // 183.0 12.0 83.0
  trackColor = color(183, 12, 83);
}

void captureEvent(Capture video) {
  video.read();
}

void keyPressed() {
  if (key == 'a') {
    distThreshold+=5;
  } else if (key == 'z') {
    distThreshold-=5;
  }
  if (key == 's') {
    threshold+=5;
  } else if (key == 'x') {
    threshold-=5;
  }
}

void draw() {
  
  video = spout.receiveTexture(video);
  video.loadPixels();
  image(video, 0, 0);

  ArrayList<Blob> currentBlobs = new ArrayList<Blob>();

  // Begin loop to walk through every pixel
  for (int x = 0; x < video.width; x++ ) {
    for (int y = 0; y < video.height; y++ ) {
      int loc = x + y * video.width;
      // What is current color
      color currentColor = video.pixels[loc];
      float r1 = red(currentColor);
      float g1 = green(currentColor);
      float b1 = blue(currentColor);
      float r2 = red(trackColor);
      float g2 = green(trackColor);
      float b2 = blue(trackColor);

      float d = distSq(r1, g1, b1, r2, g2, b2); 

      if (d < threshold*threshold) {

        boolean found = false;
        for (Blob b : currentBlobs) {
          if (b.isNear(x, y)) {
            b.add(x, y);
            found = true;
            break;
          }
        }

        if (!found) {
          Blob b = new Blob(x, y);
          currentBlobs.add(b);
        }
      }
    }
  }

  for (int i = currentBlobs.size()-1; i >= 0; i--) {
    if (currentBlobs.get(i).size() < 500) {
      currentBlobs.remove(i);
    }
  }

  // There are no blobs!
  if (blobs.isEmpty() && currentBlobs.size() > 0) {
    println("Adding blobs!");
    for (Blob b : currentBlobs) {
      b.id = blobCounter;
      blobs.add(b);
      blobCounter++;
    }
  } else if (blobs.size() <= currentBlobs.size()) {
    // Match whatever blobs you can match
    for (Blob b : blobs) {
      float recordD = 1000;
      Blob matched = null;
      for (Blob cb : currentBlobs) {
        PVector centerB = b.getCenter();
        PVector centerCB = cb.getCenter();         
        float d = PVector.dist(centerB, centerCB);
        if (d < recordD && !cb.taken) {
          recordD = d; 
          matched = cb;
        }
      }
      matched.taken = true;
      b.become(matched);
    }

    // Whatever is leftover make new blobs
    for (Blob b : currentBlobs) {
      if (!b.taken) {
        b.id = blobCounter;
        blobs.add(b);
        blobCounter++;
      }
    }
  } else if (blobs.size() > currentBlobs.size()) {
    for (Blob b : blobs) {
      b.taken = false;
    }


    // Match whatever blobs you can match
    for (Blob cb : currentBlobs) {
      float recordD = 1000;
      Blob matched = null;
      for (Blob b : blobs) {
        PVector centerB = b.getCenter();
        PVector centerCB = cb.getCenter();         
        float d = PVector.dist(centerB, centerCB);
        if (d < recordD && !b.taken) {
          recordD = d; 
          matched = b;
        }
      }
      if (matched != null) {
        matched.taken = true;
        // Resetting the lifespan here is no longer necessary since setting `lifespan = maxLife;` in the become() method in Blob.pde
        // matched.lifespan = maxLife;
        matched.become(cb);
      }
    }

    for (int i = blobs.size() - 1; i >= 0; i--) {
      Blob b = blobs.get(i);
      if (!b.taken) {
        if (b.checkLife()) {
          blobs.remove(i);
        }
      }
    }
  }

  for (Blob b : blobs) {
    b.show();
  } 




  textAlign(RIGHT);
  fill(0);
  //text(currentBlobs.size(), width-10, 40);
  //text(blobs.size(), width-10, 80);
  textSize(24);
  text("color threshold: " + threshold, width-10, 50);  
  text("distance threshold: " + distThreshold, width-10, 25);
  
  
}


float distSq(float x1, float y1, float x2, float y2) {
  float d = (x2-x1)*(x2-x1) + (y2-y1)*(y2-y1);
  return d;
}


float distSq(float x1, float y1, float z1, float x2, float y2, float z2) {
  float d = (x2-x1)*(x2-x1) + (y2-y1)*(y2-y1) +(z2-z1)*(z2-z1);
  return d;
}

void mousePressed() {
  // Save color where the mouse is clicked in trackColor variable
  int loc = mouseX + mouseY*video.width;
  trackColor = video.pixels[loc];
  println(red(trackColor), green(trackColor), blue(trackColor));
}
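A side note on the distSq() helpers in the sketch above: they return a squared distance, which is why the color match is written as d < threshold*threshold. This avoids a square root per pixel while making exactly the same decision. The equivalence can be checked in plain Java (the colors and threshold below are just example values):

```java
public class DistSqDemo {
    // Squared Euclidean distance in RGB space, same as the sketch's
    // three-argument-pair distSq() overload.
    public static float distSq(float r1, float g1, float b1,
                               float r2, float g2, float b2) {
        return (r2 - r1) * (r2 - r1)
             + (g2 - g1) * (g2 - g1)
             + (b2 - b1) * (b2 - b1);
    }

    public static void main(String[] args) {
        float threshold = 40;
        // A color 30 units away on one channel: 900 < 1600, so it matches.
        System.out.println(distSq(183, 12, 83, 213, 12, 83) < threshold * threshold); // true
        // A color 50 units away: 2500 >= 1600, so it does not.
        System.out.println(distSq(183, 12, 83, 233, 12, 83) < threshold * threshold); // false
    }
}
```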

Hello @marijn,

I was able to integrate my example:

// In setup()
video = createImage(640, 360, RGB);

//in draw()
  video = spout.receiveTexture(video);
  image(video, 0, 0);
  loadPixels();
  arrayCopy(pixels, video.pixels); // Makes a copy to work with

This will also work:

void draw() {
  
  video = spout.receiveTexture(video);
  image(video, 0, 0);
  loadPixels();

  ArrayList<Blob> currentBlobs = new ArrayList<Blob>();

  // Begin loop to walk through every pixel
  for (int x = 0; x < width; x++ ) {
    for (int y = 0; y < height; y++ ) {
      int loc = x + y * width;
      // What is current color
      color currentColor = pixels[loc];

// ...

If you set the maximum textSize() you are using in setup(), the text in draw() won’t be blurry.
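As an aside, Processing’s arrayCopy(src, dst) is a wrapper around Java’s System.arraycopy(), so the line above gives video.pixels an independent copy of the sketch’s pixels[] that the blob tracker can read without being affected by later drawing. The copy semantics, illustrated with a small plain-Java array instead of real pixel data:

```java
public class ArrayCopyDemo {
    public static void main(String[] args) {
        // Stand-ins for the sketch's pixels[] and video.pixels[].
        int[] src = {0xFF000000, 0xFFFFFFFF, 0xFF808080};
        int[] dst = new int[src.length];

        // Processing's arrayCopy(src, dst) is equivalent to:
        System.arraycopy(src, 0, dst, 0, src.length);

        // dst is an independent copy; mutating it leaves src intact.
        dst[0] = 0;
        System.out.println(src[0] == 0xFF000000); // prints true
        System.out.println(dst[1] == src[1]);     // prints true
    }
}
```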


:)

Hey @glv, thank you very much! It is starting to become clearer to me.
Somehow I cannot get it to work. I can’t figure out what is different in my code from your example :thinking:

// Daniel Shiffman
// http://codingtra.in
// http://patreon.com/codingtrain
// Code for: https://youtu.be/o1Ob28sF0N8

import processing.video.*;
import spout.*;
PImage  video; 


// DECLARE A SPOUT OBJECT
Spout spout;


int blobCounter = 0;

int maxLife = 50;

color trackColor; 
float threshold = 0;
float distThreshold = 10;

ArrayList<Blob> blobs = new ArrayList<Blob>();

void setup() {
  size(640, 360, P3D);
  //String[] cameras = Capture.list();
 // printArray(cameras);
 
  
  spout = new Spout(this);
  
  // 183.0 12.0 83.0
  trackColor = color(183, 12, 83);
}

//void captureEvent(Capture video) {
//  video.read();
//}

void keyPressed() {
  if (key == 'a') {
    distThreshold+=5;
  } else if (key == 'z') {
    distThreshold-=5;
  }
  if (key == 's') {
    threshold+=5;
  } else if (key == 'x') {
    threshold-=5;
  }
}

void draw() {
  
  video = spout.receiveTexture(video);
  image(video, 0, 0);
//arrayCopy(pixels, video.pixels); // Makes a copy to work with
  video.loadPixels();
  //arrayCopy(pixels, video.pixels); // Makes a copy to work with

  ArrayList<Blob> currentBlobs = new ArrayList<Blob>();


  // Begin loop to walk through every pixel
  for (int x = 0; x < video.width; x++ ) {
    for (int y = 0; y < video.height; y++ ) {
      int loc = x + y * video.width;
      // What is current color
      color currentColor = video.pixels[loc];
      float r1 = red(currentColor);
      float g1 = green(currentColor);
      float b1 = blue(currentColor);
      float r2 = red(trackColor);
      float g2 = green(trackColor);
      float b2 = blue(trackColor);

      float d = distSq(r1, g1, b1, r2, g2, b2); 

      if (d < threshold*threshold) {

        boolean found = false;
        for (Blob b : currentBlobs) {
          if (b.isNear(x, y)) {
            b.add(x, y);
            found = true;
            break;
          }
        }

        if (!found) {
          Blob b = new Blob(x, y);
          currentBlobs.add(b);
        }
      }
    }
  }

  for (int i = currentBlobs.size()-1; i >= 0; i--) {
    if (currentBlobs.get(i).size() < 500) {
      currentBlobs.remove(i);
    }
  }

  // There are no blobs!
  if (blobs.isEmpty() && currentBlobs.size() > 0) {
    println("Adding blobs!");
    for (Blob b : currentBlobs) {
      b.id = blobCounter;
      blobs.add(b);
      blobCounter++;
    }
  } else if (blobs.size() <= currentBlobs.size()) {
    // Match whatever blobs you can match
    for (Blob b : blobs) {
      float recordD = 1000;
      Blob matched = null;
      for (Blob cb : currentBlobs) {
        PVector centerB = b.getCenter();
        PVector centerCB = cb.getCenter();         
        float d = PVector.dist(centerB, centerCB);
        if (d < recordD && !cb.taken) {
          recordD = d; 
          matched = cb;
        }
      }
      matched.taken = true;
      b.become(matched);
    }

    // Whatever is leftover make new blobs
    for (Blob b : currentBlobs) {
      if (!b.taken) {
        b.id = blobCounter;
        blobs.add(b);
        blobCounter++;
      }
    }
  } else if (blobs.size() > currentBlobs.size()) {
    for (Blob b : blobs) {
      b.taken = false;
    }


    // Match whatever blobs you can match
    for (Blob cb : currentBlobs) {
      float recordD = 1000;
      Blob matched = null;
      for (Blob b : blobs) {
        PVector centerB = b.getCenter();
        PVector centerCB = cb.getCenter();         
        float d = PVector.dist(centerB, centerCB);
        if (d < recordD && !b.taken) {
          recordD = d; 
          matched = b;
        }
      }
      if (matched != null) {
        matched.taken = true;
        // Resetting the lifespan here is no longer necessary since setting `lifespan = maxLife;` in the become() method in Blob.pde
        // matched.lifespan = maxLife;
        matched.become(cb);
      }
    }

    for (int i = blobs.size() - 1; i >= 0; i--) {
      Blob b = blobs.get(i);
      if (!b.taken) {
        if (b.checkLife()) {
          blobs.remove(i);
        }
      }
    }
  }

  for (Blob b : blobs) {
    b.show();
  } 




  textAlign(RIGHT);
  fill(0);
  //text(currentBlobs.size(), width-10, 40);
  //text(blobs.size(), width-10, 80);
  textSize(24);
  text("color threshold: " + threshold, width-10, 50);  
  text("distance threshold: " + distThreshold, width-10, 25);
  
  
}


float distSq(float x1, float y1, float x2, float y2) {
  float d = (x2-x1)*(x2-x1) + (y2-y1)*(y2-y1);
  return d;
}


float distSq(float x1, float y1, float z1, float x2, float y2, float z2) {
  float d = (x2-x1)*(x2-x1) + (y2-y1)*(y2-y1) +(z2-z1)*(z2-z1);
  return d;
}

void mousePressed() {
  // Save color where the mouse is clicked in trackColor variable
  int loc = mouseX + mouseY*video.width;
  trackColor = video.pixels[loc];
  println(red(trackColor), green(trackColor), blue(trackColor));
}

Hi @marijn,

I updated my last post.

Your code is not the same as mine.
Scrutinize your code and my example a bit more.

Take a look at the examples that come with the Spout library.

OPTION 4 works directly with the pixels array, and you should be able to integrate it with your code:

    // OPTION 1: Receive and draw the texture
    //   - see example

    // OPTION 2: Receive into PGraphics
    //   - see example

    // OPTION 3: Receive into PImage texture
    //   - see example

    // OPTION 4: Receive into PImage pixels
    // Note that receiving pixels is slower than receiving
    // a texture alone and depends on the sender size.
    img = spout.receiveImage(img);
    if (img.loaded)
      image(img, 0, 0, width, height);


:)

Hi @glv, thanks a lot for the patient explanation.
I did manage to make it work thanks to your great examples.

Thanks a lot!
