Changing webcam pixels to the "nearest" color from a color palette array

Great! Thank you so much for sharing this.

Are there any strategies that you ended up using – for dealing with the holes, or warping, or glare, etc. – that you would suggest to others?

Haven’t gotten to warping yet. So far I use blur to deal with holes but there is still a bit of noise, so might ask for advice here to improve it. Gonna clean things up a bit and then share the code.

As an alternative to blur, a combination of dilate followed by erode is a good, simple way to fill small holes. Given your goals, I would guess that you could be fairly aggressive with how much you dilate – you just have to avoid completely filling the legitimately empty bead slots, so that the erode step can still restore them. I believe there is some fairly good documentation of this in opencv.
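Something like this, untested, using the OpenCV for Processing library – beads.png and the pass count are just placeholders for your own frame and tuning:

```
import gab.opencv.*;

OpenCV opencv;
PImage closed;

void setup() {
  size(640, 480);
  PImage src = loadImage("beads.png");   // placeholder name for your captured frame
  opencv = new OpenCV(this, src);
  opencv.gray();
  opencv.threshold(128);                 // work on a binary image
  // dilate a few times to swallow the small holes,
  // then erode the same number of times to pull the shapes back to size
  for (int i = 0; i < 3; i++) opencv.dilate();
  for (int i = 0; i < 3; i++) opencv.erode();
  closed = opencv.getSnapshot();
}

void draw() {
  image(closed, 0, 0);
}
```

The number of passes is where the "aggressive" part comes in: raise it until the noise disappears, but stop before real empty slots vanish.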

You could also use blob detection to detect the holes, then patch them with a color taken from their edge. Or, since in your case the holes are also almost identical in size, you could use template matching to detect and patch. That is dead simple – although a potential downside is that it might be more sensitive to the lighting conditions of your setup.
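A rough, untested sketch of the blob-detection version, again with OpenCV for Processing – the threshold, the size cutoff, and beads.png are guesses you would have to tune for your own setup:

```
import gab.opencv.*;
import java.awt.Rectangle;

PImage patched;

void setup() {
  size(640, 480);
  PImage img = loadImage("beads.png");      // placeholder for your frame
  patched = img.get();                      // copy to patch into

  OpenCV opencv = new OpenCV(this, img);
  opencv.gray();
  opencv.threshold(60);                     // tune so the holes come out as blobs
  for (Contour blob : opencv.findContours()) {
    if (blob.area() > 400) continue;        // skip anything too big to be a hole
    Rectangle box = blob.getBoundingBox();
    // sample a color just outside the blob and paint the whole box with it
    color edge = img.get(max(0, box.x - 2), max(0, box.y - 2));
    for (int y = box.y; y < box.y + box.height; y++) {
      for (int x = box.x; x < box.x + box.width; x++) {
        patched.set(x, y, edge);
      }
    }
  }
}

void draw() {
  image(patched, 0, 0);
}
```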

Deciding what color is nearest to another in the digital realm is a notoriously difficult thing.

I wrote a Python script that uses a library someone developed around the most advanced current color model for finding nearest colors; that model does a great job of sorting color. Example output is in this post of mine.

If Processing or a Processing library can do HSL sorting, that might be the best you can do for now.

I would be thrilled if a library were developed that did CIECAM02 color modeling in Processing.
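For reference, the naive version in Processing is just a Euclidean distance in RGB – it works, but that comparison is exactly the step that better color models (Lab, CIECAM02) improve on. The palette here is a made-up placeholder:

```
// naive nearest-color lookup: straight Euclidean distance in RGB
color[] palette = { #000000, #FFFFFF, #FF0000, #00FF00, #0000FF, #FFFF00 };

color nearestColor(color c) {
  color best = palette[0];
  float bestDist = Float.MAX_VALUE;
  for (color p : palette) {
    float dr = red(c)   - red(p);
    float dg = green(c) - green(p);
    float db = blue(c)  - blue(p);
    float d = dr*dr + dg*dg + db*db;   // squared distance is enough for comparing
    if (d < bestDist) {
      bestDist = d;
      best = p;
    }
  }
  return best;
}
```

Swapping that distance function for one computed in a perceptual space is what gets you the nicer matches.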

Thanks for the tips! I have only done blob detection and erosion + dilation with a black/white image after thresholding. Any examples of how to do it with specific colors?

You can do it on a binary buffer for each channel if you do it after quantization. During the nearest-pixel pass, the result for each nearest color is written to a different buffer, and each buffer can then be processed as a binary image (e.g. erode, dilate). The buffers are then combined.

Right… Would you happen to know where I might find an example that shows how to do that? I lost you a bit after the first sentence (I’m a creative coder in the sense that I don’t really know what I’m doing, haha).

@jeremydouglass I looked at your suggestion once more and I found this example for reference, so I think I know what you mean now: https://github.com/jorditost/ImageFiltering/tree/master/MultipleColorTracking.

You suggest I use the OpenCV library for Processing to save the colors to different black/white channels, right?

One question: how do I write to an OpenCV buffer from my own function? The linked example does it with a built-in OpenCV function, opencv.inRange, which filters the image based on a range of hue values. I would like to do it from my own “nearest pixel” function…

There are many ways to get image / pixel data into and out of opencv objects. Perhaps start with the simplest:

  1. create n PImages, one for each palette color, and fill them with black
  2. in the loop of your nearest-pixel function, work out which palette color each pixel maps to
  3. depending on that result, write the pixel as a white dot into the matching mask image (see the sketch below)

Now you have n masks.
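Untested, but roughly like this – nearestIndex() is a placeholder for whatever your own "which palette color is closest" comparison is:

```
// steps 1-3: build one black-and-white mask per palette color
PImage[] buildMasks(PImage frame, color[] palette) {
  PImage[] masks = new PImage[palette.length];
  for (int i = 0; i < masks.length; i++) {
    masks[i] = createImage(frame.width, frame.height, RGB);  // starts out all black
  }
  frame.loadPixels();
  for (int y = 0; y < frame.height; y++) {
    for (int x = 0; x < frame.width; x++) {
      color c = frame.pixels[y * frame.width + x];
      int idx = nearestIndex(c);            // placeholder for your nearest-pixel code
      masks[idx].set(x, y, color(255));     // white dot in the winning mask
    }
  }
  return masks;
}
```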

  1. load each mask into opencv with opencv = new OpenCV(this, img);
  2. close holes with opencv.dilate(); opencv.erode(); – you can also experiment with blur

Now you have n masks with closed holes.
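A sketch of that second half (again untested) – run each mask through opencv, then paint its palette color back in wherever the cleaned mask is white:

```
import gab.opencv.*;

// clean each mask, then recombine into the final quantized image
PImage recombine(PImage[] masks, color[] palette) {
  PImage out = createImage(masks[0].width, masks[0].height, RGB);
  out.loadPixels();
  for (int i = 0; i < masks.length; i++) {
    OpenCV opencv = new OpenCV(this, masks[i]);
    opencv.gray();
    opencv.dilate();                  // close the holes...
    opencv.erode();                   // ...then shrink back to the original outlines
    PImage cleaned = opencv.getSnapshot();
    cleaned.loadPixels();
    for (int p = 0; p < cleaned.pixels.length; p++) {
      if (brightness(cleaned.pixels[p]) > 128) {
        out.pixels[p] = palette[i];   // this palette color wherever the mask is white
      }
    }
  }
  out.updatePixels();
  return out;
}
```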

For basic examples of moving color and grayscale pixels in and out of the opencv object, see:

There are probably much faster ways of running your custom threshold directly on the OpenCV buffer – I haven’t worked with it recently, so I’m not sure – but this is a great quick starting point to see if dilate + erode actually does what you want. If not, you may not need opencv at all, so there’s no need to optimize.

Thank you so much, will try it out!