Hi.
I'm a bit puzzled by something in the BlobDetection library (by Julien ‘v3ga’ Gachadoat).
I adapted its webcam example to display the blobs and also send the blob coordinates over OSC.
As written, it identifies blobs as darker areas against lighter surroundings.
I wanted to reverse this behavior: to get blobs for lighter image areas.
So I inserted filter(INVERT); immediately after copying the camera data into a separate image:
void draw()
{
  if (newFrame)
  {
    newFrame = false;

    // Mirror the display horizontally and draw the camera frame.
    scale(-1, 1);
    image(cam, 0, 0, -width, height);

    // Copy the camera frame into a working image for blob detection.
    img.copy(cam, 0, 0, cam.width, cam.height,
             0, 0, img.width, img.height);

    filter(INVERT);  // intended to invert img before detecting blobs

    fastblur(img, 2);
    theBlobDetection.computeBlobs(img.pixels);
    drawBlobsAndEdges(true, true, true);
  }
}
Without filter(INVERT), the blobs cover the darker screen areas. With filter(INVERT), the displayed video is inverted, so the blobs now sit on the lighter screen areas – but these are exactly the same areas as before; the detection result itself hasn't changed.
Should I assume, then, that filter(INVERT) affects only what is displayed in the window, and not the contents of img?
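If so, maybe the fix is to call filter() on the image object itself rather than on the sketch – just a guess on my part, since I haven't confirmed that PImage has its own filter() method:

img.filter(INVERT);  // guess: invert img's own pixel buffer, not the display window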
Is there a function that inverts the actual pixel values? I searched the reference page at Processing.org for “invert” and found only the Boolean “not” operator (!), which is certainly not the right thing.
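Failing that, I suppose I could invert the pixels by hand before running the blob detection. A rough, untested sketch – invertPixels() is just a helper name I made up, not anything from the library:

// Hypothetical helper: invert a PImage's pixels in place.
void invertPixels(PImage im)
{
  im.loadPixels();
  for (int i = 0; i < im.pixels.length; i++)
  {
    color c = im.pixels[i];
    // Flip each channel: 255 becomes 0, 0 becomes 255.
    im.pixels[i] = color(255 - red(c), 255 - green(c), 255 - blue(c));
  }
  im.updatePixels();
}

Then I would call invertPixels(img) in place of filter(INVERT) above, before fastblur(). But if Processing already has a built-in for this, I'd rather use that.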
If I wanted to track hands, for instance, then currently the only way to get a meaningful result would be to wear a white shirt and black gloves. What if I wanted to wear a dark shirt and no gloves? That's the point.
hjh