I am experimenting with a series of little test cases for image compositing, and am puzzled by the behaviour I'm seeing from PImage.mask().
size(600, 600);
PImage LayerA, LayerM;
// size() first, so width and height are valid when read below
int Col2 = width/2;
int Row2 = height/2;
background(255);
// 300 x 300 px
LayerA = loadImage("Aqua_Square.png");
// 300 x 300 px
LayerM = loadImage("BWMask_Square2.png");
// try to force LayerM to be the right kind of thing in case it isn't already
LayerM.filter(GRAY);
// show background, LayerA, LayerM, and LayerM applied as mask
image(LayerA, Col2, 0);
image(LayerM, 0, Row2);
LayerA.mask(LayerM);
image(LayerA, Col2, Row2);
What I get with this is:
What’s odd about this to me is that the contour of the mask, as shown in the final image where it is applied, doesn’t closely match the detail you can clearly see in BWMask_Square2.png: applied as a mask, the result is really blocky and much of that detail is lost. The areas that are pure black in the mask are reproduced accurately; the areas that are greyscale are not.
I initially thought this might be because of the cryptic (to me) statement in the docs for PImage.mask() that “This mask image should only contain grayscale data, but only the blue color channel is used.”
So I whacked the LayerM PImage with a filter(GRAY), which I think should have made it grayscale if it wasn’t already.
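To check what the mask actually contains after that, I also tried dumping the blue channel (since that is apparently all mask() reads) and the alpha along one row, in a separate little sketch (same 300 x 300 px BWMask_Square2.png as above; this is just a rough diagnostic, nothing authoritative):

PImage m = loadImage("BWMask_Square2.png");
m.filter(GRAY);
m.loadPixels();
// Sample every 10th pixel along the middle row and print the
// blue and alpha channels, since blue is what mask() uses.
int row = m.height / 2;
for (int x = 0; x < m.width; x += 10) {
  color c = m.pixels[row * m.width + x];
  println(x + ": blue = " + blue(c) + ", alpha = " + alpha(c));
}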
Any hints about how to figure this out? (I am chasing the idea that what I’m using as the mask isn’t in the right format somehow. Maybe ‘manually’ resetting the R and G channels to zero, pixel by pixel, would help, if such a thing is possible; a rough sketch of what I mean is below.)
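For reference, this is roughly the pixel loop I have in mind for that last idea (untested sketch: it keeps only the blue channel, zeroes R and G, and lets alpha default to fully opaque):

LayerM.loadPixels();
for (int i = 0; i < LayerM.pixels.length; i++) {
  // Keep only the blue channel (what mask() reads), zero out R
  // and G; color() defaults to fully opaque alpha, so any stray
  // alpha data in the PNG is discarded too.
  LayerM.pixels[i] = color(0, 0, blue(LayerM.pixels[i]));
}
LayerM.updatePixels();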
Thanks for your advice!