Hello all
I’ve been programming interactive apps with a Kinect v2 (model 1520) in Processing 3.5.3. This app is for a top-down projection that uses a person’s blob to reveal one image beneath another. In effect, the blob will temporarily mask out the Top image to reveal the Base image. The current app uses the Kinect depth image as a PImage, which is used as an image mask. The code below works fine for my purpose but has one issue:
The Kinect depth image is 512x424, and to use this as a mask, the Top and Base images also need to be 512x424, or Processing gives an error. This is too low a resolution to be projected at 3m square.
Is there any way to resize the Kinect depth map to be 1280x720 or 1920x1080, so that the Top and Base images can be this resolution?
I have tried to resize the depth PImage using PImage.resize() with no luck. I’ve also tried to put the depth data into an int array and use that as a mask, but no luck with this either. The blob doesn’t have to be high resolution, as it’s being blurred anyway, but the Top and Base images do need to be high resolution.
I would really appreciate some help in making the overall sketch run at this higher resolution. Thanks!
Resizing the depth PImage can work just fine, but it could also be slow if you are doing it every frame.
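A minimal per-frame resize sketch might look like this (a sketch only, assuming the Open Kinect for Processing library, where kinect2.getDepthImage() returns a 512x424 PImage; the image file names are placeholders):

```
import org.openkinect.processing.*;

Kinect2 kinect2;
PImage topImg, baseImg;

void setup() {
  size(1280, 720);
  kinect2 = new Kinect2(this);
  kinect2.initDepth();
  kinect2.initDevice();
  topImg  = loadImage("top-1280x720.png");   // placeholder file names
  baseImg = loadImage("base-1280x720.png");
}

void draw() {
  // Copy the 512x424 depth image and scale it up to the projection size
  PImage depthImg = kinect2.getDepthImage().get();
  depthImg.resize(1280, 720);                // done every frame -> can be slow
  PImage masked = topImg.get();
  masked.mask(depthImg);   // dark mask pixels make the Top image transparent
  image(baseImg, 0, 0);
  image(masked, 0, 0);
}
```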
An alternative is to loop over the pixels of a full-size depthImg and set each pixel value based on the nearest corresponding rawDepth pixel value. Here is an example with map() to make it clear. No Kinect on hand, so the brightness of a 512x424 Processing logo is used instead.
The input is a 512x424 brightness array; the output is a 1024x720 image suitable for masking an image of the same size – although it can be any size.
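A sketch along those lines (the file names are placeholders, and the brightness array stands in for the Kinect’s rawDepth values):

```
int srcW = 512, srcH = 424;                  // Kinect v2 depth resolution
int dstW = 1024, dstH = 720;                 // target mask resolution
float[] sourceVals = new float[srcW * srcH]; // stand-in for rawDepth
PImage maskImg, topImg, baseImg;

void setup() {
  size(1024, 720);
  // Use the brightness of a 512x424 logo in place of real depth data
  PImage logo = loadImage("processing-logo-512x424.png");
  logo.loadPixels();
  for (int i = 0; i < sourceVals.length; i++) {
    sourceVals[i] = brightness(logo.pixels[i]);
  }
  topImg  = loadImage("top-1024x720.png");
  baseImg = loadImage("base-1024x720.png");
  maskImg = createImage(dstW, dstH, RGB);
}

void draw() {
  // Loop over the TARGET pixels and read the nearest SOURCE pixel via map()
  maskImg.loadPixels();
  for (int y = 0; y < dstH; y++) {
    int sy = int(map(y, 0, dstH, 0, srcH));
    for (int x = 0; x < dstW; x++) {
      int sx = int(map(x, 0, dstW, 0, srcW));
      maskImg.pixels[y * dstW + x] = color(sourceVals[sy * srcW + sx]);
    }
  }
  maskImg.updatePixels();

  image(baseImg, 0, 0);
  PImage masked = topImg.get();
  masked.mask(maskImg);   // dark mask pixels reveal the Base image underneath
  image(masked, 0, 0);
}
```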
The key idea here is that the target doesn’t have to match the source – you just loop over the target, assigning each pixel the value of the corresponding source pixel. For better performance you can rewrite the map() calls as multiplications with pre-computed ratios.
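For example, the same loop wrapped in a small helper (upscaleNearest is just a made-up name here), with the ratios computed once:

```
// map()-free variant: pre-compute the x/y ratios, then use plain multiplication
PImage upscaleNearest(float[] src, int srcW, int srcH, int dstW, int dstH) {
  PImage out = createImage(dstW, dstH, RGB);
  float rx = srcW / (float) dstW;
  float ry = srcH / (float) dstH;
  out.loadPixels();
  for (int y = 0; y < dstH; y++) {
    int rowOffset = int(y * ry) * srcW;   // nearest source row
    for (int x = 0; x < dstW; x++) {
      out.pixels[y * dstW + x] = color(src[rowOffset + int(x * rx)]);
    }
  }
  out.updatePixels();
  return out;
}
```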