Intel RealSense Library for Processing


Maybe someone is interested in the news that I have implemented a basic version of an Intel RealSense library for Processing. The library is essentially a JNI wrapper around the librealsense2 framework.

edwinRNDR and I started developing a Java wrapper for librealsense, and I have now found the time to add support for Processing.

Currently, all major operating systems with x64 are supported out of the box (no ARM / x86). I have included the prebuilt binaries to make it very easy to get started with the RealSense cameras.

Cameras that I know have been tested:

  • Intel RealSense D435 & D415
  • Razer Stargazer (SR300)

Check out the library here:

Would be great if you could try it out and give me feedback.

:zap: The API of the library can and will still change, because this is just a pre-release.


This is really cool. I haven’t used it, but I have read about RealSense before. I guess you just need the camera, collect the data, and then pass it to Processing? Or are you collecting the data directly from the camera?



The library currently reads the frame data and converts it into a Processing-friendly PImage object. Or what do you mean by your question? :slight_smile:
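For context, a `PImage` in Processing is backed by an `int[]` of ARGB pixels, so converting raw frame data mostly comes down to packing the camera bytes into that format. Here is a minimal, self-contained sketch of that idea in plain Java — the class and method names are illustrative, not the library's actual internals:

```java
public class FrameToPixels {

    // Pack interleaved 8-bit RGB frame data into ARGB ints,
    // the pixel format Processing's PImage.pixels array uses.
    static int[] rgbToArgb(byte[] rgb, int width, int height) {
        int[] pixels = new int[width * height];
        for (int i = 0; i < pixels.length; i++) {
            int r = rgb[i * 3] & 0xFF;     // mask to treat bytes as unsigned
            int g = rgb[i * 3 + 1] & 0xFF;
            int b = rgb[i * 3 + 2] & 0xFF;
            pixels[i] = 0xFF000000 | (r << 16) | (g << 8) | b; // opaque alpha
        }
        return pixels;
    }

    public static void main(String[] args) {
        // A tiny 2x1 frame: one red pixel, one green pixel.
        byte[] frame = {(byte) 255, 0, 0, 0, (byte) 255, 0};
        int[] px = rgbToArgb(frame, 2, 1);
        System.out.println(Integer.toHexString(px[0])); // ffff0000
        System.out.println(Integer.toHexString(px[1])); // ff00ff00
    }
}
```

In a real sketch the resulting array would be copied into `PImage.pixels` followed by `updatePixels()`.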


@cansik – thank you so much for sharing this!

What are the black pixels on the example depth image – are those null values due to edge reflection?


Also: how would you compare using the Intel RealSense D435 and this library with using a Kinect 1 or 2 with the SimpleOpenNI library by @totovr76?


I’m also using the D435, but it has pros and cons versus the Kinect v2. In general the Kinect v2 has better hardware; the problem is that Microsoft has already shut down the project. For now I mostly use SimpleOpenNI because it has the pose estimation algorithm to track the user contour and, of course, the joints, but at a certain point it must be migrated to other cameras.


Yes, the black spots are the zero-depth points, usually caused by reflection. I convert the depth buffer into a depth map with a simple mapping. I think there could be a better solution; it should also be possible to use the auto-range mapper from the Intel library.
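To make the "simple mapping" concrete, here is a sketch of how a raw 16-bit depth buffer could be linearly mapped to 8-bit grayscale, with zero-depth readings kept black. The method name and the min/max range parameters are my own illustration, not the library's API:

```java
public class DepthMap {

    // Linearly map raw 16-bit depth values to 8-bit grayscale.
    // A value of 0 means "no reading" (e.g. due to reflection) and stays black.
    static int[] depthToGray(short[] depth, int minDepth, int maxDepth) {
        int[] gray = new int[depth.length];
        for (int i = 0; i < depth.length; i++) {
            int d = depth[i] & 0xFFFF; // raw depth values are unsigned
            if (d == 0) {
                gray[i] = 0;
                continue;
            }
            int clamped = Math.max(minDepth, Math.min(maxDepth, d));
            gray[i] = (clamped - minDepth) * 255 / (maxDepth - minDepth);
        }
        return gray;
    }

    public static void main(String[] args) {
        short[] depth = {0, 500, 1000, 2000}; // raw sensor units
        int[] g = depthToGray(depth, 500, 2000);
        System.out.println(java.util.Arrays.toString(g)); // [0, 0, 85, 255]
    }
}
```

A fixed range like this clips anything outside [minDepth, maxDepth], which is exactly where an auto-range mapper would do better.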

OpenNI does more than just return the images of the camera. But it should be simple to use OpenPose or another framework to track the pose. That does not really need a depth input, just the color image.


For OpenPose you need a good GPU to process all the frames, or you will get a low frame rate. I would rather recommend PoseNet.


Another difference will be the Intel processing blocks. I am currently adding them to the library and extending a lot of the native binding. For example, here you can see the native colorizer:
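As a rough illustration of what a colorizer processing block does: it maps each depth value to a color from a palette, so that near and far points are easy to tell apart. Below is a simplified stand-in in plain Java, blending blue (near) to red (far) — the real rs2 colorizer uses a jet-like palette and configurable options, so treat this purely as a sketch of the idea:

```java
public class DepthColorizer {

    // Map a normalized depth value t in [0, 1] to an ARGB color,
    // blending from blue (near) to red (far). This is a simplified
    // stand-in for the native colorizer's palette, not its actual output.
    static int colorize(double t) {
        t = Math.max(0.0, Math.min(1.0, t)); // clamp to the valid range
        int r = (int) Math.round(255 * t);
        int b = (int) Math.round(255 * (1.0 - t));
        return 0xFF000000 | (r << 16) | b;
    }

    public static void main(String[] args) {
        System.out.println(Integer.toHexString(colorize(0.0))); // ff0000ff (near = blue)
        System.out.println(Integer.toHexString(colorize(1.0))); // ffff0000 (far = red)
    }
}
```

The advantage of doing this in a native processing block rather than in the sketch is that the mapping runs inside librealsense before the frame ever reaches Java.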

I am now developing it in another branch of my librealsense fork, because I had to break the API & build setup: