Intel RealSense Library for Processing

Maybe someone is interested in the news that I have implemented a basic version of an Intel RealSense library for Processing. The library is basically a JNI wrapper around the librealsense2 framework.

edwinRNDR and I started developing a Java wrapper for librealsense, and I have now found the time to add support for Processing.

Currently all major operating systems are supported out of the box on x64 (no ARM / x86). I have included the prebuilt binaries to make it very easy to get started with the RealSense cameras.

Cameras that I know have been tested:

  • Intel RealSense D435 & D415
  • Razer Stargazer (SR300)

Check out the library here:

It would be great if you could try it out and give me feedback.

:zap: The API of the library can and will still change, because it is just a pre-release.

8 Likes

This is really cool. I haven’t used it but I’ve read about RealSense before. I guess you just need the camera, collect data and then pass it to Processing? Or are you collecting data directly from the camera?

Kf

1 Like

The library currently reads the frame data and converts it into a Processing-friendly PImage object. Or what do you mean by your question? :slight_smile:
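
A minimal sketch looks roughly like this (the exact setup calls and method names may differ between releases, since the API is still changing; check the examples that ship with the library):

```java
import ch.bildspur.realsense.*;

RealSenseCamera camera;

void setup() {
  size(640, 480);
  camera = new RealSenseCamera(this);

  // enable a depth stream and start the device
  camera.enableDepthStream(640, 480);
  camera.start();
}

void draw() {
  // read the next frame set and convert it into PImages
  camera.readFrames();

  // the depth frame is now a regular PImage
  image(camera.getDepthImage(), 0, 0);
}
```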

1 Like

@cansik – thank you so much for sharing this!

What are the black pixels on the example depth image – are those null values due to edge reflection?

1 Like

Also: how would you compare using the Intel RealSense D435 and this library with using Kinect 1 or 2 with the SimpleOpenNI library by @totovr76?

1 Like

I’m also using the D435; it has pros and cons compared to the Kinect V2. In general the Kinect V2 has the better hardware, but the problem is that Microsoft has already shut down the project. I’m mostly using SimpleOpenNI because it has the pose estimation algorithm to track the user contour and, of course, the joints, but at some point it will have to be migrated to other cameras.

2 Likes

Yes, the black spots are the zero-depth points, usually caused by reflection. I convert the depth buffer into a depth map with a simple mapping. I think there could be a better solution, or it should be possible to use the auto-range mapper from the Intel library.
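
Conceptually, the mapping is as simple as the following sketch (hypothetical illustration code, not the actual library internals; `maxDepth` clips far values so the near range gets the full gray spectrum):

```java
// hypothetical sketch of the simple mapping: scale the raw 16-bit depth
// values into an 8-bit grayscale PImage; zero-depth pixels simply stay black
PImage depthToImage(short[] depthBuffer, int w, int h, int maxDepth) {
  PImage img = createImage(w, h, RGB);
  img.loadPixels();
  for (int i = 0; i < depthBuffer.length; i++) {
    int depth = depthBuffer[i] & 0xFFFF;  // raw depth in device units
    int gray = (int) map(min(depth, maxDepth), 0, maxDepth, 0, 255);
    img.pixels[i] = color(gray);
  }
  img.updatePixels();
  return img;
}
```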

OpenNI does more than just return the images of the camera. But it should be simple to just use OpenPose or another framework to track the pose. That does not really need a depth input, just the colored image.

2 Likes

For OpenPose you need a good GPU to process all the frames or you will get a low fps; I’d recommend using PoseNet instead.

4 Likes

Another difference will be the Intel processing blocks. I am currently adding them to the library and extending a lot of the native bindings. For example, here you can see the native Colorizer:

I am now developing it in another branch of my librealsense fork, because I had to break the API & build setup:
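
Once the blocks are exposed on the Processing side, usage could look something like this (a hypothetical preview building on the minimal sketch above; enableColorizer() and ColorScheme are placeholder names, not final API):

```java
// hypothetical preview of the Processing-side API
camera.enableColorizer(ColorScheme.Jet);

// the depth image then comes back already color-mapped
PImage colorizedDepth = camera.getDepthImage();
```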

4 Likes

Hi and thanks for this! It works well and I’m liking the D435. Do you know when you’d be able to port over the other functions from the SDK? It would be nice to use some of the post-processing filters to clean up the depth image. Flipping the axis for camera input would help as well. Thanks again!

1 Like

Yes, the library is ready and all the pipeline processors are implemented as well. At the moment I am planning how to include the new features in the existing Processing library, so that old code won’t break but all the features are easily accessible.

If you are already interested in using the new library, head over to this repository, which includes examples for Processing as well (at the moment only in Java). I have already created a prebuilt dev build which you can download and use in your own Processing sketch (just drag the jar into the Processing IDE).

2 Likes

I am ready to test your code; I’m working with the Kinect 2 and would like to test the D435. I’m interested in max-frame-rate motion capture of raw vectors for post-production in Maya.

2 Likes

I have spent a bit of time redesigning it and adding a lot of features, like filters, depth colorization, frame alignment, advanced device settings and so on. You can download the library directly from GitHub; after a (short) testing phase I will release it to the Contribution Manager.
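
To give an idea of how the new features fit together, a configuration could look roughly like this (parameter values are made up and some method names may differ, so check the library examples for the real signatures):

```java
import ch.bildspur.realsense.*;

RealSenseCamera camera;

void setup() {
  size(640, 480);
  camera = new RealSenseCamera(this);

  camera.enableDepthStream(640, 480);
  camera.enableColorStream(640, 480);

  // post-processing filters run on the depth frames
  camera.addDecimationFilter(2);   // downsample the depth frame
  camera.addTemporalFilter();      // smooth depth over time
  camera.addHoleFillingFilter();   // fill small zero-depth gaps

  camera.start();
}

void draw() {
  camera.readFrames();
  image(camera.getDepthImage(), 0, 0);
}
```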

3 Likes

Thanks Cansik, looking forward to playing with it on Monday!

1 Like


Thanks for all the hard work! It’s rockin’ so far. I’m using the D435 on Win10 x64; here are my notes:

  • I couldn’t figure out how to set the enums for the temporal and hole-filling filters, and I got a RealSenseException when using addTemporalFilter() without the args.
  • I couldn’t get addDecimationFilter() to work unless I used 1 for the filterMagnitude; anything else gave me a BufferUnderflowException.
  • addZeroOrderInvalidationFilter() gave me a RealSenseException.

2 Likes

Hello! So is it possible to use the SimpleOpenNI library with a D435? That would help me a lot.

1 Like

Thank you for testing the library. The enums can be found in its source code.

The problem with the decimation filter was that it changes the image size, while my copy method tried to read as many pixels as the PImage buffer needs, so it was reading too many pixels. I fixed it by reinitialising the buffer when the image resolution changes.
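
The fix, in spirit (a simplified sketch of the idea, not the exact library code):

```java
PImage img;

// recreate the target PImage whenever the incoming frame size changes
// (e.g. after a decimation filter downsampled the stream), instead of
// assuming the resolution stays fixed
void ensureImageSize(int frameWidth, int frameHeight) {
  if (img == null || img.width != frameWidth || img.height != frameHeight) {
    img = createImage(frameWidth, frameHeight, RGB);
  }
  // ...then copy exactly frameWidth * frameHeight pixels from the frame buffer
}
```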

There was a bug when setting up the temporal filter and setting its options: the persistency index has to be set via the holes-fill attribute, which is really strange and not well documented by Intel. I have already fixed it and will release version 2.1.1 with this bugfix.
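
In 2.1.1, setting the persistency explicitly should then look roughly like this (the signature and the `PersistencyIndex` enum name may differ slightly in the release, so check the examples):

```java
// smooth alpha, smooth delta, persistency index
// (the persistency value is written to the holes-fill option internally)
camera.addTemporalFilter(0.4f, 20, PersistencyIndex.ValidIn2of4);
```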

The ZeroOrderInvalidationFilter is just something I’ve added, but I don’t really have a clue what it is good for. Intel does not provide much documentation about it, so maybe it’s meant for another camera (one with an IMU). I have added a caution mark to the documentation. By the way, the RealSenseException you get there is a frame queue timeout.

@Chrou This library has nothing to do with the SimpleOpenNI lib.

1 Like

@cansik I know, I’m using your library (thanks a lot, by the way), but I need features provided by the SimpleOpenNI one (skeleton tracking and hand detection), so I was actually asking @totovr76 because he talked about using a RealSense camera. I should have tagged him, sorry!

1 Like

On the Intel RealSense Viewer, I’m getting the cleanest image using the Depth to Disparity and Temporal filters combined. Is there a way to get the Depth to Disparity transform working?
All I can see from using addDisparityTransform is that it gives a negative image, which looks the same as what the Disparity to Depth filter does in Intel’s viewer.

1 Like