I am currently working on a project which brings pose estimation (and object recognition) to Processing. It is still in early alpha, but you can follow the progress here:
Yes, the transformation depends on the parameter you set: either it transforms from depth to disparity, or from disparity to depth. To be honest, the implementation of the filters is not great at the moment; I will have a look at it.
But for now it should be possible to apply the transform filters as shown in this example.
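Since the linked example may change, here is a rough sketch of how applying such a transform filter could look in Processing. Note that the filter-related method names below (`addDisparityTransform`, `enableColorizer`) are my assumptions for illustration, not confirmed library API; check the bundled examples for the real calls:

```java
import ch.bildspur.realsense.*;

RealSenseCamera camera = new RealSenseCamera(this);

void setup() {
  size(640, 480);

  camera.enableDepthStream(640, 480);
  camera.enableColorizer();

  // Hypothetical call: apply the disparity transform before colorizing.
  // The boolean parameter would select the direction
  // (true = depth -> disparity, false = disparity -> depth).
  camera.addDisparityTransform(true);

  camera.start();
}

void draw() {
  camera.readFrames();
  image(camera.getDepthImage(), 0, 0);
}
```

This requires a connected RealSense camera and the Processing runtime, so it cannot run standalone.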
Hey Cansik, is there any way I can disable the IR emitter? I see there is "EmitterOnOff(46)" in src/main/java/org/intel/rs/types (in the Java wrapper), which I'm assuming is related. I couldn't find anything related in the Processing library.
Setting sensor values is still an open issue, but there is a workaround: simply use the JSON loader, which loads a config onto the sensor (example).
I would recommend to only set the emitter in the config:
String param = "\"controls-laserstate\": \"off\"";
String jsonConfig = "{" + param + "}";
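For context, here is a tiny self-contained helper that assembles such a config string; the class and method names are my own, only the `controls-laserstate` key comes from the snippet above:

```java
public class EmitterConfig {
    // Builds a minimal JSON config that only sets the laser emitter state,
    // using the "controls-laserstate" key from the snippet above.
    public static String build(boolean emitterOn) {
        String param = "\"controls-laserstate\": \"" + (emitterOn ? "on" : "off") + "\"";
        return "{" + param + "}";
    }

    public static void main(String[] args) {
        // prints {"controls-laserstate": "off"}
        System.out.println(build(false));
    }
}
```

The resulting string would then be passed to the library's JSON loader as in the linked example.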
Ah, perfect. Thanks!
Looking at your deep-vision lib as we speak. Do you have a thread here for it yet?
There is no thread atm because I did not really share it with the community yet. I am still working out how to integrate it into Processing; the API is missing a good structure. Also, the documentation is nowhere near finished.
Hi there! I am wondering if the LiDAR Camera L515 would be compatible with this library? Thanks!
Do you already own one? Yes, I guess it would be possible to add the features of the LiDAR L515 because it is also based on the realsense2 framework, but the one I ordered has not been delivered yet. Of course this would take a bit of time, and I cannot yet guarantee support.
Looking at the datasheet, it seems that the depth sensor uses 1024 × 768 pixels at 30 FPS with the Z16 depth format (and Y8 for infrared). All of these are already supported, so I guess it could even run out of the box (without IMU data). Would be great if you could try it out!
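Going by those datasheet numbers, a first test sketch might request exactly that depth mode. The stream-enable calls follow the library's README examples; the L515-specific resolution is just the datasheet figure, so treat this as an untested guess:

```java
import ch.bildspur.realsense.*;

RealSenseCamera camera = new RealSenseCamera(this);

void setup() {
  size(1024, 768);

  // Datasheet values for the L515: 1024 x 768 depth (Z16 format).
  camera.enableDepthStream(1024, 768);
  // Colorize the depth values so they can be drawn as an image.
  camera.enableColorizer();

  camera.start();
}

void draw() {
  camera.readFrames();
  image(camera.getDepthImage(), 0, 0);
}
```

If the camera rejects that resolution, the realsense-viewer shows which modes it actually supports.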
Hi there! This may be a redundant question, but is this library compatible with a mac OS? Is there any additional setup required (intel sdk) or can you plug in the camera and the device connects in Processing?
Thank you!
The library is compatible with macOS (I mainly developed it there), and no, you do not need the Intel SDK; all the binaries are shipped with the library.
But the best way to find out is to just run an example.
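For reference, this is the kind of minimal example meant here, based on my reading of the library's README (a RealSense camera must be plugged in, so it will not run without hardware):

```java
import ch.bildspur.realsense.*;

RealSenseCamera camera = new RealSenseCamera(this);

void setup() {
  size(640, 480);

  // Request a 640 x 480 color stream at 30 FPS and start the camera.
  camera.enableColorStream(640, 480, 30);
  camera.start();
}

void draw() {
  // Read the next frame set and draw the color image.
  camera.readFrames();
  image(camera.getColorImage(), 0, 0);
}
```

If this sketch shows a live color image, the library and camera are set up correctly.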
@pieteke Today I was finally able to test the library with the new RealSense L515 LiDAR sensor, and it does not work because the base librealsense2 library is still on version 2.29.0. I created a PR to update the C-Java wrapper to 2.38.1 (bytedeco/javacpp-presets/pull/946), and as soon as this is merged and released, I can integrate it into the realsense-processing project.
My test showed that with the new 2.38.1 bindings, it's exactly the same API to extract the color & depth image from the L515 in Processing.
I've been trying to get my RealSense working, but I have kept getting "Realsense: no device found!"
The version I'm trying to use is a RealSense VF0800 (I got it at a hackathon years ago), and I'm now concerned that it may not work with any current software.
I'm not trying to do anything fancy - I just want to use it as a greyscale depth camera in Processing, but I'm getting no results.
To be honest, I had never heard of this camera before. And it seems that this camera is not supported anymore by the librealsense library: Senz3D Creative Labs VF0780 support (or VF0800 microphone) · Issue #18 · IntelRealSense/librealsense · GitHub
I just released version 0.2.3 which adds support for the newer RealSense models like the L515 and D455.
ARM support is now shipped too, so it would be possible to use the camera together with a Raspberry Pi.
Thanks for working on this library! I tried using the latest version with an L515 camera on macOS (Big Sur). I get this error with almost all of the example code:
RealSenseException: Couldn't resolve requests
I did get the depth stream example to work a few times, but it also failed a few times. Any idea what could be causing the error? Or is the L515 just not fully supported in the library yet?
The L515 should be supported. I do not have one here, so it's hard to tell what's going wrong. Usually, couldn't resolve requests means that the streams you are requesting are not supported.
Please check with the realsense-viewer what kind of streams are allowed and try them out.
I've been using the realsense-viewer and have access to all of the streams (color, depth, motion). The L515 works fine there. I've tried the Processing library on macOS and in a VM running Windows 10, but I get the same errors (except with the depth stream example). I was surprised that the color stream didn't work, since the example code is so similar.
Yes, that is strange indeed… maybe I'll come by my lab tomorrow to test it out. Are you sure you have a USB 3 connection to the camera?
Yes, definitely a USB-3 connection, using the supplied cable. Any example that calls
camera.start();
fails with that same error… except the depth stream example. The camera is running firmware 1.5.3.0. Could it be a missing dependency, or another library/SDK installed on the machine that is interfering? Not sure why only the depth stream would work, though.
Hi everyone,
Hi @cansik, thanks a lot for your work; it has been a pleasure getting my D455 to work in Processing thanks to you.
Unfortunately, I bought an M1 MacBook and decided to work in Processing 4, and it doesn't work anymore. I've tried a few things, including this famous tutorial (Build RealSense for macOS Monterey (Intel + Apple Silicon) | LightBuzz), but I still get a weird error: "A library used by this sketch relies on native code that is not available.
UnsatisfiedLinkError: no jnirealsense2 in java.library.path: …etc"
It looked like a problem with the lib, so I manually removed it and installed the latest version 2.4.3 (on macOS Monterey), but it doesn't help.
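For what it's worth, a common cause of UnsatisfiedLinkError on Apple Silicon is an architecture mismatch: Processing 4 may run an arm64 JVM while the bundled native library only contains an x86_64 build. A tiny standalone probe (my own snippet, not part of the library) shows what the JVM reports:

```java
public class ArchCheck {
    public static void main(String[] args) {
        // "aarch64" means an Apple Silicon JVM; "x86_64" means an Intel
        // JVM (or one running under Rosetta). A native library must match
        // this architecture, otherwise UnsatisfiedLinkError is thrown.
        System.out.println("os.arch = " + System.getProperty("os.arch"));
        System.out.println("java.library.path = " + System.getProperty("java.library.path"));
    }
}
```

If os.arch says aarch64 but the shipped jnirealsense2 dylib is x86_64-only, only an arm64 build of the native library (or an x86_64 JVM under Rosetta) would help.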
Any idea what I could try next?
Thanks a lot