import gohai.glvideo.*;

GLCapture video;

void setup() {
  size(320, 240, P2D);

  String[] devices = GLCapture.list();
  println("Devices:");
  printArray(devices);

  // this will use the first recognized camera by default
  video = new GLCapture(this);
  video.start();

  // wait for the first frame, then read and draw it once
  while (!video.available()) {
    delay(10);
  }
  println("read video");
  video.read();
  image(video, 0, 0, width, height);
}
This is the result I’m getting in the console after running the sketch:
Devices:
[0] "mmal service 16.1"
Final caps: video/x-raw(memory:GLMemory), width=(int)320, height=(int)200, framerate=(fraction)90/1, format=(string)RGBA, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, texture-target=(string)2D
read video
So everything looks OK… except I’m getting just a black image instead of the camera feed.
The camera turns on when the sketch is launched (the LED lights up), and I successfully tried this command in the terminal: raspistill -o photoTest.jpg
I would have thought so. I’ve seen reports of the Pi camera not working with that set too low, but I’m not sure of anything else. If you’re happy with the CLI, perhaps see if you can get it working with gst-launch.
There’s unfortunately a bit more to it than that, and it’s probably called gst-launch-1.0, and it might not be installed. Perhaps try
gst-launch-1.0 autovideosrc ! autovideosink
That might work. If not, gohai might be able to suggest a pipeline that replicates what the library does but takes the library / Processing out of the equation as the source of the bug (that’s the approach we always try with the main Java bindings anyway).
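For what it’s worth, a pipeline that tests the camera through V4L2 directly (my guess at a rough equivalent, not necessarily the exact pipeline GL Video builds internally) would be something like:
gst-launch-1.0 v4l2src ! videoconvert ! autovideosink
If that shows a picture, V4L2 capture itself is fine and the problem is more likely further down the GL conversion chain.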
I found this link. Maybe it can be helpful, even though it refers to Python packages (if I understand it correctly).
The “Blank and/or black frame” section looks exactly like my issue, so I ran the rpi-update command as they suggest, but I still have the same problem.
The other suggestion concerns the picamera Python module, so it doesn’t apply here…
EDIT:
Running rpi-update did not solve my problem, but now I at least get some error messages in the console:
GLVideo: glcolorconvertelement0: Failed to convert video buffer
Debugging information: gstglcolorconvertelement.c(218): gst_gl_color_convert_element_prepare_output_buffer (): /GstPipeline:pipeline0/GstGLColorConvertElement:glcolorconvertelement0
GLVideo: v4l2src0: Internal data stream error.
Debugging information: gstbasesrc.c(2939): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
streaming stopped, reason error (-5)
I just tested with the Raspberry Pi Camera v1.3, which worked without a hiccup on a Pi 3+ with our latest Raspbian image. @msurguy similarly had no problems with the Raspberry Pi Camera v2.
Which makes me believe the issue is your (“third-party”) camera.
The raspistill command uses a different API than V4L2, which is what Processing uses to capture images from the camera, so it could be that raspistill works while V4L2 with your particular camera doesn’t.
My suggestion for you would be: try to see if gst-launch-1.0 autovideosrc ! autovideosink works for you. Alternatively, you can also install the application “Cheese” and see if this works with your camera. (If those work, then you could perhaps try to find out what is special about your camera’s capabilities that breaks the assumptions of the GL Video library.)
I found out the hard way that you have to be very certain about the framerate and the dimensions that your camera provides. Try doing something like this in your setup() to get that information:
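(Something along these lines should do it — this assumes the GLCapture.configs() helper used in the GL Video capture examples; adjust if your version of the library differs.)

String[] devices = GLCapture.list();
println("Devices:");
printArray(devices);
if (0 < devices.length) {
  // list the resolutions and framerates the first camera reports
  String[] configs = GLCapture.configs(devices[0]);
  println("Configs:");
  printArray(configs);
}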
Then, knowing the dimensions and the framerates available for each dimension, be explicit about which size of the image you are requesting from the camera:
// instead of video = new GLCapture(this); use this ->
video = new GLCapture(this, devices[0], 320, 200, 90);
For example, from your sketch in the top post, the width and height were reported to be 320x200 at a framerate of 90 FPS for a sketch of dimensions 320x240.
Sadly, I don’t think I know enough about those subjects to deal with it myself
@msurguy I also found out that you need to be very precise… And I did indeed specify all the parameters when I got the error messages I posted, so no luck; that is not the issue here.
I have another weird behavior with raspistill: when I use the -p parameter I get no preview. I can save a picture and even a video, but no matter which command line I try, I can’t manage to get a live preview from the camera…
I updated everything yesterday and I have 2 or 3 more things to try, but I think I’ll have to change the camera and try with an official Pi camera.
Step 3:
Use the following call to set up your camera: video = new GLCapture(this, devices[0], 320, 200, 90);
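For reference, here is a minimal sketch putting those steps together (the draw() loop is the standard pattern from the library’s capture examples, not something from this thread; the 320x200 at 90 fps values are the ones the camera reported above):

import gohai.glvideo.*;

GLCapture video;

void setup() {
  size(320, 240, P2D);
  String[] devices = GLCapture.list();
  // be explicit about the device, resolution and framerate
  video = new GLCapture(this, devices[0], 320, 200, 90);
  video.start();
}

void draw() {
  // read a new frame whenever one is available, then draw it
  if (video.available()) {
    video.read();
  }
  image(video, 0, 0, width, height);
}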
I really don’t know what was going on before; it feels like I did those 3 steps over and over, but this time it worked.
I still have some issues, though… It wouldn’t be fun otherwise.
I checked the spec of my camera and it says that it can use one of the following:
2592 x 1944 - 15 fps
1920 x 1080 - 30 fps
1280 x 960 - 45 fps
1280 x 720 - 60 fps
640 x 480 - 90 fps
320 x 240 - 120 fps
I played around with the parameters and two things are weird:
I can set whatever parameters I want inside GLCapture() and it will still work.
The quality of the video increases with a lower framerate, and it is less laggy.
For the first point, I tried to create a GLCapture like this: video = new GLCapture(this, devices[0], 300, 300, 15);
And I got no error; it even displayed a cropped (I think) image.
Even weirder, I tried setting a higher fps than the one in the spec (90 for 1280x720, for example) and again, no error. Not sure if it is automatically capped or not.
The second point is way more annoying.
I tried several fps for a 640x480 setup and here are the results:
For fps higher than 45, the quality of the image is really bad, especially around the borders (it looks a bit like an anti-aliasing issue), and the video is super laggy.
For fps = 40, the quality becomes normal, but it is still a bit laggy (a bit less, though).
For fps = 30, the quality is good and the video is fluid (some slight lag from time to time).
For lower fps, quality and fluidity are good.
I also tried other resolutions and the behavior is the same: with a high fps the image quality is really bad and the video lags; with a low fps the quality is good and the framerate is consistent.
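For the record, the call that corresponds to the “good and fluid” case above (640x480 at 30 fps, with devices[0] as before) would be:
video = new GLCapture(this, devices[0], 640, 480, 30);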