How do I make the computer think a Processing sketch is a camera (video capture device)?

I have heard of Spout and Syphon, but I have no idea how to get started.

In these pandemic, videoconferencing times, it could be fun to feed a sketch into the call instead of me.

I suppose one could do it with OBS too? I’m on Linux.


Potentially PipeWire will offer something similar to Spout and Syphon - https://pipewire.org/

You can potentially use GStreamer with v4l2loopback to capture any window, including a Processing sketch, and route it back as a camera. Or stream it directly.

However, if you just want to share it in video conferencing, you shouldn’t need any of this - just share (present) the window directly?!


Thanks! I’ll investigate PipeWire and v4l2loopback!

Not really, because I'm not presenting. It would replace my 'normal' camera feed, possibly with a 'filtered' version of me (like my circles-grid avatar here, for instance).

Yes, v4l2loopback is interesting - have used that a few times. It’s a kernel module, so once installed you usually need to use -

sudo modprobe v4l2loopback

By default that will create a new /dev/video* device. On my desktop without a camera that will be /dev/video0, but on a laptop it's usually /dev/video1. You can also pass parameters to the module to create more loopback devices - e.g. this project used 2 cameras and 4 loopback devices at different resolutions for motion tracking and display - https://youtu.be/9D3cJDRF6Do
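For example, something like this should create two labelled devices at fixed numbers (the numbers and labels below are just illustrations - adjust to taste):

```shell
# devices:        how many loopback devices to create
# video_nr:       which /dev/video* numbers to assign
# card_label:     human-readable names shown in apps' camera lists
# exclusive_caps: helps some apps (e.g. browsers) recognise it as a real camera
sudo modprobe v4l2loopback devices=2 video_nr=10,11 \
    card_label="Sketch Cam,Second Cam" exclusive_caps=1
```

You can put the same options in a modprobe config file if you want them applied on every boot.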

Once you have the loopback device installed, try running a sketch and then in the terminal run (where device is from above) -

gst-launch-1.0 ximagesrc xname="<SketchWindowTitle>" use-damage=false show-pointer=false ! videoconvert ! v4l2sink device=/dev/video0

This should make the sketch available as a camera at that device name. You can test in Processing itself, or from a different terminal run -

gst-launch-1.0 v4l2src device=/dev/video0 ! videoconvert ! autovideosink
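If the receiving app is picky about resolution or framerate, a variant of the capture pipeline with explicit caps may help - the format, size and framerate values here are illustrative, not something I've tested against a specific app:

```shell
gst-launch-1.0 ximagesrc xname="<SketchWindowTitle>" use-damage=false show-pointer=false \
    ! videoconvert ! videoscale ! videorate \
    ! video/x-raw,format=YUY2,width=1280,height=720,framerate=30/1 \
    ! v4l2sink device=/dev/video0
```

videoscale and videorate let the pipeline resize and pad the frame timing to whatever the caps filter demands, regardless of the sketch window's actual size.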

Hi! I tried this last year:

There are 3 examples. I haven't looked at the code since then, but I guess you can figure it out? :slight_smile:

I used akvcam, I think, on Arch Linux.

I see I posted a video about it on twitter, but they reduced it to 5 bits per second:

https://twitter.com/search?q=%40hamoid%20akvcam&src=typed_query

Maybe I can find the videos again… yes:


Thanks! I tried to install it, but it's not working yet. I'm on Manjaro, so I'll see if changing kernels helps…

Wow, that looks wonderful, I'll have a look. I'm not very skilled at Linux, but Manjaro is close to Arch Linux, I think, so I might get lucky.

Interesting! I've not seen that before. Looks like they work the same way? In which case the GStreamer pipelines above should also work the same, and vice versa - your sketches should work with v4l2loopback. Any idea why you'd choose one over the other? It's been a while since I've needed to do this, so I'm curious what the pros and cons are.

The sketch writing raw pixel data to the device is interesting. Does that work OK? I'd never thought of trying that.
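For anyone curious, the raw-write idea could look roughly like this - not the code from the repo, just a minimal Python sketch of the concept, assuming the loopback device has already been configured to accept RGB24 frames at a matching size (the device path and dimensions are placeholders):

```python
# Hypothetical raw-frame writer for a v4l2loopback/akvcam device.
# Assumes the device already accepts RGB24 at WIDTH x HEIGHT;
# the path and sizes below are illustrative.

WIDTH, HEIGHT = 640, 480

def make_frame(t, width=WIDTH, height=HEIGHT):
    """Build one raw RGB24 frame: a horizontal gradient that shifts over time."""
    row = bytearray()
    for x in range(width):
        shade = (x + t) % 256
        row += bytes((shade, 0, 255 - shade))  # R, G, B
    return bytes(row) * height  # every scanline identical for this test pattern

def stream(device="/dev/video0", frames=300):
    """Write raw frames straight to the loopback device file."""
    with open(device, "wb") as out:
        for t in range(frames):
            out.write(make_frame(t))

# stream()  # uncomment once the device is set up
```

The key constraint is that each write must be exactly one frame of the pixel format the device expects - here 640 × 480 × 3 bytes per frame.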


Hehehe, I have to make time to try again, but I couldn't make either one work yet…

It did work when I tried :slight_smile:

Not sure how to choose… I guess try and if the first doesn’t work, try the second :slight_smile:

@villares There's no official package. I compiled it somehow. But the AUR package seems to be broken at the moment: AUR (en) - akvcam-dkms


Yeah, I tried the AUR package and then saw in your video that you compiled it. I have to try it soon :slight_smile: Thanks again!