How do I make my webcam detect a pose I make?

Hi, I’ve started using Processing for an interactive artwork I’m working on.

I’m looking for the best/easiest way to solve the following issue:
set a boolean variable to true if I hold THIS POSE in front of my webcam for about 2-3 seconds.

The background should be no problem since I’m using a large projection screen behind me, so I can project a solid green color if that makes it easier.

I’ve brainstormed several ideas to tackle this, but turning them into code is just too hard for me:
I could use OpenCV MarkerDetection (which detects certain pictograms through the webcam) and project three pictograms at fixed positions on the screen. If I block the view of all three with my head and hands, the boolean gets set to true.

I know this method isn’t very accurate since it doesn’t recognise an actual human body in view, but in my case that’s already sufficient.
So using a couple of indicator points aligned to the pose I’ll make would be fine too.
At least, I don’t know of any easier or more efficient way than that.
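
For what it’s worth, here is a minimal sketch of that indicator-point idea, assuming the solid green backdrop you mentioned and using Processing’s official video library. The three sample coordinates, the green threshold, and the hold duration are made-up values you would tune for your own setup:

```java
// Minimal sketch: three sample points must all be "covered" (no longer
// green) continuously for HOLD_TIME ms before the boolean flips to true.
import processing.video.*;

Capture cam;
boolean poseHeld = false;       // the boolean you want to set
int holdStart = -1;             // millis() when all points first got covered
final int HOLD_TIME = 2500;     // required hold duration in ms (~2-3 s)

// Hypothetical sample points in camera coordinates (tune to your pose)
int[][] points = { {160, 120}, {320, 80}, {480, 120} };

void setup() {
  size(640, 480);
  cam = new Capture(this, 640, 480);
  cam.start();
}

void draw() {
  if (cam.available()) cam.read();
  image(cam, 0, 0);

  // A point counts as covered when the camera no longer sees green there
  boolean allCovered = true;
  for (int[] p : points) {
    color c = cam.get(p[0], p[1]);
    boolean isGreen = green(c) > 150 && green(c) > red(c) * 1.5 && green(c) > blue(c) * 1.5;
    if (isGreen) allCovered = false;
    fill(isGreen ? color(0, 255, 0) : color(255, 0, 0));
    ellipse(p[0], p[1], 20, 20);  // visual feedback per point
  }

  // Require the pose to be held continuously before setting the boolean
  if (allCovered) {
    if (holdStart < 0) holdStart = millis();
    if (millis() - holdStart > HOLD_TIME) poseHeld = true;
  } else {
    holdStart = -1;
    poseHeld = false;
  }
}
```

The green test is deliberately crude (green channel dominating red and blue); with a projected green backdrop you may need to adjust the threshold for your camera and lighting.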

Anyway, every little bit of help is appreciated.
Thanks in advance!

(and sorry in advance, I didn’t know whether to post this under Coding Questions or Project Guidance)


Hi @WindJammer,

Just brainstorming here, but have you considered using Wekinator for your project? It’s user-friendly, open-source software for creative applications (including gesture analysis) that uses machine learning. They have a bunch of examples you can run within Processing.

Basically, all you would have to do is train a classifier on your webcam input to recognize a specific pose or gesture (demo).

I haven’t used it myself, so I can’t say much, but I think it could be a good fit for your project.
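
To give an idea of the Processing side, here is a rough sketch of receiving the classifier’s decision, assuming Wekinator’s default configuration (outputs sent over OSC to port 12000 at the address /wek/outputs) and that you trained a classifier where class 1 means “pose detected”. It needs the oscP5 library:

```java
// Rough sketch: listen for Wekinator's classifier output over OSC and
// flip a boolean when the trained pose class (assumed to be 1) arrives.
import oscP5.*;

OscP5 osc;
boolean poseDetected = false;

void setup() {
  size(200, 200);
  osc = new OscP5(this, 12000);  // Wekinator sends outputs here by default
}

void draw() {
  background(poseDetected ? color(0, 255, 0) : color(255, 0, 0));
}

// Called by oscP5 whenever an OSC message arrives
void oscEvent(OscMessage msg) {
  if (msg.checkAddrPattern("/wek/outputs")) {
    float output = msg.get(0).floatValue();
    poseDetected = (output == 1.0);  // assumed: class 1 = the trained pose
  }
}
```

You would still need a second piece that sends features (e.g. downsampled webcam pixels) to Wekinator on its input port; the Wekinator examples for Processing cover that part.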


I’ll have a look at it, thanks!


Another idea is probably to use the skeleton tracking from the Kinect projects. I’m not sure exactly which library it would belong to or which Kinect flavor (Kinect v1, v2, or OpenKinect). The concept is this: you load the library and access the skeleton algorithm they have implemented there. The challenge is that the Kinect might be using the depth map to detect the user and extract the user’s silhouette; in essence, that is what makes the gesture detection easier.

In your case, since you can put a screen behind your subject, you could use the chroma key concept to extract the figure/silhouette and then apply the skeleton algorithm (a sketch of the chroma key step follows below). Now, this is an idea and it is untested. I’m not even sure you could use the skeleton class by itself. I don’t see why that wouldn’t be possible, but I’d guess you will need to do some tweaking to get it working. If you decide to go this route, there will be some work to do, probably including digging into the source code. Maybe the machine learning approach proposed in the previous post would be easier to implement in the end.
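
Here is an untested sketch of the chroma key step only, assuming an evenly lit green screen: every sufficiently green pixel becomes background (black) and everything else becomes the silhouette (white). The threshold values are guesses you would have to tune. The resulting mask is what you would then hand to a contour or skeleton algorithm:

```java
// Untested chroma key sketch: build a binary silhouette mask from the
// webcam image by classifying green-dominated pixels as background.
import processing.video.*;

Capture cam;
PImage mask;

void setup() {
  size(640, 480);
  cam = new Capture(this, 640, 480);
  cam.start();
  mask = createImage(640, 480, RGB);
}

void draw() {
  if (cam.available()) cam.read();

  cam.loadPixels();
  mask.loadPixels();
  for (int i = 0; i < cam.pixels.length; i++) {
    color c = cam.pixels[i];
    // A pixel counts as green screen when green clearly dominates red and blue
    boolean isGreen = green(c) > 100 && green(c) > red(c) * 1.4 && green(c) > blue(c) * 1.4;
    mask.pixels[i] = isGreen ? color(0) : color(255);
  }
  mask.updatePixels();

  image(mask, 0, 0);  // white silhouette on black background
}
```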

Kf
