Hi! I’m new to Processing and I’m working on a Kinect project. I’m trying to build my interactions with mouseX & mouseY first, so that I can then translate them over to the Kinect part.
I want to make my fish swim away from the mouse (and later from the person’s silhouette the Kinect detects). I’m thinking this involves collision detection or blob tracking?
Then, I want to make the bubbles follow the person’s movement and group near them, while still having other bubbles flow throughout the scene (I have a separate class for this group of bubbles).
I’m also trying to make it fullscreen. The sketch itself is full screen, but not the Kinect part… Any help would be much appreciated… thank you!
Instead of sharing a snippet, share an MCVE (Minimal, Complete, Verifiable Example) that can be run.
Is the issue you are dealing with actually related to the Kinect in any way? It seems like you could ask this question about a mouse-based sketch just as well – drop the Kinect from the title and description, and make a simple sketch to explore the avoid behavior you are trying to create. Or am I misunderstanding?
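For example (this is not your code, just a guess at the behavior you describe), a bare-bones mouse-only sketch where a single “fish” moves away whenever the mouse gets close might look like this – names like fishX/fishY and the avoidRadius value are placeholders:

```java
// One "fish" (an ellipse) steers away from the mouse when it gets too close.
float fishX, fishY;
float avoidRadius = 100;   // how close the mouse can get before the fish reacts
float maxSpeed = 3;

void setup() {
  size(640, 480);
  fishX = width / 2;
  fishY = height / 2;
}

void draw() {
  background(0, 60, 120);

  float d = dist(mouseX, mouseY, fishX, fishY);
  if (d < avoidRadius && d > 0) {
    // Push the fish directly away from the mouse, stronger when closer.
    float strength = map(d, 0, avoidRadius, maxSpeed, 0);
    fishX += (fishX - mouseX) / d * strength;
    fishY += (fishY - mouseY) / d * strength;
  }

  // Keep the fish on screen.
  fishX = constrain(fishX, 0, width);
  fishY = constrain(fishY, 0, height);

  fill(255, 160, 0);
  noStroke();
  ellipse(fishX, fishY, 30, 20);
}
```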
Hi, I updated my code. I was thinking that if I did the mouseX & mouseY interactions first, it would be easier to translate them over to the Kinect, since I’m having difficulty with both. The mouseX & mouseY interactions don’t necessarily have anything to do with the Kinect parts. Let me know if there is anything else I should add. Thanks!
Can you post the updated code here in a new response to your thread? Save your minimal sketch as a separate sketch. Keep in mind that it must run – your example is missing lots of classes etc., so there is no way to tell what it does or test it.
Your current post has three different questions. I recommend keeping one question per post so each question gets the answer it deserves.
For the first question I suggest the following exercise. Draw a big blue circle in the middle of the sketch simulating a person (as if it were detected by a Kinect unit). Then with your mouse, click and hold anywhere in the scene; this defines the initial position of the fish. Then drag and release, and that defines the velocity and direction the fish will travel in with respect to the starting point. Your third task would be to implement collision avoidance. What do we need for this operation? A rough sketch of the mouse part is below.
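Roughly, the mouse interaction part of that exercise could look like this (variable names are only illustrative; the collision-avoidance part is intentionally left out for you to add):

```java
// Press to place the fish, drag and release to set its velocity.
// The blue circle stands in for the person detected by the Kinect.
PVector fishPos, fishVel;
PVector pressPos;

void setup() {
  size(640, 480);
  fishPos = new PVector(-100, -100);   // off screen until placed
  fishVel = new PVector(0, 0);
}

void draw() {
  background(255);

  // The simulated "person".
  fill(0, 0, 255);
  noStroke();
  ellipse(width/2, height/2, 200, 200);

  // Move and draw the fish.
  fishPos.add(fishVel);
  fill(255, 160, 0);
  ellipse(fishPos.x, fishPos.y, 30, 20);
}

void mousePressed() {
  pressPos = new PVector(mouseX, mouseY);   // starting point of the fish
  fishPos = pressPos.copy();
  fishVel.set(0, 0);
}

void mouseReleased() {
  // The drag vector (scaled down) becomes the fish's velocity and direction.
  fishVel = PVector.sub(new PVector(mouseX, mouseY), pressPos);
  fishVel.mult(0.05);
}
```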
Some thoughts: You will need to determine the distance between the “fish” and the person. Since the person is a single blob, you could think of the circle as being filled with many points (or actually fill it with points). Next, how would you determine the distance between the fish and the person? Would you use the center of the person, or would you use a point on the contour of the person? (Please ask if you don’t know or aren’t sure.)
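To make that difference concrete, here is a small sketch comparing both measurements, assuming the person is the circle from the exercise above with a known center and radius (personX/personY/personR are stand-ins for whatever your blob detection gives you):

```java
// Two ways to measure how far a "fish" is from a circular person.
float personX, personY, personR = 100;

void setup() {
  size(640, 480);
  personX = width / 2;
  personY = height / 2;
}

void draw() {
  background(255);
  fill(0, 0, 255);
  noStroke();
  ellipse(personX, personY, personR * 2, personR * 2);

  // Use the mouse as a stand-in fish and compare both measurements.
  float toCenter  = dist(mouseX, mouseY, personX, personY);   // distance to the center
  float toContour = toCenter - personR;                       // distance to the edge (negative = inside)
  println("to center: " + toCenter + "  to contour: " + toContour);
}
```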
An additional note. If you mix the depth and the image, you get a 3D space. Would the fish move in 2D or 3D? I suggest sticking to 2D and exploring the power of the Kinect for person/people detection. After you understand this concept, you can move to the 3D approach, or you could try to implement the sphere (second question). For the third question, you will need to experiment with the depth/image data. Resizing an image is a common task in Processing. However, I can see some challenges resizing the depth data provided by the Kinect after it is mapped to the image. Maybe it is a simple transformation. More research is needed, plus probably access to some sample data.
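For the image side of that, resizing really is one line in Processing; the depth data is the part that needs more thought. A minimal example, where "kinect.jpg" is just a placeholder file name in the sketch’s data folder:

```java
// Load an image and scale it to fill the whole sketch window.
PImage img;

void setup() {
  size(1280, 960);
  img = loadImage("kinect.jpg");   // placeholder; use your own image file
  img.resize(width, height);       // scale the image to the full sketch size
}

void draw() {
  image(img, 0, 0);
}
```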