Below is an image of a basic grid that I have in Processing. I want to map a user's physical location in a space to a location on a grid in Processing in real time. Can anyone tell me if it is possible to track a user's physical location using just a webcam, or is a Kinect required for physical tracking in a 2D space?
Any suggestions for possible libraries or technologies would be appreciated.
It depends on the lighting, the scene, the camera angle, etc., and what you are projecting onto the grid. Is this a frontal webcam, where you want users to wave their arm and intersect with the 9 boxes, triggering events? Or are you using an overhead camera and trying to map someone's location on a floor plan, e.g. 9 boxes on the floor in a square room? More details will get more helpful answers.
Background subtraction, blob detection, and basic object tracking are available through the OpenCV, BoofCV, and BlobDetection libraries. You have lots of options, and there are existing example sketches and tutorials online (e.g. the Coding Train).
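To make the background-subtraction idea concrete, here is a minimal sketch of the logic (in Python, with two tiny fake grayscale frames standing in for the camera; in Processing you would read pixel brightness from a `Capture` object instead). The threshold value and frame contents are made up for illustration:

```python
# Minimal background subtraction + centroid sketch (no camera needed).
# Compare each pixel of the live frame against a stored background frame;
# pixels that differ by more than a threshold belong to the "blob" (person),
# and their average position is the blob centroid.

THRESHOLD = 50  # assumed brightness-difference threshold, tune for your lighting

def subtract_and_find_centroid(background, frame, threshold=THRESHOLD):
    """Return the (x, y) centroid of pixels that differ from the background,
    or None if nothing exceeds the threshold."""
    xs, ys = [], []
    for y, (bg_row, fr_row) in enumerate(zip(background, frame)):
        for x, (bg, fr) in enumerate(zip(bg_row, fr_row)):
            if abs(fr - bg) > threshold:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# Empty room (all dark) vs. a bright "person" blob near the top-left corner.
background = [[0] * 8 for _ in range(8)]
frame = [row[:] for row in background]
for y in (1, 2):
    for x in (1, 2):
        frame[y][x] = 255

print(subtract_and_find_centroid(background, frame))  # -> (1.5, 1.5)
```

The Processing libraries mentioned above do this (and much more robust versions of it, with blur, erosion, and connected-component labelling) for you; this is only the core idea.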
Thank you very much for your response. I will try to clarify with more details. I want to take a camera (possibly overhead) and map someone's location on a floor plan, e.g. 9 boxes on the floor. I want to give an instruction on a screen that the user has to mimic on the floor: if the screen tells the user to go to the top-left box, the user goes to the top-left box on the floor, and so on. Is this possible with a webcam?
Apologies if I am not explaining myself clearly. I will look at the resources you mentioned.
Yes, quite possible – try Blob Detection. That will give you an xy location for a person.
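Once blob detection gives you an x/y position, snapping it to one of the 9 boxes is just integer division. A sketch of that step (Python for brevity; the same two lines work in Processing, assuming the floor grid fills the camera frame):

```python
# Hypothetical helper: map a tracked (x, y) in camera pixels to a
# (column, row) cell of a 3x3 grid that spans the whole frame.

def point_to_cell(x, y, frame_w, frame_h, cols=3, rows=3):
    # Scale the pixel coordinate into grid units, then floor to a cell index;
    # min() keeps a point on the far edge inside the last cell.
    col = min(int(x * cols / frame_w), cols - 1)
    row = min(int(y * rows / frame_h), rows - 1)
    return col, row

print(point_to_cell(50, 430, 640, 480))   # -> (0, 2): bottom-left box
print(point_to_cell(320, 240, 640, 480))  # -> (1, 1): centre box
```

For your instruction game, you would compare this cell against the box you told the user to stand in.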
Depending on the angle and height of the camera, you may have to do some calibration to map those locations to where people’s feet probably are on your floor map.
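The simplest calibration, assuming the camera is roughly overhead so there is little perspective distortion, is to mark the grid's top-left and bottom-right corners in the camera image and remap linearly (offset plus scale). A sketch, with made-up pixel coordinates and a 3 m x 3 m floor:

```python
# Minimal offset-and-scale calibration sketch: map camera pixel coordinates
# to floor coordinates, given where two opposite corners of the floor grid
# appear in the camera image.

def make_calibration(cam_tl, cam_br, floor_w=3.0, floor_h=3.0):
    """cam_tl / cam_br: pixel coords of the grid's top-left and bottom-right
    corners. Returns a function mapping camera pixels to floor metres."""
    (x0, y0), (x1, y1) = cam_tl, cam_br
    def to_floor(x, y):
        fx = (x - x0) / (x1 - x0) * floor_w
        fy = (y - y0) / (y1 - y0) * floor_h
        return fx, fy
    return to_floor

# Example: the floor grid spans pixels (100, 60) to (580, 420) in the image.
to_floor = make_calibration(cam_tl=(100, 60), cam_br=(580, 420))
print(to_floor(340, 240))  # -> (1.5, 1.5): centre of the floor
```

If the camera is at a steeper angle, a linear remap won't be enough; you'd want a full four-point perspective transform (a homography), which OpenCV provides.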