I need location data (x, y, z) of the audience for my art project. I used to use a Kinect v1, but for my next project I need the locations of several people walking through different rooms, and I need to track them in a bigger space than a single room. That's why I want to build a wearable custom tracker that I can attach to their clothes (preferably their heads, like a crown maybe :)). I am also using Processing and Max/MSP for this project. It looks like I could use a Flora and the Flora Wearable Ultimate GPS Module. I could also use an Arduino, since I already have one and I don't mind a bulkier tracker. Does anyone have experience with a tracker like that? I am open to any suggestions.
Would you be doing the tracking indoors? I don't have experience with indoor tracking, but I would guess it performs worse than outdoor GPS tracking. What accuracy are you aiming for? I suggest you use the Ketai library and collect some sample data yourself (or download one of the apps in the app store that can do this). Choose 5 fixed, well-separated physical locations in your space and record data while walking between those 5 points, both in order and out of order (not following the same arrival pattern), to understand GPS performance in your use case.
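If you log your fixes that way, comparing each fix against the known coordinates of its reference point gives you an error estimate. A quick haversine distance check could look like this (plain Java, so the same logic also works inside a Processing sketch; the coordinates below are made-up placeholder values):

```java
// Estimate GPS error as the distance between a logged fix and a known
// reference point, using the haversine great-circle formula.
public class GpsError {
    static final double EARTH_RADIUS_M = 6371000.0;

    // Great-circle distance in meters between two lat/lon points (in degrees).
    static double haversine(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_RADIUS_M * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
    }

    public static void main(String[] args) {
        // Known reference point vs. a logged fix a few meters away (example values).
        double errorM = haversine(41.0082, 28.9784, 41.00825, 28.97845);
        System.out.println("error ~ " + errorM + " m");
    }
}
```

Running this over all fixes per reference point gives you a mean/max error, which tells you quickly whether GPS is anywhere near the precision you need.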
You could also look at the beacon concept, which uses Bluetooth.
I am going to be doing indoor tracking, but I'll check your suggestions. I need fairly precise data; I am basically trying to get rid of the Kinect's jumpy output.
I am a Mac and iPhone user by the way.
Thank you very much.
To track people from room to room, beacons, as @kfrajer mentioned, are the best method; otherwise it is hard to keep track of which tracked person is which. But for positional tracking, beacons are probably not accurate enough. Multi-room positioning systems are usually built from multiple subsystems (sensor fusion).
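A side note on why beacons are coarse: distance from a beacon is usually estimated from RSSI with the log-distance path-loss model, and RSSI is very noisy indoors. A sketch of that model (the txPower and n values here are assumed calibration constants, not measured ones):

```java
// Rough distance estimate from a BLE beacon RSSI reading, using the
// log-distance path-loss model: d = 10^((txPower - rssi) / (10 * n)).
// txPower is the expected RSSI at 1 m; n is an environment factor (~2 in
// free space, higher indoors). Both are assumed calibration values here.
public class BeaconDistance {
    // Returns the estimated distance in meters.
    static double estimateDistance(double rssi, double txPowerAt1m, double n) {
        return Math.pow(10.0, (txPowerAt1m - rssi) / (10.0 * n));
    }

    public static void main(String[] args) {
        // Example: beacon calibrated to -59 dBm at 1 m, environment factor 2.0.
        System.out.println(estimateDistance(-59, -59, 2.0)); // 1.0 (the calibration point)
        System.out.println(estimateDistance(-79, -59, 2.0)); // 10.0 (very noisy in practice)
    }
}
```

In practice RSSI fluctuates by several dB because of reflections and bodies blocking the signal, so these estimates are good for "which room / which beacon is closest", not for precise positions.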
If accuracy matters, you won't get around an IR tracking system (or at least something vision-based). But writing one on your own is rather complex, so I would suggest using an existing one. There are several good motion capture systems, but they are usually expensive. A cheaper solution is the HTC Vive tracking system, which does a very good job at 3D positioning.
If you really want to implement it yourself, I've done a minimal proof of concept that just tracks infrared light points in 2D space (with only one camera):
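The basic idea is straightforward: with an IR-pass filter on the camera, the LED marker is by far the brightest spot in the frame, so thresholding plus a centroid gives you its position. A stripped-down sketch of just that step (operating on a plain brightness array rather than actual camera input, and simplified compared to a real implementation):

```java
// Find the centroid of the brightest pixels in a grayscale frame.
// This is the core of single-point IR tracking: with an IR-pass filter,
// the LED marker dominates the brightness histogram.
public class IrCentroid {
    // brightness: row-major grayscale values 0-255; threshold: e.g. 200.
    // Returns {x, y} centroid of pixels above threshold, or null if none.
    static double[] centroid(int[] brightness, int width, int threshold) {
        double sumX = 0, sumY = 0;
        int count = 0;
        for (int i = 0; i < brightness.length; i++) {
            if (brightness[i] > threshold) {
                sumX += i % width;   // column of this pixel
                sumY += i / width;   // row of this pixel
                count++;
            }
        }
        return count == 0 ? null : new double[] { sumX / count, sumY / count };
    }

    public static void main(String[] args) {
        // 4x4 frame with one bright 2x2 blob in the lower-right corner.
        int[] frame = {
            0, 0,   0,   0,
            0, 0,   0,   0,
            0, 0, 255, 255,
            0, 0, 255, 255
        };
        double[] c = centroid(frame, 4, 200);
        System.out.println(c[0] + ", " + c[1]); // 2.5, 2.5
    }
}
```

For multiple markers you would additionally group the bright pixels into connected blobs before taking per-blob centroids, which is where the real complexity starts.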
Maybe it also makes sense for you to smooth your Kinect's position tracking by using moving averages and easing.
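For the easing part, something as simple as an exponential filter per axis already removes most of the jitter. A minimal sketch (the easing factor is just an example value you would tune; a sliding-window moving average is the other common option):

```java
// Smooth a jumpy (x, y, z) position stream with exponential easing:
// smoothed += (raw - smoothed) * easing, applied per axis each frame.
// easing near 0 = very smooth but laggy; near 1 = responsive but jittery.
public class PositionSmoother {
    double x, y, z;            // current smoothed position
    final double easing;       // tuning factor in (0, 1]
    boolean initialized = false;

    PositionSmoother(double easing) { this.easing = easing; }

    // Feed one raw sample; returns the updated smoothed position.
    double[] update(double rx, double ry, double rz) {
        if (!initialized) {    // snap to the first sample instead of easing from 0
            x = rx; y = ry; z = rz;
            initialized = true;
        }
        x += (rx - x) * easing;
        y += (ry - y) * easing;
        z += (rz - z) * easing;
        return new double[] { x, y, z };
    }

    public static void main(String[] args) {
        PositionSmoother s = new PositionSmoother(0.5);
        s.update(10, 0, 0);               // first sample initializes to (10, 0, 0)
        double[] p = s.update(20, 0, 0);  // moves halfway toward the new reading
        System.out.println(p[0]); // 15.0
    }
}
```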
Thank you cansik. I also think the HTC Vive might be a good solution for me.
What size simultaneous audience are you talking about tracking? 2-3, 10, 20…? Also, how important is your z axis?
If HTC Vive tracking is ~120€/person, that doesn’t scale for a larger crowd the way that a bunch of cheap IR-LED active marker headbands do (and I’m not sure what the connected tracker limit might be) – although the marker headbands approach gets you back towards the “implementing your own tracking system” problem.
I want to start with two audience members and then see what happens. The number depends on the limitations of whichever tracking method I end up using. Ideally I would like to track, say, 10 people across a whole gallery/building. In the current version (my old project), I map the z-axis of the Kinect to my 2D simulation's y-axis, because I only use head location and it doesn't matter for my project if people jump or sit down.
Actually, I want to start working on a new project with at least two audience members, and possibly without the Kinect, because I am using Synapse to extract the data and I don't think it will be around much longer. Kinect-via-Synapse already doesn't work with Max 8, so I need to transition to a more current sensor.