I am using the nyar4psg library for augmented reality. I can see the 3D cube shown in the picture below on my tablet when its camera is pointed at a marker image, but how does it come out onto the paper? I mean, how is it projected into the real world? It isn't coming out like in the picture below for me; it only shows up on the phone/tablet. Could you please help? Is it some sort of digitized or image-tracking paper?
He is in front of his laptop, filming the marker with a camera or webcam. The cube is added to the video stream by Processing, using video processing. He cannot see the cube in the air in front of him – it isn't there. He can only see the cube on his laptop screen – it is never projected into the real world.
Watch this part closely. He can’t see the cube in his hand, because it isn’t projected – he can only see it on his screen. That is why he is watching his laptop so carefully.
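For reference, this is roughly what that pipeline looks like on the Processing side with nyar4psg: each camera frame is drawn as the background, and the cube is rendered on top of it whenever the marker is detected, so the cube only ever exists in the rendered frame, never on the paper. This is a minimal sketch based on the standard nyar4psg examples; the exact method names (isExist, beginTransform, etc.) and the camera_para.dat / patt.hiro files assume a typical nyar4psg 2.x/3.x setup, so check the examples that ship with your version.

```
import processing.video.*;
import jp.nyatla.nyar4psg.*;

Capture cam;
MultiMarker nya;

void setup() {
  size(640, 480, P3D);
  cam = new Capture(this, 640, 480);
  // Camera calibration file and marker pattern come with the library examples
  nya = new MultiMarker(this, width, height, "camera_para.dat", NyAR4PsgConfig.CONFIG_PSG);
  nya.addARMarker("patt.hiro", 80);  // 80 mm marker
  cam.start();
}

void draw() {
  if (!cam.available()) {
    return;
  }
  cam.read();
  nya.detect(cam);          // look for the marker in this frame
  background(0);
  nya.drawBackground(cam);  // the real world: just the camera image, drawn on screen
  if (!nya.isExist(0)) {
    return;                 // no marker found, nothing else to draw
  }
  nya.beginTransform(0);    // switch to the marker's coordinate system
  fill(0, 0, 255);
  translate(0, 0, 20);
  box(40);                  // the cube exists only in this rendered frame
  nya.endTransform();
}
```

Everything after drawBackground() is pure screen-space rendering, which is why the cube can only ever be seen on the display showing this sketch.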
There are ways to project onto fiducials from a fixed perspective, but they are usually not close-up 3D – they are either top-down onto tables or floors, like the Reactable, or onto the side of a building, like projection mapping. You can have projectors map 2D videos (not cubes!) onto moving 2D surfaces with fiducials – like a sign on a stick – but it is hard to get right, and it works much better with 2D than with 3D because the projector has to force a single perspective.
The setup in this particular video doesn’t use a projector at all.
Yeah, you are right. Maybe we could use a desktop projector to cast the screen onto a surface, so the cube appears there the same way it appears in 3D on the screen.