GSoC 2019 - p5.xr

Hey,
I am considering contributing to the project "Stabilizing and improving p5.xr during Alpha release" as part of GSoC 2019. I want to know whether development of the p5.xr library has started. If so, where can I find the code? Also, how much knowledge of the WebXR API is required?

Thank You


Hello @vedhant. I am developing p5.xr now. I want to finish up a few more basic features and organize the code before I make it public. Expect the initial release of the code to be sometime in April, but likely not before the GSoC proposal deadline. That said, I can try to shed a little light before then.

There are a ton of different features that this library could potentially support but the initial release will likely only have mobile VR and (hopefully) AR with ARCore/ARKit. The current library allows a user to simply add:

function preload() {
    createVRCanvas();
}

to any WebGL p5 sketch, and the sketch automatically creates a button for a user gesture to enter into a Google Cardboard-style VR experience that includes all of the elements from the rest of their sketch. This should translate relatively easily to room-scale for computers with connected headsets, but that probably won't exist in the initial alpha release.

As noted earlier, the goal is to have functioning AR sketches before the initial release. This is what I am working on now, but there is still some uncertainty because of current browser support for AR. Worst case, this will only be available to users who can load their sketches in Chrome Canary.

The current code base is modeled after the p5 repo as much as possible. All of the public methods, such as createVRCanvas, are implemented as methods added to p5's prototype. The development environment uses node/npm. The only major difference is that the code base uses ES6 standards, but p5 is transitioning there anyway.
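
For example, exposing a new global function to sketches looks roughly like this (a simplified sketch, not the actual source; _setupVR is a placeholder for the internal setup work):

// Libraries extend p5 by attaching methods to its prototype.
p5.prototype.createVRCanvas = function() {
  // _setupVR is a placeholder for requesting an XR session and
  // handing the draw loop over to p5xr.
  this._setupVR();
};

// Because createVRCanvas is called inside preload(), it is registered
// as a preload method so p5 waits for the async XR setup to finish.
p5.prototype.registerPreloadMethod('createVRCanvas', p5.prototype);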

After that initial release a lot of testing, stabilizing, and planning for the next feature set will need to happen. The WebXR API is also in really heavy development, and there will potentially be changes that need to be kept up with.

So the GSoC project really is to help plan the continued development of the library. Since the student would be jumping in on a library in its infancy, there would be a lot of room for design contribution. That said, there are also more straightforward tasks that we know will need to be completed. These include:

  • Set up a unit test solution (currently I am relying on manual tests)
  • Develop strategies for adjusting performance to be device-specific (for example: scaling the framebuffer provided by GL)
  • Introduce interactivity via gamepads (this would likely require access to a desktop VR headset such as Oculus or Vive)
  • Create a 2D-in-VR option. Perhaps a plane situated in 3D with a 2D canvas drawn on it?
  • Implement a gaze tracker for controller-free interactivity (perhaps using the XRRay interface)
  • Add positional audio
  • Build an interface for setting and releasing AR anchors

This is all to say that there will be a large number of potential features to work on. The student doesn’t need to have a full understanding of the WebXR API going into it but the hope would be that some comfort and familiarity would be established early on in the summer. A decent understanding of the RendererGL implementation in p5 would also be helpful, but this is also something that can be learned as the summer progresses.

When writing your proposal, I would take into consideration that a huge part of the summer will be devoted to stabilization and testing. This also includes project management, such as helping to triage and respond to bugs. I know this can be a bit tricky to narrow down, since the project proposal should be specific enough that success can be objectively evaluated, so I would say that any proposal should include at least one major feature that you hope to implement, either one from the list above or another feature of your own choosing. This feature should be something that you think is integral to the success of the library, and it should be feasible for the time frame. You should also state in your proposal that you will be helping with the initial stabilization phase; this is where the project management, testing, and debugging aspects of the project come into play.

So to summarize, a successful proposal should:

  • Name 1-2 major feature(s) that would be added over the course of the summer with help/guidance
  • Explain how the above feature(s) would contribute to the goal of creating a simple and accessible VR/AR library
  • Express a clear understanding of how the above feature implementation would occur alongside more general stabilization and testing
  • Outline your demonstrated proficiency in the applicable knowledge sets (VR/AR, computer graphics, linear algebra, etc), or indicate a willingness to learn along with a plan for how to do so.

Hopefully this helps! Sorry that I can't just show you the library right now, but it is very near an early-alpha stage and I want to get it there first so that the starting point is somewhat orderly.


@stalgiag I have a question. Is p5.xr being built on top of p5, or is it completely separate from it? If it is on top of p5, then I assume it also uses p5's WebGL functions?

It is being built on top of p5. The goal for the library is to add a few simple functions to p5. Right now the primary function, createVRCanvas, when placed into preload() does the following:

  • checks if the device is capable of entering an "immersive session"
  • creates a button for "Entering VR"
  • once the button is pressed, p5 is told to make a WEBGL canvas, and the app enters full screen with the eye offset, warping, and masking for Google Cardboard
  • p5 is set to not loop, the preload is resolved, and p5xr takes control of the draw loop, only calling another draw when an "XRFrame" is available
  • when an XRFrame is available, p5's WebGL renderer's model view matrix is set to the current XRPose (which is based on several sensors in your phone and is monitored by the WebXR API)
  • then all of the code inside the user's draw function is executed. Currently this is done once per eye with the appropriate offsets, which is fine for simple sketches

Anyway, that is all to say that the end goal is a library a user loads in addition to p5 that adds a few functions. Similar to p5.sound, but most likely less robust. All of the actual rendering, etc. is still handled by p5's WebGL mode. In an ideal world, most (non-interactive) 3D p5 sketches would only need to add 1 or 2 lines of code to work with mobile VR.
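
For anyone curious, a stripped-down version of the per-frame flow described above looks roughly like this. The WebXR API is still changing, so treat it as a sketch rather than the actual p5.xr code; xrReferenceSpace, pInst, and gl are placeholders:

function onXRFrame(time, frame) {
  const session = frame.session;
  // Queue the next frame; draw only runs when an XRFrame is available.
  session.requestAnimationFrame(onXRFrame);

  const pose = frame.getViewerPose(xrReferenceSpace);
  if (!pose) return; // no tracking data yet, skip this frame

  for (const view of pose.views) {
    // Each view corresponds to one eye; point GL at that eye's viewport.
    const viewport = session.renderState.baseLayer.getViewport(view);
    gl.viewport(viewport.x, viewport.y, viewport.width, viewport.height);

    // Feed the eye's view matrix into p5's WebGL renderer so the user's
    // draw() code is rendered from the headset's current pose.
    pInst._renderer.uMVMatrix.set(view.transform.inverse.matrix);

    // Run the user's draw() once per eye.
    pInst.redraw();
  }
}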


@stalgiag Can you elaborate a little on contributing to design?

Thanks a lot for your replies!

@vedhant mostly I just mean that the project will be in its infancy, so any selected GSoC student would have a significant role in the design process. Questions one would have to consider could include:

  • What kind of API should be exposed to the user for features I introduce?
  • What is the goal of the features I introduce?
  • How can I make sure that the features I introduce are easily maintainable in the future?
  • and many more

@stalgiag By gaze tracker, you mean eye tracking, right? This assumes that the user's device is capable of eye tracking. Can you shed more light on this?
I was thinking of foveated rendering.

Thank you

@vedhant no, I think that is a little outside of scope. I mean something more like the gaze-based interaction that you see on a lot of VR platforms (Gear VR, Cardboard, Daydream): a ray cast from the center point of the camera outwards in the direction the camera is facing, so we can tell which objects the user is looking at. This may actually not make sense for the project, and it may be something users should be relied on to build themselves, as they would with mouseX and mouseY for 2D interactions. Not sure yet. Above I was just throwing out a few ideas for cool features. No need to stick too strictly to the list.
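
To give a rough idea of the math involved, it is just a forward ray from the camera tested against each object, something like the following (all names here are hypothetical; nothing of the sort exists in p5.xr yet):

// Ray-sphere test for gaze interaction: origin and direction come from
// the current viewer pose (direction assumed normalized), and each
// target is approximated by a bounding sphere {center, radius}.
function gazeHits(origin, direction, target) {
  // Vector from the ray origin to the sphere center.
  const toCenter = p5.Vector.sub(target.center, origin);
  // Project it onto the gaze direction to find the closest point on the ray.
  const along = toCenter.dot(direction);
  if (along < 0) return false; // target is behind the viewer
  const closest = p5.Vector.add(origin, p5.Vector.mult(direction, along));
  // Hit if the closest point on the ray falls inside the bounding sphere.
  return p5.Vector.dist(closest, target.center) <= target.radius;
}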

@stalgiag I am considering the "Interface for setting and releasing AR anchors" feature. I think this is a core part of AR, and I found this explainer and this ARCore anchor explanation, which introduced me to the concept well.
Since you mentioned this feature, I am assuming that it has not been implemented yet. If so, I would like to implement it and put it in my proposal.

Are anchors implemented in the WebXR API, or will they be added in the future? If not, then hit testing will have to be used to implement anchors.

ARCore builds anchors on top of trackables and the session. Maybe I can do something similar, where an object can be anchored to a trackable (giving it a sticky position on the tracked surface) or anchored to the session (a fixed pose in world space). Are trackables implemented in any form in p5.xr?
It would be very helpful if you could shed more light on this.

Thank you

Hi @vedhant! I decided to go ahead and make a simplified version of the source public on my GitHub. There is some explanation over there, but WebXR is currently under heavy development, so there is a bit of a hold on testing with anchors, etc. Anyway, feel free to open an issue to discuss this more over on the GitHub repo.


@stalgiag thanks a lot for making the repo public. I have opened up an issue on this topic.

Hi @stalgiag, I am sharing the initial draft of my GSoC proposal. Please review it and suggest changes.

Thank you


Hi @vedhant! This looks good. I would say that it might be good to briefly outline the difficulty of doing virtual hit tests with the p5 renderer. This will be the main challenge when it comes to building the raycasting/virtual hit test. p5 geometry objects do not persist as objects with transform and scale information. They are stored with some identifying information in the gHash, but this won't be sufficient to determine a hit between a ray and geometry. As of now, the primary solution that comes to mind is that p5xr will need to track all geometric objects in the scene. This might be slow and expensive for certain kinds of sketches, so it may take multiple attempts with different approaches.

I would say that unit test work should be squeezed into Phase 1. The library will be under significant development by me during that period and I can add unit tests as I go, so I think it might be better to assign more of your time to the raycasting problem. That way both Phase 2 and Phase 3 can be spent experimenting with the raycasting and polishing it. Perhaps the solution you arrive at regarding persistent transform information for geometric objects can also be applied to the anchoring of AR objects. I'm not sure, but I do think that getting to work on the bigger problem earlier is a good approach.
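
To make the tracking idea a bit more concrete, something like the following could work; every name here is hypothetical and nothing of the sort is implemented yet:

// Hypothetical per-frame registry: each time a primitive is drawn, p5xr
// records its model matrix and a rough bounding radius, since p5 itself
// does not persist geometry with transform information.
const sceneObjects = [];

function trackObject(id, modelMatrix, boundingRadius) {
  sceneObjects.push({ id, modelMatrix, boundingRadius });
}

// Virtual hit test: pull the world-space center out of each model matrix
// (column-major 4x4, translation in elements 12-14) and run a ray-sphere
// check against it. rayDirection is assumed normalized.
function virtualHitTest(rayOrigin, rayDirection) {
  const hits = [];
  for (const obj of sceneObjects) {
    const m = obj.modelMatrix;
    const center = new p5.Vector(m[12], m[13], m[14]);
    const toCenter = p5.Vector.sub(center, rayOrigin);
    const along = toCenter.dot(rayDirection);
    if (along < 0) continue; // object is behind the ray origin
    const closest = p5.Vector.add(rayOrigin, p5.Vector.mult(rayDirection, along));
    if (p5.Vector.dist(closest, center) <= obj.boundingRadius) {
      hits.push(obj.id);
    }
  }
  return hits;
}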

Otherwise, I’d say this draft is well-written. Make sure to use some of the ‘More about you’ section to outline your interests, experiences, and qualifications. Best of luck!


Thanks a lot for your feedback!! I agree that I should be spending more time working on raycasting. I'll make the necessary changes.

Hi @stalgiag! I have modified my draft based on your feedback. Please review it again and share your thoughts on it. Thanks for your valuable time!