I’m exploring p5 and p5.xr to do spatial computing and mixed reality. Do you know of anyone working with p5 in 3D? Most examples I’ve found use game engines like Unity, or there’s Zach Lieberman’s work with ARKit and openFrameworks.
I have only found the p5.xr project by @stalgiag and related forum posts. I made some modifications to make it work through Oculus Quest 3 using passthrough, and I’m working on further examples and sketches.
I’m interested in making a better interface for sketching in 3D, with the headset as the camera and the environment light as a given; I think it could remove a lot of friction from using 3D, much as Processing did back in the day with its simplified API.
Unfortunately, progress has more or less stalled on development for p5.xr. I originally started the project because I wanted an easy way to introduce students to XR. I was teaching in environments without access to VR headsets and my hope was that students would be able to easily view their existing p5 sketches in simple cardboard viewers with their phones. At the time, it seemed like Apple was going to adopt the WebXR spec in Safari and in the interim, the Mozilla WebXR app was a fine workaround. Unfortunately, after years of indicating otherwise, Apple still hasn’t adopted the WebXR spec and Mozilla abandoned the development of the WebXR Viewer for iOS. For these reasons, I haven’t continued development. That said, now that Quest headsets have a pretty broad audience, I think that p5 could be used to make some great XR art. There have also been some great updates to the WebGL implementation in p5.js that make p5/p5.xr an even better candidate.
If you or anyone else who is doing active XR development with a headset is interested in becoming a steward of this project, please do let me know.
I have been working on a tutorial about camera() in p5.js and playing with Zig Sim Pro to map the ARKit camera position to the p5.js camera (see the gif below). This is still very much a WIP, but I’d be very curious to hear more about your research in AR with the Quest 3!
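The core of the mapping is just feeding the tracked pose into p5’s camera(). Here’s a rough sketch of the idea; devicePos and deviceDir are placeholders that I’m assuming get updated elsewhere from the parsed Zig Sim Pro data:

```js
// devicePos / deviceDir are assumed to be updated elsewhere from the
// Zig Sim Pro stream (the ARKit pose, parsed from its network output).
let devicePos = { x: 0, y: 0, z: 500 };
let deviceDir = { x: 0, y: 0, z: -1 }; // forward vector from device rotation

function setup() {
  createCanvas(400, 400, WEBGL);
}

function draw() {
  background(220);
  // Eye at the tracked position, looking along the device's forward vector.
  // In practice ARKit meters need scaling to pixels and a y-axis flip,
  // since p5's WebGL y axis points down.
  camera(
    devicePos.x, devicePos.y, devicePos.z,
    devicePos.x + deviceDir.x, devicePos.y + deviceDir.y, devicePos.z + deviceDir.z,
    0, 1, 0
  );
  normalMaterial();
  box(100);
}
```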
Maybe @davepagurek can tag other ppl doing cool work in 3D with p5.js?
For lighting based on the scene: p5 recently added support for (static) image-based lighting based on a single equirectangular image, so if you’re always in the same scene, you could potentially grab an image from the headset’s camera and then use that for the lighting going forward? One of the things I’d like to work on at some point is a more dynamic approach where you could sample the current frame of a video rather than using a single image though. I’m happy to help consult if anyone else is interested in taking that on!
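For reference, the static version in a plain WebGL sketch looks roughly like this (the image file name is a placeholder for whatever equirectangular capture you grab):

```js
let env;

function preload() {
  // A single equirectangular capture of the scene (placeholder file name)
  env = loadImage('room360.jpg');
}

function setup() {
  createCanvas(400, 400, WEBGL);
}

function draw() {
  background(20);
  panorama(env);   // optionally draw the capture as the backdrop
  imageLight(env); // ...and use the same image to light the scene
  specularMaterial(250);
  shininess(50);
  metalness(80);
  sphere(100);
}
```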
I’m very interested in contributing more to the project. Look out for extra pull requests and activity on GitHub. I don’t know what it entails to be a project steward, but I would like to help.
I listed my motivations (draft) for why this is interesting, as it’s hard to explain quickly.
I think the main hiccup at the moment is having a way to quickly edit parameters. Ideally, I want to come up with a way to do this directly in immersive mode. Editing from inside the headset already speeds things up, but going in and out of immersive mode still adds friction. This is probably not the highest priority to address right now, though.
I will keep adding updates here occasionally; getting feedback on these sorts of things is important.
I quickly glanced at the WebXR documentation about this; it seems the main way it returns lighting data is through something called spherical harmonics, although there is also a cube map. I will have to check what is actually implemented and get back to you.
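For what it’s worth, from that quick glance the API seems to boil down to something like this. This is an untested sketch based on the Lighting Estimation spec, and browser support for it varies:

```js
// Untested sketch of the WebXR Lighting Estimation API.
// (Render state setup is omitted; this only shows the lighting queries.)
async function startARWithLightEstimation() {
  const session = await navigator.xr.requestSession('immersive-ar', {
    optionalFeatures: ['light-estimation'],
  });

  // One probe per session; the reflection cube map format is negotiated here
  const lightProbe = await session.requestLightProbe({
    reflectionFormat: session.preferredReflectionFormat,
  });

  session.requestAnimationFrame(function onXRFrame(time, frame) {
    const estimate = frame.getLightEstimate(lightProbe);
    if (estimate) {
      // 9 RGB spherical-harmonics coefficients, packed as 27 floats
      const sh = estimate.sphericalHarmonicsCoefficients;
      // Plus a dominant directional light, useful for punctual lighting
      const dir = estimate.primaryLightDirection;
      const intensity = estimate.primaryLightIntensity;
      // ...feed these into the renderer...
    }
    session.requestAnimationFrame(onXRFrame);
  });
}
```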
I’m in the process of learning WebGL better for this project. I see there is already an implementation of this in Three.js; I will look deeper into how it works and get back on that.
Talking about on-device editing, did you ever try Rec Room’s “circuits”? I found it to be one of the best examples of VR-based programming. A similar system based on p5 to create immersive environments would be incredible.
It would be great to be able to swap out p5’s static image based lighting with one that can use the dynamic spherical harmonics data from WebXR!
Possibly one route to this is through a proposed API to be able to swap out small chunks of p5’s materials with your own code: Make the WebGL material system more easy to make libraries for · Issue #6144 · processing/p5.js · GitHub. With that, you could in theory replace the function that does the static equirectangular lookup with something that evaluates the spherical harmonics. That’s all still in progress, but hopefully it will unblock plugins like p5.xr from adding support for the things you’ve mentioned.
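The evaluation itself is pretty compact. As a reference point, here’s a plain JavaScript sketch of the standard second-order SH irradiance formula (Ramamoorthi and Hanrahan’s constants, assuming WebXR’s 27-float RGB coefficient layout):

```js
// Evaluate diffuse irradiance from 2nd-order spherical harmonics at a normal.
// sh: 27 floats (9 RGB coefficients), as in a WebXR light estimate's
// sphericalHarmonicsCoefficients; n: unit surface normal [x, y, z].
function shIrradiance(sh, [x, y, z]) {
  // Per-basis-function weights (Ramamoorthi & Hanrahan 2001)
  const basis = [
    0.886227,                    // L00
    2 * 0.511664 * y,            // L1-1
    2 * 0.511664 * z,            // L10
    2 * 0.511664 * x,            // L11
    2 * 0.429043 * x * y,        // L2-2
    2 * 0.429043 * y * z,        // L2-1
    0.743125 * z * z - 0.247708, // L20
    2 * 0.429043 * x * z,        // L21
    0.429043 * (x * x - y * y),  // L22
  ];
  const rgb = [0, 0, 0];
  for (let i = 0; i < 9; i++) {
    for (let c = 0; c < 3; c++) rgb[c] += sh[3 * i + c] * basis[i];
  }
  return rgb; // linear-space irradiance
}
```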
I managed to get live coding working with p5.live by Ted Davis / @ffd8 . Sometimes it still crashes out into normal mode, but it is very workable for the most part.
It’s an interesting challenge to come up with a p5-like interface that is friendly for beginners. The WebXR spec is very complete, but most of its features are probably not that useful for quick interactions and sketching.
For example, I came up with the idea of having a main finger, the index, behave somewhat like mouseX and mouseY for quick interactions.
Of course, it also needs to take handedness into account, i.e. left-handed and right-handed people.
For example, the sphere on the finger boils down to:
function preload() {
  // p5.xr: request an AR (passthrough) canvas before setup runs
  createARCanvas();
}

function setup() {
  describe("A sphere on your right index finger");
  // mainHandMode() and the global `finger` come from my modified p5.xr build
  mainHandMode(RIGHT);
}

function draw() {
  normalMaterial();
  push();
  // `finger` holds the tip of the main hand's index finger, in meters
  translate(finger.x, finger.y, finger.z);
  sphere(0.01); // 1 cm sphere
  pop();
}
I’ve been working on an upload script for the web editor. I should get it working in the next couple of days; there will be links where you can try out the code as well.