I’m trying to get into the world of fragment shaders but ran into an issue that only appears on some devices. I’m driving the shader with particles that I compute in the sketch, then passing a list of those particles to the shader as uniform arrays. It seemed to work exactly as expected, until someone reported getting an ERROR: too many uniforms on their iPad.
I went digging into this, and of course it makes sense: every device has its own maximum uniform count.
So my question: What can I do to resolve this so that it works reliably on any device? Right now I just hard-coded a maximum number of particles, which sets the uniform array’s length. Is there a better approach to this? Or maybe there’s a better way to draw particles on the screen than a huge uniform array? I’m new to shaders, so I have no idea whether this is what I should be doing at all.
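For context, WebGL lets you query the device’s limit at runtime with `gl.getParameter(gl.MAX_FRAGMENT_UNIFORM_VECTORS)` (the spec only guarantees 16 vectors). Here is a small sketch of how you could size the array from that number instead of hard-coding it; `maxParticles` and the `reserved` count are hypothetical names for illustration:

```javascript
// Sizing the particle array to the device's actual limit instead of a
// hard-coded guess. In the browser you would query the limit with:
//   const gl = canvas.getContext('webgl');
//   const maxVectors = gl.getParameter(gl.MAX_FRAGMENT_UNIFORM_VECTORS);
// (WebGL guarantees at least 16 fragment uniform vectors; real devices vary.)

// Hypothetical helper: how many particles fit if each particle is one vec2
// uniform (one vector slot each) and `reserved` slots are kept for the other
// uniforms in the shader (resolution, time, ...).
function maxParticles(maxVectors, reserved = 4) {
  return Math.max(0, maxVectors - reserved);
}

console.log(maxParticles(224)); // roomy desktop GPU -> 220 particles
console.log(maxParticles(16));  // minimal WebGL implementation -> 12
```

This at least degrades gracefully instead of erroring, though the texture approach discussed below the question scales much further.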
You can find sketch and shader here: https://www.openprocessing.org/sketch/835887#
First, very nice effect!
One approach to passing a lot of information to a shader is to use a texture. You encode the data you want as RGBA values (so you can convert positions, speeds, whatever into RGBA values) and write them into the texture. Then in the shader you sample the texture and decode the RGBA values back into whatever you had in mind (positions, speeds, …).
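A minimal sketch of the encoding side, in plain JavaScript: it assumes a canvas no larger than 256×256, so each coordinate fits in one 8-bit channel, and the particle fields (`x`, `y`, `speed`) are just illustrative:

```javascript
// One RGBA pixel per particle: R = x, G = y, B = speed, A = 255 (opaque).
// Assumes coordinates and speed already fit into 0..255.
function particlesToPixels(particles) {
  const pixels = new Uint8Array(particles.length * 4);
  particles.forEach((p, i) => {
    pixels[i * 4 + 0] = Math.round(p.x);     // R: x position
    pixels[i * 4 + 1] = Math.round(p.y);     // G: y position
    pixels[i * 4 + 2] = Math.round(p.speed); // B: speed
    pixels[i * 4 + 3] = 255;                 // A: unused, kept opaque
  });
  return pixels;
}

const px = particlesToPixels([{ x: 10, y: 200, speed: 3 }]);
console.log(Array.from(px)); // [ 10, 200, 3, 255 ]
```

In p5.js you would write these bytes into a `p5.Image` via `loadPixels()`/`updatePixels()` and hand it to the shader with `setUniform()`; in the shader, sampled channels come back in [0, 1], so you multiply by 255 (and by canvas size) to recover the values.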
Do you think that would work in your case?
I think that could absolutely be a better approach to take when dealing with so many particles. I know this may be a tall order, but do you think you can provide a very simple example?
Here’s something done very quickly and without much thought; I hope the code is not too bad. I’m curious about performance on mobile, as for-loops are not recommended in shaders. For each pixel on the screen it checks the distance to all particles to calculate brightness: the brute-force approach.
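To make the idea concrete outside of GLSL, here is the same brute-force loop as a plain JavaScript function for a single pixel; the `1/d` falloff and the clamping are illustrative choices, not what the linked sketch necessarily uses:

```javascript
// CPU analog of the fragment shader's brute-force loop, for illustration.
// For one pixel at (px, py), accumulate brightness from every particle
// using a simple 1/d falloff, then clamp to the displayable range.
function pixelBrightness(px, py, particles) {
  let brightness = 0;
  for (const p of particles) {
    const d = Math.hypot(px - p.x, py - p.y);
    brightness += 1 / Math.max(d, 1); // clamp d to avoid division by zero
  }
  return Math.min(brightness, 1);     // clamp result to [0, 1]
}

console.log(pixelBrightness(50, 50, [{ x: 50, y: 50 }])); // 1 (particle on pixel)
console.log(pixelBrightness(0, 0, [{ x: 100, y: 0 }]));   // 0.01 (distant particle)
```

Since this runs for every pixel, the cost is width × height × particle count per frame, which is why framerate drops as particles are added.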
This is fantastic, thank you so much! I find that most tutorials out there are overly complicated but I can easily follow the code here. Time to try out some GPU particles
Welcome! I’m glad you found it understandable.
I made a variation
It’s fun to do things with fragment shaders, but it’s not always the best-performing option. It’s better to know when to use them and what for. For instance, this one stops working with 50 points on my GTX 1060, when I could draw looots of lines and achieve a similar effect by applying post-processing (some kind of glow / blur).
Another limitation of this technique in Processing: you’re stuck with 8-bit integers, so a value encoded in one RGBA channel has only 256 possible variations. If your canvas is 256×256 pixels that can be OK for storing positions.
Also, note the jitter when rotating. If two points near each other define a line, and those points can only move in whole pixels (not subpixels), the tilt of the line jumps in coarse steps, meaning the number of possible tilts is limited (compared to using floats).
You could work around this by using both R and G to encode one value (256 × 256 = 65536 variations) and B and A for another, similar 2-byte value.
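A quick sketch of that two-channel packing, assuming the value fits in 16 bits; function names are made up for illustration:

```javascript
// Pack a 16-bit value into two 8-bit channels: high byte in R, low byte in G.
function encode16(value) {
  const v = Math.round(value) & 0xFFFF;
  return { r: v >> 8, g: v & 0xFF };
}

// Rebuild the value from the two bytes. In the shader, sampled channels are
// in [0, 1], so the equivalent would be roughly:
//   float v = texel.r * 255.0 * 256.0 + texel.g * 255.0;
function decode16(r, g) {
  return r * 256 + g;
}

const { r, g } = encode16(1234);
console.log(r, g, decode16(r, g)); // 4 210 1234
```

That gives 65536 steps per value, enough to store subpixel positions and remove most of the jitter.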
In more flexible environments like openFrameworks or OPENRNDR you can use float buffers.
Oh, that’s interesting. Why is it that the GPU can’t take 50 points in this case?
Hard to say, because I just tried on a 5-year-old laptop with integrated graphics and it worked with much higher numbers (I tried up to 500). The framerate just dropped, but it didn’t turn black as it did with the muuuuch better GTX 1060. Maybe a bug in the driver? I don’t really know.
I now managed to do something with mouse tracking, thank you:
Some particle rain… p5.js Web Editor
Next up I will train with dots from text vectors… using reference | p5.js
Really, thank you for showing this solution, even though it’s difficult to understand at first.
Happy to see you creating things with it