So I’m trying to create a particle system using a frag shader, for fast GPU-based updates.
I know this can be done by storing all the particle info (position, velocity, etc.) in a texture. I know how to encode floats to RGBA and back again, and how to retrieve a pixel from a texture using texelFetch - but what I can’t figure out is how to send / change a pixel in the texture buffer that stores the encoded data.
Any advice on this or alternative approaches welcome!
I haven’t done this yet, so I can’t speak with authority, but my impression is that you use two shader passes. The first is a fragment shader that generates a new particle state texture image based on the previous one - look at the Conway shader example in the Processing examples, which uses `ppixels` in the fragment shader. Each pixel in the fragment shader can only update one particle component, e.g. pos.y, so four copies of the shader would run the same particle update calculation at the same time, each writing out one of the four particle state variables: pos.x, pos.y, vel.x, and vel.y. A second shader, presumably a vertex shader, would then read the particle state texture to position and draw the particles.
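To make the ping-pong idea concrete, here's a rough sketch of the update pass as a GLSL fragment shader. For readability it assumes a floating-point RGBA texture holding (pos.x, pos.y, vel.x, vel.y) in each texel, rather than the four-shader float-to-RGBA encoding described above; the uniform names and the gravity force are made up for illustration.

```glsl
// Update pass: read the previous particle state texture, write the next one.
// Sketch only - assumes a floating-point RGBA texture where each texel holds
// (pos.x, pos.y, vel.x, vel.y). With 8-bit RGBA float encoding you would
// instead run four variants of this shader, one per component.
#version 150

uniform sampler2D prevState;   // previous frame's state (ping-pong source)
uniform float dt;              // timestep in seconds

out vec4 nextState;

void main() {
    vec4 s = texelFetch(prevState, ivec2(gl_FragCoord.xy), 0);
    vec2 pos = s.xy;
    vec2 vel = s.zw;

    vel += vec2(0.0, -9.8) * dt;   // example force: gravity
    pos += vel * dt;

    nextState = vec4(pos, vel);    // rendered into the other texture of the pair
}
```

Each frame you swap which texture is read and which is rendered to (the "ping-pong"), so the output of one frame becomes the input of the next.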
https://developer.amd.com/wordpress/media/2013/02/Chapter7-Drone-Real-Time_Particle_Systems_On_The_GPU.pdf goes into much more detail.
If your particle motion is simple enough, that is if you can represent the motion as a simple function of time such as a parabolic arc from an explosion, then you could skip the fragment shader simulation part and just render the particle motion directly in the vertex shader. The web site https://www.vertexshaderart.com/ shows a huge variety of what you can do with just the vertex shader without the simulation feedback loop.
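As a sketch of that stateless approach, a vertex shader can evaluate a parabolic arc directly from time, with no state texture at all. This assumes Processing's standard `transform` uniform and `position` attribute; the `initVelocity` attribute and the constants are illustrative, not working Processing code.

```glsl
// Stateless variant: particle motion as a closed-form function of time,
// evaluated per vertex - no simulation feedback loop needed.
#version 150

uniform float time;            // elapsed time, set from the sketch
uniform mat4 transform;        // Processing's standard model-view-projection

in vec4 position;              // initial particle position
in vec3 initVelocity;          // per-particle launch velocity (custom attribute)

void main() {
    // Parabolic arc: p(t) = p0 + v0*t + 0.5*g*t^2
    vec3 g = vec3(0.0, -9.8, 0.0);
    vec3 p = position.xyz + initVelocity * time + 0.5 * g * time * time;
    gl_Position = transform * vec4(p, 1.0);
    gl_PointSize = 4.0;        // needs program point size enabled on desktop GL
}
```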
The more modern way to do this is using a compute shader since then you can use floats directly and don’t have four fragment shaders duplicating work, but then you have to peel back Processing’s covers to use OpenGL directly.
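For comparison, a compute shader version might look roughly like this (it needs OpenGL 4.3+, so it sits outside what Processing's PShader exposes directly). The buffer layout and names here are assumptions for illustration:

```glsl
// Compute-shader version: one invocation per particle, floats stored
// directly in a shader storage buffer - no RGBA encoding, no duplicated work.
#version 430

layout(local_size_x = 256) in;

struct Particle {
    vec2 pos;
    vec2 vel;
};

layout(std430, binding = 0) buffer Particles {
    Particle particles[];
};

uniform float dt;

void main() {
    uint i = gl_GlobalInvocationID.x;
    if (i >= uint(particles.length())) return;   // guard the last workgroup

    particles[i].vel += vec2(0.0, -9.8) * dt;    // example force: gravity
    particles[i].pos += particles[i].vel * dt;
}
```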
https://discourse.processing.org/t/compute-shader-particle-system-in-processing/26611 shows how to use compute shaders from Processing.
Hi - thanks for all this - really useful.
I did think about the 2-shader approach - but never fully tested the idea - at least I now know that in theory it should work. My project needs frame-by-frame updating of data - it uses complex stipple / diffusion algorithms to rasterise a 2D image.
Compute shaders I’d never heard of - but I’ll look into them, thanks.
And wow - vertex shader art - never knew it was a thing - it’s super cool!!