I guess I can give you some advice. I’m not too great with shaders myself, so I’m having a hard time reading the more tangled parts of this, but there are a few things in that code that are pretty out of place.
uniform sampler2D texture;
uniform float u_time;
uniform float amt;
uniform float intensity;
uniform float x;
uniform float y;
uniform float noiseAmt;
uniform float u_time2;
uniform vec2 resolution;
First thing is, you’re bringing in a bunch of uniforms but you’re not using them all. I can see you using texture and resolution, but nothing else in this list is in use.
Uniforms are set from the application side (your CPU code uploads them before the draw call; the in variables are the ones that come from earlier stages like the vertex shader, which is the only other stage you need to care about right now), and every one you declare is state the driver has to upload and make available to every shader invocation. At best the unused ones are clutter, at worst they’re wasted memory and bandwidth. You should only bring in what you use. Just like how you should only take what you eat and you should eat what you take.
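If I’m reading the rest right, the trimmed-down list would be something like this (keeping u_time only because of the seeding suggestion further down):
uniform sampler2D texture;   // sampled in the blur loop
uniform vec2 resolution;     // used in the center / toCenter math
uniform float u_time;        // only if you take the random() seeding advice below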
out lowp vec4 fragColor;
vec4 finalColor;
Secondly, you have two declared variables: one that you’ve marked as an output, and another (finalColor) that you never use. I guess this is the “eat what you take” part. You also have your output set to lowp (low precision), which clashes with the only precision declaration you make at the beginning of the program, precision mediump float;. It doesn’t make much sense to calculate everything at medium precision just to turn around and drop the result to low precision at the very end.
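Consistent would look like this (or just drop the qualifier entirely, since your precision statement already makes mediump the default for floats):
out mediump vec4 fragColor;  // matches the precision mediump float; declared above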
float strength = 1;
Thirdly, you have a variable here that you’re not using. This is what we call a scalar. A scalar can be used to scale other values, but it only does anything when you multiply strength into something. For example, if you want to turn up the brightness you can write fragColor *= strength; and then adjust what strength is equal to in order to brighten or darken the full scene. fragColor *= strength; in this case multiplies each of the four components stored in fragColor by the value of strength and stores the result back into fragColor.
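A tiny sketch of what I mean (the 1.2 is made up, and note the .0 suffix: some GLSL versions won’t implicitly convert the int literal in your float strength = 1; for you):
float strength = 1.2;                        // >1.0 brightens, <1.0 darkens
vec4 texel = texture(texture, TexCoord.xy);  // plain color at this fragment
texel.rgb *= strength;                       // usually you scale only the color channels
fragColor = texel;                           // and leave alpha alone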
#define PI 3.14
Storing constants is cheap for the GPU, and it’s going to spend all 32 of those bits storing the float whether you fill them or not, so use #define PI 3.1415926535897932384626433832795 instead (the compiler rounds it to the nearest representable float for you). Have a little fun.
float random(vec3 scale,float seed){return fract(sin(dot(gl_FragCoord.xyz+seed,scale))*43758.5453+seed);}
I recognize this function from Google. It’s a good idea to know and understand what you’re using, at least at a rudimentary level. Later on in your code you call the random function, specifically on this line: float offset=random(vec3(12.9898,108.233,151.7182),10.0);
First of all I should explain that random numbers are really hard for computers to generate. Your computer typically gathers entropy bits from things like mouse movement and keyboard input; GPU code is fairly low level and doesn’t have the luxury of snooping hardware like that. This random function is a workaround that generates pseudo-random values instead. However, to actually get values that vary, the function relies on a few things. Firstly, a seed of some kind. The only requirement on the seed is that it differs in the way you want the output to differ. For example, if you want the random number to be different every frame, seed it with time, since time is always changing. You’re taking in some uniforms, and two of them (u_time and u_time2) look like exactly that.
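Something like this, for instance (a sketch, assuming u_time is an ever-increasing time value like the name suggests):
// a different result every frame, because the seed changes every frame
float offset = random(vec3(12.9898, 108.233, 151.7182), u_time);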
As for what the function is doing right now: it takes the dot product of (gl_FragCoord.xyz + seed) with the scale vector, runs that through sine, multiplies by a big magic number, adds the seed again, and keeps only the fractional part (that’s what fract is, essentially mod 1). I’m hoping you know what sine and mod 1 do. I don’t expect you to know what the dot product of two vec3s is doing for you here, but you do seem to understand that putting some numbers into the random function gets a random-looking value out. Do pay attention to how it’s using gl_FragCoord.xyz.
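Spelled out with comments, that one-liner is doing this:
float random(vec3 scale, float seed) {
    // project this fragment's screen position (shifted by the seed) onto scale
    float d = dot(gl_FragCoord.xyz + seed, scale);
    // sin() oscillates, the big multiplier smears the result across many
    // periods, and fract() keeps just the 0..1 fractional part
    return fract(sin(d) * 43758.5453 + seed);
}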
You should know that the internals of this function return a value between 0 and 1, but that value is always tied to the screen location of the fragment you’re shading. Meaning that from one frame to the next, random() isn’t really producing anything different. Not knowing that may cause problems with other code later if you don’t understand it now. As configured, it only generates different numbers based on where on screen you’re shading, not based on anything else, so moving the objects in your scene can make offset jump around in weird ways, and depending on what you’re actually using offset for, you may get strange results.
We haven’t even got to the main part of the code yet!
vec2 uv = TexCoord.xy;
vec2 center = 0.5 * resolution;
vec2 center2 = 0.5 * resolution;
I can only guess that some values are being set up here. uv is a bit redundant, since TexCoord.xy already is the uv location on the texture for the current fragment. It’s nice that it gets a name, but you should know that if all you want is the plain color at the fragment’s texture location, you’d traditionally write texture(textureObjectInput, TexCoord.xy); and get it back as a vec4.
I have no idea what center and center2 actually represent, because resolution could be based on anything: pixel count, physical/DPI-scaled size, etc. If it’s the viewport size in pixels, they’re the middle of the screen in pixel coordinates, but that’s a guess. It’s better to think of screenspace as ranging from -1 to 1 in both the x and y directions when writing shaders, and to make everything resolution agnostic (not caring about res).
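For reference, the usual normalization looks something like this (assuming resolution is the viewport size in pixels, which I can’t verify from here):
// map pixel coordinates into -1..1, with (0,0) at the center of the screen
vec2 p = (gl_FragCoord.xy / resolution) * 2.0 - 1.0;
p.x *= resolution.x / resolution.y;  // optional: correct for aspect ratio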
vec4 color = vec4(0);
float total = 0.0;
- storing transparent black (all four components zero)
- storing the number 0
vec2 toCenter=center-uv*resolution;
vec2 toCenter2=center2-uv*resolution;
Once again I’m very confused, for the same reason as above: I don’t know what resolution means here. It looks like you’re computing a pixel-space offset from the center of the screen to where this fragment’s uv coordinate would land if the texture were superimposed onto the screen. And since center and center2 are identical, toCenter and toCenter2 come out identical too, so one of these lines is doing nothing new. I’m not sure how this benefits the shader.
Then we have the offset line which, as covered above, just generates one pseudo-random number per fragment that will never change unless the geometry on screen changes.
for(float t=0.0;t<=40.0;t+=1){
oh no.
Keep in mind that loops in shaders are expensive. Whatever you run inside one runs that many times for each fragment in the scene (and that does NOT simply mean for each pixel). Anything in a loop should stay simple, be as bug-free as possible, and definitely shouldn’t be something that never needed to run more than once. A good example is the last line of the loop, total+=weight;
Being in this loop means total ends up as the sum of weight over all 41 iterations (t runs 0 through 40 inclusive), and since weight, via percent, depends only on the loop counter and offset, that whole sum is a pure function of offset and nothing more. It could be computed in closed form outside the loop if you crunch the numbers well enough.
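To make that concrete, suppose the unquoted lines are the usual zoom-blur weighting, something like percent = (t + offset) / 40.0; and weight = 4.0 * (percent - percent * percent); (that’s my guess, since those lines aren’t shown here). Then the sum collapses algebraically and the loop never has to touch total at all:
// Hypothetical: with weight = 4.0*(p - p*p) and p = (t + offset)/40.0,
// summing t = 0..40 gives total = (10660.0 - 41.0*offset*offset) / 400.0
// (every term that is linear in offset cancels out of the sum).
float offset = random(vec3(12.9898, 108.233, 151.7182), 10.0);
float total  = (10660.0 - 41.0 * offset * offset) / 400.0;  // about 26.65 for offset in 0..1
Whatever the exact formulas are, the point stands: anything in the loop that doesn’t depend on the texture fetch can be hoisted out of it.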
vec4 sample=texture(texture,uv+toCenter2*percent*strength/resolution);
I’m getting pretty tired, so I’m going to cut this a bit short and say I don’t understand toCenter2, percent, and resolution well enough to know exactly which uv coordinate you end up sampling, but something tells me your original issue lies around here. By the time this line runs, the meaning of those values has been muddied enough that unexpected behavior wouldn’t surprise me. I’d also mention that accumulating floating point error across 41 iterations of this loop can cause drift away from the output you actually want.
vec4 sample2=texture(texture,uv+toCenter2*percent*strength/resolution);
sample2 is assigned but never used (whatever was going to use it looks to be commented out), yet the assignment itself still runs, so every fragment is paying for a second texture fetch that goes nowhere (unless the compiler is smart enough to strip it). It’s also computing the exact same thing as sample, so even if you did need a second copy for a different purpose, it’s better to just assign the already calculated value instead of sampling twice.
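That is, if you ever do need the second copy:
vec4 sample2 = sample;  // reuse the value you already fetched, no second texture() call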
sample.rgb*=sample.a;
This is a pretty rough way to handle alpha, given that you’re also applying alpha again at the end of the program, on the last line.
fragColor=color/total;
fragColor.rgb/=fragColor.a;
You should know that all the work in the loop ultimately comes down to an average: the accumulated color divided by a number (total) that, as mentioned, you could have calculated outside the loop. You’re then also applying alpha by dividing by the alpha channel, instead of multiplying by it, which is how alpha is typically applied.
Alpha (transparency) is typically applied by taking the alpha of whatever is stored at your texture coordinate and multiplying the color by that, not by dividing by fragColor.a. Since fragColor.a is the result of your final output on the previous line, which itself is the result of a loop that runs 41 times with heavy amounts of floating point arithmetic, it’s astounding that this does its job in any meaningful way whatsoever.
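The usual pattern looks more like this (sketched with your variable names; TexCoord presumably comes from your vertex shader):
// multiply by the sampled alpha, don't divide by the output's own alpha
vec4 texel = texture(texture, TexCoord.xy);
fragColor = vec4(texel.rgb * texel.a, texel.a);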
Anyways, I obviously don’t know enough, or have all the necessary information, to say exactly what’s going on in this shader, and I don’t have the vertex shader or your inputs either, but I hope some of my information helps you think a little more about what goes into making shaders, so you can practice writing lean, good code.
Also, you should check out ShaderToy if you haven’t yet. There are a lot of good demos on there. You might not learn much just by looking, but if you’re getting into this C-style code and enjoying playing around with what you’re doing right now, you should foster that interest, explore with it, and trash some of the demos on there (everything on their website is open to messing with).
I hope you learn something from what I had to offer. This is probably the most off-topic forum post I’ve participated in.