TexCoord postProcessing shader problem

Hello! I'm trying to implement a light effect on a sphere. The idea is that it appears to emit light in a volumetric way, in the direction of the normal vectors, but I don't know exactly how to do it.

This is my fragment shader. I implement a "radial" blur, and it works fine, but only when the camera is directly facing the sphere.

But when the camera is not facing it, this is what I get, which is not a very volumetric style:

#version 150
#define PROCESSING_TEXTURE_SHADER
#ifdef GL_ES
precision mediump float;
#endif
in vec4 TexCoord;
in vec3 Color;
uniform sampler2D texture;
uniform float u_time;
uniform float amt;
uniform float intensity;
uniform float x;
uniform float y;
uniform float noiseAmt;
uniform float u_time2;
uniform vec2 resolution;

out lowp vec4 fragColor;
vec4 finalColor;

 float strength = 1;

#define PI  3.14


float random(vec3 scale,float seed){return fract(sin(dot(gl_FragCoord.xyz+seed,scale))*43758.5453+seed);}

void main() {
vec2 uv = TexCoord.xy;
vec2 center = 0.5 * resolution;
vec2 center2 = 0.5 * resolution;

vec4 color = vec4(0);
float total = 0.0;
vec2 toCenter=center-uv*resolution;
vec2 toCenter2=center2-uv*resolution;

float offset=random(vec3(12.9898,108.233,151.7182),10.0);

for(float t=0.0;t<=40.0;t+=1){
  float percent=(t+offset/2)/40.0;
  float weight=40.0*(percent+percent*percent);
  vec4 sample=texture(texture,uv+toCenter2*percent*strength/resolution);
  vec4 sample2=texture(texture,uv+toCenter2*percent*strength/resolution);
  sample.rgb*=sample.a;


  color+=sample*weight;
  // color+=sample2*weight;
  total+=weight;
}

fragColor=color/total;
fragColor.rgb/=fragColor.a;
}

any ideas?

To the best of my ability, all I can tell you is that there's a lot wrong with your shader code.
I can see a lot of copy-pasted code from the internet, and errors as well. You seem on the right track as you're becoming comfortable with GLSL and C-style syntax, but asking for help on a task like this won't accomplish your main goal in a meaningful way.

This is likely not the best place to ask, not only because this forum is specifically about Processing as an isolated language, but also because there are probably not many people here using P3D specifically to write shaders.

If you have specific questions about shaders, you can reach out to a community that is focused on and centered around writing shaders. If you want a real leg up in learning shader basics, as well as more practical applications, I recommend finding tutorials suited for beginners who are comfortable with the syntax but not yet with its application, or a professional course on computer graphics. You can even watch some live-coding demos on Shadertoy.

Processing is very beginner-oriented and therefore unfortunately doesn't have a lot of community members who know much about shaders. I know enough about them to get people started and to create unique effects, but I'm just starting to learn the terminology and best practices, and I don't feel that any criticism I could give you about your code would be truly constructive. I don't think anyone in this forum could do so either without spending a full afternoon and a half with you.

I hope you take some of the links I posted and use them. They helped me a lot in the long run.

Thanks for the answer. Actually yes, it's a shader taken from the internet, but I'm learning from the beginning and I have lots of questions. What is wrong with my shader? Do you have some idea about what I want to do? I'll check all those links, but I would like an answer so I can keep learning.

I guess I can give you some advice. I'm not too great with shaders, so I'm having a hard time reading the more tangled parts of this, but there are a few things in that code that are pretty out of place.

uniform sampler2D texture;
uniform float u_time;
uniform float amt;
uniform float intensity;
uniform float x;
uniform float y;
uniform float noiseAmt;
uniform float u_time2;
uniform vec2 resolution;

First thing is, you’re bringing in a bunch of uniforms but you’re not using them all. I can see you using texture and resolution but nothing else in this list is in use.

Uniforms are sent in from the sketch (the CPU side) and hold the same value for every vertex and fragment, which means they need to be synchronized across all GPU cores and can cause unnecessary memory and scheduling overhead. (Values interpolated from the vertex shader arrive as `in` varyings instead; those are a different mechanism.) You should only bring in what you use. Just like how you should only take what you eat and you should eat what you take.

out lowp vec4 fragColor;
vec4 finalColor;

Secondly, you have two declared variables: one that you've marked as an output, and another (finalColor) that you're not using. I guess this is the "eat what you take" part. You also have your output set to lowp (low precision), which directly conflicts with the only precision declaration you make at the beginning of the program, precision mediump float;. It doesn't make much sense to calculate everything at medium precision just to turn around and drop it to low precision.

float strength = 1;

Thirdly, you have a variable here that you're not using. This is what we call a scalar: it can scale other values, but only once you actually multiply strength into something. For example, if you want to turn up the brightness you can write fragColor *= strength; and then adjust the value of strength to control the brightness of the full scene. fragColor *= strength multiplies each of the four components stored in fragColor by strength and stores the result back into fragColor.
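For instance, a minimal sketch of that idea, with strength promoted to a uniform (an assumption on my part) so the sketch can set it:

```glsl
// Hypothetical use of `strength` as a brightness scalar.
// The sketch would set it with shader.set("strength", 1.5);
uniform sampler2D texture;
uniform float strength;

in vec4 TexCoord;
out vec4 fragColor;

void main() {
  fragColor = texture(texture, TexCoord.xy);
  fragColor.rgb *= strength;  // >1 brightens, <1 darkens
}
```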

#define PI  3.14

Storing constants is cheap for the GPU. It needs to use all those bits to store the whole float anyways… Use #define PI 3.1415926535897932384626433832795 instead. Have a little fun :smile:

float random(vec3 scale,float seed){return fract(sin(dot(gl_FragCoord.xyz+seed,scale))*43758.5453+seed);}

I recognize this function from around the internet. It's a good idea to know and understand what you're using, at least at a rudimentary level. Later on in your code you call the random function, specifically on this line: float offset=random(vec3(12.9898,108.233,151.7182),10.0);.
First of all, I should explain that random numbers are really hard for computers to generate. Your computer typically gathers entropy from things like mouse movement and keyboard input, but GPU code is low level and doesn't have the luxury of snooping hardware like that. This random function is a workaround that generates pseudo-random values. To do that, it relies on a seed, and the seed only needs to vary in the way you want the output to vary. For example, if you want the random number to be different every frame, give it a seed based on time, since time is always changing. You're already taking in some uniforms, and at least one of them (u_time) is a time value; it looks like two of them are, actually.

As for what the function is doing right now: it takes the dot product of (gl_FragCoord.xyz plus the seed) with the scale vector, feeds that through sine, multiplies by a large constant, adds the seed, and keeps only the fractional part via fract(). I hope you know what sine and fract() do; I don't expect you to know what the dot product of two vec3s means yet, but you do seem to understand that putting some numbers into the random function gets a pseudo-random value back out. Do pay attention to how it's using gl_FragCoord.xyz.

You should know that the internals of this function return a value between 0 and 1 that is always tied to the location of the fragment you're shading. That means that from frame to frame, random() isn't really doing anything different. As configured, it only generates different numbers based on the position of what you're shading, not on anything else. Moving the objects in your scene may therefore cause offset to change in odd ways, so depending on what you're using offset for, you may get strange results.
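If you do want the values to change over time, one option (a sketch, untested against your setup) is to feed the u_time uniform you're already receiving in as the seed:

```glsl
// Same one-liner hash, but seeded with time: now each fragment's
// random value changes every frame instead of being frozen per position.
float random(vec3 scale, float seed) {
  return fract(sin(dot(gl_FragCoord.xyz + seed, scale)) * 43758.5453 + seed);
}

// ...inside main():
float offset = random(vec3(12.9898, 108.233, 151.7182), u_time);
```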

We haven’t even got to the main part of the code yet!

vec2 uv = TexCoord.xy;
vec2 center = 0.5 * resolution;
vec2 center2 = 0.5 * resolution;

I can only guess that some values are being set up here. uv is a bit redundant, since TexCoord.xy already is the UV location on the texture for the current fragment. It's nice that it's being named, but you should know that if you just want to output the texture color at the fragment's location, you would traditionally write texture(textureObjectInput, TexCoord.xy) to get it as a vec4.

I have no idea what center and center2 are mapped to, because resolution could mean anything: pixels per inch, pixel count, etc. When writing shaders it's better to think of screen space as ranging from -1 to 1 in both x and y, and to make everything resolution-agnostic (not caring about the resolution).
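A common resolution-agnostic setup looks something like this (a sketch, assuming resolution is the render-target size in pixels):

```glsl
// Map the fragment position to roughly -1..1 on both axes,
// with aspect correction so distances mean the same in x and y.
vec2 st = gl_FragCoord.xy / resolution * 2.0 - 1.0;
st.x *= resolution.x / resolution.y;
// Now "the center of the screen" is simply vec2(0.0).
```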

vec4 color = vec4(0);
float total = 0.0;

These two lines are:
  1. storing the color black
  2. storing the number 0

vec2 toCenter=center-uv*resolution;
vec2 toCenter2=center2-uv*resolution;

Once again I'm confused, for the same reason as above. It seems like you're getting a difference in pixels between the center of the screen and the texture-coordinate position of whatever you're drawing, as if it were superimposed onto the screen, but I don't actually know why that's there. It seems like just another way to calculate the offset from the screen center to the pixel location of the UV coordinate you're currently drawing. I'm not sure what this does for the shader.

Then we have the offset line, which just generates a random number per fragment that will never change unless the geometry in the scene changes.

for(float t=0.0;t<=40.0;t+=1){

oh no.

Keep in mind that loops in shaders are expensive. Whatever you run inside one has to run that many times for every fragment in the scene (which is NOT the same thing as every pixel). Anything inside a loop should stay simple and relatively bug-free, and ideally shouldn't need to be in the loop at all. A good example is the last line of the loop: total+=weight;. Since weight depends only on t and offset, total ends up being the sum of 40.0*(percent + percent*percent) over t = 0..40, with percent = (t + offset/2)/40.0. In other words, total is purely a function of offset, nothing more, and it could be computed once, outside this loop, if you crunch the numbers.
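To illustrate, weight is a polynomial in t and offset, so the whole sum has a closed form. With p = (t + offset/2)/40 summed for t = 0..40, and using sum(t) = 820 and sum(t*t) = 22140, a loop-free version of total is (my arithmetic, worth double-checking):

```glsl
// total = sum over t=0..40 of 40.0*(p + p*p), p = (t + c)/40.0, c = offset/2
float c = offset * 0.5;
float total = (820.0 + 41.0 * c)                            // 40 * sum(p)
            + (22140.0 + 1640.0 * c + 41.0 * c * c) / 40.0; // 40 * sum(p*p)
```

With offset = 0 this evaluates to 1373.5, the same value the original loop accumulates.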

vec4 sample=texture(texture,uv+toCenter2*percent*strength/resolution);

I'm getting pretty tired, so I'm gonna cut this a bit short and say I don't understand the variables toCenter2, percent, and resolution well enough to know exactly which UV coordinate you're going to end up sampling, but something tells me that your original issue lies around here. By the time you run this line, any semblance of meaning in these calculated values has been lost, which causes unexpected behavior. I'd also mention that floating-point error inside this loop may cause drift in the desired output.

vec4 sample2=texture(texture,uv+toCenter2*percent*strength/resolution);

sample2 is never used, because the line that would use it is commented out, yet the texture fetch still runs on every core for every fragment, weighing down the whole program. It's also computing exactly the same thing as sample, so instead of calculating it twice, it's better to store a copy of the already-calculated value if you intend to use it for a different purpose.

sample.rgb*=sample.a;

This is a pretty rough way to handle alpha, since you're also applying alpha at the end of the program, on the last line.

fragColor=color/total;
fragColor.rgb/=fragColor.a;

You should see that all the work in the loop comes down to averaging what you accumulated, divided by a number (total) that could have been calculated outside the loop. As for the last line: dividing by fragColor.a undoes the premultiplication you did earlier with sample.rgb*=sample.a;, converting the average back to straight (non-premultiplied) alpha.
That pattern is legitimate in itself, but the divide is dangerous when the accumulated alpha is near zero, and since fragColor.a is the result of a loop that runs 40 times with crazy amounts of floating-point arithmetic, it's astounding that it's doing its job in any meaningful way whatsoever.
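The usual premultiply/unpremultiply ending, with a guard so a near-zero alpha doesn't blow up the divide, might look like this (a sketch of the idea, not a drop-in fix):

```glsl
// Samples were accumulated with sample.rgb *= sample.a (premultiplied),
// so after averaging, divide by alpha to get straight alpha back,
// but only when alpha is safely non-zero.
vec4 avg = color / total;
if (avg.a > 1e-4) {
  avg.rgb /= avg.a;
}
fragColor = avg;
```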

Anyways, I obviously don't have all the necessary information to know what's going on in this shader (I don't have the vertex shader or your inputs either), but I hope some of this can help you think a little more about what goes into making shaders, so you can practice writing lean, good code.

Also, you should check out Shadertoy if you haven't yet. There are a lot of good demos on there. You might not learn much just watching, but if you're enjoying playing around with shader code like you are right now, you should foster that interest: explore and remix some of the demos (everything on their website is open to messing with).

I hope you learn something from what I had to offer. This is probably the most off topic forum post I’ve participated in.


Thanks, your information is very helpful!

But what about the original question? Do you have any clue to offer?

shad-sphericallight2

I want something like this, in 3D, using a fragment shader, with no lights();.

My doubt is: how can I implement this 3D light style with 2D texture coordinates? Is it possible?

Can't help ya. I've gotta be honest, I can't really tell what's happening in this picture other than a point light lighting up a sphere.
I might also add that all of the light-related functionality in the Lights section of the reference, e.g.
ambientLight()
directionalLight()
lightFalloff()
lights()
lightSpecular()
noLights()
normal()
pointLight()
spotLight()
are all functions based on Processing's built-in P3D shader, and they are completely overridden and ignored when you use your own shader. All of these things (ambient, directional, falloff, specular, point and spot lights) are implementation-dependent: when making your own shader, you choose what to implement and what not to.

This is what I'm trying to do:

Notice what happens when you change the position of the sphere.
Here is my attempt. I use the same shader, but I can't get the same result:

import controlP5.*;

ControlP5 cp5;
import peasy.*;

PGraphics canvas;

PGraphics verticalBlurPass;

PShader blurFilter;
PeasyCam cam;


void setup()
{
  size(1000, 1000, P3D);
  // size() must come first in setup(); create the camera after it
  cam = new PeasyCam(this, 1400);

  canvas = createGraphics(width, height, P3D);

  verticalBlurPass = createGraphics(width, height, P3D);
  verticalBlurPass.noSmooth();

  blurFilter = loadShader("bloomFrag.glsl", "blurVert.glsl");
}

void draw(){
  background(0);

  canvas.beginDraw();
  render(canvas);
  canvas.endDraw();

  // blur vertical pass
  verticalBlurPass.beginDraw();
  verticalBlurPass.shader(blurFilter);
  verticalBlurPass.image(canvas, 0, 0);
  //render(verticalBlurPass);
  verticalBlurPass.endDraw();

  cam.beginHUD();

  image(verticalBlurPass, 0, 0);

  cam.endHUD();
  
println(frameRate);
}

void render(PGraphics pg)
{
  cam.getState().apply(pg);

  pg.background(0, 50);

  // draw into pg consistently (the original mixed pg.* and canvas.* calls)
  pg.noStroke();
  pg.fill(255);
  pg.sphere(100);
}
#version 150

in vec4 position;
in vec3 normal;
in vec2 texCoord;
in vec3 color;

uniform mat4 transform;
uniform mat4 texMatrix;
uniform float u_time;

out vec4 TexCoord;
out vec3 Color;

void main() {
  Color = color;
  TexCoord = texMatrix * vec4(texCoord, 1.0, 1.0);
  gl_Position = transform * position;
}
#version 150
#define PROCESSING_TEXTURE_SHADER
#ifdef GL_ES
precision mediump float;
#endif
in vec4 TexCoord;
in vec3 Color;
uniform sampler2D texture;
uniform float u_time;
uniform float amt;
uniform float intensity;
uniform float x;
uniform float y;
uniform float noiseAmt;
uniform float u_time2;
out mediump vec4 fragColor;
vec4 finalColor;
#define PI  3.14


uniform vec2 lightPosition = vec2(0,0);
uniform float exposure = 0.09;
uniform float decay = .95;
uniform float density = 1.0;

uniform float weight = 1;
uniform int samples = 100;
const int MAX_SAMPLES = 100;


uniform vec2 resolution; // screen resolution

void main(){

  vec2 texCoord = TexCoord.xy;
  // Calculate vector from pixel to light source in screen space
  vec2 deltaTextCoord = texCoord - lightPosition;
  // Divide by number of samples and scale by control factor
  deltaTextCoord *= 1.0 / float(samples) * density;
  // Store initial sample
  vec4 color = texture(texture, texCoord);
  // set up illumination decay factor
  float illuminationDecay = 1.0;

  // evaluate the summation for samples number of iterations up to 100
  for(int i=0; i < MAX_SAMPLES; i++){
    // work around for dynamic number of loop iterations
    if(i == samples){
      break;
    }

    // step sample location along ray
    texCoord -= deltaTextCoord;
    // retrieve sample at new location
    vec4 color2 = texture(texture, texCoord);
    // apply sample attenuation scale/decay factors
    color2 *= illuminationDecay * weight;
    // accumulate combined color
    color += color2;
    // update exponential decay factor
    illuminationDecay *= decay;

  }
  // output final color with a further scale control factor
  fragColor = color * exposure;
}

What am I doing wrong? Why is the radial effect always projected toward a single point instead of being volumetric? Maybe @hamoid @cansik have any ideas?


I'm sorry, but I have no time to test your code at the moment. I know it's not exactly what you are trying to do, but what if you pass the light position to the fragment shader and then simulate a 2D glow by increasing the RGB intensities close to the light position? It would not be volumetric light, but it can be good enough in some cases.
Otherwise, maybe https://discourse.threejs.org/tags/shaders has more experts in this area?
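For what it's worth, that screen-space glow idea might look roughly like this (untested; lightPos is an assumed uniform holding the light's projected position in 0..1 screen coordinates, set from the sketch with shader.set()):

```glsl
// Brighten pixels near the projected light position.
uniform sampler2D texture;
uniform vec2 lightPos;   // assumed: light position in 0..1 screen space

in vec4 TexCoord;
out vec4 fragColor;

void main() {
  vec4 col = texture(texture, TexCoord.xy);
  float d = distance(TexCoord.xy, lightPos);
  float glow = exp(-8.0 * d);              // tune the 8.0 for falloff
  fragColor = vec4(col.rgb + vec3(glow), col.a);
}
```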

Thanks for the answer. I've been studying and I'm a little closer to the answer, but without results yet. It's not a problem with the type of effect, because with any other texture (feedback, for example), when I want to calculate the "radial" effect for each object, the texture does the calculations but always with the same center: the center of the screen. So I need to transform those coordinates for each object, and I don't know exactly how. If I have an object at (-100, 0, 0), I need to treat that point as if it were the center of the screen.

In these images you will see how the center is always the same (the center of the screen) and it doesn't move when the sphere moves.

But maybe I'm also wrong... and what about when I have multiple objects? Wouldn't I need a separate FBO for each one?

I don't understand why this shader gives this result with the same sketch. The blur shader is taken from this page:

https://www.airtightinteractive.com/demos/ribbons/

#define PROCESSING_TEXTURE_SHADER

uniform sampler2D texture;

uniform float glowSize = 10;
uniform float glowAmount = 2;
uniform vec2 resolution;

uniform float vigOffset = 0;
uniform float vigDarkness = 0;

uniform float brightness = 0;
uniform float contrast = 0.4;

uniform float hue = 0;
uniform float saturation = 0;

uniform float rgbShiftAmount;

varying vec4 vertTexCoord;
varying vec4 vertColor;

vec4 vUv;

const float rgbAngle = 0.1;

		void main() {

vUv = vertTexCoord;

			float h = glowSize / resolution.x;
			float v = glowSize / resolution.y;

			vec4 sum = vec4( 0.0 );

			//H Blur
			sum += texture2D( texture, vec2( vUv.x - 4.0 * h, vUv.y ) ) * 0.051;
			sum += texture2D( texture, vec2( vUv.x - 3.0 * h, vUv.y ) ) * 0.0918;
			sum += texture2D( texture, vec2( vUv.x - 2.0 * h, vUv.y ) ) * 0.12245;
			sum += texture2D( texture, vec2( vUv.x - 1.0 * h, vUv.y ) ) * 0.1531;
			sum += texture2D( texture, vec2( vUv.x, vUv.y ) ) * 0.1633;
			sum += texture2D( texture, vec2( vUv.x + 1.0 * h, vUv.y ) ) * 0.1531;
			sum += texture2D( texture, vec2( vUv.x + 2.0 * h, vUv.y ) ) * 0.12245;
			sum += texture2D( texture, vec2( vUv.x + 3.0 * h, vUv.y ) ) * 0.0918;
			sum += texture2D( texture, vec2( vUv.x + 4.0 * h, vUv.y ) ) * 0.051;

			//V Blur
			sum += texture2D( texture, vec2( vUv.x, vUv.y - 4.0 * v ) ) * 0.051;
			sum += texture2D( texture, vec2( vUv.x, vUv.y - 3.0 * v ) ) * 0.0918;
			sum += texture2D( texture, vec2( vUv.x, vUv.y - 2.0 * v ) ) * 0.12245;
			sum += texture2D( texture, vec2( vUv.x, vUv.y - 1.0 * v ) ) * 0.1531;
			sum += texture2D( texture, vec2( vUv.x, vUv.y ) ) * 0.1633;
			sum += texture2D( texture, vec2( vUv.x, vUv.y + 1.0 * v ) ) * 0.1531;
			sum += texture2D( texture, vec2( vUv.x, vUv.y + 2.0 * v ) ) * 0.12245;
			sum += texture2D( texture, vec2( vUv.x, vUv.y + 3.0 * v ) ) * 0.0918;
			sum += texture2D( texture, vec2( vUv.x, vUv.y + 4.0 * v ) ) * 0.051;

			//orig color
			vec4 col = texture2D( texture, vUv.xy );

		

			//Add Glow
			col = min(col + sum * glowAmount, 1.0);


			gl_FragColor = col;

		}

Am I doing something terribly wrong? I even deleted the vertex shader to see if there was any problem there, but no, it seems very similar to the previous comment. Why does the effect change when the camera changes distance?