[SOLVED] PGraphics or PImage as Uniform not working when using GLSL in Java OpenGL

Hello everyone,

I keep exploring the field of shaders in Processing, even though I am facing many problems.

My latest problem: when I try to pass a texture (for example a PGraphics or PImage) as a parameter from Processing to the shader, it keeps saying that the uniform was removed during compilation.

To pass the texture from the CPU to the GPU (Processing to the shader) I do it this way:

shader.set("texture", myTexture);

Intuitively this should work, but it keeps giving me the error:

The shader doesn't have a uniform called "texture" OR the uniform was removed during compilation because it was unused.

If I try to access the texture via the sampler2D uniform, the sketch crashes.

Here is the full code:

// Import PeasyCam, for easy camera controls
import peasy.*;
PeasyCam cam;

// 2D arrays for points and to keep track of noise, so we don't calculate it every time
PVector[][] points;
PVector[][] noise;

// Radius of the sphere
float radius = 200;

// Total number of points
// 2D array means 250*250 = 62,500 points in total
int total = 250;

// This creates the animation for the sphere
float updateNoise = 0;

// offscreen render to use as texture in the shader
PGraphics txt;

void settings() {
  size(1280, 720, P3D);
}

void setup() {
  // Maximum speed, for testing purpose
  frameRate(1000);
  
  txt = createGraphics(1024, 1024);
  txt.beginDraw();
  txt.background(100);
  txt.endDraw();
  
  // Initialize the shader, check the shader tab
  initShader();

  // Disable the depth test to not have weird shading on the colors
  hint(DISABLE_DEPTH_TEST);

  // field of view and perspective of the camera
  float fov = PI/3.0; 
  float cameraZ = (height/2.0) / tan(fov/2.0);
  perspective(fov, float(width)/float(height), 
    cameraZ/10.0, cameraZ*10000.0);

  // Initialize the points and noise arrays
  points = new PVector[total][total];
  noise = new PVector[total][total];

  // Initialize the camera
  cam = new PeasyCam(this, 500);

  // Variable for the 2D noise, x coordinate
  float nx = 0;

  for (int i = 0; i < total; i++) {

    // Calculate the latitude
    float lat = map(i, 0, total-1, 0, PI);

    // Second variable for the noise
    float ny = 0;

    for (int j = 0; j < total; j++) {

      // Longitude
      float lon = map(j, 0, total-1, 0, TWO_PI);

      // Radius with noise applied
      float r = radius * noise(nx, ny);

      // Spherical coordinates
      float x = r * sin(lat) * cos(lon);
      float y = r * sin(lat) * sin(lon);
      float z = r * cos(lat);

      points[i][j] = new PVector(x, y, z);
      noise[i][j] = new PVector(nx, ny);

      ny += 0.01;
    }

    nx += 0.02;
  }
}

void draw() {
  background(0);

  update();        // Update the coordinates
  updateShader();  // Update the buffer in the openGL shader
  showShader();    // Display the shader
}

void update() {

  // TODO: optimize the update method
  
  for (int i = 0; i < total; i++) {

    float lat = map(i, 0, total-1, 0, PI);  // total-1, to match the mapping in setup()

    for (int j = 0; j < total; j++) {

      float lon = map(j, 0, total-1, 0, TWO_PI);

      float r = radius * noise(noise[i][j].x + updateNoise, noise[i][j].y + updateNoise);

      points[i][j].x = r * sin(lat) * cos(lon);
      points[i][j].y = r * sin(lat) * sin(lon);
      points[i][j].z = r * cos(lat);
    }
  }

  updateNoise += 0.01;
}
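The lat/lon-to-cartesian mapping used in setup() and update() can be sketched in plain Java, outside Processing. `sphericalPoint` is a hypothetical helper, not part of the sketch above:

```java
// Plain-Java sketch of the spherical-coordinate mapping used in the sketch.
public class SphereMapping {
    // Converts latitude in [0, PI] and longitude in [0, TWO_PI] at radius r
    // to cartesian coordinates, matching the formulas in setup().
    static double[] sphericalPoint(double r, double lat, double lon) {
        double x = r * Math.sin(lat) * Math.cos(lon);
        double y = r * Math.sin(lat) * Math.sin(lon);
        double z = r * Math.cos(lat);
        return new double[]{x, y, z};
    }

    public static void main(String[] args) {
        // On the equator (lat = PI/2) at lon = 0 the point lies on the +x axis.
        double[] p = sphericalPoint(200, Math.PI / 2, 0);
        System.out.printf("%.3f %.3f %.3f%n", p[0], p[1], p[2]);
    }
}
```

With noise() modulating r per point, each vertex ends up at a different distance from the center, which is what deforms the sphere.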

OpenGL:

// Import NIO from Java, for I/O operations and to exchange data between the Processing sketch (Java) and the GLSL shader
import java.nio.*;
import com.jogamp.opengl.*;

// Declare shader variables
float[] shaderPoints;              // The points to share with openGL
int  vertLoc;                      // Keeps the vertex location for the buffer
PGL     pgl;                       // PGL, Processing openGL
PShader sh;                        // Vertex and Fragment shader
FloatBuffer pointCloudBuffer;      // FloatBuffer of the openGL implementation
ByteBuffer byteBuf;                // ByteBuffer to share floats between Processing sketch and OpenGL
int vertexVboId;                   // Vertex Buffer Object ID

void initShader() {
  // Load the vertex and fragment shader
  sh = loadShader("frag.glsl", "vert.glsl");

  // OpenGL has no notion of PVector or other multi-value variables;
  // it will take every 3 consecutive values as the x, y and z of one vertex.
  shaderPoints = new float[total*total*3];

  // Start new PGL
  pgl = beginPGL();
  // Declare an int buffer, it will identify our vertex buffer
  IntBuffer intBuffer = IntBuffer.allocate(1);
  // Generate 1 buffer, put the resulting identifier in vertexbuffer
  pgl.genBuffers(1, intBuffer);
  // Get the ID from the int buffer
  vertexVboId = intBuffer.get(0);
  // Allocate the float array in a byte buffer
  byteBuf = ByteBuffer.allocateDirect(shaderPoints.length * Float.BYTES); //4 bytes per float
  // End the PGL
  endPGL();
}

void updateShader() {

  // This will update the values in our shader by sharing the buffer from Processing to OpenGL.
  // The float array holds the coordinates of the vertices: each value is one of the x, y or z
  // coordinates of a vertex. The index keeps track of the current position in the array
  int index = 0;

  for (int i = 0; i < total; i++) {
    for (int j = 0; j < total; j++) {
      shaderPoints[index+0] = points[i][j].x;  // X
      shaderPoints[index+1] = points[i][j].y;  // Y
      shaderPoints[index+2] = points[i][j].z;  // Z
      index += 3;                              // index increments by 3
    }
  }

  // Allocate the float array in a byte buffer
  // This has been moved to the Shader setup because it was draining the memory, thanks to Neil Smith for the fix
  // byteBuf = ByteBuffer.allocateDirect(shaderPoints.length * Float.BYTES); //4 bytes per float
  // Order the byte buffer
  byteBuf.order(ByteOrder.nativeOrder());
  // Converts the byte buffer in a float buffer
  pointCloudBuffer = byteBuf.asFloatBuffer();
  // Put the values in the float buffer
  pointCloudBuffer.put(shaderPoints);
  // Set the position to 0, starting point of the buffer
  pointCloudBuffer.position(0);
}

void showShader() {
  sh.set("txtr", txt);
  // Begin PGL
  pgl = beginPGL();
  
  // Get the GL Context and enable gl_PointSize
  GLContext.getCurrentGL().getGL3().glEnable(GL3.GL_PROGRAM_POINT_SIZE);
  
  // Bind the Shader
  sh.bind();
  // Set the vertex location, from the shader
  vertLoc = pgl.getAttribLocation(sh.glProgram, "vertex");

  // Enable the generic vertex attribute array specified by vertLoc
  pgl.enableVertexAttribArray(vertLoc);
  // Number of floats in the buffer, 3 per vertex
  int vertData = shaderPoints.length;

  // Binds the buffer object
  pgl.bindBuffer(PGL.ARRAY_BUFFER, vertexVboId);
  // Give our vertices to OpenGL.
  pgl.bufferData(PGL.ARRAY_BUFFER, Float.BYTES * vertData, pointCloudBuffer, PGL.DYNAMIC_DRAW);
  pgl.vertexAttribPointer(vertLoc, // Vertex location, must match the layout in the shader
    3,               // Size, 3 values (x, y, z) for each vertex
    PGL.FLOAT,       // Type of the values in the buffer
    false,           // Normalized?
    Float.BYTES * 3, // Stride, byte distance between consecutive vertices
    0                // Offset of the first vertex in the buffer
    );

  // Here I try to enable gl_PointSize
  //pgl.Enable( PGL.VERTEX_POINT_SIZE );

  // The following commands will talk about our 'vertexbuffer' buffer
  pgl.bindBuffer(PGL.ARRAY_BUFFER, 0);
  // Draw the sphere
  pgl.drawArrays(PGL.POINTS, // Type of draw, in this case POINTS
    0,            // Starting from vertex 0
    vertData / 3  // Number of vertices to draw (3 floats per vertex)
    );
  // Disable the generic vertex attribute array specified by vertLoc
  pgl.disableVertexAttribArray(vertLoc);
  // Unbind the vertex
  sh.unbind();
  // End the PGL
  endPGL();
}
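The flattening and buffer-packing done in updateShader() can be tested in plain Java. This is a minimal sketch under the assumption that each grid cell is a `{x, y, z}` triple; `pack` is a hypothetical helper, not part of the sketch above:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

// Sketch of the interleaving in updateShader(): a grid of (x, y, z) triples is
// flattened into one float array and wrapped in a direct, native-ordered
// FloatBuffer, which is the layout OpenGL expects from bufferData().
public class PackPoints {
    static FloatBuffer pack(float[][][] grid) {
        int rows = grid.length, cols = grid[0].length;
        float[] flat = new float[rows * cols * 3];
        int index = 0;
        for (int i = 0; i < rows; i++) {
            for (int j = 0; j < cols; j++) {
                flat[index]     = grid[i][j][0]; // X
                flat[index + 1] = grid[i][j][1]; // Y
                flat[index + 2] = grid[i][j][2]; // Z
                index += 3;
            }
        }
        FloatBuffer buf = ByteBuffer
            .allocateDirect(flat.length * Float.BYTES) // 4 bytes per float
            .order(ByteOrder.nativeOrder())
            .asFloatBuffer();
        buf.put(flat);
        buf.position(0); // rewind so OpenGL reads from the start
        return buf;
    }

    public static void main(String[] args) {
        float[][][] grid = {
            { {1, 2, 3}, {4, 5, 6} },
            { {7, 8, 9}, {10, 11, 12} }
        };
        FloatBuffer buf = pack(grid);
        System.out.println(buf.get(3)); // x of the second point
    }
}
```

Note the `position(0)` at the end: forgetting to rewind is a classic cause of OpenGL reading nothing (or garbage) from the buffer.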

Vertex Shader:

uniform mat4 transform;
uniform sampler2D txtr;

attribute vec4 vertex;
attribute vec4 color;

varying vec4 vertColor;

void main() {
  gl_Position = transform * vertex;  
  float st = texture2D(txt_in, vec2(0.5, 0.5)).r;
  gl_PointSize = st * 10.0;  
  vertColor = color;
}

Fragment Shader:

#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif

varying vec4 vertColor;

void main() {
//outputColor
gl_FragColor = vec4(vertColor.xyzw);
}

These two lines use different variable names.

uniform sampler2D txtr;
float st = texture2D(txt_in, vec2(0.5, 0.5)).r;

Also, txt_in is not declared, right? Shouldn't it fail to compile?


My mistake, thanks.

The problem was not solved anyway.

Could not run the sketch.

The shader doesn't have a uniform called "txtr" OR the uniform was removed during compilation because it was unused.
Could not run the sketch (Target VM failed to initialize).
For more information, read revisions.txt and Help > Troubleshooting.

new Vertex Shader:

uniform mat4 transform;
uniform sampler2D txtr;

attribute vec4 vertex;
attribute vec4 color;

varying vec4 vertColor;

void main() {
  gl_Position = transform * vertex;  
  float st = texture2D(txtr, vec2(0.5, 0.5)).r;
  gl_PointSize = st * 10.0;  
  vertColor = color;
}

I tried running the program but it gives me an NPE at sh.bind().

Anyway: your texture is probably not on the GPU, since it's not of type P2D or P3D. Maybe that helps?

Hi @hamoid thank you, I tried both P2D and P3D as renderer in the PGraphics but still I receive the same error.

I also tried with both jpg and png images with PImage, same result. It looks like, since the sampler2D uniform is removed during compilation, when the sketch tries to set the texture it doesn't find the uniform anymore.

Does adding a version directive to the shader help? Otherwise Processing will rewrite shaders to match the OpenGL version in use. Perhaps something is failing there. It should work - I use uniform textures a fair bit.

This example works for me:

PShader fx;
PGraphics tex;
void setup() { 
  size(300, 300, P3D);
  fx = loadShader("tex.frag", "tex.vert"); 
  tex = createGraphics(200, 200, P2D);
  tex.beginDraw();
  for(int i=0; i<20; i++) {
    tex.fill(random(255), 0, random(255));
    tex.rect(random(200), random(200), 20, 200);
  }
  tex.endDraw();
   
  textureWrap(REPEAT); 
  noStroke();
  shader(fx);
  fx.set("tex1", tex);
}
void draw() {
  background(80, 100, 200);
  ellipse(mouseX, mouseY, 300, 300);
}
// frag
#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif

void main() {
  gl_FragColor = vec4(1.0);
}
// vert
uniform mat4 transformMatrix;
attribute vec4 position;

uniform sampler2D tex1;

void main() {
  vec4 c = texture2D(tex1, position.xy / 300.0);
  vec4 pos = position;
  pos.x += c.r * 20.0;
  gl_Position = transformMatrix * pos;
}

[Screenshot from 2019-08-22 10-48-19]


@kesson I get that too. Seems to be caused by setting a texture uniform prior to first bind. At least, doing this gets it working for me.

// Load the vertex and fragment shader
  sh = loadShader("frag.glsl", "vert.glsl");
  sh.bind(); sh.unbind();

Thanks for the replies guys!

@hamoid that code works for me as well. The problem is when I use textures with Java OpenGL and GLSL shaders.

@neilcsmith I tried to put the texture after binding the shader, but the sketch keeps crashing.

pgl = beginPGL();
sh.bind();
sh.set("txtr", img);  

// all the opengl stuff

sh.unbind();
endPGL();

In all of this, what I want is to work with GPGPU. So the first step is to get a texture working in shaders, so I can drive animations from textures passed from the CPU to the GPU.

Example: I have 1024 particles. I want to pass a 32x32 (= 1024 texels) texture to the vertex shader so that each particle's color (or size) corresponds to the color of a specific coordinate in the texture.

For example:

// I am omitting some parts of code
uniform sampler2D texture;
varying vec4 vColor;

void main() {
  vec3 st = texture2D(texture, uv).rgb;
  gl_PointSize = st.r * 10.0;
  vColor = vec4(st.r, st.g, st.b, 1.0);
}
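The 32x32-texture-to-1024-particles mapping above needs each particle to know which texel is "its own". One way is to compute the uv coordinate on the CPU from the particle index and pass it as a vertex attribute. A minimal sketch; `indexToUV` is a hypothetical helper, not part of the code above:

```java
// Maps a particle index (0..texSize*texSize-1) to the center of its texel,
// so each particle samples its own pixel of a texSize x texSize texture.
public class ParticleUV {
    static float[] indexToUV(int index, int texSize) {
        // +0.5 samples texel centers, avoiding bleeding between neighbors
        float u = (index % texSize + 0.5f) / texSize;
        float v = (index / texSize + 0.5f) / texSize;
        return new float[]{u, v};
    }

    public static void main(String[] args) {
        float[] first = indexToUV(0, 32);    // first particle, first texel
        float[] last  = indexToUV(1023, 32); // last particle, last texel
        System.out.println(first[0] + "," + first[1] + " / " + last[0] + "," + last[1]);
    }
}
```

In the shader, the fixed `vec2(0.5, 0.5)` lookup would then be replaced by this per-particle uv attribute.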

Maybe I am using the wrong approach, or perhaps it works differently in the OpenGL realm (I am porting some code I wrote from WebGL to OpenGL in Processing).

**SMALL UPDATE**

If I pass the sampler2D to the fragment shader it works like a charm. The problem is that in that case I can only act on the last stage (rasterization). AFAIK, in the fragment shader we cannot change the vertex position or point size.

New fragment shader:

uniform sampler2D txtr;

void main() {
    vec2 uv = vec2(0.5, 0.5);  // uv coordinate on the texture, ideally an attribute
    vec3 s = texture2D(txtr, uv).rgb;  // get the color of that pixel
    gl_FragColor = vec4(s.r, s.g, s.b, 1.0);  // apply the color to the particle
}

I have to assume that in Processing OpenGL I cannot pass textures to the vertex shader?

UPDATE - PROBLEM SOLVED

It looks like, in order to pass a texture to the vertex shader, you also need to declare it in the fragment shader, something I had never considered.

New Vertex Shader:

uniform mat4 transform;

attribute vec4 vertex;

uniform sampler2D txtr;

void main() {
    vec2 c = vec2(0.5, 0.5);  // uv coordinate on the texture, ideally an attribute
    float st = texture2D(txtr, c).r;  // get the color of that pixel
    gl_Position = transform * vertex;  // apply the transformation
    gl_PointSize = st * 10.0;    // point size according to the red channel of the specific pixel
}

New Fragment Shader:

varying vec4 vertColor;

uniform sampler2D txtr;

void main() {
    vec2 c = vec2(0.5, 0.5);  // uv coordinate on the texture, ideally an attribute
    vec3 s = texture2D(txtr, c).rgb;  // get the color of that pixel
    gl_FragColor = vec4(s.r, s.g, s.b, 1.0);   // apply the color to the particle
}

Note that it’s calling bind() that sets up the textures, hence suggesting a bind / unbind in the initialization.

That’s weird! Seems to be working fine here without doing that.

@neilcsmith I was testing passing the texture each frame, not just at the beginning.

Anyway, the solution is indeed weird; it works normally in WebGL. Could it be that different OS / GPU combinations behave differently, given that it works in your case?

Yes, and I still wonder whether the preprocessor in Processing that rewrites shaders is coming into play here - it’s different for vertex and fragment shaders. GLSL appears even more than Java to be a case of write once, test everywhere! :confounded:

Which OSes, GPUs and Processing versions are we talking about? I remember shaders that would work fine in Linux but required changes on a Mac (maybe because of different GPUs).

@kesson If you work with shaders… do you know about DISABLE_OPTIMIZED_STROKE? It got me very confused before I discovered it.


It’s a mix - I guess depending on the GPU driver. I’ve had shaders that won’t work on different machines with the same GPU (yes, Mac often!), and IIRC on the same machine and GPU with different drivers, but also on the same OS / machine with different GPUs - e.g. Intel stricter than nVidia on a laptop with PRIME.

Thanks for that link - literally looking into hints now, as I’m updating some missing hint mapping in PraxisLIVE!


I didn’t test on a Mac; I am on a Windows machine with an NVidia GPU at the moment.

@neilcsmith I realized that GLSL is like Java (write once, test everywhere). But I think it is something related to Processing itself, since the same identical shader compiles in WebGL on any OS/GPU combination. Might it be how Three.js handles the compilation? Or does the compiler act differently in that case?

@hamoid I replied in the other topic (actually there was documentation, though I’m not sure how recent it is).