Can't pass a 3D integer array into a GLSL shader

I have been working on a shader project using the Processing Java Library. I need to be able to use PShader.set() to pass a 3D array of integers into my GLSL shader.

I tried all of the set() methods they provide, and none work. It seems Processing only supports sending 1D arrays. I cannot find any way of sending a 3D array as a uniform into my GLSL shader.

My only other idea is that maybe I can access the lower-level JOGL class and update my uniform variable there.

Also, is there a way I can update just a few of the items in the array instead of having to pass the entire array to the shader over and over again? The data I am passing as a uniform is thousands of elements in size, so keeping and resending the entire thing takes up a lot of RAM and CPU.

Any help?

If you want to stick with Processing’s calls, you could put your data in an image, pass it to the shader as a texture, do the 3D coordinate index math yourself, and read out the values with texelFetch().
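If it helps to see that coordinate index math outside of GLSL first, here is a plain-Java sketch of the same mapping (the helper name `volumeToTexel` and parameter names are made up for illustration): flatten (i, j, k) into a 1D index, then wrap that index into rows of the texture's width.

```java
// Hypothetical helper illustrating the index math: flatten a 3D
// coordinate (i, j, k) in an nx-by-ny-by-nz volume into a 1D index,
// then wrap it into (x, y) texel coordinates for a texture texW wide.
static int[] volumeToTexel(int i, int j, int k, int nx, int ny, int texW) {
  int idx = (k * ny + j) * nx + i;   // 3D -> 1D
  int y = idx / texW;                // which row of the texture
  int x = idx % texW;                // which column of the texture
  return new int[] { x, y };
}
```

As long as the CPU side fills the pixels with this same flattening, the shader's `texelFetch()` lookup and the upload agree on where each voxel lives.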

I am fine sending data via JOGL; I just need to know how to do it.

Also, is it possible to update just a piece of a uniform? I only need to change small parts of it, and the fact that it holds thousands of values makes it computationally intensive when I just want to update a few of them.

If you’re willing to use direct OpenGL calls, then there are (too) many different buffering mechanisms you can try and at least some of them support sub-buffer updating. I haven’t used any of them other than textures and vertex buffers, though.
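To give a sense of what sub-buffer updating buys you, here is a hedged sketch (the helper name and the buffer-target choice are mine, not Processing's): compute the byte range that actually changed in a flat int buffer, and hand only that slice to `glBufferSubData` instead of re-uploading everything.

```java
// Hypothetical helper: given the first and last changed element in a
// flat int buffer, compute the byte offset and byte size of the slice
// that needs re-uploading.
static long[] changedByteRange(int firstChanged, int lastChanged) {
  long offset = (long) firstChanged * Integer.BYTES;
  long size = (long) (lastChanged - firstChanged + 1) * Integer.BYTES;
  return new long[] { offset, size };
}
// With a buffer bound, the upload would then look roughly like:
//   gl.glBufferSubData(GL4.GL_UNIFORM_BUFFER, offset, size, slice);
```

The same idea applies to textures: `glTexSubImage2D` lets you overwrite just the texels that changed rather than the whole image.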

If you search, you can find threads where people offer suggestions and links to talks or papers about different buffering mechanisms.

My main point is to not worry about the 3D format of your data. Just use a 1D buffer of data and do the 3D mapping math in your shader.

Ok. That method sounds best. However, I am very new to the world of OpenGL. Could you please provide an applicable code example?

You haven’t said what you’re doing with your 3D volume of data, so I just contrived this example for my own amusement and edification.

This code creates a 144x144x144 volume of integer data with noise() scaled up to 1,000,000 and stores it in a texture. The fragment shader shows xy-plane slices of the volume looping from bottom to top. The fetchData() function converts the 3D ijk coordinates into 2D texture coordinates, fetches the data as ARGB, and converts it into an integer.

PShader shdr;
PImage dataImg;

void setup() {
  size( 900, 900, P2D );
  frameRate( 30 );
  shdr = new PShader( g.parent, vertSrc, fragSrc );
  shdr.set( "resolution", float(width), float(height) );
  int nDatai = 144, nDataj = 144, nDatak = 144;
  shdr.set( "nData", nDatai, nDataj, nDatak );

  // pack the 144^3 volume into a 2048x2048 image, one int per pixel
  dataImg = createImage( 2048, 2048, ARGB );
  dataImg.loadPixels();
  float ns = 0.05;
  for( int k=0; k<nDatak; k++ )
    for( int j=0; j<nDataj; j++ )
      for( int i=0; i<nDatai; i++ ) {
        int idx = (k*nDataj+j)*nDatai+i;
        dataImg.pixels[idx] = int( noise( i*ns, j*ns, k*ns ) * 1000000 );
      }
  dataImg.updatePixels();
  shdr.set( "dataImg", dataImg );
}

void draw() {
  shdr.set( "time", (frameCount/144.) % 1. );
  shader( shdr );
  rect( 0, 0, width, height );
}

String[] vertSrc = { """
#version 330
precision highp float;
precision highp int;

uniform mat4 transformMatrix;
in vec4 position;
void main() {
  gl_Position = transformMatrix * position;
}
""" };

String[] fragSrc = { """
#version 330
precision highp float;
uniform vec2 resolution;
uniform float time;
uniform sampler2D dataImg;
uniform ivec3 nData;

out vec4 fragColor;

int fetchData( ivec3 ijk ) {
  ivec2 imgSize = textureSize( dataImg, 0 );
  int idx = (ijk.z*nData.y+ijk.y)*nData.x+ijk.x;
  int j = idx / imgSize.x;
  int i = idx - j * imgSize.x;
  ivec4 v = ivec4( texelFetch( dataImg, ivec2( i, j ), 0 ) * 255.0 );
  // reassemble the ARGB bytes into the original integer
  return ((v.a*256+v.r)*256+v.g)*256+v.b;
}

vec3 draw( vec2 p ) {
  vec3 col = vec3( 0. );
  ivec3 ijk = min( ivec3( vec3( (p+vec2(1.))*0.5, time ) * vec3( nData ) ),
                   nData - 1 );
  int d = fetchData( ijk );
  if( d > 400000 )
    col = vec3( float(d)/1000000. );
  return col;
}

void main() {
  int ss = 2;  // 2x2 supersampling
  vec3 tot = vec3( 0. );
  for( int ssi=0; ssi<ss; ssi++ )
    for( int ssj=0; ssj<ss; ssj++ ) {
      vec2 of = vec2( ssi, ssj ) / float( ss ) - 0.5;
      vec2 uv = (2.*(gl_FragCoord.xy+of)-resolution)*vec2(1,-1)/resolution.y;
      tot += draw( uv );
    }
  tot /= float( ss*ss );

  fragColor = vec4( clamp( tot, 0., 1. ), 1.0 );
}
""" };
You haven’t said what you’re doing with your 3D volume of data

I am making a Minecraft-style voxel game. Each element of the 3D array represents a light value (0-15) for that voxel (-1 if the voxel is opaque). Since the shader is applied to individual chunks, I need to send all the light values of the loaded chunks to the shader as if they were connected.
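Since each light value only needs the range -1 to 15, it may be worth packing one voxel per byte rather than one per 32-bit pixel. This is a sketch under my own encoding choice (shift the range up by one so the opaque sentinel fits in an unsigned byte), not anything Processing dictates:

```java
// Sketch of a compact per-voxel encoding: -1 (opaque) .. 15 (max light)
// shifted into 0..16 so it fits in a single unsigned byte.
static byte encodeLight(int light) {   // light in -1..15
  return (byte) (light + 1);
}

static int decodeLight(byte b) {       // back to -1..15
  return (b & 0xFF) - 1;
}
```

Packing four voxels into each ARGB pixel (or using a one-channel texture) cuts the upload to a quarter of the one-int-per-voxel version, which adds up at thousands of elements.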

The reason why I wanted to know how to use direct OpenGL calls and “sub-buffer updating” is so that I don't have to send the entire lightmap array to the shader every time I set a block. :joy:

In order to do this, I’m guessing I would need to access processing’s JOGL object (wherever that is) and then update the shader’s uniforms there. Would you be so kind as to show me a code example of how I could do this?

PGL is Processing’s wrapper around JOGL, which, in turn, is a Java wrapper around OpenGL. If you are doing GL calls up to around version 3.3, you can probably use PGL. Otherwise, you’ll want to use gl directly. Nearly all of the tutorials out there are written for C/C++ or for WebGL. JOGL mostly works the same, except when it comes to passing data, since you have to convert Java’s data types into native formats.

I haven’t done anything with creating my own textures which is probably what you would prefer to do, so I don’t have any wisdom to offer there.

In the Processing IDE, open the File → Examples menu, then Demos → Graphics → LowLevelGLVboInterleaved, and start from that code. The meat of it is here:

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import java.nio.IntBuffer;

import com.jogamp.opengl.*;

PJOGL pgl;
GL4 gl;
int posVboId;

void draw() {
  pgl = (PJOGL) beginPGL();
  gl = pgl.gl.getGL4();

  // Get GL ids for all the buffers
  IntBuffer intBuffer = IntBuffer.allocate(1);
  gl.glGenBuffers(1, intBuffer);
  posVboId = intBuffer.get(0);

  // ... your GL calls go here ...

  endPGL();
}

FloatBuffer allocateDirectFloatBuffer(int n) {
  return ByteBuffer.allocateDirect(n * Float.BYTES).order(ByteOrder.nativeOrder()).asFloatBuffer();
}

IntBuffer allocateDirectIntBuffer(int n) {
  return ByteBuffer.allocateDirect(n * Integer.BYTES).order(ByteOrder.nativeOrder()).asIntBuffer();
}

Or search the forum for fuller examples.

You might also need to add a hint to setup() to keep Processing from batch rendering its buffers after (and on top of) your OpenGL calls.

Ok, thanks! Will PGL work for me if I am using Processing 4.0?

It makes little difference whether you use pgl or gl; pgl is just a wrapper. It simply doesn’t have many of the more modern OpenGL calls, so you might as well use gl directly and not bother with pgl.