Hi, I'm completely new to GLSL and looking for a decent resource to start coding shaders for Processing. I'm just looking for the bare basics of drawing 2D pixels to the screen at this point, but I can't wrap my head around them. I've watched so many videos and am still no clearer, and every time I try to follow a tutorial it doesn't work in Processing. Am I missing something?
If you switch to p5.js mode and go to Examples, there is a basic shader demo in the 3D folder (ex08).
Here is another simple demo (can’t recall where I got it):
PShader shader;

void setup() {
  size(640, 360, P2D);
  noStroke();
  shader = loadShader("shader.frag");
}

void draw() {
  shader.set("u_resolution", float(width), float(height));
  shader(shader);
  rect(0, 0, width, height);
}
The code for shader.frag should go in the sketch's data folder (although it will also work in the sketch folder itself):
#ifdef GL_ES
precision mediump float;
#endif

#define PROCESSING_COLOR_SHADER

uniform vec2 u_resolution;
uniform vec3 u_mouse;
uniform float u_time;

void main() {
  vec2 st = gl_FragCoord.st / u_resolution;
  gl_FragColor = vec4(st.x, st.y, 0.0, 1.0);
}
Usually there are two files: shader.vert for the vertices and shader.frag for the color. This code can also be put inline instead of in separate files; there are a few examples in recent threads. It’s not an easy topic, although Processing is about as easy a way as you will find.
So it doesn't work with standard Java Processing? I managed to get color to the background of an image, but I used a file ending in *.glsl; changing it to .frag fails to load. It's such a confusing process lol. I'd consider myself extremely good at Processing itself, but shaders are what I want now for more power.
I only started looking at shaders a couple of weeks ago, and it took me a while just to get my head around the idea of GLSL, so some of what I say here might not be “the truth, the whole truth and nothing but the truth”.
A shader comprises two parts:
- Vertex shader file (filename extension .vert), and
- Fragment shader (filename extension .frag)
Somewhere in the process these two files are combined by Processing into a single script; the combined script would have a .glsl extension if it existed.
Processing appears to do its own thing with shaders so some scripts found on the internet don’t work directly in Processing.
My focus is on using shaders with p5.js (WEBGL) and it appears that Processing (Java mode) and p5.js process shaders differently but that might be due to my inexperience.
If you are using p5.js then I suggest you use the web editor; if you search, you can find shader examples that work.
Other resources I found useful were:
- a tutorial Introduction to p5.js shaders
- the Book of Shaders
Start with this:
PShader shdr;

void setup() {
  size( 800, 800, P2D );
  shdr = new PShader( g.parent, vertSrc, fragSrc );
}

void draw() {
  shdr.set( "time", frameCount/60. );
  shader( shdr );
  rect( 0, 0, width, height );
}
String[] vertSrc = {"""
#version 330

uniform mat4 transform;   // passed in by Processing
in vec4 position;

void main() {
  gl_Position = transform * position;
}
"""};
String[] fragSrc = {"""
#version 330
precision highp float;

uniform vec2 resolution;  // passed in by Processing
uniform float time;

#define TAU 6.283185307179586

out vec4 outColor;

void main() {
  vec2 uv = (2.*gl_FragCoord.xy-resolution)/resolution.y;
  uv = vec2( time*0.5 - log(length(uv)), atan( uv.y, uv.x ) / TAU + .5 );
  uv = fract( vec2( 1.*uv.x+3.*uv.y, -1.*uv.x+6.*uv.y ) );
  outColor = vec4( step(0.8, uv), 0., 1. );
}
"""};
The resolution uniform is passed in by Processing, so you don't need to set it. If you want to use mouse coordinates, you have to pass them yourself as a “uniform” to the shader.
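For example, here is a minimal sketch of passing the mouse yourself (the uniform name mouse is just my choice, and I flip y because gl_FragCoord's y axis points up while Processing's mouseY counts down from the top):

// in draw(), before shader( shdr ):
shdr.set( "mouse", (float) mouseX, (float) (height - mouseY) );

// and in the fragment shader:
uniform vec2 mouse;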
If you look through the Processing File menu Examples… you can find some other examples to work from.
This code, and most of what’s on shadertoy.com, uses gl_FragCoord which gives you the screen-space coordinates of each pixel. An alternative is to use texture coordinates which you would more likely use in general 3D OpenGL programming.
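For instance, a quick sketch of the two styles (uv is my name; vertTexCoord is the varying that Processing's default texture shaders provide):

// screen-space, shadertoy style: normalize gl_FragCoord yourself
vec2 uv = gl_FragCoord.xy / resolution;

// texture-coordinate style: read the interpolated varying instead
varying vec4 vertTexCoord;
// ... then in main():
vec2 uv = vertTexCoord.st;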
Ask more specific questions about your confusions and we’ll try to help.
Cheers, can't quite get it to run lol. I've figured out the difference between vert and frag and managed to play with background colors now. All I'm looking at doing is writing a simple frag shader to display objects I have in an array in Processing as pixels; what I can't figure out is how to pass said objects into the shader.
Yeah it does thanks, they're quite hard to wrap your head around.
Are you using Processing version 4? The multi-line strings I used in my example only work in the newer version of Java that Processing started using with version 4.
What form of objects are you trying to render? What do you mean by “display objects I have in an array in Processing as pixels”?
You can pass a pixel array as a texture into a shader. I typically use this for tiled surfaces where I create a texture of tile IDs. For each pixel, the fragment shader computes which tile it’s on, looks up the ID from the texture and then draws the appropriate image for that tile.
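As a rough sketch of that idea, assuming shdr is your PShader (the uniform name and the tile-ID encoding here are my own):

// Processing side: encode a tile ID (0..3) into each pixel's red channel
PImage ids = createImage( 16, 16, RGB );
ids.loadPixels();
for (int i = 0; i < ids.pixels.length; i++) {
  ids.pixels[i] = color( int(random(4)) * 64, 0, 0 );
}
ids.updatePixels();
shdr.set( "tileIDs", ids );   // bound as a sampler2D uniform

// fragment shader side: recover the ID from the red channel
uniform sampler2D tileIDs;
// ... inside main(), with tileUV computed from gl_FragCoord:
int id = int( texture2D( tileIDs, tileUV ).r * 255.0 / 64.0 + 0.5 );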
The 3D objects you see rendered on shadertoy.com are all defined in fragment shader code and rendered using either ray tracing or, more likely, ray marching of SDFs (signed distance functions). For sufficiently small scenes, you could pass a scene into a shader encoded as the pixel data of a texture, but I haven’t tried that so I don’t know how well it would work.
Still using 3.5.4. I don't like the themes, and when I went back to 3.5.4 the themes applied to that too, so I had to uninstall everything, then delete the folders, then reinstall 3.5.4 lol.
I'm just wanting my own class object, like a point that moves around, that can be passed into the frag shader to render the pixels, instead of using the pixel array as that can be slow full screen. Also, is it possible to power the movement of the points with shaders? Hope I'm making sense.
You can render a single rectangle, use vertex shaders to position it and use fragment shaders to draw whatever you want on it.
Or with the code
PJOGL pgl = (PJOGL) beginPGL();
GL4 gl = pgl.gl.getGL4();
shdr.bind();
gl.glDrawArraysInstanced( PGL.TRIANGLES, 0, 3, nParts );
shdr.unbind();
endPGL();
resetShader();
you can render millions of triangles, using vertex and fragment shaders to position and render them. In the vertex shader, you can use gl_InstanceID and gl_VertexID to know which triangle and vertex you are processing at the moment. In the fragment shader, you can use discard to throw away any of the triangle pixels that are outside of whatever geometry you want to render.
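For example, a fragment shader along these lines (just a sketch; it assumes your vertex shader passes out a vUV that spans roughly -1..1 across each triangle) turns every triangle into a round dot:

#version 330 core

in vec2 vUV;          // assumed to be set up by your vertex shader
out vec4 outColor;

void main() {
  if (length(vUV) > 1.0) discard;   // throw away pixels outside the circle
  outColor = vec4( 1.0 );
}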
The web site vertexshaderart.com is similar to shadertoy.com but uses the vertex shaders to position thousands of flat shaded points, lines, or triangles. Both of those sites are great places to pick up techniques. But, of course, if you write both shaders, you can do even more on your own.
As an example: Steven Dollins, “Fuzzy toroid 262,144 spheres animated entirely i…” on genart.social.
Ok, thanks for your help, but that's even more confusing. Maybe shaders are not for me lol.
Any chance you could give me a bare-minimum example in full of a basic particle system? Just so I can study it and try to wrap my head around the concept of it all.
I strongly recommend you bite the bullet and switch to P4. Yeah, it trashed my color settings, but I found a theme I could stomach and tweaked enough to make it tolerable. It was worth it to me to be able to edit the shaders directly in the Processing editor along with the rest of my code.
Here is code (that requires Processing 4) that renders a wavy surface of triangles. The triangle positions are computed entirely in the vertex shader, so no data has to be passed in from Processing other than the time (and the number of triangles). In this example, I'm not using any of Processing's camera or lighting values. The vertex shader just positions the triangles into the clip volume, which runs from (-1,-1,-1) in the near lower left corner to (1,1,1) in the far upper right corner.
import com.jogamp.opengl.*;

int nTriangles = 4096;
PShader shdr;

void setup() {
  size( 900, 900, P3D );
  hint( DISABLE_OPTIMIZED_STROKE );
  shdr = new PShader( g.parent, vertSrc, fragSrc );
  shdr.set( "nTriangles", nTriangles );
}

void draw() {
  shdr.set( "time", frameCount / 60.0 );
  background( 0 );
  PJOGL pgl = (PJOGL)beginPGL();
  GL4 gl = pgl.gl.getGL4();
  shdr.bind();
  gl.glDrawArraysInstanced( PGL.TRIANGLES, 0, 3, nTriangles );
  shdr.unbind();
  endPGL();
  resetShader();
}
String[] vertSrc = {"""
#version 330 core

uniform int nTriangles;
uniform float time;
out vec3 vColor;

vec2 rot( in vec2 p, float a ) {
  return cos(a)*p + sin(a)*vec2(p.y, -p.x);
}

void main() {
  int nRows = int( sqrt( float(nTriangles) ) );
  int j = gl_InstanceID / nRows;
  int i = gl_InstanceID - j * nRows;
  vec2 c = vec2( float(i), float(j) );
  c += vec2( float(gl_VertexID & 1), float(gl_VertexID & 2)*0.5 );
  c = c / float(nRows) * 1.6 - 0.8;
  c = rot( c, time*6.283/12. );
  vec3 p = vec3( c, 0.1*sin(6.283*c.x)+0.03*cos(6.283*(c.y*3.+c.x*2.-time))+0.1 );
  p.yz = rot( p.yz, 1. );
  p.xy /= (3. - p.z) / 4.;
  vColor = vec3( p.z*0.25+0.75 );
  gl_Position = vec4( p, 1. );
}
"""};
String[] fragSrc = {"""
#version 330 core

in vec3 vColor;
out vec4 outColor;

void main() {
  outColor = vec4( vColor, 1.0 );
}
"""};
Cheers again mate, I'll have to play around and see how it works. I've only just started shaders, but so far I see the potential power in them; it may take a long time to learn. I'm fluent in Java for the most part and would love the same for these things.
That's really confusing lol. Is it possible to just use a normal Processing sketch with class objects, then use a simple frag shader to draw a point from the position vector of that object? Pass in the array of objects to the frag shader just for rendering purposes, preferably without imports?
Shaders run on the graphics card. They don't know anything about Processing or Java objects. Vertex shaders take in arrays of vertex attributes, typically position, normal, color, and texture coordinates, but these can also be any arbitrary per-vertex data. The vertex shader outputs the vertex position as gl_Position and also passes on values to the fragment shader that are interpolated between the vertices. The fragment shader uses the interpolated values passed to it to compute a color for each pixel (or pixel fragment). You can pass in single values for the whole vertex mesh as uniforms. Or you can pass in textures, which are meant to be colors, but you can also interpret them as arbitrary data.
Processing only barely exposes the simplest subset of OpenGL that it needs to render traditional geometry. It doesn't, for instance, give you any way to pass arbitrary vertex attribute data to a mesh. General OpenGL allows textures to have a huge variety of data types, but Processing only really supports colors, so in your shader, you have to convert data from colors back into whatever form you might like it to be. Processing doesn't support bufferless rendering, which is what I was using above.
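As an example of that color-to-data conversion, here is a hedged sketch of unpacking a 16-bit value that was stored in a texel's red and green bytes (dataTex and the packing scheme are my own choices):

// texel channels arrive as 0..1 floats that started life as 0..255 bytes
vec4 texel = texture2D( dataTex, uv );
float value = texel.r * 255.0 + texel.g * 255.0 * 256.0;   // low byte + high byte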
In Processing, you could render an array of quads and then write your own shaders to position and color them. My code is the faster way to do that because it doesn’t waste any time or space to create and store a data buffer of vertex data.
For your given objects, you’d have to decide how easy it is to render them either as a mesh of triangles using a PShape, for instance, or to draw them on separate quads using a complex fragment shader or combination of fragment shaders. Or draw all your objects on one quad as in shadertoy.
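To make that last option concrete, here is a minimal sketch of a particle system drawn on one quad, since you asked for one earlier. The positions live in ordinary Processing arrays, are updated on the CPU, and get uploaded each frame as a uniform vec2 array (PShader.set with an ncoords argument of 2); the fragment shader just loops over them. All the names are my own, and I keep the particle count small because uniform array space is limited:

PShader shdr;
final int NUM = 8;
float[] pos = new float[NUM*2];    // x,y pairs for the uniform
PVector[] vel = new PVector[NUM];

void setup() {
  size( 640, 360, P2D );
  noStroke();
  shdr = loadShader( "particles.frag" );
  for (int i = 0; i < NUM; i++) {
    pos[2*i] = random( width );
    pos[2*i+1] = random( height );
    vel[i] = PVector.random2D().mult( 2 );
  }
}

void draw() {
  for (int i = 0; i < NUM; i++) {  // simple bounce physics on the CPU
    pos[2*i] += vel[i].x;
    pos[2*i+1] += vel[i].y;
    if (pos[2*i] < 0 || pos[2*i] > width) vel[i].x *= -1;
    if (pos[2*i+1] < 0 || pos[2*i+1] > height) vel[i].y *= -1;
  }
  shdr.set( "particles", pos, 2 ); // upload as a vec2 array
  shader( shdr );
  rect( 0, 0, width, height );     // one full-screen quad
}

and particles.frag in the sketch's data folder:

#ifdef GL_ES
precision mediump float;
#endif

#define PROCESSING_COLOR_SHADER
#define NUM 8

uniform vec2 particles[NUM];

void main() {
  vec3 col = vec3( 0.0 );
  for (int i = 0; i < NUM; i++) {
    // gl_FragCoord's y axis points up; this symmetric demo doesn't care
    float d = distance( gl_FragCoord.xy, particles[i] );
    col += vec3( 0.2, 0.6, 1.0 ) * (1.0 - smoothstep( 2.0, 8.0, d ));  // soft dot
  }
  gl_FragColor = vec4( col, 1.0 );
}

As for powering the movement itself with shaders: yes, but that usually means either computing positions in a vertex shader (as in the instanced example above) or rendering positions into an offscreen texture and reading them back the next frame, which is a bigger topic.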
Ok, seems I have a lot to learn. I haven't even looked into vertex shaders; I was hoping there was a simple frag shader that could handle it all. Thanks for taking the time to explain it to me mate.