Horrible fringing artefacts

Hi,
I’m running into a problem with a particular process of drawing an image multiple times to the screen: I get either white or black fringing, depending on which renderer I use (default / OpenGL). I imagine this has something to do with premultiplied alpha.

The code below replicates the problem.

What I find strange is that every time a new image is drawn, the fringe effect on the previously drawn images keeps getting stronger. It seems like some kind of OpenGL buffering issue.

PGraphics pg;
PImage img;

void setup() {
  size(800, 800, OPENGL);
  background(0);

  pg = createGraphics(800, 800, OPENGL);
  img = createImage(800, 800, ARGB);
  // Draw a red circle once into the offscreen surface.
  pg.beginDraw();
  pg.noStroke();
  pg.fill(200, 0, 0);
  pg.ellipse(400, 400, 300, 300);
  pg.endDraw();
  frameRate(1);
}

void draw() {
  // Copy the circle into the image at a random offset, then
  // draw it over whatever is already on screen (no clearing).
  float x = map(random(1), 0, 1, -300, 300);
  float y = map(random(1), 0, 1, -300, 300);
  img.copy(pg, 0, 0, 800, 800, int(x), int(y), 800, 800);
  image(img, 0, 0);
}


You are not clearing the background between frames, so the anti-aliasing is progressively destroyed: the semi-transparent fringe pixels are composited over themselves every frame and become more and more opaque.
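To see why the fringe builds up, here is a minimal plain-Java model (not Processing API; the 30% coverage figure is purely illustrative) of source-over compositing applied repeatedly at one anti-aliased edge pixel:

```java
public class FringeBuildup {
    // Source-over compositing of a constant-coverage source over the
    // accumulated result: out = cov * src + (1 - cov) * dst.
    // We track only the circle colour's total contribution at one edge pixel.
    static double contributionAfter(double coverage, int frames) {
        double c = 0.0; // the pixel starts with none of the circle colour
        for (int i = 0; i < frames; i++) {
            c = coverage + (1.0 - coverage) * c;
        }
        return c;
    }

    public static void main(String[] args) {
        // A 30%-covered anti-aliased edge pixel, redrawn every frame:
        for (int n : new int[] {1, 5, 10, 20}) {
            System.out.printf("after %2d frames: %.4f%n",
                              n, contributionAfter(0.3, n));
        }
        // The contribution tends to 1.0, so the soft edge hardens to solid
        // colour against the background.
    }
}
```

The contribution converges geometrically to 1.0, which is why the fringe keeps strengthening every frame even on pixels drawn long ago.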


Yes, that is intentional - I want to repeatedly draw on top of the screen to build up a final image. Is there a better way to do this that avoids the fringe problem?

Use an array or other data structure to keep track of what your old drawing was. In pseudocode, your draw loop would look like:

  1. Create data structure.

  2. Render data structure.

  3. Do thing.

  4. Add thing to data structure.

  5. Repeat.
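The steps above might look like this as a plain-Java sketch (class and method names such as `StampHistory` and `addStamp` are mine, not Processing API); in the actual sketch, `render()` would be `background(0)` followed by one `image()` call per stored offset, so each anti-aliased edge is composited exactly once per frame:

```java
import java.util.ArrayList;
import java.util.List;

public class StampHistory {
    // Each "stamp" records the random offset used for one image() call.
    static class Stamp {
        final float x, y;
        Stamp(float x, float y) { this.x = x; this.y = y; }
    }

    private final List<Stamp> stamps = new ArrayList<>();

    // Step 4: add the new thing to the data structure.
    void addStamp(float x, float y) {
        stamps.add(new Stamp(x, y));
    }

    // Step 2: redraw everything from scratch over a fresh background.
    // Returns the number of draw calls replayed, standing in for
    // "background(0); for each stamp: image(img, s.x, s.y);".
    int render() {
        int drawCalls = 0;
        for (Stamp s : stamps) {
            drawCalls++; // stands in for image(img, s.x, s.y)
        }
        return drawCalls;
    }

    public static void main(String[] args) {
        StampHistory h = new StampHistory();
        h.addStamp(10, 20);   // step 3-4: do a thing, record it
        h.addStamp(-5, 7);
        System.out.println(h.render()); // replays 2 stamps over a clear frame
    }
}
```

Because the history is replayed onto a cleared frame, nothing is ever composited over its own previous anti-aliased edges.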


Could you clarify?
Do you mean create an array of screen pixels and then add the new image to it with a method that works pixel by pixel - bypassing all the OpenGL stuff? Seems like a pain!

I tried writing to a screen buffer, but I still get the same result…

PGraphics pg, buffer;
PImage img;

void setup() {
  size(800, 800, OPENGL);
  background(0);

  pg = createGraphics(800, 800, OPENGL);
  buffer = createGraphics(800, 800, OPENGL);
  img = createImage(800, 800, ARGB);
  // Draw the red circle once into its own offscreen surface.
  pg.beginDraw();
  pg.noStroke();
  pg.fill(200, 0, 0);
  pg.ellipse(400, 400, 300, 300);
  pg.endDraw();
  frameRate(1);
}

void draw() {
  background(0);
  float x = map(random(1), 0, 1, -300, 300);
  float y = map(random(1), 0, 1, -300, 300);
  // Accumulate each randomly offset copy in the offscreen buffer,
  // then draw the buffer to the cleared screen.
  buffer.beginDraw();
  buffer.copy(pg, 0, 0, 800, 800, int(x), int(y), 800, 800);
  buffer.endDraw();
  image(buffer, 0, 0);
}

Well, the lack of it! Processing’s blending in OpenGL is broken when multiple surfaces are involved because it lacks support for premultiplied alpha. It ignores the fact that the output of OpenGL blending operations is premultiplied and treats it as if it weren’t (i.e. it multiplies by alpha again).
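A minimal plain-Java illustration of that double multiplication (the colour and alpha values are illustrative, and this models the blend arithmetic, not Processing’s actual code path):

```java
public class DoubleMultiply {
    // Straight-alpha source-over of one channel onto an opaque destination:
    // out = src * srcAlpha + dst * (1 - srcAlpha), values in [0, 1].
    static double over(double src, double srcAlpha, double dst) {
        return src * srcAlpha + dst * (1.0 - srcAlpha);
    }

    public static void main(String[] args) {
        double red = 200 / 255.0; // the circle colour from the sketch
        double a   = 0.5;         // a half-covered anti-aliased edge pixel

        // Correct: composite the straight-alpha colour once over black.
        double correct = over(red, a, 0.0);

        // The broken path: the offscreen result is already premultiplied
        // (red * a), but it is blended again as if it were straight alpha,
        // so the colour picks up a second factor of a.
        double broken = over(red * a, a, 0.0);

        System.out.printf("correct %.4f, broken %.4f%n", correct, broken);
        // Over a black background, broken == correct * a: the fringe is
        // darkened toward black (and toward white with inverted handling).
    }
}
```

The semi-transparent fringe pixels are exactly the ones where alpha < 1, so they are the only pixels visibly affected - hence the dark or light outline.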

Out of interest, why are you using copy() vs image()?

Yeah, I thought it was a bug with OpenGL in Processing.
I was using copy() to see whether it behaved any differently from image(), blend(), etc.
I think I’ll just write my own pixel-level method for adding an image to the screen. At least I know it will work.
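If you do go pixel-level, a straight-alpha “over” on packed ARGB ints might look like the sketch below (plain Java, my own naming, not Processing’s implementation); it could be applied to the arrays from loadPixels()/updatePixels():

```java
public class PixelOver {
    // Source-over composite of one packed ARGB pixel over another,
    // using straight (non-premultiplied) alpha throughout.
    static int over(int src, int dst) {
        int sa = (src >>> 24) & 0xFF;
        int da = (dst >>> 24) & 0xFF;
        int outA = sa + da * (255 - sa) / 255;
        if (outA == 0) return 0; // fully transparent result
        int outR = channel((src >>> 16) & 0xFF, sa, (dst >>> 16) & 0xFF, da, outA);
        int outG = channel((src >>> 8) & 0xFF, sa, (dst >>> 8) & 0xFF, da, outA);
        int outB = channel(src & 0xFF, sa, dst & 0xFF, da, outA);
        return (outA << 24) | (outR << 16) | (outG << 8) | outB;
    }

    // Premultiply both contributions, composite, then un-premultiply
    // by the result alpha so the output stays straight-alpha.
    static int channel(int sc, int sa, int dc, int da, int outA) {
        int num = sc * sa * 255 + dc * da * (255 - sa);
        return num / (outA * 255);
    }

    public static void main(String[] args) {
        int src = 0x80C80000; // 50% alpha red (200, 0, 0) - an edge pixel
        int dst = 0xFF000000; // opaque black background
        System.out.printf("0x%08X%n", over(src, dst)); // 0xFF640000
    }
}
```

Doing the premultiply/un-premultiply explicitly like this keeps semi-transparent edges correct no matter how many times the result is composited again.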

Well, it’s not that difficult to extend the built-in renderers to support this either - praxis/praxis.video.pgl/src/org/praxislive/video/pgl/PGLGraphics.java at master · praxis-live/praxis · GitHub

Interesting, thanks for that.