I have a very simple, small image that is easy to generate with code alone (it’s essentially just 4 differently colored squares, and the whole image is 32x32 px), and I’d like to know which is more expensive:
- using a texture along with textureWrap(REPEAT) to tile a part of the canvas with this image, or
- generating the image with code and using for loops to do the tiling instead.
To give you an idea, I’m trying to create a drawing canvas, and I need a background texture similar to the white-and-gray checkerboard pattern used to indicate transparency in images. The texture obviously mustn’t stretch; it should appear as a static background, and as you adjust the size of the canvas, you “mask” parts of it away. I did think of creating an image the same size as my screen, displaying that first and then masking it, but I want my program to be compatible with multiple screen sizes, which rules out a fixed image size and calls for a tiling approach instead.
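To make the comparison concrete, here is a minimal sketch of the two approaches as I understand them; the tile colors, the 640x480 window, and the checker layout are just placeholders, and as far as I know textureWrap(REPEAT) only works with the P2D/P3D renderers:

PImage tile;

void setup() {
  size(640, 480, P2D);                 // REPEAT wrapping needs P2D or P3D
  // build the 32x32 checker tile in code (2x2 squares of 16 px each)
  tile = createImage(32, 32, RGB);
  tile.loadPixels();
  for (int y = 0; y < 32; y++) {
    for (int x = 0; x < 32; x++) {
      boolean light = ((x / 16) + (y / 16)) % 2 == 0;
      tile.pixels[y * 32 + x] = light ? color(255) : color(204);
    }
  }
  tile.updatePixels();
}

void draw() {
  // Approach 1: one quad, and the GPU repeats the texture across it
  textureWrap(REPEAT);
  noStroke();
  beginShape();
  texture(tile);
  vertex(0,     0,      0,     0);
  vertex(width, 0,      width, 0);     // u/v values beyond 32 px wrap around
  vertex(width, height, width, height);
  vertex(0,     height, 0,     height);
  endShape();

  // Approach 2: tile manually with nested loops (uncomment to compare)
  // for (int y = 0; y < height; y += tile.height) {
  //   for (int x = 0; x < width; x += tile.width) {
  //     image(tile, x, y);
  //   }
  // }
}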
I also want to ask one more thing in advance. Another element I will be adding to this program is a resizable, square-shaped window on which I’m planning to draw some sort of 3D canvas (the ultimate goal is to have a 3D rendering of a sphere on one side of the screen and a 2D map of that sphere’s surface on the other), and I need to know the best way of going about this. I’m not sure whether PGraphics allows for a resizable canvas, so I’m guessing the only other way is to use a PGraphics in conjunction with a PShape or vertex-related functions, texturing the shape with the PGraphics itself (which, as I understand it, is a PImage anyway).
As in:
PGraphics pg = createGraphics(256, 256, P3D);  // create a PGraphics called pg (size is a placeholder)
pg.beginDraw();
// ... draw on pg ...
pg.endDraw();
beginShape();
texture(pg);                   // pg itself serves as the texture
vertex(100, 100, 0, 0);        // the 4 vertices, with u/v coordinates covering all of pg
vertex(356, 100, pg.width, 0);
vertex(356, 356, pg.width, pg.height);
vertex(100, 356, 0, pg.height);
endShape();
Will this work? The fact that vertex() lets you pass u/v coordinates and map the texture onto the shape automatically certainly helps a lot. But more importantly, is there any other way?
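For completeness, here is roughly how I imagine handling the resizing, simply re-creating the PGraphics whenever the requested size changes; the resizeCanvas() helper, the 256 px starting size, and the sphere are all hypothetical placeholders, just to illustrate the idea:

PGraphics pg;
int pgSize = 256;                      // placeholder starting size

void setup() {
  size(800, 400, P3D);
  pg = createGraphics(pgSize, pgSize, P3D);
}

// hypothetical helper: call this whenever the square window should change size
void resizeCanvas(int newSize) {
  pgSize = newSize;
  pg = createGraphics(pgSize, pgSize, P3D);  // old pg is discarded; contents are redrawn next frame
}

void draw() {
  background(128);

  // redraw the 3D content into pg every frame
  pg.beginDraw();
  pg.background(0);
  pg.lights();
  pg.noStroke();
  pg.translate(pgSize * 0.5, pgSize * 0.5);
  pg.sphere(pgSize * 0.25);            // placeholder 3D content
  pg.endDraw();

  // map pg onto a quad of the same size
  noStroke();
  beginShape();
  texture(pg);
  vertex(0,      0,      0,        0);
  vertex(pgSize, 0,      pg.width, 0);
  vertex(pgSize, pgSize, pg.width, pg.height);
  vertex(0,      pgSize, 0,        pg.height);
  endShape();
}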