We have an n x n canvas and m (say 3) points in it. The m points/nodes each have a different fixed RGB color. The color of every pixel in the canvas is determined by its distances to the color nodes, so each pixel is a mixture (gradient) of the m node colors, weighted by how far it is from each node.
How can you do this efficiently in Processing, given that n and m can be large?
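For concreteness, here is a minimal CPU sketch of one common reading of that rule: inverse-distance weighting (Shepard interpolation). The node positions, colors, and the squared-distance falloff are illustrative assumptions, since the question doesn't pin down the exact weighting:

```java
int m = 3;
PVector[] nodes;
color[] nodeColors;

void setup() {
  size(400, 400);
  // Illustrative node positions and colors; replace with your own.
  nodes = new PVector[] {
    new PVector(50, 60), new PVector(350, 80), new PVector(200, 340)
  };
  nodeColors = new color[] {
    color(255, 0, 0), color(0, 255, 0), color(0, 0, 255)
  };
  noLoop();  // static image, so render once
}

void draw() {
  loadPixels();
  for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
      float r = 0, g = 0, b = 0, wSum = 0;
      for (int i = 0; i < m; i++) {
        float d = dist(x, y, nodes[i].x, nodes[i].y);
        // Weight falls off with distance; squared falloff is an assumption,
        // and the small constant avoids division by zero at a node.
        float w = 1.0f / (d * d + 0.001f);
        r += w * red(nodeColors[i]);
        g += w * green(nodeColors[i]);
        b += w * blue(nodeColors[i]);
        wSum += w;
      }
      // Normalize so the weights sum to 1 before mixing.
      pixels[y * width + x] = color(r / wSum, g / wSum, b / wSum);
    }
  }
  updatePixels();
}
```

This does O(n × n × m) work sequentially, one pixel at a time, which is exactly the cost the shader suggestions below are meant to cut down.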
I implemented an n-point shader for OPENRNDR (a different framework). Maybe that approach is efficient enough? If you can read GLSL you might understand the code at orx/NPointGradient.kt at master · openrndr/orx · GitHub and port it to Processing.
Basically it passes an array of positions and an array of colors to the shader, then uses them to calculate the color of every pixel. The built-in uniforms for the shader are listed here.
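If you port it, the Processing side of that hand-off could look something like the sketch below. The file name `npoint.frag` is a placeholder for your ported fragment shader, and the uniform names `n`, `points`, and `colors` are assumptions that must match whatever your GLSL declares:

```java
PShader gradientShader;

int m = 3;
// Positions as flat x,y pairs and colors as flat r,g,b triples in 0..1.
float[] positions = { 50, 60,   350, 80,   200, 340 };
float[] nodeColors = { 1, 0, 0,   0, 1, 0,   0, 0, 1 };

void setup() {
  size(400, 400, P2D);  // shaders need the P2D or P3D renderer
  // "npoint.frag" is a hypothetical file in the sketch's data folder.
  gradientShader = loadShader("npoint.frag");
  gradientShader.set("n", m);
  gradientShader.set("points", positions, 2);   // uploaded as an array of vec2
  gradientShader.set("colors", nodeColors, 3);  // uploaded as an array of vec3
}

void draw() {
  shader(gradientShader);
  // The shader computes each fragment's color, so drawing one
  // canvas-sized rectangle fills the whole image.
  rect(0, 0, width, height);
}
```

One caveat: uniform arrays in GLSL have a fixed compile-time size, so for large m you would either declare a generous maximum in the shader or pass the node data in as a texture instead.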
Even faster would be to use GLSL shaders, but that's a whole different language and not ideal if you don't know it already. The reason it would be faster is that it calculates the pixel colors in parallel instead of sequentially. That speed only matters if you want to animate it in real time; if the goal is just to save an image, it doesn't matter much.