I’m currently writing a visualizer for a live band (MIDI in > pixels out). Because it’s going to be running alongside audio software, I’m trying to keep it as small and efficient as possible. I’m currently drawing each frame to a PGraphics at 320x240 and upscaling to full screen… and since I’m at QVGA resolution anyway, I thought it might be fun to go one step further and use an indexed 16- or 256-color palette. I don’t generally work with super low-level or hardware-specific code, so I’ve never needed to get this nitty-gritty on color before.
I’m familiar with the colorMode() options, but I’m not really sure how they work under the hood. For example, does changing from colorMode(HSB, 255) to colorMode(HSB, 2) actually have any effect on the data type of the value color() produces? If so, is there a way to manually specify using 1 byte total for all combined H, S, B / R, G, B values?
Further: if not, would it make more sense to try to roll my own class or data type for this? Or is this whole idea kind of a fool’s errand, since the smaller color values would eventually get promoted back to an int by the current colorMode anyway? Any thoughts or feedback would be appreciated.
Processing is open source, so you can see exactly what it does with the colorMode() function. I would take a look here:
Specifically check out the colorMode() and colorCalc() functions. It looks like the colors are stored as individual R, G, and B values, as both floats and ints. Then those values are used in a more specific type, like this one:
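To see why the colorMode() range doesn’t change the storage type: whatever ranges you pass, color() ultimately hands back a single 32-bit int packed as 0xAARRGGBB. Here’s a minimal plain-Java sketch of that packing (packColor is a stand-in for illustration, not the actual Processing internals):

```java
public class PackDemo {
    // Pack 0-255 channel values into one 32-bit int, 0xAARRGGBB,
    // the same layout Processing uses for its color type.
    static int packColor(int a, int r, int g, int b) {
        return (a << 24) | (r << 16) | (g << 8) | b;
    }

    public static void main(String[] args) {
        int c = packColor(255, 255, 128, 0); // opaque orange
        System.out.println(Integer.toHexString(c)); // prints "ffff8000"
    }
}
```

So colorMode(HSB, 2) only changes how the arguments you pass in are *interpreted* (scaled into 0-255 internally); the resulting value is still a full int.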
Basically I’m not sure how much value you’d get from trying to store the internal color as a single byte. You might represent it that way in your sketch, but then do the conversion to normal R, G, B values before you use any Processing function.
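If you do want to try it, one hypothetical scheme (names are mine, not Processing API) is to keep your frame buffer as byte indices and expand through a palette of packed ARGB ints right before you touch Processing, e.g. when filling PGraphics.pixels:

```java
public class PaletteDemo {
    // Small example palette of packed 0xAARRGGBB ints; a real
    // 16- or 256-color version would just have more entries.
    static final int[] PALETTE = {
        0xFF000000, // 0: black
        0xFFFFFFFF, // 1: white
        0xFFFF0000, // 2: red
        0xFF00FF00, // 3: green
    };

    // Expand a one-byte palette index to a full ARGB int.
    static int expand(byte index) {
        return PALETTE[index & 0xFF]; // mask so the byte reads as unsigned
    }

    public static void main(String[] args) {
        byte[] frame = new byte[320 * 240]; // one byte per QVGA pixel
        frame[0] = 2;                       // "red", by palette index
        System.out.printf("0x%08X%n", expand(frame[0])); // prints 0xFFFF0000
    }
}
```

That keeps your per-frame storage at one byte per pixel, and the palette lookup is a single array access per pixel when you blit. Whether that's worth the extra step over just storing ints is something you'd want to profile.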