I am working on a visualizer project for mobile devices and I found a great example of clouds using noise on Open Processing. It works well on laptops but crashes on mobile devices. Are noise operations too much for phones to handle?
Not noise per se, but the number of operations required to calculate it can be pretty intensive, especially if the area you are working on is large. A 10x10 square should be no problem, so I would try resizing and changing the size until it crashes. This way you can isolate the problem: if it is the size, you should be able to find the largest area you can work with, or maybe there's something else in your code causing the crash.
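As a quick sketch of that idea (the function name and pixel budget here are just illustrative, not from the original sketch), you could shrink the buffer's dimensions step by step until the pixel count fits under a budget your phone can handle:

```javascript
// Hypothetical helper (not from the original sketch): scale a buffer's
// dimensions down until its pixel count fits under a budget, e.g. to find
// the largest noise area a mobile device can handle.
function fitToBudget(w, h, maxPixels, step = 0.5) {
  let scale = 1;
  while (Math.floor(w * scale) * Math.floor(h * scale) > maxPixels) {
    scale *= step; // halve each dimension and retry
  }
  return { w: Math.floor(w * scale), h: Math.floor(h * scale) };
}

console.log(fitToBudget(1920, 1080, 250000)); // { w: 480, h: 270 }
```

You could run the sketch at the reduced size and scale the result up when drawing, which keeps the per-frame pixel work bounded.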
Using a Huawei Mate 20, and it does run, albeit slowly.
So, it works well on large screens/laptops but not on mobile devices, so perhaps it is not the size. When I adjust the program to be flexible in size, it still works on large screens but crashes on my iPhone. That is why I was thinking it was a mobile thing…
I asked Sinan from Open Processing and this is what he replied:
“I suspect it would be related to limited memory available on mobile devices. Many sketches that use direct pixel access on each frame end up using a lot of memory, which may not work well on some mobile devices.”
I’ve forked & refactored the “Blurry Clouds” sketch for performance.
I’ve converted the function drawNoise() to a class named Blurry.
It didn’t turn out exactly like the original code, but close enough I guess.
But hopefully now it should run on your mobile:
Just a little context: I am aiming to create an interactive cloud visualizer for some sounds I am making. I was able to add in my sound button, and I noticed that my bracket [ ] usage is more structured than yours - which I believe means I am programming in beginner style (best for me!). Anyhow, I think I'm working through this.
The other manipulation I was aiming for was color changes depending on where the mouse/touch position is on mobile. I noticed this line of code had some effect on color:
class Blurry {
  static isLittleEndian() {
    return new Uint8Array(Uint32Array.of(0x12345678).buffer)[0] === 0x78; // 77 equals pink/orange
  }
}
I was wondering if you had the time at some point, if you would annotate your example to show me how/where I could implement color changes responsive to touch position. (rollovers?)
As long as we’re using 1 of the 3 8-bit versions of Typed Arrays we can fully ignore endianness.
But as you can see, inside the Blurry::resetImageColor() method there's a Uint32Array viewer of p5.Image::pixels[]: const pix32 = new Uint32Array(img.pixels.buffer);
It means that instead of needing 4 indices from p5.Image::pixels[] in order to get a pixel’s RGBa color, 1 index now represents the whole 4-byte (32-bit) color value.
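For example, using a small raw RGBa byte array as a stand-in for p5.Image::pixels[] (this snippet is mine, not from the sketch), the 32-bit viewer collapses the 4-byte lookup into a single index:

```javascript
// Stand-in for p5.Image::pixels[]: one pixel with bytes R=0x12, G=0x34,
// B=0x56, a=0x78 stored in the usual RGBa order.
const pixels = Uint8ClampedArray.of(0x12, 0x34, 0x56, 0x78);

// A Uint32Array viewer over the same buffer: 1 index per pixel.
const pix32 = new Uint32Array(pixels.buffer);

// On a little-endian device the 4 bytes read back mirrored as aBGR:
console.log(pix32[0].toString(16)); // "78563412" -> a, B, G, R
```

The same underlying bytes, but one read instead of four, which is exactly why the viewer helps performance in a per-frame pixel loop.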
The problem with using a multibyte viewer on a little-endian device is that its byte order is inverted.
That is, when we read an RGBa pixel we get an aBGR mirror of it instead.
And when we write a pixel, we gotta arrange its values in aBGR order, so it's correctly stored as RGBa: c[3] << 0o30 | c[2] << 0o20 | c[1] << 0o10 | c[0] // aBGR
Above, the c[] array's 4 indices represent RGBa, but they're ordered as aBGR for the sake of the Uint32Array viewer. (The octal shifts 0o30, 0o20 and 0o10 are just 24, 16 and 8.)
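Putting it together as a self-contained sketch (the helper name packRGBa is mine, not the sketch's; only the little-endian branch is exercised on typical devices):

```javascript
// Same endianness check as in the Blurry class.
const LITTLE = new Uint8Array(Uint32Array.of(0x12345678).buffer)[0] === 0x78;

// Hypothetical helper: c = [R, G, B, a] -> one 32-bit value for the viewer.
// On little-endian devices the components go in as aBGR so the bytes land
// in the buffer as RGBa; big-endian would take them in natural RGBa order.
function packRGBa(c) {
  return LITTLE
    ? (c[3] << 0o30 | c[2] << 0o20 | c[1] << 0o10 | c[0]) >>> 0 // aBGR
    : (c[0] << 0o30 | c[1] << 0o20 | c[2] << 0o10 | c[3]) >>> 0; // RGBa
}

const pixels = new Uint8ClampedArray(4);  // stand-in for p5.Image::pixels[]
const pix32 = new Uint32Array(pixels.buffer);
pix32[0] = packRGBa([0, 255, 0, 255]);    // write one opaque green pixel
console.log([...pixels]);                 // [0, 255, 0, 255] -> stored as RGBa
```

The `>>> 0` just forces the result back to an unsigned 32-bit value, since `255 << 24` overflows into a negative signed number in JavaScript.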
But again, all this complex explanation can be skipped if you don’t use a viewer for pixels[].
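For comparison (again my own stand-in array, not the sketch's code), the plain 4-indices-per-pixel form behaves identically on any device, at the cost of four writes per pixel:

```javascript
// Endianness-free alternative: write the same pixel through the 8-bit array,
// 4 indices per pixel, always in RGBa order (the usual p5.js pixels[] idiom).
const pixels = new Uint8ClampedArray(4); // stand-in for p5.Image::pixels[]
const i = 0;                             // index of the first pixel
pixels[i * 4 + 0] = 0;                   // R
pixels[i * 4 + 1] = 255;                 // G
pixels[i * 4 + 2] = 0;                   // B
pixels[i * 4 + 3] = 255;                 // a

console.log([...pixels]); // [0, 255, 0, 255] regardless of endianness
```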
Even though I’ve refactored the original code for performance, I’ve only focused on some specific parts of it; the basis of its algorithm is still unknown to me.
However, I’ve collected & placed all constant values I’ve identified at the end of the Blurry class: