Is noise too much for mobile devices to handle in p5js?

Hello Forum :slight_smile:

I am working on a visualizer project for mobile devices and I found a great example of clouds using noise on Open Processing. It works well on laptops but crashes on mobile devices. Are noise operations too heavy for phones to handle?

Here’s the sketch:

https://www.openprocessing.org/sketch/666329

It's not noise per se, but the amount of operations required to calculate it can be pretty intensive, especially if the area you are working on is large. A 10x10 square should be no problem. I would try resizing and changing the size until it crashes. This way you can isolate the problem: if it is the size, you should be able to find the largest area you can work with; otherwise there may be something else in your code causing the crash.
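If it helps with that isolation step, here is a minimal test sketch of my own (not the OpenProcessing code) where a single constant controls how many noise cells get computed per frame; raise it until the phone gives up:

const GRID_SIZE = 10; // try 10, 25, 50, 100... until it stops running smoothly

function setup() {
  createCanvas(windowWidth, windowHeight);
  noStroke();
}

function draw() {
  const w = width / GRID_SIZE, h = height / GRID_SIZE;

  for (let y = 0; y < GRID_SIZE; ++y)
    for (let x = 0; x < GRID_SIZE; ++x) {
      fill(255 * noise(x * .1, y * .1, frameCount * .01));
      rect(x * w, y * h, w, h);
    }
}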

Using a Huawei Mate 20 and it does run, albeit slowly.


Thank you Paulgoux for your help!

So, it works well on large screens/laptops but not on mobile devices, so perhaps it is not the size. When I adjust the program to be flexible in size, it still works on large screens but crashes on my iPhone. That is why I was thinking it was a mobile thing…

This is with the draw loop's nested for loop running to 100, i.e. the program's defaults:

And this is with it reduced to 50:

Again, this is on the Huawei Mate 20. The iPhone should be able to run it; the for loop is probably too large for its processor.

Hi Paulgoux,

Thanks for the further clarification! Maybe the iPhone is the issue, because no matter how small I make the for loop, it still crashes.

I asked Sinan from Open Processing and this is what he replied:

“I suspect it would be related to limited memory available on mobile devices. Many sketches that use direct pixel access on each frame end up using a lot of memory, which may not work well on some mobile devices.”
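To put a rough number on that (a back-of-the-envelope snippet of my own, not from the sketch): pixels[] holds 4 bytes per device pixel, multiplied by the square of pixelDensity(), so full-canvas access every frame on a high-density phone screen adds up fast:

function setup() {
  createCanvas(windowWidth, windowHeight);

  // e.g. a 375x812 viewport at density 3 works out to roughly 11 MB:
  const bytes = 4 * width * height * pixelDensity() ** 2;
  console.log(`pixels[] holds ~${(bytes / 1e6).toFixed(1)} MB of data per direct access`);

  pixelDensity(1); // one cheap mitigation before touching pixels[] on a phone
}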


I’ve forked & refactored the “Blurry Clouds” sketch for performance. :cloud:
I’ve converted the function drawNoise() to a class named Blurry. :nerd_face:
It didn’t turn out exactly like the original code, but close enough I guess. :woozy_face:
But hopefully now it should run on your mobile: :iphone:


It sure does!!! Wow - I am pretty excited about this. Thank you very much GoToLoop!!!

Hello Again GoToLoop!

Just a little context: I am aiming to create an interactive cloud visualizer for some sounds I am making. I was able to add in my sound button, and I noticed that my bracket [ ] usage is more structured than yours, which I believe means I am programming in beginner style (best for me!). Anyhow, I think I am working through this.

The other manipulation I was aiming for was color changes depending on where the mouse/touch position is on mobile. I noticed this line of code had some effect on the color:

class Blurry {
  static isLittleEndian() {
    return new Uint8Array(Uint32Array.of(0x12345678).buffer)[0] === 0x78; //77 equals pink/orange
  }

I was wondering, if you had the time at some point, whether you would annotate your example to show me how/where I could implement color changes responsive to touch position (rollovers?).

No problem if this is a big ask :slight_smile:

Thanks again for getting me this far!

Just theoretically, b/c I’ve just read that we should expect that almost all mobile devices are configured to use little endian byte order:

That utility function is just to make 100% sure our code is indeed running on a little endian device (which statistically should always be so):

The datatype of pixels[] is Uint8ClampedArray: reference | p5.js

And there are 11 Typed Array versions:

As long as we’re using 1 of the 3 8-bit versions of Typed Arrays we can fully ignore endianness.
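To see roughly what that check and the "storage order" point boil down to (just an illustrative snippet, runnable in any JS console, not part of the forked sketch):

const bytes = new Uint8Array(Uint32Array.of(0x12345678).buffer);

// An 8-bit view hands the bytes back exactly as stored, so on a little
// endian device the least significant byte 0x78 comes first:
console.log([...bytes].map(b => b.toString(16))); // ['78', '56', '34', '12']
console.log(bytes[0] === 0x78); // true -> little endian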

But as you can see, within the Blurry::resetImageColor() method there's a Uint32Array viewer of p5.Image::pixels[]:
const pix32 = new Uint32Array(img.pixels.buffer),

It means that instead of needing 4 indices from p5.Image::pixels[] in order to get a pixel’s RGBa color, 1 index now represents the whole 4-byte (32-bit) color value.
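Here's a tiny self-contained illustration of that 4-indices-vs-1-index difference (plain p5.js, not the forked sketch):

function setup() {
  const img = createImage(2, 2);
  img.loadPixels();

  const x = 1, y = 1,
        i = 4 * (y * img.width + x); // byte offset of pixel (x, y)

  // The usual way: 4 separate indices of the Uint8ClampedArray pixels[]:
  img.pixels[i] = 255;     // R
  img.pixels[i + 1] = 128; // G
  img.pixels[i + 2] = 0;   // B
  img.pixels[i + 3] = 255; // a

  // Through a 32-bit viewer: 1 index holds the whole color:
  const pix32 = new Uint32Array(img.pixels.buffer);
  console.log(pix32[y * img.width + x].toString(16)); // 'ff0080ff' (aBGR on little endian)
}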

The problem with using a multibyte viewer on a little endian device is that its byte order is inverted.

That is, when we read an RGBa pixel we get aBGR mirror of it instead.

And when we write a pixel, we gotta arrange its values in aBGR order, so it’s correctly stored as RGBa:
c[3] << 0o30 | c[2] << 0o20 | c[1] << 0o10 | c[0] : // aBGR

Above the c[] array’s 4 indices represent RGBa, but they’re ordered as aBGR for the sake of the Uint32Array viewer.
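As a runnable aside (my own helper name, not part of the refactor), the same arrangement written as a standalone pack function:

function packRGBA(r, g, b, a = 255) {
  // On a little endian device, arranging the value as aBGR makes the
  // Uint32Array viewer store the bytes in RGBa order:
  return (a << 24 | b << 16 | g << 8 | r) >>> 0;
}

console.log(packRGBA(0x12, 0x34, 0x56, 0x78).toString(16)); // '78563412'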

But again, all this complex explanation can be skipped if you don’t use a viewer for pixels[]. :wink:
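And to connect that back to the touch-responsive color question: a minimal sketch under that simpler no-viewer assumption (independent of the Blurry class, with a made-up helper name drawNoiseInto), where the pointer position just drives a tint() over a small grayscale noise buffer:

let cloudImg;

function setup() {
  createCanvas(windowWidth, windowHeight);
  cloudImg = createImage(100, 100);
}

function draw() {
  drawNoiseInto(cloudImg); // refill the small buffer with grayscale noise

  // Horizontal position drives red, vertical drives blue (works for touch too):
  const r = map(mouseX, 0, width, 0, 255),
        b = map(mouseY, 0, height, 0, 255);

  tint(r, 150, b);
  image(cloudImg, 0, 0, width, height);
}

function drawNoiseInto(img) {
  img.loadPixels();

  for (let y = 0; y < img.height; ++y)
    for (let x = 0; x < img.width; ++x) {
      const n = noise(x * .02, y * .02, frameCount * .01) * 255,
            i = 4 * (y * img.width + x);

      img.pixels[i] = img.pixels[i + 1] = img.pixels[i + 2] = n;
      img.pixels[i + 3] = 255; // opaque
    }

  img.updatePixels();
}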


Even though I've refactored the original code for performance, I've only focused on some specific parts of it. The basis of its algorithm is still unknown to me.

However, I’ve collected & placed all constant values I’ve identified at the end of the Blurry class:

const BLUR_PROTO = Blurry.prototype;

BLUR_PROTO.W = Blurry.W = 100;
BLUR_PROTO.H = Blurry.H = 100;

BLUR_PROTO.RES_SCALE_W = Blurry.RES_SCALE_W = 6.5;
BLUR_PROTO.RES_SCALE_H = Blurry.RES_SCALE_H = 6;

BLUR_PROTO.SPD = Blurry.SPD = 5;
BLUR_PROTO.NOISE_SCALE = Blurry.NOISE_SCALE = .02;

BLUR_PROTO.OCTAVES = Blurry.OCTAVES = 4;
BLUR_PROTO.FALLOFF = Blurry.FALLOFF = .45;

BLUR_PROTO.LITTLE_ENDIAN = Blurry.LITTLE_ENDIAN = Blurry.isLittleEndian();

You may turn some of them into global variables instead if you intend to dynamically reassign their values during sketch execution.
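For example, assuming the Blurry instances read these values off the prototype each frame (which seems to be why they're mirrored onto BLUR_PROTO), a drag handler could retune them live; the mappings below are just made-up values:

function mouseDragged() {
  // Vertical pointer position zooms the noise in and out:
  BLUR_PROTO.NOISE_SCALE = map(mouseY, 0, height, .005, .05);

  // Horizontal pointer position speeds the animation up or down:
  BLUR_PROTO.SPD = map(mouseX, 0, width, 1, 10);
}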


Hi GoToLoop,

Wow! Definitely a challenging explanation, but it is super helpful for understanding the waters I am trying to tread with a mobile device. Thank you!

That makes sense - will give it a whirl. Thank you!