# 2D displacement mapping for water effect

Hi all,

I’ve recently stumbled on the work of Leonardo Solaas and was particularly interested in the technique used for his “Displacements” series.

According to the brief description provided by the author, the watery effect is the result of the combination of two images: one being used as a displacement map for the second one.

And, indeed, if you look carefully you can discern at the center of the picture the outline of a sloping body leaning towards the left. The shape of that body seems to “push” the pixels of the first picture all around, as if it were emerging from the water.

I suspect, however, that the process is a bit more complex than that, for at least two reasons:

• looking at the picture I can’t help but think that noise is involved at some point
• I’ve tried using a picture as a displacement map for another picture (of the same dimensions) myself, and the output was, of course, very different, not to say disappointing:

a painting by Dante Gabriel Rossetti is used as a displacement map for this Da Vinci portrait

here Perlin Noise is used for displacement mapping

Here the displacement rules are simple:

• take the brightness of each pixel in the first picture
• map that value to a smaller range: `v = map(brightness, 0, 255, 0, 20)`
• subtract that value from the x and y coordinates of the second picture: `set(x - v, y - v, color)`
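
The three rules above can be sketched in plain Python (the two nested lists below are hypothetical stand-ins for the pictures’ pixel data, not actual Processing images):

```python
def displace(src_brightness, target, max_shift=20):
    """Shift each pixel of `target` by an amount derived from the
    brightness of the corresponding pixel in `src_brightness`."""
    h, w = len(target), len(target[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # map brightness 0..255 onto a shift of 0..max_shift
            v = int(src_brightness[y][x] / 255.0 * max_shift)
            # subtract the shift from both coordinates, clamping at 0
            out[max(y - v, 0)][max(x - v, 0)] = target[y][x]
    return out
```

Note that this “scatter” formulation writes several source pixels to the same destination and leaves other destinations untouched, which is part of why the result looks so different from Solaas’s images.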

Notice, however, that the painting used as a displacement map is not visible here, unlike in Solaas’s work.

I therefore have two questions:

• What do you think the displacement rules could be in Solaas’s work?
• Do you think another step is involved in the process? If so, what could it be? (adding noise? implementing a convolution matrix?)

I would really like to hear your suggestions.


Hi Solub!

Looking at the images, I would guess that he’s using some form of fractional Brownian motion / domain distortion. You can check out Inigo Quilez’s article and the part on fBM in The Book of Shaders. (I should note that there are other kinds of warping, but this is a good place to start.)
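
For context, the core of Quilez’s domain distortion is composing a noise function with itself, roughly `f(p) = fbm(p + fbm(p + fbm(p)))`. A self-contained sketch in plain Python, where `fake_noise` is an arbitrary smooth stand-in for real Perlin noise and the offset constants are illustrative:

```python
import math

def fake_noise(x, y):
    # cheap smooth pseudo-noise in [0, 1]; stands in for Perlin noise
    return 0.5 + 0.5 * math.sin(x * 1.7 + math.cos(y * 2.3))

def fbm(x, y, octaves=4):
    # fractional Brownian motion: sum octaves of noise at
    # doubling frequency and halving amplitude
    total, amp, freq = 0.0, 0.5, 1.0
    for _ in range(octaves):
        total += amp * fake_noise(x * freq, y * freq)
        amp *= 0.5
        freq *= 2.0
    return total

def warp(x, y):
    # domain distortion: feed fbm its own output back in as an offset
    qx = fbm(x, y)
    qy = fbm(x + 5.2, y + 1.3)
    return fbm(x + 4.0 * qx, y + 4.0 * qy)
```

Evaluating `warp` over a grid and mapping the result to a color ramp produces the characteristic swirling blobs.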


@WakeMeAtThree – Thank you! It does look very similar to the effect seen in Solaas’s sketches. I’m all the more convinced that most of Solaas’s work relies on techniques learned from Patricio Gonzalez Vivo himself (they’re both from Buenos Aires).

@TheWizardBear – Hi, glad to hear you find the topic interesting. I’ve also implemented the warping function from Inigo Quilez’s article. Here’s a proof of concept in Processing (Python mode) if you’re interested (much slower than a shader implementation, of course).

```
def setup():
    size(1400, 900, P2D)
    loadPixels()

    for y in range(height):
        for x in range(width):
            i = x + y * width
            n = warp(x, y, .01, 255)
            pixels[i] = color(-n, n, n)
    updatePixels()

def warp(_x, _y, factor, n_range):
    n1 = noise((_x+0.0) * factor, (_y+0.0) * factor) * n_range
    n2 = noise((_x+5.2) * factor, (_y+1.3) * factor) * n_range
    q = PVector(n1, n2)

    n3 = noise(((_x + q.x * 4) + 1.7) * factor, ((_y + q.y * 4) + 9.2) * factor) * n_range
    n4 = noise(((_x + q.x * 4) + 8.3) * factor, ((_y + q.y * 4) + 2.8) * factor) * n_range
    r = PVector(n3, n4)

    return noise((_x + r.x * 4) * factor, (_y + r.y * 4) * factor) * n_range
```

There’s still a major issue that I can’t figure out: how to displace the pixels of a picture in accordance with the warping. It seems the displacement rules mentioned above (subtracting a noise value from the x and y coordinates) don’t work here, and I’m starting to wonder whether Solaas ever performed a traditional/real pixel displacement on these paintings.


Perhaps you could try the reverse:
rather than displacing the pixels (moving them according to some noise, or something else), draw each pixel but give it the color of a nearby pixel. Which nearby pixel’s color it takes would be determined by some noise.

Technically this isn’t warping the coordinates, but it might work?
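
This is the classic scatter-versus-gather distinction in image warping: writing to displaced destinations leaves holes, while reading from displaced sources fills every output pixel exactly once. A minimal plain-Python sketch of the gather version (the `noise_offset` callback is a stand-in for whatever noise function you use):

```python
def gather_warp(img, noise_offset):
    """For each output pixel, *read* from a noise-displaced source
    position instead of *writing* to a displaced destination."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dx, dy = noise_offset(x, y)
            # clamp the lookup so it stays inside the image
            sx = min(max(x + dx, 0), w - 1)
            sy = min(max(y + dy, 0), h - 1)
            out[y][x] = img[sy][sx]
    return out
```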


We’re getting closer.

It seems @TheWizardBear’s intuition was correct: the displacement happens in the color indexing, not in the pixels’ coordinates.

playing with different `factor` and `n_range` values

```
def setup():
    size(1476, 969, P2D)  # width and height should be similar to the picture's dimensions
    img = loadImage("...")  # load the source picture here (filename omitted)
    img.loadPixels()
    loadPixels()

    for y in range(img.height):
        for x in range(img.width):
            i = x + y * img.width
            n = warp(x, y, .003, 615)
            c = img.pixels[i-int(n)]  # selecting a color index according to the noise value
            pixels[i] = color(c)
    updatePixels()

def warp(_x, _y, factor, n_range):
    n1 = noise((_x+0.0) * factor, (_y+0.0) * factor) * n_range
    n2 = noise((_x+5.2) * factor, (_y+1.3) * factor) * n_range
    q = PVector(n1, n2)

    n3 = noise(((_x + q.x * 4) + 1.7) * factor, ((_y + q.y * 4) + 9.2) * factor) * n_range
    n4 = noise(((_x + q.x * 4) + 8.3) * factor, ((_y + q.y * 4) + 2.8) * factor) * n_range
    r = PVector(n3, n4)

    return noise((_x + r.x * 4) * factor, (_y + r.y * 4) * factor) * n_range
```

Thank you.


Upon further testing it seems that:

• increasing the noise range (`n_range`) gives thinner waves (a nice “sea of flames” kind of look) but the edges may get grainy – picture 1

• increasing the `factor` value raises the number of blobs – picture 2

• raising the final noise value (the `return` statement in the `warp` function) to the power of 2 or more makes the blobs bigger – pictures 3 and 4
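
A plausible explanation for that last observation (an assumption on my part, and it presupposes the noise value is normalized to [0, 1] before exponentiation): raising a value in [0, 1] to a power suppresses the mid-range while leaving the peaks almost untouched, so only the strongest noise regions keep a large displacement:

```python
def emphasize(n, power=2):
    # n is assumed to be a normalized noise value in [0, 1];
    # exponentiation keeps it in [0, 1] but flattens the
    # mid-range, leaving only the peaks prominent
    return n ** power
```

For instance `emphasize(0.5)` drops to 0.25 while `emphasize(1.0)` stays at 1.0.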

EDIT: Playing with “normal” single-layered noise is fun as well. I find the comparison between the two very interesting:

“normal” Perlin noise with very high noiseDetail() values – looking like an over-oxidized piece of rusty metal


Just sharing an alternative workaround based on fluid simulation.

Here, it is the output of the fluid computation (not the noise) that is added to the pixels’ index.
The Python version below is awfully slow; no doubt a Java version would be faster.

Script
```
#### Based on Matthijs Slijkhuis's fluid-simulation implementation (https://www.openprocessing.org/sketch/571415) ####

steps = 150
W, H = 500, 600
xscale = int(W / steps)
yscale = int(H / steps)
source = 200.0
d, db, vx, vy, vxb, vyb, diver, pres = [[[0.0 for y in range(steps+1)] for x in range(steps+1)] for i in range(8)]

def setup():
    size(W, H, P2D)
    frameRate(1000)
    smooth(8)

    global img
    img = loadImage("...")  # load the source picture here (filename omitted)
    img.loadPixels()

def draw():
    subtractPressure(vx, vy)
    syncBuffers()
    boundaries(vx)
    boundaries(vy)
    syncBuffers()

    loadPixels()
    for y in range(img.height):
        for x in range(img.width):
            v = sample(d, x/xscale, y/yscale)
            i = x + y * img.width
            c = img.pixels[i-int(v)]
            pixels[i] = color(c)
    updatePixels()

def subtractPressure(vx, vy):

    # Divergence of the velocity field
    for y in range(1, steps-1):
        for x in range(1, steps-1):
            diver[x][y] = (vx[x-1][y] - vx[x+1][y] + vy[x][y-1] - vy[x][y+1]) * 0.5

    # Pressure (20 Jacobi iterations)
    for i in range(20):
        for y in range(1, steps-1):
            for x in range(1, steps-1):
                pres[x][y] = (pres[x-1][y] + pres[x+1][y] + pres[x][y-1] + pres[x][y+1] + diver[x][y]) * 0.25

    # Velocity - Pressure
    for y in range(1, steps-1):
        for x in range(1, steps-1):
            vx[x][y] -= (pres[x+1][y] - pres[x-1][y]) * 0.5
            vy[x][y] -= (pres[x][y+1] - pres[x][y-1]) * 0.5

def advect(arr_in, arr_out, vel_x, vel_y):
    # advection step (this helper's name and signature are inferred:
    # the original snippet left these lines dangling)
    for y in range(steps):
        for x in range(steps):
            sx = x + vel_x[x][y]
            sy = y + vel_y[x][y]
            arr_out[x][y] = sample(arr_in, sx, sy)

def boundaries(v):
    for x in range(steps):
        v[x][0] = -v[x][1]
        v[x][steps-1] = -v[x][steps-2]
    for y in range(steps):
        v[0][y] = -v[1][y]
        v[steps-1][y] = -v[steps-2][y]

def sample(arr, x, y):
    if x < 0: x = 0
    if x > steps - 2: x = steps - 2
    if y < 0: y = 0
    if y > steps - 2: y = steps - 2
    x0 = int(x)
    y0 = int(y)
    x1 = x0+1
    y1 = y0+1
    tx = x-x0
    ty = y-y0
    lx0 = lerp(arr[x0][y0], arr[x1][y0], tx)
    lx1 = lerp(arr[x0][y1], arr[x1][y1], tx)
    return lerp(lx0, lx1, ty)

def syncBuffers():
    for y in range(steps):
        for x in range(steps):
            db[x][y] = d[x][y]
            vxb[x][y] = vx[x][y]
            vyb[x][y] = vy[x][y]

    if mouseX > xscale-xscale/2 and mouseX < W-xscale/2 and mouseY > yscale-yscale/2 and mouseY < H-yscale/2:
        x = int(round(mouseX / xscale))
        y = int(round(mouseY / yscale))

        if mousePressed:

            # Density
            d[x][y] += source * (1.0 / frameRate)
            d[x-1][y] += source/2 * (1.0 / frameRate)
            d[x+1][y] += source/2 * (1.0 / frameRate)
            d[x][y-1] += source/2 * (1.0 / frameRate)
            d[x][y+1] += source/2 * (1.0 / frameRate)

            # Velocity
            xv = (steps/float(width)) * (mouseX - pmouseX)*10
            yv = (steps/float(height)) * (mouseY - pmouseY)*10
            vx[x][y] -= xv * (2 / (abs(xv) + 1)) * 10
            vy[x][y] -= yv * (2 / (abs(yv) + 1)) * 10
```

Can you post a Processing Java version of this code? I can’t seem to get it to work when I port it to Java.

Here is my code.

```
PImage img;

void setup(){
  size(260, 512, P2D); // width and height should be similar to the picture's dimensions
  img = loadImage("..."); // load the source picture here (filename omitted)
  loadPixels();

  for(int x = 0; x < 260; x++){
    for(int y = 0; y < 512; y++) {
      int i = x + y * 260;
      float n = warp(x, y, .003, 615);
      color c = img.pixels[constrain(i-int(n), 0, img.pixels.length-1)];  // selecting a color's index according to the noise value

      // pixels[i] = color(constrain(c, 0, 255));
      pixels[i] = color(c);
    }
  }

  img.updatePixels();
  noLoop();
}

void draw() {
  image(img,0,0);
}

float warp(int _x, int _y, float factor, float n_range) {
  float n1 = noise((_x+0.0) * factor, (_y+0.0) * factor) * n_range;
  float n2 = noise((_x+5.2) * factor, (_y+1.3) * factor) * n_range;
  PVector q = new PVector(n1, n2);

  float n3 = noise(((_x + q.x * 4) + 1.7) * factor, ((_y + q.y * 4) + 9.2) * factor) * n_range;
  float n4 = noise(((_x + q.x * 4) + 8.3) * factor, ((_y + q.y * 4) + 2.8) * factor) * n_range;
  PVector r = new PVector(n3, n4);

  return noise((_x + r.x * 4) * factor, (_y + r.y * 4) * factor) * n_range;
}
```



Hi @Rochie,

Interesting.

There is a missing `img.` on lines 6 and 15, before `loadPixels` and `pixels`, but that omission doesn’t explain the problem. For some reason the Java sketch doesn’t seem to display an image when using the `warp()` function, whereas the Python version does.

Unfortunately I don’t have much time to track down the issue, and all I can do for now is suggest a different approach:

```
PImage img;

void setup(){
  size(515, 397, P2D); // width and height should be similar to the picture's dimensions
  img = loadImage("..."); // load the source picture here (filename omitted)
  img.loadPixels();

  for(int i = 0; i < img.width*img.height; i++){
    int x = i%img.width;
    int y = i/img.width;
    float n1 = noise(x * .005, y * .005) * img.width;
    float n2 = noise(x * .005, y * .005) * img.height;
    color c = img.pixels[int(n1) + int(n2) * img.width];  // selecting a color's index according to the noise value
    img.pixels[i] = color(c);
  }
  img.updatePixels();
}

void draw() {
  image(img,0,0);
}
```

EDIT:

For comparison, here is the same sketch in Java and Python mode. The Java version throws an `ArrayIndexOutOfBoundsException`, whereas the Python version runs normally.
Adding a modulo to wrap the `offset` around didn’t work.
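
A likely culprit (my guess): the two languages disagree on negative values. Java’s `%` keeps the sign of the dividend, so a negative `i - int(n)` stays negative after `% len`, while Python’s `%` always returns a non-negative result for a positive divisor, and a raw negative index is legal in Python anyway (it wraps around from the end of `pixels`). In Java, `Math.floorMod(i - int(n), len)` (Java 8+) should reproduce the Python behaviour. A quick check of the Python side:

```python
# Python's % takes the sign of the divisor, so a negative
# offset wraps back into the valid range (Java would give -7 here)
offset = (5 - 12) % 10   # 3 in Python

# negative list indices are also legal: they count from the end
pix = list(range(10))
tail = pix[-7]           # same element as pix[3]
```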

Java

```
PImage img;

void setup(){
  size(515, 397, P2D); // width and height should be similar to the picture's dimensions
  img = loadImage("..."); // load the source picture here (filename omitted)
  img.loadPixels();
  loadPixels();

  int len = img.pixels.length;

  for(int x = 0; x < img.width; x++){
    for(int y = 0; y < img.height; y++) {
      int i = x + y * img.width;
      float n = warp(x, y, .003, 615);
      int offset = (i-int(n))%len; // with a modulo the offset should wrap around
      color c = img.pixels[offset]; // --> ArrayIndexOutOfBoundsException
      pixels[i] = color(c);
    }
  }
  updatePixels();
}

float warp(int _x, int _y, float factor, int n_range) {
  float n1 = noise((_x+0.0) * factor, (_y+0.0) * factor) * n_range;
  float n2 = noise((_x+5.2) * factor, (_y+1.3) * factor) * n_range;
  PVector q = new PVector(n1, n2);

  float n3 = noise(((_x + q.x * 4) + 1.7) * factor, ((_y + q.y * 4) + 9.2) * factor) * n_range;
  float n4 = noise(((_x + q.x * 4) + 8.3) * factor, ((_y + q.y * 4) + 2.8) * factor) * n_range;
  PVector r = new PVector(n3, n4);

  return noise((_x + r.x * 4) * factor, (_y + r.y * 4) * factor) * n_range;
}
```

Python

```
def setup():
    size(515, 397, P2D)  # width and height should be similar to the picture's dimensions
    img = loadImage("...")  # load the source picture here (filename omitted)
    img.loadPixels()
    loadPixels()

    for y in range(img.height):
        for x in range(img.width):
            i = x + y * img.width
            n = warp(x, y, .003, 615)
            c = img.pixels[i-int(n)]  # selecting a color's index according to the noise value
            pixels[i] = color(c)
    updatePixels()

def warp(_x, _y, factor, n_range):
    n1 = noise((_x+0.0) * factor, (_y+0.0) * factor) * n_range
    n2 = noise((_x+5.2) * factor, (_y+1.3) * factor) * n_range
    q = PVector(n1, n2)

    n3 = noise(((_x + q.x * 4) + 1.7) * factor, ((_y + q.y * 4) + 9.2) * factor) * n_range
    n4 = noise(((_x + q.x * 4) + 8.3) * factor, ((_y + q.y * 4) + 2.8) * factor) * n_range
    r = PVector(n3, n4)

    return noise((_x + r.x * 4) * factor, (_y + r.y * 4) * factor) * n_range
```

Thank you for your prompt reply and the new approach – it works!
I am now trying to port it to p5.js so that I can show it in the browser.

Patrick Roche


Hey, great stuff! Very happy to find this thread.

Just a quick fix for the problem with the above code: the `offset` variable was returning a negative number. I removed the modulo by the length and clamp the offset to 0 whenever it goes negative.

```
PImage img;

void setup(){
  size(730, 549, P2D); // width and height should be similar to the picture's dimensions
  img = loadImage("..."); // load the source picture here (filename omitted)
  img.loadPixels();
  loadPixels();

  int len = img.pixels.length;
  //print(len);
  //print(img.width * img.height);
  for(int x = 0; x < img.width; x++){ // by row
    for(int y = 0; y < img.height; y++) { // by column
      int i = x + y * img.width; // 1D pixel index
      float n = warp(x, y, .003, 555);
      int offset = (i-int(n)); //%len; // the modulo could leave a negative offset
      if (offset<0){
        offset = 0;
      }
      color c = img.pixels[offset]; // no longer throws ArrayIndexOutOfBoundsException

      pixels[i] = color(c);
    }
  }
  updatePixels();
}

float warp(int _x, int _y, float factor, int n_range) {
  float n1 = noise((_x+0.0) * factor, (_y+0.0) * factor) * n_range;
  float n2 = noise((_x+5.2) * factor, (_y+1.3) * factor) * n_range;
  PVector q = new PVector(n1, n2);

  float n3 = noise(((_x + q.x * 4) + 1.7) * factor, ((_y + q.y * 4) + 9.2) * factor) * n_range;
  float n4 = noise(((_x + q.x * 4) + 8.3) * factor, ((_y + q.y * 4) + 2.8) * factor) * n_range;
  PVector r = new PVector(n3, n4);

  return noise((_x + r.x * 4) * factor, (_y + r.y * 4) * factor) * n_range;
}
```

Please, for the love of god, I need this for p5.js to use on my website:
cybergrunge.net/html random art generator
