# OpenCV update, Heaviside Step Function, deriving parametric functions of shapes, Sobel filter

That’s wild. Are you trying to implement the entire function in code?

Not that wild, it's just complex. But you can start with simple ones.
Let's take a random simple parametric curve function from their site, like this one.
The code in Processing is:

float dt; // delta stepper used in for loop. has to be small enough to give a good resolution
float x, px, y, py;  // actual and previous x/y values
int sf; // scale factor

void setup() {
size(800, 800);
background(255);
smooth();
sf = 300;
dt = PI/1500;
translate(width/2, height/2);
for (float t = dt; t < TWO_PI; t += dt) {
x = sin(10*t);
y = sin(8*t);
line(sf*px, -sf*py, sf*x, -sf*y);
px = x;
py = y;
}
}
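For anyone who wants to check the numbers outside Processing, here is a plain-Java sketch of the same point loop (a sketch under my own naming; the class and method names are not from the post):

```java
public class Lissajous {
    // Generate n+1 points of x = sin(10t), y = sin(8t) for t in [0, 2*pi],
    // matching the curve the Processing sketch draws
    static double[][] points(int n) {
        double[][] pts = new double[n + 1][2];
        for (int i = 0; i <= n; i++) {
            double t = 2 * Math.PI * i / n;
            pts[i][0] = Math.sin(10 * t);
            pts[i][1] = Math.sin(8 * t);
        }
        return pts;
    }

    public static void main(String[] args) {
        System.out.println(points(3000).length);
    }
}
```

Because 10 and 8 are integers, the curve is closed: the first and last sample coincide at the origin, which is also why the sketch above can start drawing from px = py = 0 without a stray segment.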


The complex ones like the Einstein figure work the same way.

But they use the step function to omit/skip/hide certain parts of the curves. And that is the part I'm struggling with.
The WolframAlpha site is really interesting. You can abstract functions from images etc. But it's paid. An introduction can be found here and here.
There you can find, for instance, the image below, made solely with functions.


@noel, I’m not sure about the theta issue specifically, but have you looked at @quark’s Jasmine library?

http://www.lagers.org.uk/jasmine/index.html

It might be possible for you to feed in your equation components in something more like their natural equation form, rather than hand-translating them into Java functions.

That said, I haven’t tried a project like this with Jasmine before – it just occurred to me that it might be possible and labor saving, especially with so many expressions in things like person curves.


Interesting problem.

Some points:

• the equation you present is invalid because the brackets are unbalanced; I assume there is an opening bracket before the sin
• the function as presented will return 0 outside the range 27\pi to 31\pi
• inside the range 27\pi to 31\pi it will return a sinusoidal value between 246 and 248

The solution is to create another function for \theta(v), which in the code below is called h(v).



void setup() {
for (float t = 26.9*PI; t < 31.1*PI; t += 0.1) {
println(t/PI, func(t));
}
}

float func(float t) {
float result = sin(9*t + 14f/3f) + 247;
result *= h(31 * PI - t);
result *= h(t - 27 * PI);
return result;
}

// Calculate the Heaviside step function H(t + n*PI)
// Return 0 if t + n*PI < 0
// Return 1 if t + n*PI > 0
// otherwise return 0.5
float h(float v) {
return v < 0 ? 0 : v > 0 ? 1 : 0.5;
}


Hi @quark .
Thank you for looking into this and providing this function.
Unfortunately I still have difficulties implementing it the way it is done on the WolframAlpha site.
It's not that I want to plot the existing functions on their site; rather, I am trying to discover how they derive those functions from an image. I do not want to pay monthly to use their programs and learn yet another language (the Wolfram language), but by searching through their blogs I hope to discover something.
I would really appreciate it if you could have a look at one of their blogs. Scrolling down one third of this blog page, you will find a plot function of the twitter bird. As you see below, I coded this shape in a parametric way. How can I use your step function in this code?
Any hint about how they derive those functions will be a great help.
Thanks once again.

float[][] cc = {{210,168},{400,148},{210,280},{70,185},{130,215},{180,216},{130,110},{185,135},{292,-75},{430,30},{493,-35},{410,10},{450,-75}}; // circle center
float[][] ca = {{-.15*PI,.5*PI},{.66*PI,1.56*PI},{-.38*PI,0*PI},{0*PI,.2*PI},{-.07*PI,.06*PI},{-.54*PI,-.08*PI}, {-.18*PI,-.08*PI},
{-.67*PI,-.23*PI},{-.29*PI,-.02*PI},{.18*PI,.3*PI},{0.02*PI,.12*PI},{.22*PI,.35*PI},{.08*PI,.16*PI}}; // circle arc {start,stop}
int[] radius = {300, 105, 110, 250, 110, 110, 110, 110, 255, 148, 160, 150, 177}; // circle radius (restored from the author's later post, reordered to match cc)

void setup() {
size(620, 520);
background(255);
stroke(50, 50, 255);
float step = 2*PI/60;
float px = 0, py = 0;
for (int i = 0; i < 13; i++) {
push();
translate(cc[i][0], cc[i][1]);
int j = 0;
for (float t = ca[i][0]; t < ca[i][1]+step; t += step) {
// point on the arc (restored; the forum post lost these lines)
float x = radius[i]*sin(t);
float y = radius[i]*cos(t);
if (j == 0) { px = x; py = y; } // avoiding -center to start line- in first for loop
line(x, y, px, py);
px = x; py = y; j++;
}
pop();
}
// fillShape();
}

The fillShape function (not relevant to the question):
import java.awt.Point;
import java.util.Queue;
import java.util.LinkedList;
PImage img;
// (restored) the post lost the queue set-up; a BFS queue seeded inside the shape
Queue<Point> queue = new LinkedList<Point>();

void fillShape() {
img = get();
img.loadPixels();
queue.add(new Point(width/2, height/2)); // (assumption) seed point inside the shape
while (!queue.isEmpty()) {
Point p = queue.remove();
if (check(p.x, p.y)) {
queue.add(new Point(p.x+1, p.y));
queue.add(new Point(p.x-1, p.y));
queue.add(new Point(p.x, p.y+1));
queue.add(new Point(p.x, p.y-1));
}
}
img.updatePixels();
image(img, 0, 0, width, height);
}

boolean check(int x, int y) {
if (x < 0 || y < 0 || y >= img.height || x >= img.width) return false;
int pp = img.pixels[x+(y*img.width)];
if (pp != color(255)) return false;
img.pixels[x + (y * img.width)] = color(0, 167, 230);
return true;
}
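The flood fill that the fragment above sketches can be written out in full on a plain grid. This is a minimal BFS version; the grid representation, seed point and fill value are my own placeholders, not taken from the post:

```java
import java.awt.Point;
import java.util.ArrayDeque;
import java.util.Queue;

public class FloodFill {
    // Replace every 0-cell reachable from (sx, sy) with fillValue,
    // spreading over the four direct neighbours
    static void fill(int[][] grid, int sx, int sy, int fillValue) {
        Queue<Point> queue = new ArrayDeque<Point>();
        queue.add(new Point(sx, sy));
        while (!queue.isEmpty()) {
            Point p = queue.remove();
            // skip out-of-bounds cells, walls, and already-filled cells
            if (p.y < 0 || p.y >= grid.length || p.x < 0 || p.x >= grid[0].length) continue;
            if (grid[p.y][p.x] != 0) continue;
            grid[p.y][p.x] = fillValue;
            queue.add(new Point(p.x + 1, p.y));
            queue.add(new Point(p.x - 1, p.y));
            queue.add(new Point(p.x, p.y + 1));
            queue.add(new Point(p.x, p.y - 1));
        }
    }
}
```

In the sketch the grid cells correspond to img.pixels entries and the wall value to the stroke colour.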



They seem to use a wide range of techniques depending on the image type, and the maths is hidden inside the Wolfram language.

The HSF is used to limit which parts of the individual curves are included in the calculation, so it is useful if you are trying to describe the shape using a single pair of parametric curves, i.e. x=f(t) and y=g(t), which is what they have in the Einstein face drawing.
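To make that gating trick concrete (my own toy example, not one of Wolfram's actual formulas): each curve piece is multiplied by a pair of step functions that switch it on only inside its own t-interval, and the pieces are then summed into one x(t):

```java
public class StepGate {
    // Heaviside step: 0 below zero, 1 above, 0.5 at zero (as in quark's h(v))
    static double h(double v) { return v < 0 ? 0 : v > 0 ? 1 : 0.5; }

    // One x(t) built from two pieces: a cosine part active on (0, pi)
    // and a line part active on (pi, 2*pi); outside its interval each
    // piece is multiplied to zero by the step-function pair
    static double x(double t) {
        double piece1 = Math.cos(t);
        double piece2 = 2 * t / Math.PI - 3;
        return piece1 * h(t) * h(Math.PI - t)
             + piece2 * h(t - Math.PI) * h(2 * Math.PI - t);
    }
}
```

A y(t) is built the same way, so a single parameter t can sweep through many disconnected strokes of a drawing.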

In your code you are creating the twitter outline using a series of arcs, with the ca array controlling the portion of each circle to draw, so you don't need the HSF.

The approach you take from here depends on whether you want to

1. create a single pair of parametric equations, x=f(t) and y=g(t), to describe the whole contour
2. describe the contour as a collection of circular arcs

If you want (1) then you need to modify the data so that the contour is roughly centred about the coordinates [0,0]. If the contour is offset from the origin then most values of the parametric parameter t would cause the HSF to output 0, which is hugely wasteful.

If you want (2) I suggest that you create a class that represents an arc; it would have fields for the centre, radius and angle limits. Then have an array of arcs; this is better than having separate arrays for centres, angle limits and radii.

Personally I would go for (2) because it is easy to create a small sketch that takes an image and, with some user input, calculates the arc values needed for the contour.

When drawing the arcs, instead of translating the graphics matrix you calculate the arc cartesian values with
x = cx + radius * sin(t) and
y = cy + radius * cos(t)
From there it should be possible to generate the parametric equations for method (1), though I would need to think more on the actual algorithm.
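Those two formulas are easy to sanity-check standalone. A minimal Java version (class and method names are mine, following the sin-for-x, cos-for-y convention above):

```java
public class ArcPoints {
    // Point on a circular arc: x = cx + r*sin(t), y = cy + r*cos(t)
    static double[] arcPoint(double cx, double cy, double r, double t) {
        return new double[] { cx + r * Math.sin(t), cy + r * Math.cos(t) };
    }

    // Sample n+1 evenly spaced points between angles a0 and a1
    static double[][] sampleArc(double cx, double cy, double r,
                                double a0, double a1, int n) {
        double[][] pts = new double[n + 1][];
        for (int i = 0; i <= n; i++) {
            double t = a0 + (a1 - a0) * i / n;
            pts[i] = arcPoint(cx, cy, r, t);
        }
        return pts;
    }
}
```

Note that with this convention t = 0 points straight down the +y axis, not along +x as with the usual cos/sin pairing.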


Thanks. Great hint. From the center! That immediately reminded me of Ptolemy (proving that the earth is the center of the universe). While I took the second approach, WolframAlpha likely uses the first, using the Fourier transform in their math. So I did a search and found this!

See the Pen Sketchable Fourier Transform by marl0ny (@marl0ny) on CodePen.

See the Pen Drawing Paths with Harmonics by Mitch Anderson (@tmanderson) on CodePen.

Plenty of material to study now!
Starting here.


I really enjoyed studying this. Daniel really is a great teacher. Everything is in p5.js and I couldn't find ready-made code for the 'fourier complex' sketch example in Processing's Java mode, so I will leave it here. It also includes a way of evenly distributing points on the curves. By using parametric functions, the huge data file is no longer necessary.
So did I come closer to discovering how Wolfram derives those functions from an image? Nope.
But paying more attention, I noticed that their functions use as many quotients as I use floats in my code, and that is what I'm after: plotting images with the least amount of data.
So my next project is drawing freehand curves and transforming them into Bézier curves.
If anyone can give some guidelines on how to start doing this I would appreciate it.

//  Code based on teachings from Daniel Sauter https://www.youtube.com/watch?v=7_vKzcgpfvU&t=1314s

float[][] cc = {{210, 168}, {430, 30}, {493, -35}, {410, 10}, {450, -75}, {400, 148}, {292, -75}, {185, 135}, {130, 110}, {180, 216}, {130, 215}, {210, 280}, {70, 185}}; // circle center
float[][] ca = {{-.15, .5}, {.18, .3}, {0.02, .12}, {.22, .35}, {.08, .16}, {.66, 1.56}, {-.29, -.02}, {-.67, -.23}, {-.18, -.08}, {-.54, -.08}, {-.07, .06}, {-.38, 0}, {0, .2}}; // circle arc {start,stop}
int[] radius = {300, 148, 160, 150, 177, 105, 255, 110, 110, 110, 110, 110, 250};
float[] bird_x = new float[0];
float[] bird_y = new float[0];
float[] swap_x = new float[0];
float[] swap_y = new float[0];
int count = 0, tel;
float diameter, amplitude, time, interval, frequency, phase;
PVector pos, prev;
ArrayList<PVector> path, drawing;
ArrayList<Complex> x, fourier;
boolean allow = true;

void setup() {
size(800, 600);
frameRate(25);
pos = new PVector();
prev = new PVector();
x = new ArrayList<Complex>();
fourier = new ArrayList<Complex>();
path = new ArrayList<PVector>();
float step = .037;
float px = 0, py = 0;
int tel = 0;
for (int i = 0; i < 13; i++) {
push();
translate(cc[i][0], cc[i][1]);
int j = 0;
swap_x = new float[0];
swap_y = new float[0];
for (float t = ca[i][0]*PI; t < ca[i][1]*PI+1.7*step; t += step/radius[i]*200) {
// point on the arc (restored; the forum post lost these lines)
float ax = radius[i]*sin(t);
float ay = radius[i]*cos(t);
swap_x = append(swap_x, ax+cc[i][0]);
swap_y = append(swap_y, ay+cc[i][1]);
j++;
}
if (i != 0 && i % 2 == 0) {
swap_x = reverse(swap_x);
swap_y = reverse(swap_y);
}
bird_x = concat(bird_x, swap_x);
bird_y = concat(bird_y, swap_y);
pop();
}
PVector[] vectors = new PVector[bird_x.length];
for (int i = 0; i < bird_x.length; i++) {
vectors[i] = new PVector();
vectors[i].x = bird_x[i];
vectors[i].y = bird_y[i];
}
drawing = new ArrayList<PVector>();
for (int i = 0; i < vectors.length; i++) {
drawing.add(vectors[i]); // (restored) collect the sampled contour points
}
for (int i = 0; i < drawing.size()-1; i++) {
PVector point = drawing.get(i);
Complex c = new Complex(point.x, point.y);
x.add(c); // (restored) the complex input list for the DFT
}
fourier = dft(x);
sort(fourier);
}

void draw() {
background(255);
noFill();
PVector vertex = drawEpicycles(width/2, height/2, fourier);
path.add(0, vertex); // (restored) record the epicycle tip to draw the traced path
push();
strokeWeight(5);
strokeJoin(ROUND);
stroke(0, 0, 255);
beginShape();
for (int i = path.size()-1; i > 0; i--) {
PVector p = path.get(i);
vertex(p.x, p.y);
}
endShape();
pop();
time += interval;
if (time > TWO_PI) {
time = 0;
path.clear();
}
}

PVector drawEpicycles(float x, float y, ArrayList<Complex> fourier) {
pos.x = x;
pos.y = y;
interval = TWO_PI/fourier.size();
for (int i = 1; i < fourier.size(); i++) {
prev.x = pos.x;
prev.y = pos.y;
Complex epicycle = fourier.get(i);
frequency = epicycle.freq;
amplitude = epicycle.amp;
phase = epicycle.phase;
diameter = amplitude*3;
float theta = frequency * time + phase;
pos.x += amplitude * cos(theta);
pos.y += amplitude * sin(theta);
noFill();
stroke(0, 150, 0);
if (i == 1) diameter = 400;
ellipse(prev.x, prev.y, diameter, diameter);
stroke(0);
line(prev.x, prev.y, pos.x, pos.y);
}
return new PVector(pos.x, pos.y);
}

ArrayList<Complex> dft(ArrayList<Complex> x) {
int N = x.size();
ArrayList<Complex> X = new ArrayList<Complex>(N);
for (int k = 0; k < N; k++) {
Complex sum = new Complex(0, 0);
for (int n = 0; n < N; n++) {
float phi = (TWO_PI * k * n) / N;
Complex c = new Complex(cos(phi), -sin(phi));
sum = sum.add(x.get(n).mult(c)); // (restored) accumulate the transform sum
}
sum = sum.div(N);
float freq = k;
float amp = sum.mag();
float phase = atan2(sum.im, sum.re); // (restored) phase of this coefficient
X.add(new Complex(sum.re, sum.im, freq, amp, phase));
}
return X;
}

void sort(ArrayList<Complex> c) {
int n = c.size();
for (int i = 0; i < n-1; i++) {
int temp = i;
for (int j = i+1; j < n; j++) {
if (c.get(j).amp > c.get(temp).amp) temp = j;
}
Complex temp2 = c.get(temp);
c.set(temp, c.get(i));
c.set(i, temp2);
}
}

class Complex {
float re;
float im;
float freq;
float amp;
float phase;

Complex(float r, float i) {
re = r;
im = i;
}

Complex(float r, float i, float f, float a, float p) {
re = r;
im = i;
freq = f;
amp = a;
phase = p;
}

Complex mult(Complex c_obj) {
float rea = re * c_obj.re - im * c_obj.im;
float ima = re * c_obj.im + im * c_obj.re;
return new Complex(rea, ima);
}

Complex add(Complex c_obj) { // (restored method header)
return new Complex(re + c_obj.re, im + c_obj.im);
}

Complex div(float divisor) {
float r = re / divisor;
float i = im / divisor;
return new Complex(r, i);
}

float phase() { // (restored method header)
return atan2(im, re);
}

float mag() {
return sqrt(re * re + im * im);
}
}
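The dft() above is the standard O(N^2) discrete Fourier transform with 1/N normalisation. A compact standalone version using plain arrays instead of the Complex class (my own arrangement) makes it easy to verify against a known signal such as a pure cosine:

```java
public class Dft {
    // DFT of a real signal with 1/N normalisation:
    // X[k] = (1/N) * sum_n x[n] * e^(-2*pi*i*k*n/N)
    // Returns { re[], im[] }.
    static double[][] dft(double[] x) {
        int n = x.length;
        double[] re = new double[n], im = new double[n];
        for (int k = 0; k < n; k++) {
            for (int t = 0; t < n; t++) {
                double phi = 2 * Math.PI * k * t / n;
                re[k] += x[t] * Math.cos(phi);
                im[k] -= x[t] * Math.sin(phi);
            }
            re[k] /= n;
            im[k] /= n;
        }
        return new double[][] { re, im };
    }
}
```

With this normalisation a unit cosine at frequency 1 splits its energy into coefficients k = 1 and k = N-1, each with amplitude 0.5, which is what drives the epicycle radii in the sketch.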



In addition to Dan’s tutorial, I also enjoy this Alex Miller illustrated explainer:

It isn’t in Processing / p5.js – but the discussion with illustrations is still helpful.


The Wolfram people drawing parametric equations used complex numbers!! Now that would be fun.

What do you mean by that?

I meant the challenge to learn why they use complex numbers and recreate the techniques would be fun.

My Steganos library came about after watching the film Along Came a Spider, it was the first time I had heard of steganography (hiding in plain sight) and after some research I thought it would be fun to implement it. So Along Came Steganos and although I have never used the library for real myself it doesn’t spoil the satisfaction of creating it.


I found this video. I hadn't watched it before. Very good; I like Freeman's acting. Recently I watched "Seven". Also good, of the same genre.

I still believe that the Wolfram people first trace the image, using posterize techniques etc., and then simplify the Béziers to obtain the points needed in their equations. So I immersed myself in the study of Bézier curves/splines, and found a good paper here. (I have read in the comments there that you are planning to write a 2D/3D Bézier library in Java. Looking forward to that!)

So I have tried to write code to transform a bitmap shape into a vectorized one, with the fewest points possible. I started to write a rasterizing code, but then I read a question about the detection of a shape where @jb4x solved it by using the OpenCV lib, which does the job perfectly. Then I minimized the number of points by just skipping points, but with poor results (42). Optimizing with the Ramer-Douglas-Peucker algorithm gave a more consistent result. (I'm so happy that Processing has so many resources to study, like Daniel Shiffman's and yours as well.)

Although the code can be useful as a tool for converting bitmaps to vector shapes, it's not efficient enough for me. The twitter shape uses 13 points/curves (13 circle arcs), meaning 52 floats; the same amount of quotients that the twitter Wolfram function uses. I've tried to optimize further by calculating the difference of the angle of the two lines attached to each point, storing them in an array, and, keeping track of the indexes of the maximum values, only removing points at the widest angles, preserving the sharper ones. But without success. Now I've run out of ideas and would appreciate suggestions on how to go further on my quest to write functions of irregular shapes.

// Ramer Douglas Peucker algorithm by teachings Daniel Shiffman  https://youtu.be/nSYw9GrakjY

import gab.opencv.*;
import java.awt.Point;
import java.util.Queue;
import java.util.*;

float[][] cc = {{210, 168}, {430, 30}, {493, -35}, {410, 10}, {450, -75}, {400, 148}, {292, -75}, {185, 135}, {130, 110}, {180, 216}, {130, 215}, {210, 280}, {70, 185}}; // circle center
float[][] ca = {{-.15, .5}, {.18, .3}, {0.02, .12}, {.22, .35}, {.08, .16}, {.66, 1.56}, {-.29, -.02}, {-.67, -.23}, {-.18, -.08}, {-.54, -.08}, {-.07, .06}, {-.38, 0}, {0, .2}}; // circle arc {start,stop}
int[] radius = {300, 148, 160, 150, 177, 105, 255, 110, 110, 110, 110, 110, 250};

PImage img, imgf;
ArrayList<PVector> all_points = new ArrayList<PVector>();
ArrayList<PVector> rdp_points = new ArrayList<PVector>();
ArrayList<Contour> contours;
int index, total_points;
PShape s;
float[][] knots;
float loc = 15, num, angle, epsilon = 1;
OpenCV ocv;

void setup() {
size(620, 520);
background(0);
textSize(18);
stroke(255);
drawShapeBorder();
fillShape();
stroke(0);
drawShapeBorder();
//sharp angle test
//star(width/2, height/2, 60, 140, 5);
//fill(255);
//fillShape();
ocv = new OpenCV(this, img);
strokeWeight(1);
contours = ocv.findContours();
println("number of contours = "+contours.size());
background(255);
Contour contour = contours.get(1); // the parametric bitmap gives 2 contours; write 0 for the star test
total_points = contour.getPoints().size();
knots = new float[total_points][2];
for (PVector point : contour.getPoints()) {
point = new PVector(point.x, point.y);
knots[index][0] = point.x;
knots[index][1] = point.y;
index++;
}
PVector[]ak = new PVector[knots.length];
for (int v = 0; v <= knots.length-1; v++) {
ak[v] = new PVector(knots[v][0], knots[v][1]);
}
for (PVector point : contour.getPoints()) {
point(point.x, point.y);
all_points.add(new PVector(point.x, point.y)); // (restored) collect the contour for the RDP pass
}
}

void draw() {
background(255);
rdp_points = new ArrayList<PVector>();
int total = all_points.size();
PVector start = all_points.get(0);
PVector end = all_points.get(total-1);
rdp_points.add(start); // (restored) RDP always keeps the two end points
rdp(0, total-1, all_points, rdp_points);
rdp_points.add(end);
stroke(220, 220, 255);
strokeWeight(4);
noFill();
beginShape();
for (PVector v : all_points) {
vertex(v.x, v.y);
}
endShape();
stroke(0);
strokeWeight(1);
beginShape();
PVector v1 = new PVector();
v1 = rdp_points.get(rdp_points.size()-1);
curveVertex(v1.x, v1.y);
for (PVector v : rdp_points) {
curveVertex(v.x, v.y);
fill(255, 0, 0);
ellipse(v.x, v.y, 5, 5);
}
PVector v2 = new PVector();
v2 = rdp_points.get(0);
curveVertex(v2.x, v2.y);
noFill();
endShape();
fill(0);
text("epsilon: " + nf(epsilon, 2, 2), 150, 80);
text("Total Points: " + (rdp_points.size()-1), 50, 50);
drawSlider();
fill(0);
text("Print", 510, 430);
noFill();
strokeWeight(2);
rect(490, 410, 80, 28);
}

void mousePressed() {
if (mouseX > 490 && mouseY > 410 && mouseY < 438) {
println();
print("float[] x = {");
for (PVector v : rdp_points) {
print(v.x+", ");
}
print("};");
println();
print("float[] y = {");
for (PVector v : rdp_points) {
print(v.y+", ");
}
print("};");
}
}

void mouseDragged() {
loc = mouseX;
if (loc < 15) loc = 15;
if (loc > width-15) loc = width-15;
epsilon =  int(map(loc-10.55, 0.0, width, 1.0, 17.0));
}

void rdp(int startIndex, int end_index, ArrayList<PVector> all_points, ArrayList<PVector> rdp_points) {
int next_index = findFurthest(all_points, startIndex, end_index);
if (next_index > 0) {
if (startIndex != next_index) {
rdp(startIndex, next_index, all_points, rdp_points);
}
rdp_points.add(all_points.get(next_index)); // (restored) keep the furthest point
if (end_index != next_index) {
rdp(next_index, end_index, all_points, rdp_points);
}
}
}

int findFurthest(ArrayList<PVector> points, int a, int b) {
float record_distance = -1;
PVector start = points.get(a);
PVector end = points.get(b);
int furthest_index = -1;
for (int i = a+1; i < b; i++) {
PVector current_point = points.get(i);
float d = lineDist(current_point, start, end);
if (d > record_distance) {
record_distance = d;
furthest_index = i;
}
}
if (record_distance > epsilon) {
return furthest_index;
} else {
return -1;
}
}

float lineDist(PVector c, PVector a, PVector b) {
PVector norm = scalarProjection(c, a, b);
return PVector.dist(c, norm);
}

PVector scalarProjection(PVector p, PVector a, PVector b) {
PVector ap = PVector.sub(p, a);
PVector ab = PVector.sub(b, a);
ab.normalize(); // Normalize the line
ab.mult(ap.dot(ab));
PVector normal_point = PVector.add(a, ab); // (restored) foot of the perpendicular
return normal_point;
}

void drawShapeBorder() {
float step = 2*PI/60;
float px = 0, py = 0;
for (int i = 0; i < 13; i++) {
push();
translate(cc[i][0], cc[i][1]);
int j = 0;
for (float t = ca[i][0]*PI; t < ca[i][1]*PI+step; t += step) {
// point on the arc (restored; the forum post lost these lines)
float x = radius[i]*sin(t);
float y = radius[i]*cos(t);
if (j == 0) {
px = x;
py = y;
}
line(x, y, px, py);
px = x;
py = y;
j++;
}
pop();
}
}

void fillShape() {
img = get();
img.loadPixels();
// (restored) the post lost the queue set-up; a local BFS queue
Queue<Point> queue = new LinkedList<Point>();
queue.add(new Point(width/2, height/2)); // (assumption) seed point inside the shape
while (!queue.isEmpty()) {
Point p = queue.remove();
if (check(p.x, p.y)) {
queue.add(new Point(p.x+1, p.y));
queue.add(new Point(p.x-1, p.y));
queue.add(new Point(p.x, p.y+1));
queue.add(new Point(p.x, p.y-1));
}
}
img.updatePixels();
}

boolean check(int x, int y) {
if (x < 0 || y < 0 || y >= img.height || x >= img.width) return false;
int pp = img.pixels[x+(y*img.width)];
if (pp != color(0)) return false;
img.pixels[x + (y * img.width)] = color(255);
return true;
}

void drawSlider() {
push();
noStroke();
fill(255);
rect(0, height-30, width, 30);
fill(200, 200, 255);
rect(10, height-15, width-20, 5);
fill(150, 150, 255);
ellipse(loc, height-13, 15, 15);
pop();
}

void star(float x, float y, float radius1, float radius2, int npoints) {
float angle = TWO_PI / npoints;
float halfAngle = angle/2.0;
beginShape();
for (float a = 0; a < TWO_PI; a += angle) {
float sx = x + cos(a) * radius2;
float sy = y + sin(a) * radius2;
vertex(sx, sy);
sx = x + cos(a+halfAngle) * radius1;
sy = y + sin(a+halfAngle) * radius1;
vertex(sx, sy);
}
endShape(CLOSE);
}
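For reference, the same Ramer-Douglas-Peucker recursion in standalone form (my own arrangement; it returns a new list instead of filling rdp_points, but the furthest-point logic is the same):

```java
import java.util.ArrayList;
import java.util.List;

public class Rdp {
    // Perpendicular distance from point p to the line through a and b
    static double lineDist(double[] p, double[] a, double[] b) {
        double dx = b[0] - a[0], dy = b[1] - a[1];
        double len = Math.hypot(dx, dy);
        if (len == 0) return Math.hypot(p[0] - a[0], p[1] - a[1]);
        return Math.abs(dy * (p[0] - a[0]) - dx * (p[1] - a[1])) / len;
    }

    // Keep only the points further than epsilon from the current chord
    static List<double[]> simplify(List<double[]> pts, double epsilon) {
        if (pts.size() < 3) return new ArrayList<double[]>(pts);
        int index = -1;
        double maxDist = 0;
        for (int i = 1; i < pts.size() - 1; i++) {
            double d = lineDist(pts.get(i), pts.get(0), pts.get(pts.size() - 1));
            if (d > maxDist) { maxDist = d; index = i; }
        }
        List<double[]> out = new ArrayList<double[]>();
        if (maxDist > epsilon) {
            // recurse on both halves around the furthest point
            List<double[]> left = simplify(pts.subList(0, index + 1), epsilon);
            List<double[]> right = simplify(pts.subList(index, pts.size()), epsilon);
            out.addAll(left.subList(0, left.size() - 1)); // drop the shared point once
            out.addAll(right);
        } else {
            out.add(pts.get(0));
            out.add(pts.get(pts.size() - 1));
        }
        return out;
    }
}
```

Raising epsilon discards more points, which is exactly what the slider in the sketch controls.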



First, very impressive sketch, although a few comments wouldn't go amiss. It is taking me some time to work out what is happening.

It seems to me that you have succeeded (by using OpenCV) and that your only issue is trying to reduce the amount of data used for the vector image but still match the bitmap image within acceptable visual limits.

If you are starting with an image of (w, h) pixels then the amount of data is large (w \times h \times 4 bytes), which is a lot more than produced by OpenCV. As you have discovered, reducing the vector data reduces the accuracy of reproduction, but that should not be surprising.

Just noticed this statement. Is your ultimate aim to do the same as Wolfram?

Yes. I remember how exciting I found my high-school math teacher's statement that every object/form could be expressed as a function. (Today I believe the whole of existence is solely a mathematical structure.)
When I import the same twitter bitmap into Inkscape and use the trace tool with the custom value, it vectorizes it with 27 points (knots). Reducing them to 23 will still give a more accurate shape than my code does. Inkscape employs the Potrace bitmap tracing engine. You can find a javascript port of it here. I didn't have the time to go through it yet, because I haven't even finished my study of the paper I linked above. Wolfram somehow does a better job, because they traced it back to the theoretical lowest point number (13). I still believe that using angle detection could optimize the code. The code below prints the angle array, but so far I have not been able to reduce the knots evenly. I mean not only removing the knots at the widest angles, because this would deform the shape, but also removing some at the sharper angles, with less frequency. If you have an idea to 'balance' that, please let me know.

float[] x = {381.0, 335.0, 301.0, 295.0, 275.0, 229.0, 183.0, 137.0, 94.0, 75.0, 79.0, 104.0, 70.0, 74.0, 101.0, 146.0, 108.0, 144.0, 190.0, 184.0, 138.0, 92.0, 94.0, 140.0, 186.0, 232.0, 278.0, 324.0, 370.0, 416.0, 455.0, 482.0, 498.0, 508.0, 509.0, 551.0, 513.0, 544.0, 502.0, 461.0, 415.0, 381.0  };
float[] y = {43.0, 64.0, 109.0, 155.0, 179.0, 171.0, 154.0, 126.0, 84.0, 118.0, 164.0, 210.0, 202.0, 248.0, 294.0, 322.0, 324.0, 368.0, 388.0, 406.0, 424.0, 433.0, 445.0, 459.0, 467.0, 467.0, 460.0, 446.0, 422.0, 386.0, 341.0, 295.0, 249.0, 203.0, 157.0, 116.0, 122.0, 77.0, 93.0, 62.0, 43.0, 43.0, };
float[] angles = new float[x.length];
FloatList angles_f_list = new FloatList();
IntList index_list_sorted = new IntList();
int tel;

void setup() {
size(620, 520);
background(255);
for (int i = 1; i < x.length-1; i++) { // Calculating angle between the two lines attached to each point
PVector p1 = new  PVector();
PVector p2 = new  PVector();
p1.x = x[i]-x[i+1];
p1.y = y[i+1]-y[i];
p2.x = x[i-1]-x[i];
p2.y = y[i]-y[i-1];
float angle = PVector.angleBetween(p1, p2);
angles[i] = degrees(angle);
}
for (int i = 0; i < x.length; i++) { // Fill angles_f_list with angle float values
angles_f_list.append(angles[i]);
}
printArray(angles);
for (int i = 0; i < x.length; i++) { // Sort the indexes of angles_f_list by ascending angle into index_list_sorted
float delta_max = angles_f_list.min();
for (int j = 0; j < angles_f_list.size(); j++) {
float dlm = angles_f_list.get(j);
if (dlm ==  delta_max) {
index_list_sorted.append(j);
angles_f_list.remove(j);
tel++;
break;
}
}
}
printArray(index_list_sorted);
stroke(0);
beginShape();
curveVertex(x[x.length-1], y[y.length-1]);
for (int i = 0; i < x.length; i++) {
curveVertex(x[i], y[i]);
circle(x[i], y[i], 5);
}
curveVertex(x[0], y[0]);
endShape();
}


True. So I converted them into Béziers. When sliding the button all the way to the right it's easy to edit control points and anchors. Right-clicking an anchor point removes it, so I could almost reach the theoretical number 13 (if it were not for the fact that I have a random start point which cannot be removed). My next goal is to do this process by code. The first thing that came to mind was to programmatically move the control point around its anchor point in a spiral, keeping track of the matching pixels between the original line and the formed curve. The highest matching count would then indicate the best control-point location. But I hope you, or anyone else who eventually reads this, can give me another idea.

When writing this I am thinking that I could also write a shape editor including the many possibilities of the geomerative library. But I presume that your upcoming 2D/3D Bézier library will contain such code. Please let me know.

// Ramer Douglas Peucker algorithm by teachings Daniel Shiffman  https://youtu.be/nSYw9GrakjY
// Catmull-Rom/Bezier conversion Peter Lagers and @antony74 https://forum.processing.org/one/topic/convert-between-curvevertex-and-beziervertex.html
// Actual link to paper on Catmull-Rom/Bezier conversion  https://people.engr.tamu.edu/schaefer/research/catmull_rom.pdf

import gab.opencv.*;
import java.awt.Point;
import java.util.Queue;
import java.util.*;

float[][] cc = {{210, 168}, {430, 30}, {493, -35}, {410, 10}, {450, -75}, {400, 148}, {292, -75}, {185, 135}, {130, 110}, {180, 216}, {130, 215}, {210, 280}, {70, 185}}; // circle center
float[][] ca = {{-.15, .5}, {.18, .3}, {0.02, .12}, {.22, .35}, {.08, .16}, {.66, 1.56}, {-.29, -.02}, {-.67, -.23}, {-.18, -.08}, {-.54, -.08}, {-.07, .06}, {-.38, 0}, {0, .2}}; // circle arc {start,stop}
int[] radius = {300, 148, 160, 150, 177, 105, 255, 110, 110, 110, 110, 110, 250};

PImage img;
PShape shp1;
ArrayList<PVector> contol1 = new ArrayList<PVector>();
ArrayList<PVector> contol2 = new ArrayList<PVector>();
ArrayList<PVector> anchor1 = new ArrayList<PVector>();
ArrayList<PVector> anchor2 = new ArrayList<PVector>();
ArrayList<PVector> all_points = new ArrayList<PVector>();
ArrayList<PVector> rdp_points = new ArrayList<PVector>();
ArrayList<Contour> contours;
float loc = 15, epsilon = 1, x1, y1, x2, y2, x3, y3, x4, y4;
int control_point, control_index;
boolean curve_mode = true, bezier_mode;
OpenCV ocv;
CurveToBezier c2b = new CurveToBezier();

void setup() {
size(620, 520);
surface.setTitle("Bezier edit");

// Comment following lines to import a black shape on white background image instead
background(0);
stroke(255);
drawShapeBorder();
fillShape();
stroke(0);
drawShapeBorder();

// and uncomment these following lines
//img.resize(width, height);
//filter(INVERT);

ocv = new OpenCV(this, img);
contours = ocv.findContours();
println("number of contours = "+contours.size()); // Check if you have more than 1!
// Bitmap created by the parametric function above has 2 contours, so be sure to select the right one here.
Contour contour = contours.get(1);
for (PVector point : contour.getPoints()) {
point(point.x, point.y);
all_points.add(new PVector(point.x, point.y)); // (restored) collect the contour for the RDP pass
}
textSize(18);
}

void draw() {
background(255);
stroke(220, 220, 255);
strokeWeight(4);
noFill();
beginShape();
for (PVector v : all_points) {
vertex(v.x, v.y);
}
endShape();
if (curve_mode) {
rdp_points = new ArrayList<PVector>();
int total = all_points.size();
PVector start = all_points.get(0);
PVector end = all_points.get(total-1);
rdp_points.add(start); // (restored) RDP always keeps the two end points
rdp(0, total-1, all_points, rdp_points);
rdp_points.add(end);
stroke(0);
strokeWeight(1);
beginShape();
PVector v1 = new PVector();
v1 = rdp_points.get(rdp_points.size()-1);
curveVertex(v1.x, v1.y);
for (PVector v : rdp_points) {
curveVertex(v.x, v.y);
fill(255, 0, 0);
ellipse(v.x, v.y, 5, 5);
}
PVector v2 = new PVector();
v2 = rdp_points.get(0);
curveVertex(v2.x, v2.y);
noFill();
endShape();
fill(0);
text("epsilon: " + nf(epsilon, 2, 2), 150, 80);
text("Total Points: " + (rdp_points.size()-1), 50, 50);
drawSlider();
fill(0);
text("Edit", 510, 430);
} else if (bezier_mode) {
fill(0);
text("Total Points: "+(contol1.size()), 50, 50);
stroke(0);
strokeWeight(1);
beginShape();
vertex(rdp_points.get(1).x, rdp_points.get(1).y);
for (int i = 0; i <  anchor2.size(); i++) {
x1 = anchor1.get(i).x;
y1 = anchor1.get(i).y;
x2 = contol1.get(i).x;
y2 = contol1.get(i).y;
x3 = contol2.get(i).x;
y3 = contol2.get(i).y;
x4 = anchor2.get(i).x;
y4 = anchor2.get(i).y;
fill(0, 255, 255);
circle(contol1.get(i).x, contol1.get(i).y, 8);
circle(contol2.get(i).x, contol2.get(i).y, 8);
stroke(150);
line(x4, y4, x3, y3);
line(x2, y2, x1, y1);
stroke(0);
fill(255, 130, 130);
circle(rdp_points.get(0).x, rdp_points.get(0).y, 7);
circle(anchor1.get(i).x, anchor1.get(i).y, 7);
circle(anchor2.get(i).x, anchor2.get(i).y, 7);
noFill();
bezierVertex(x2, y2, x3, y3, x4, y4);
}
endShape();
drawSlider();
fill(0);
text("Print", 510, 430);
}
noFill();
strokeWeight(1.3);
rect(490, 410, 80, 28);
}

void mousePressed() {
if (mouseX > 530 &&  mouseY < 50) {
println();
}

if (curve_mode) {
if (mouseX > 490 && mouseY > 410 && mouseY < 438) {
for (int i = 0; i < rdp_points.size(); i++) {
c2b.makeBezierVertex(rdp_points.get(i).x, rdp_points.get(i).y);
}
for (int i = 1; i <  rdp_points.size(); i++) {
}
curve_mode = false;
bezier_mode = true;
}
} else if (bezier_mode ) {
for (int i = 0; i < contol1.size(); i++) {
if (mouseX < contol1.get(i).x+6 && mouseX > contol1.get(i).x-6 &&
mouseY < contol1.get(i).y+6 && mouseY > contol1.get(i).y-6) {
control_point = 1;
control_index = i;
}
if (mouseX < contol2.get(i).x+6 && mouseX > contol2.get(i).x-6 &&
mouseY < contol2.get(i).y+6 && mouseY > contol2.get(i).y-6) {
control_point = 2;
control_index = i;
}
if (mouseX < anchor2.get(i).x+6 && mouseX > anchor2.get(i).x-6 &&
mouseY < anchor2.get(i).y+6 && mouseY > anchor2.get(i).y-6) {
control_point = 3;
control_index = i;
if (mouseButton == RIGHT && i != anchor2.size()-1) {
anchor1.remove(i+1);
anchor2.remove(i);
contol1.remove(i);
contol2.remove(i);
}
}
}
if (mouseX > 490 && mouseY > 410 && mouseY < 438) {
println();
println("shp1 = createShape(PShape.PATH);");
println("shp1.beginShape();");
println("shp1.vertex("+rdp_points.get(1).x+", "+rdp_points.get(1).y+");");
for (int i = 0; i <  contol2.size(); i++) {
x1 = anchor1.get(i).x;
y1 = anchor1.get(i).y;
x2 = contol1.get(i).x;
y2 = contol1.get(i).y;
x3 = contol2.get(i).x;
y3 = contol2.get(i).y;
x4 = anchor2.get(i).x;
y4 = anchor2.get(i).y;
println("shp1.bezierVertex("+x2+", "+y2+", "+x3+", "+
y3+", "+x4+", "+y4+");");
}
println("shp1.endShape();");
}
}
}

void mouseDragged() {
if (curve_mode) {
loc = mouseX;
if (loc < 15) loc = 15;
if (loc > width-15) loc = width-15;
epsilon =  int(map(loc-10.55, 0.0, width, 1.0, 17.0));
} else if (bezier_mode) {
if (control_point == 1) contol1.set(control_index, new PVector(mouseX, mouseY));
if (control_point == 2) contol2.set(control_index, new PVector(mouseX, mouseY));
if (control_point == 3 && control_index < anchor2.size()-1) {
anchor2.set(control_index, new PVector(mouseX, mouseY));
anchor1.set(control_index+1, new PVector(mouseX, mouseY));
float tx1 = contol1.get(control_index+1).x;
float ty1 = contol1.get(control_index+1).y;
float tx2 = contol2.get(control_index).x;
float ty2 = contol2.get(control_index).y;
contol1.set(control_index+1, new PVector(tx1+(mouseX-pmouseX), ty1+(mouseY-pmouseY)));
contol2.set(control_index, new PVector(tx2+(mouseX-pmouseX), ty2+(mouseY-pmouseY)));
}
}
}

void rdp(int start_index, int end_index, ArrayList<PVector> all_points, ArrayList<PVector> rdp_points) {
int next_index = findFurthest(all_points, start_index, end_index);
if (next_index > 0) {
if (start_index != next_index) {
rdp(start_index, next_index, all_points, rdp_points);
}
rdp_points.add(all_points.get(next_index)); // (restored) keep the furthest point
if (end_index != next_index) {
rdp(next_index, end_index, all_points, rdp_points);
}
}
}

int findFurthest(ArrayList<PVector> points, int a, int b) {
float record_distance = -1;
PVector start = points.get(a);
PVector end = points.get(b);
int furthest_index = -1;
for (int i = a+1; i < b; i++) {
PVector current_point = points.get(i);
float d = lineDist(current_point, start, end);
if (d > record_distance) {
record_distance = d;
furthest_index = i;
}
}
if (record_distance > epsilon) {
return furthest_index;
} else {
return -1;
}
}

float lineDist(PVector c, PVector a, PVector b) {
PVector norm = scalarProjection(c, a, b);
return PVector.dist(c, norm);
}

PVector scalarProjection(PVector p, PVector a, PVector b) {
PVector ap = PVector.sub(p, a);
PVector ab = PVector.sub(b, a);
ab.normalize(); // unit vector along the line
ab.mult(ap.dot(ab)); // scale by the projected length
PVector normal_point = PVector.add(a, ab); // foot of the perpendicular
return normal_point;
}

void drawShapeBorder() {
float step = 2*PI/60;
float px = 0, py = 0;
for (int i = 0; i < 13; i++) {
push();
translate(cc[i][0], cc[i][1]);
int j = 0;
for (float t = ca[i][0]*PI; t < ca[i][1]*PI+step; t += step) {
// NOTE: x and y must be evaluated here from the i-th segment's
// parametric equations; that evaluation is missing from this listing
if (j == 0) {
px = x;
py = y;
}
line(x, y, px, py);
px = x;
py = y;
j++;
}
pop();
}
}

void fillShape() {
img = get();
img.loadPixels(); // make the pixel array available before reading/writing it
while (!queue.isEmpty()) {
Point p = queue.remove();
if (check(p.x, p.y)) {
// flood to the four neighbours
queue.add(new Point(p.x+1, p.y));
queue.add(new Point(p.x-1, p.y));
queue.add(new Point(p.x, p.y+1));
queue.add(new Point(p.x, p.y-1));
}
}
img.updatePixels();
}

boolean check(int x, int y) {
if (x < 0 || y < 0 || y >= img.height || x >= img.width) return false;
int pp = img.pixels[x+(y*img.width)];
if (pp != color(0)) return false;
img.pixels[x + (y * img.width)] = color(255);
return true;
}

void drawSlider() {
push();
noStroke();
fill(255);
rect(0, height-30, width, 30);
fill(200, 200, 255);
rect(10, height-15, width-20, 5);
fill(150, 150, 255);
ellipse(loc, height-13, 15, 15);
pop();
}

class CurveToBezier {
private PVector ring[] = new PVector[4];
private int count = 0;

CurveToBezier() {
ring[0] = new PVector();
ring[1] = new PVector();
ring[2] = new PVector();
ring[3] = new PVector();
}

void makeBezierVertex(float x, float y) {
ring[count % 4].x = x;
ring[count % 4].y = y;
if (count >= 3) makeBezierZero(ring[(count-3)%4], ring[(count-2)%4], ring[(count-1)%4], ring[count%4]);
count++;
}

private void makeBezierZero(PVector c0, PVector c1, PVector c2, PVector c3) {
// Catmull-Rom segment (c1 -> c2) converted to a cubic Bezier:
// control points are c1 + (c2 - c0)/6 and c2 - (c3 - c1)/6
PVector b1 = PVector.sub(c2, c0);
b1.div(6);
b1.add(c1);
PVector b2 = PVector.sub(c1, c3);
b2.div(6);
b2.add(c2);
bezierVertex(b1.x, b1.y, b2.x, b2.y, c2.x, c2.y); // assumed: emit the segment
}
}
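
For reference, the RDP recursion at the heart of the listing above can be exercised outside Processing. Here is a minimal plain-Java sketch of the same idea (class and helper names are mine, and PVector is replaced by a small record):

```java
// Plain-Java sketch of the RDP simplification used in the sketch above.
import java.util.ArrayList;
import java.util.List;

public class RdpDemo {
    record Pt(double x, double y) {}

    // perpendicular distance from c to the line through a and b
    static double lineDist(Pt c, Pt a, Pt b) {
        double abx = b.x() - a.x(), aby = b.y() - a.y();
        double len = Math.hypot(abx, aby);
        return Math.abs(abx * (c.y() - a.y()) - aby * (c.x() - a.x())) / len;
    }

    static void rdp(int start, int end, List<Pt> all, List<Pt> kept, double epsilon) {
        double record = -1;
        int furthest = -1;
        for (int i = start + 1; i < end; i++) {
            double d = lineDist(all.get(i), all.get(start), all.get(end));
            if (d > record) { record = d; furthest = i; }
        }
        if (furthest == -1 || record <= epsilon) return;
        if (start != furthest) rdp(start, furthest, all, kept, epsilon);
        kept.add(all.get(furthest)); // keep the point that deviates most
        if (furthest != end) rdp(furthest, end, all, kept, epsilon);
    }

    public static void main(String[] args) {
        List<Pt> pts = List.of(new Pt(0, 0), new Pt(1, 0), new Pt(2, 5),
                               new Pt(3, 0), new Pt(4, 0));
        List<Pt> kept = new ArrayList<>();
        rdp(0, pts.size() - 1, pts, kept, 1.0);
        // only the spike at (2, 5) survives; near-collinear points are dropped
        System.out.println(kept);
    }
}
```

The caller is expected to add the first and last points itself, as the sketch above does with its explicit `vertex(...)` calls.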


1 Like

I set bezier mode to true and got the exception IndexOutOfBoundsException: Index: 1, Size: 0, so I could not try that out.

Since you have the coordinates of the curve you want to represent you could use these to perform a Bezier curve fitting - see the Pomax (curve fitting) guide.
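
To make the fitting idea concrete, here is my own minimal least-squares sketch in plain Java (not code from the Pomax guide or from the library discussed below): the endpoints are pinned to the first and last sample, parameter values `ts` are assumed given, and the two inner control points of a cubic are solved for via the 2x2 normal equations.

```java
// Least-squares fit of a cubic Bezier's two inner control points,
// with the endpoints fixed at the first and last sample points.
public class BezierFit {
    // fit one coordinate; ts[i] is the parameter assigned to sample v[i]
    static double[] fitInner(double[] ts, double[] v) {
        int n = ts.length;
        double p0 = v[0], p3 = v[n - 1];
        double a11 = 0, a12 = 0, a22 = 0, r1 = 0, r2 = 0;
        for (int i = 0; i < n; i++) {
            double t = ts[i], u = 1 - t;
            double b0 = u*u*u, b1 = 3*t*u*u, b2 = 3*t*t*u, b3 = t*t*t;
            double d = v[i] - b0 * p0 - b3 * p3; // residual after fixed endpoints
            a11 += b1 * b1; a12 += b1 * b2; a22 += b2 * b2;
            r1 += b1 * d; r2 += b2 * d;
        }
        double det = a11 * a22 - a12 * a12; // 2x2 normal equations, Cramer's rule
        return new double[] { (a22 * r1 - a12 * r2) / det,
                              (a11 * r2 - a12 * r1) / det };
    }

    public static void main(String[] args) {
        // samples taken from a known cubic with inner control points (1,2), (3,2)
        int n = 11;
        double[] ts = new double[n], xs = new double[n], ys = new double[n];
        double[] px = { 0, 1, 3, 4 }, py = { 0, 2, 2, 0 };
        for (int i = 0; i < n; i++) {
            double t = i / (double) (n - 1), u = 1 - t;
            ts[i] = t;
            xs[i] = u*u*u*px[0] + 3*t*u*u*px[1] + 3*t*t*u*px[2] + t*t*t*px[3];
            ys[i] = u*u*u*py[0] + 3*t*u*u*py[1] + 3*t*t*u*py[2] + t*t*t*py[3];
        }
        double[] cx = fitInner(ts, xs), cy = fitInner(ts, ys);
        System.out.printf("P1 = (%.3f, %.3f), P2 = (%.3f, %.3f)%n",
                          cx[0], cy[0], cx[1], cy[1]);
    }
}
```

In practice the parameters `ts` would come from something like chord-length parameterization of the traced curve points, so the fit is approximate rather than exact.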

I am not sure when I will release my Bezier library; the actual Bezier class is virtually complete and implements most of the functionality in the Pomax guide, but I still have a lot to do before release.

Most graphics applications use cubic or quadratic beziers, but I have written the library to handle curves of any order >= 2 (i.e. degree >= 1). So we have

Degree          Shape                    Order (= number of control points)
1               straight line            2
2               quadratic bezier         3
3               cubic bezier             4
4               quartic bezier           5
and so on


This means I have not implemented any functionality from the guide that is specific to a particular order, e.g. cubic.
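
Evaluating a Bezier curve of arbitrary order only requires de Casteljau's algorithm, which works for any number of control points. Here is my own minimal plain-Java sketch (not code from the library being described):

```java
// De Casteljau evaluation of a Bezier curve of any order >= 2 (degree >= 1):
// repeatedly lerp adjacent control points until a single point remains.
public class DeCasteljau {
    static double[] point(double[][] ctrl, double t) {
        int n = ctrl.length;
        double[][] p = new double[n][];
        for (int i = 0; i < n; i++) p[i] = ctrl[i].clone(); // work on a copy
        for (int level = n - 1; level > 0; level--)
            for (int i = 0; i < level; i++)
                for (int d = 0; d < p[i].length; d++)
                    p[i][d] = (1 - t) * p[i][d] + t * p[i + 1][d];
        return p[0];
    }

    public static void main(String[] args) {
        // degree 1 (order 2): the midpoint of a straight line
        double[] mid = point(new double[][] { { 0, 0 }, { 4, 2 } }, 0.5);
        System.out.println(mid[0] + ", " + mid[1]); // prints "2.0, 1.0"

        // degree 4 (order 5): t = 0 and t = 1 hit the end control points
        double[][] quartic = { { 0, 0 }, { 1, 3 }, { 2, -1 }, { 3, 3 }, { 4, 0 } };
        System.out.println(point(quartic, 1.0)[0]); // prints "4.0"
    }
}
```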

Some of the functionality, especially for higher-order curves, uses matrix maths, and I use EJML (Efficient Java Matrix Library) for its ease of use. This does mean that my library is for JavaMode and would need a lot of work to convert it to JS.

1 Like

Provided that the picture has enough contrast, could you not use the marching squares algorithm to extract an outline? Sebastian Lague has a video on this for procedural cave generation, and the Coding Train just recently did a video on this very topic. I’ve implemented my own version and I can confirm that you can produce a set of vertex points for an image, which you can store in an array. The beauty of this is that it allows you to define the minimum and maximum number of vertices, and therefore lets you somewhat denoise the image too.

The gist of the program is this:

1. Retrieve an RGBA value and use the r, g, b, or a channel. Set a threshold based on the colour you are using: anything over the threshold is 1, everything else is 0.

2. Run through all the points and mark each one as open space if it is a 1. This gives you an array of arrays of points. Trim it to your desired size.

3. Do the same for all the wall points, again trimming to the desired size.

4. Repeat the open-space pass: trimming the walls will have created new non-wall tiles, which now need to be counted.

5. Extract the edges of the non-walls and/or the walls.

6. Finally, use marching squares to produce the outline.
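
Step 1 above is just a per-pixel threshold on one channel of a packed ARGB integer. A minimal sketch of that step (helper names and channel convention are my own):

```java
// Threshold one channel of packed 0xAARRGGBB pixels into a 0/1 grid.
public class Binarize {
    static int channel(int argb, char ch) {
        switch (ch) {
            case 'r': return (argb >> 16) & 0xFF;
            case 'g': return (argb >> 8) & 0xFF;
            case 'b': return argb & 0xFF;
            case 'a': return (argb >>> 24) & 0xFF;
            default:  throw new IllegalArgumentException("unknown channel " + ch);
        }
    }

    // anything over the threshold becomes 1, everything else 0
    static int[][] binarize(int[] pixels, int w, int h, char ch, int threshold) {
        int[][] grid = new int[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                grid[y][x] = channel(pixels[x + y * w], ch) > threshold ? 1 : 0;
        return grid;
    }

    public static void main(String[] args) {
        // two pixels: pure blue and pure red, thresholded on the blue channel
        int[] px = { 0xFF0000FF, 0xFFFF0000 };
        int[][] g = binarize(px, 2, 1, 'b', 90);
        System.out.println(g[0][0] + " " + g[0][1]); // prints "1 0"
    }
}
```

In a Processing sketch the `pixels` array would come from `loadPixels()` on the image, which uses the same packed-int format.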

Note that whilst marching squares is based on the idea of retrieving one of 16 cell configurations from the variations of neighbouring tiles, it can be amended to give a higher resolution; and finally, I am sure that edge points with similar neighbouring gradients could be culled.
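
The 16 configurations come from treating the four corners of each 2x2 cell as a 4-bit number. A sketch of just the classification step (the corner ordering is my own convention; the lookup from index to line segments is omitted):

```java
// Classify each 2x2 cell of a binary grid into one of the 16
// marching-squares cases by packing its four corners into a 4-bit index
// (bit order here: TL, TR, BR, BL from most to least significant).
public class MarchingSquares {
    static int cellIndex(int[][] grid, int x, int y) {
        return (grid[y][x] << 3)         // top-left
             | (grid[y][x + 1] << 2)     // top-right
             | (grid[y + 1][x + 1] << 1) // bottom-right
             | grid[y + 1][x];           // bottom-left
    }

    public static void main(String[] args) {
        int[][] grid = {
            { 0, 0, 0 },
            { 0, 1, 0 },
            { 0, 0, 0 },
        };
        // the single solid centre tile contributes a different case to each
        // of the four cells that touch it
        for (int y = 0; y < 2; y++)
            for (int x = 0; x < 2; x++)
                System.out.print(cellIndex(grid, x, y) + " ");
        // prints "2 1 4 8 "
    }
}
```

Each index then maps to zero, one, or two line segments across the cell; interpolating the crossing positions along the cell edges is what gives the higher-resolution variant mentioned above.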

Note that this is quite CPU-intensive: on my T440p I can only run it on a grid of about 200 by 200, maybe a little higher, but at 300x300 the program just hangs. I haven’t tried multithreading, mind you, which should be very easy to add.

So I can confirm this will extract the edges. No, I haven’t yet added the marching squares part, so for now I am just setting the edge tiles to a different colour.

This was done by extracting blue, with the threshold set to 90, the open-space cutoff size to 10, and the wall cutoff to 20.

Here is the result without any size threshold, for comparison.

threshold 95

threshold 99

reference image

The only downside to this is that it’s not very repeatable, and the settings have to be adjusted for individual images. It is also highly dependent on the min/max values.
Passed the red channel, cutoff threshold 20:

Here is a picture of a car I passed through the program:

Passed blue, same settings:

Passed green, same settings:

Passed the average (r + g + b) / 3, threshold 10:

However, something with good contrast will work fine.

I just noticed there is some deformation in the image; this happens at resolutions above 100 for some reason, due to the way I extract the pixel points.

2 Likes

I watched the series now, and also Daniel’s latest video. I must admit that I need to re-watch them to fully understand this, but I see plenty of possibilities for this algorithm, as you have also shown with your project. Are you going to work this out further? When (if possible) I am ready with my bezier-node-reducing project, I will definitely look into this. For the moment OpenCV is giving me a satisfying result.

What do you mean by “set to true”? I’m not getting this error. Did you use an imported image?
I know this can be improved, but first I have to learn a lot, and it will take some time.
A lib just for JavaMode would be a great contribution. I’m looking forward to it.