Point clouds in Processing: what are the limits?

Hi everyone,

Upon reading this interview (Japanese only) with Satoshi Horii, Rhizomatiks’ main creative coder, about one of his latest works involving 3D scanning, I started to wonder whether such a project would be possible with Processing. In the article he explains how he spent a week scanning the entire underground of Shibuya Station, gathering no fewer than 3.5 billion points in .pts format (216 GB in total).

While he doesn’t explicitly mention what software he used to display all these points, I suspect openFrameworks (his daily coding platform) is the main toolkit here.

QUESTIONS:

  • Is it technically possible to display billions of points using Processing?
  • What are the limits (software/hardware)?

I did a little test on my 2012 entry-level iMac (8 GB RAM, NVIDIA GeForce GT 650M 512 MB). I used a point shader and stored the largest number of points Processing/my computer could handle in a PShape (see example sketch below). This way, I could only display up to 1 million points. Exceeding that threshold would systematically throw the following error message:

processing.app.SketchException: java.lang.RuntimeException: Waited 5000ms for: <1ebe2967, 1079c4c0>

Is that threshold impossible to exceed?

Western Shinjuku, 800 000 points (not enough for detail)


Example sketch with 100 000 points

import peasy.*;
import peasy.org.apache.commons.math.*;
import peasy.org.apache.commons.math.geometry.*;

PeasyCam cam;
PShader pointShader;
PShape shp;
ArrayList<PVector> vectors = new ArrayList<PVector>();

void setup() {
  size(900, 900, P3D);
  frameRate(1000);  // effectively uncap the framerate
  smooth(8);
  
  cam = new PeasyCam(this, 500);
  cam.setMaximumDistance(width);
  perspective(60 * DEG_TO_RAD, width / float(height), 2, 6000);
  
  // fade points out over three times the initial camera distance
  double d = cam.getDistance() * 3;
  
  pointShader = loadShader("pointfrag.glsl", "pointvert.glsl");
  pointShader.set("maxDepth", (float) d);

  // random points filling a cube of side 'width'
  for (int i = 0; i < 100000; i++) {
    vectors.add(new PVector(random(width), random(width), random(width)));
  }
  
  shader(pointShader, POINTS);
  strokeWeight(2);  // point size in pixels
  stroke(255);
  
  // bake all points into a single PShape so the geometry is
  // tessellated and uploaded to the GPU only once
  shp = createShape();
  shp.beginShape(POINTS);
  shp.translate(-width/2, -width/2, -width/2);  // center the cube on the origin
  for (PVector v : vectors) {
    shp.vertex(v.x, v.y, v.z);
  }
  shp.endShape();
}

void draw(){
  background(0);
  shape(shp, 0, 0);
  
  println(frameRate);  
}

pointvert.glsl

uniform mat4 projection;
uniform mat4 modelview;

attribute vec4 position;
attribute vec4 color;
attribute vec2 offset;


varying vec4 vertColor;
varying vec4 vertTexCoord;

void main() {
  vec4 pos = modelview * position;
  vec4 clip = projection * pos;

  // 'offset' displaces the corners of each point sprite in clip space
  gl_Position = clip + projection * vec4(offset, 0, 0);

  vertColor = color;
}

pointfrag.glsl


#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif

varying vec4 vertColor;
uniform float maxDepth;

void main() {
  // darken each point with eye-space depth as a cheap depth cue
  float depth = gl_FragCoord.z / gl_FragCoord.w;
  gl_FragColor = vec4(vec3(vertColor - depth / maxDepth), 1.0);
}

One thing to keep in mind is that this is probably not rendered in real-time. Instead of trying to create a Processing sketch that draws at 60 frames per second, try to output an image file for a single frame. Then repeat that for the next frame. After you have all of the frames, you can stitch them together to create a video.

With this approach, the render time of a single frame doesn’t really matter anymore (unless the render time is longer than you feel like waiting).
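
For illustration, a minimal sketch of that offline approach (the rotating random cloud is just a stand-in for your point data):

int totalFrames = 300;

void setup() {
  size(900, 900, P3D);
}

void draw() {
  background(0);
  translate(width/2, height/2);
  rotateY(frameCount / float(totalFrames) * TWO_PI);

  // stand-in for the expensive point-cloud drawing
  randomSeed(1);  // same points every frame
  stroke(255);
  for (int i = 0; i < 100000; i++) {
    point(random(-300, 300), random(-300, 300), random(-300, 300));
  }

  // write this frame to disk; per-frame render time no longer matters
  saveFrame("frames/frame-####.png");
  if (frameCount >= totalFrames) {
    exit();  // then stitch the PNGs into a video, e.g. with ffmpeg or the Movie Maker tool
  }
}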

This is how movies with a lot of special effects work. Only instead of doing everything on one computer, they’ll do the rendering on a render farm.


Hi @Kevin,

My Japanese is not really on point (no pun intended) and I don’t have the technical skills, so I cannot say with certainty, but my take is that it’s actually rendered in real time.

メモリ使用量も最低限に抑え、リアルタイムにプレビューできるようにしました。
By keeping memory usage to a minimum, I managed to make the whole thing viewable in real time.

Satoshi explains that he divided the point cloud into several stacks at different levels of resolution that can be switched between dynamically. The closer the camera gets to the points, the higher their density becomes, and vice versa.
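
If I understand the approach correctly, a toy version of that switching could look like this in Processing (just random points baked at three densities, not his implementation):

import peasy.*;

PeasyCam cam;
PShape[] lod = new PShape[3];
int[] numPoints = {5000, 50000, 500000};  // low, medium, high resolution

void setup() {
  size(900, 900, P3D);
  cam = new PeasyCam(this, 500);

  // bake three versions of the same cloud at increasing density
  for (int level = 0; level < 3; level++) {
    lod[level] = createShape();
    lod[level].beginShape(POINTS);
    lod[level].stroke(255);
    randomSeed(1);  // same underlying cloud for every level
    for (int i = 0; i < numPoints[level]; i++) {
      lod[level].vertex(random(-250, 250), random(-250, 250), random(-250, 250));
    }
    lod[level].endShape();
  }
}

void draw() {
  background(0);
  // the closer the camera, the denser the displayed cloud
  double d = cam.getDistance();
  int level = (d > 800) ? 0 : (d > 400) ? 1 : 2;
  shape(lod[level]);
}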

That being said, I just want to know if there’s any limitation, especially on the software side (e.g. the error mentioned previously).

For instance, if I buy a computer with better specs (high-end GPU and CPU), I’m sure I’ll enjoy significant hardware acceleration (higher framerate, smoother camera movements…), but does that mean I’ll be able to display a lot more points as well? In other words, are there software-related limitations that even better hardware couldn’t overcome at some point?

Interesting that it was rendered in real time, but I’d still suggest not ruling out pre-rendering the scene.

Anyway, depending on what you mean, there are a few possible limits on the software side:

  • Rendering a complicated scene can take prohibitively long. According to this, it takes about 30 hours to render a single frame of a Pixar movie. Pixar can probably wait that long (and they use render farms to render multiple frames at a time), but you probably don’t want to wait that long.
  • You can have more data than you can hold in memory. Your computer has a fixed amount of memory, which determines how many points you can load at once. You can get around this by only loading the points you need (see the sketch after this list).
  • There are other software limitations, such as Java arrays being capped at roughly 2^31 - 1 elements. This usually isn’t something you have to worry about, but technically it’s a limitation.
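
For the memory point, here is a rough sketch of selective loading: stream a hypothetical "x y z" text file and keep only the points inside a region of interest (the file name and bounds are made up):

ArrayList<PVector> points = new ArrayList<PVector>();

void setup() {
  size(900, 900, P3D);

  BufferedReader reader = createReader("cloud.txt");  // hypothetical "x y z" lines
  try {
    String line;
    while ((line = reader.readLine()) != null) {
      String[] t = splitTokens(line);
      float x = float(t[0]), y = float(t[1]), z = float(t[2]);
      // keep only the region we actually want to display
      if (abs(x) < 500 && abs(y) < 500 && abs(z) < 500) {
        points.add(new PVector(x, y, z));
      }
    }
    reader.close();
  } catch (IOException e) {
    e.printStackTrace();
  }
}

void draw() {
  background(0);
  translate(width/2, height/2);
  stroke(255);
  for (PVector v : points) {
    point(v.x, v.y, v.z);
  }
}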

Throwing more hardware at the problem can help with some of these issues, but in the end a computer is a set of finite resources, and you can’t just do an infinite amount of rendering no matter how good your hardware is.

As for your specific error, I recommend googling it. Searching for “java.lang.RuntimeException: Waited 5000ms for:” (with the quotes) returns a bunch of promising results.


To avoid the java.lang.RuntimeException: Waited 5000ms for: error, you can load your stuff in a thread:

// 'volatile' keyword recommended by neilcsmith:
// it guarantees draw() sees the update made from the loading thread
volatile boolean LOADED = false;

public void setup() {
  // ... setup stuff ...
  
  thread("load");
}

public void load() {
  // ... load stuff ...

  LOADED = true;
}

public void draw() {
  if(!LOADED) {
    // ... draw loading screen ...

    return;
  }

  // ... draw stuff ...
}

That general strategy is more or less how tiled maps work, how pyramidal TIFFs work, and how some open-world 3D engines work. An interesting related design (scale rather than proximity) is Katamari Damacy (which is set to Shibuya-kei).

I believe that the Processing library UnfoldingMaps does this, not for 3D point clouds but for map tiles. You might be interested in checking out the code if you want to do something similar.

Once a system like that is in place, the real limit is addressable disk space: you can zoom from the solar system down into Shibuya Station, but being at the scale of the Sun doesn’t mean you load every subway track on planet Earth. Instead you load ~3 pixels for “Japan.”

The general term of art in computer graphics is “level of detail”:

That article also mentions frustum culling, which could provide additional big performance gains if you are planning to allow FPS-style navigation using e.g. QueasyCam.
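
A crude way to try the idea in plain Processing is to project each point and skip the off-screen ones. Per-point checks on the CPU like this are actually slow (real engines cull whole chunks using bounding volumes), but it shows what culling buys you:

ArrayList<PVector> points = new ArrayList<PVector>();

void setup() {
  size(900, 900, P3D);
  for (int i = 0; i < 100000; i++) {
    points.add(new PVector(random(-2000, 2000), random(-2000, 2000), random(-2000, 2000)));
  }
}

void draw() {
  background(0);
  translate(width/2, height/2, -500);
  rotateY(frameCount * 0.005);
  stroke(255);
  for (PVector v : points) {
    // skip points that project outside the viewport
    float sx = screenX(v.x, v.y, v.z);
    float sy = screenY(v.x, v.y, v.z);
    if (sx < 0 || sx > width || sy < 0 || sy > height) continue;
    point(v.x, v.y, v.z);
  }
}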


While you’ll almost certainly get away with it here, it’s good to get in the habit of putting volatile on that boolean flag!

This calling of setup() in the JOGL animator (which causes this error) should probably be considered a bug. It causes arbitrary resource-loading errors like this one, which will differ from machine to machine due to differences in CPU and disk speed, etc.


I’m currently trying to make a sketch from a large point cloud file, so I hope you won’t mind if I bump this thread.

The text file contains roughly 2M points and takes about 1 minute and 40 seconds to load in Processing. When I try to load an object (.obj) built from the same vertices, the loading time goes up to 5 minutes and 30 seconds (!).

Can someone please tell me:

  • why it takes so long, especially compared to other software like MeshLab, Blender or C4D, which usually process the same file in under a second?
  • if there’s a workaround to speed up the loading process.

I’m currently looking into something similar and found this tool.
It is based on WebGL:
http://www.potree.org


Nice project! It looks like all the data is in binary files, and that there’s also incremental loading. There’s no reason in principle that Processing shouldn’t be able to do that at least as fast as WebGL. One key issue I think @solub has is loading the data as text, and loading it all in one go. Converting the data to binary and looking at something like a MappedByteBuffer over the file is probably a good way to approach this.
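
A sketch of that approach (file names are hypothetical; DataOutputStream writes big-endian floats, so the reader has to match):

import java.io.DataOutputStream;
import java.io.BufferedOutputStream;
import java.io.FileOutputStream;
import java.io.RandomAccessFile;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

void setup() {
  try {
    convertToBinary("cloud.txt", "cloud.bin");  // run once
    FloatBuffer b = loadBinary("cloud.bin");
    println(b.limit() / 3 + " points mapped");
  } catch (IOException e) {
    e.printStackTrace();
  }
}

// One-time conversion: "x y z" text lines -> raw floats
void convertToBinary(String txtFile, String binFile) throws IOException {
  String[] lines = loadStrings(txtFile);
  DataOutputStream out = new DataOutputStream(
      new BufferedOutputStream(new FileOutputStream(sketchPath(binFile))));
  for (String line : lines) {
    String[] t = splitTokens(line);
    out.writeFloat(float(t[0]));
    out.writeFloat(float(t[1]));
    out.writeFloat(float(t[2]));
  }
  out.close();
}

// Fast load: memory-map the binary file and view it as floats
FloatBuffer loadBinary(String binFile) throws IOException {
  RandomAccessFile raf = new RandomAccessFile(sketchPath(binFile), "r");
  FileChannel ch = raf.getChannel();
  MappedByteBuffer map = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
  map.order(ByteOrder.BIG_ENDIAN);  // match DataOutputStream above
  return map.asFloatBuffer();
}

Every third value in the returned buffer is then x, y or z, and nothing has to be parsed at load time.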


Because I was building my own very simple 3D laser scanner, I wanted to be able to display the data it creates with Processing. After struggling with Processing’s point rendering (the CPU tessellation in particular is slow), I decided to write my own library, which makes use of the modern GL pipeline.

Not only is it a bit faster than the Processing implementation, but it also allows custom vertex attributes (at the moment only int and float are implemented), which is not possible in Processing.

Performance-wise, it is possible to display about 50 million colored points at about 30 FPS in FHD. Windowed, it goes up to 50-55 FPS on a 1080 Ti (see figure). The 1080 Ti doesn’t seem to be the limit (it would be possible to render even more); maybe it’s the implementation. But it was more an experiment than a fully developed library.

The downside of the lib is that it won’t run on all platforms. At the moment I have tested it on Windows/macOS with modern graphics cards and on a Mac mini with an integrated GPU.


A wonderful script. I tried this by loading a .txt file, primarily so that I could access the [x][y][z] locations of each vertex.

In the case of the .ply file, how would you access the [x][y][z] positions of each point?

How would you access the [r][g][b] values of each point?

^ I was not able to figure out how to do this for a string .txt file, but I was wondering if it was possible with your .ply method?

Best regards!

I am not totally sure I got your question, but if you would like to load a txt file with x,y,z,r,g,b, you could simply write your own loader (as done for ply), or open it in CloudCompare and save it as a ply (binary little-endian).

To access the data, you can get a list of all the attributes that have been added to the point cloud buffer and read the buffer out. The values are usually stored as triplets for position and color. Here is an example you can add to the LoadPointCloud.pde sketch:

// load pointcloud
PLYReader reader = new PLYReader(PLYFormat.BINARY_LITTLE_ENDIAN);
pclBuffer = reader.read(sketchPath("bunny.ply"));

// show keys
for (String key : pclBuffer.getAttributes().keySet()) {
  println(key);
}

// read buffer
FloatAttribute vertices = pclBuffer.getAttribute("position");
java.nio.FloatBuffer data = ((java.nio.FloatBuffer)vertices.getBuffer());

// print out all vertices
int stride = 3;
for (int i = 0; i < data.limit(); i += stride) {
  println("X: ", data.get(i), "Y: ", data.get(i+1), "Z: ", data.get(i+2));
}

// attach pointcloud
pclRenderer.attach(pclBuffer);
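
And if you prefer to stay with a txt file, a minimal hand-rolled loader in plain Processing (independent of my library, assuming "x y z r g b" per line with colors in 0-255) could be:

// Load "x y z r g b" lines into a colored point PShape
PShape loadTxtCloud(String file) {
  PShape shp = createShape();
  shp.beginShape(POINTS);
  for (String line : loadStrings(file)) {
    String[] t = splitTokens(line);
    shp.stroke(float(t[3]), float(t[4]), float(t[5]));  // per-point color
    shp.vertex(float(t[0]), float(t[1]), float(t[2]));
  }
  shp.endShape();
  return shp;
}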

To be honest, I have not yet implemented a very nice way to read the data; I’m currently working on other projects.
