OK, just an obvious potential problem / difference: one is using Java2D and the other is using P2D.
still don’t get where the problem lies, since i’m not using the svg at runtime, but an arraylist of polylines; each polyline is an arraylist of PVectors, and the biggest of them all has 6 vectors.
Do you need to do everything you are doing inside draw() on every frame? Or can some of those calculations be done once inside setup()?
This is not very fast:
color c = color(red(colorSampler.pixels[b.loc]), green(colorSampler.pixels[b.loc]), blue(colorSampler.pixels[b.loc]));
Why not just color c = colorSampler.pixels[b.loc]; ? The pixels[] array already holds color values. If you need each component, look in the documentation of blue() for faster bit-shifting based alternatives.
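A minimal plain-Java sketch of what that reference describes: a Processing color is just an int packed as 0xAARRGGBB, so each component can be read with a shift and a mask instead of three function calls. The class and the sample pixel value here are made up for illustration:

```java
// Plain-Java sketch of the bit-shifting alternative the red()/green()/blue()
// reference documents: a Processing color is an int packed as 0xAARRGGBB.
public class ColorShift {
    static int redOf(int c)   { return (c >> 16) & 0xFF; } // red byte
    static int greenOf(int c) { return (c >> 8)  & 0xFF; } // green byte
    static int blueOf(int c)  { return  c        & 0xFF; } // blue byte

    public static void main(String[] args) {
        int c = 0xFF336699; // a made-up sample ARGB pixel
        System.out.println(redOf(c) + " " + greenOf(c) + " " + blueOf(c)); // 51 102 153
    }
}
```

Inside the sketch you would apply the same shifts directly to colorSampler.pixels[b.loc], skipping the function-call overhead entirely.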
If your main screen is 1200x960, why do you need the 10 other images to be 2400x1920?
i’m trying to use video or a live feed to do the color sampling. with still images, color sampling just needs to happen once.
tried the second option too, with no gain at all
my final output will be a 2400x1920 screen (two vertical outputs rotated 90º)
so far, i’m trying the render solution instead of a live one. by render i mean saving each frame.
but just in case: performance sucks anyway, even when not calling the color sampling.
just plain svg runs at 44fps instead of the usual 60
It’s hard to suggest something without understanding the logic. In the final program, will there be a video instead of 10 images? What’s the purpose of the svg? What is the goal of the program?
You can profile the application using https://visualvm.github.io/ and it could tell which methods are consuming most CPU.
By looking at another of your questions I get the impression you were using openFrameworks before. Why did you switch (if you did)?
Hmmm but the svg is not going to change on every frame right? Could you draw it into a PGraphics just once, so you work with pixels instead of rendering vector graphics to pixels on every frame?
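A minimal Processing sketch of that idea (the file name bricks.svg and the 1200x960 size are made up; substitute your own): the svg is rasterized into a PGraphics once in setup(), and draw() just blits pixels instead of re-rendering vector geometry every frame.

```java
// Processing sketch: rasterize the svg once, reuse the pixels every frame.
PShape bricks;
PGraphics cache;

void setup() {
  size(1200, 960, P2D);
  bricks = loadShape("bricks.svg");        // assumed file name
  cache = createGraphics(width, height, P2D);
  cache.beginDraw();
  cache.shape(bricks, 0, 0, width, height); // vector -> pixels, once
  cache.endDraw();
}

void draw() {
  image(cache, 0, 0);                       // cheap per-frame blit
}
```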
the objective is to do a live mapping on a weird surface.
i’ve got a gothic dome that is made of bricks. each shape is a brick. all 880 bricks/shapes are in the vector file. i’m projecting over these bricks with two projectors. my workflow is generative -> syphon -> madmapper.
so for instance, i can project something like a growing circle, or a particle system that emits circles that grow. i can simply do that. but instead i can light up a brick if some part of the circumference is passing through the brick centroid. actually this is what i’m doing, but i have to render the videos instead of doing something live (vj).
i’d rather do something live/generative than doing renders. after all, this is why i’ve migrated to programming.
i learned to program with processing, so this is the environment where i work fastest. openFrameworks is awesome, but i’m slow with it; i get lost in how to do things instead of doing things. i intend to prototype in processing, then translate it to oF.
the following images are what the madmapper spatial scan delivers, one for each projector
also. the same svg gets 44fps in oF too.
In debug or release mode? Release can perform much faster.
I would use visualvm to find the bottleneck. Otherwise you can do random attempts at optimizing, but you may or may not find the bottleneck.
- Check how many vertices each brick has. Maybe they are too detailed.
- Does the computer have a good CPU and GPU?
- Do you need stroke() on the bricks? That involves more work than just fill(). Calling color() each time is not necessary if the color doesn’t change (for instance you could store color(255) in a variable called white).
- I believe calling .setFill() iterates over each vertex and changes its properties. Using a shader to affect rendering might be faster (if the bottleneck is CPU bound).
I’m going to hazard a guess – just a guess! – that you could get a large performance boost by ditching the PShape Brick.s and the use of shape() in Brick.display() entirely, and switching to just keeping the vertex list and using quad().
i thought that if i built the brick once and then displayed it, i wouldn’t lose performance by rebuilding the brick every cycle.
quad might work, but sometimes, especially in the corners, i have three, five or six vertices per brick.
You get most performance from cached shapes when you draw them in one go under one parent, rather than drawing lots of small ones individually. Simpler shapes will probably draw quicker without using PShape.
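A sketch of that built-once, drawn-in-one-go approach (Brick, bricks and outline are stand-ins for your own class and fields): every brick becomes a child of a single GROUP PShape assembled in setup(), so draw() issues one shape() call instead of ~880.

```java
// Processing sketch: one parent GROUP PShape holding every brick,
// built once in setup() so the renderer can batch all the geometry.
PShape wall;

void setup() {
  size(1200, 960, P2D);
  wall = createShape(GROUP);
  for (Brick b : bricks) {          // bricks: your existing list (assumed)
    PShape s = createShape();
    s.beginShape();
    s.noStroke();
    for (PVector v : b.outline) {   // outline: the brick's vertex list (assumed)
      s.vertex(v.x, v.y);           // handles 3, 5 or 6 vertices alike
    }
    s.endShape(CLOSE);
    wall.addChild(s);
  }
}

void draw() {
  background(0);
  shape(wall);                      // one call draws all 880 bricks
  // recolor a child without rebuilding: wall.getChild(i).setFill(c);
}
```

This also sidesteps the quad() limitation mentioned above, since each child can have any number of vertices.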
Can you replace the color map entirely with texture coordinates on the vertexes for each brick to read from the color? Or with custom attributes and shaders. Let the GPU do the work.
i still lack the knowledge to create my own custom shaders. and i agree it would be much faster to get the color for each desired coordinate from shaders.
If it’s just the colour from an input texture, you should be able to do this with texture coordinates on the vertexes of the shape and no custom shader though.
will have to read how to get this done.
I’m not sure there are many built-in examples of this. Look under Topics / Textures. Only the sphere example builds a PShape with texture coordinates, I think.
IIRC there’s an issue where you can’t change the texture for the shape later. I think I managed this by using a PGraphics as the texture and drawing video, etc. into that first if you need it to update.
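A hedged sketch of that combination: each vertex carries u/v coordinates (pixels, in the default IMAGE texture mode) pointing into a PGraphics, and redrawing the PGraphics each frame updates what the shape displays. All the coordinates here are invented; only one brick is shown:

```java
// Processing sketch: PShape vertices with texture coordinates into a
// PGraphics, so the GPU does the color lookup instead of the CPU.
PShape wall;
PGraphics sampler;

void setup() {
  size(1200, 960, P2D);
  sampler = createGraphics(width, height, P2D);
  wall = createShape();
  wall.beginShape(QUADS);
  wall.noStroke();
  wall.texture(sampler);
  // one made-up brick; u/v are pixel coords in the sampler (IMAGE mode)
  wall.vertex(100, 100, 100, 100);
  wall.vertex(200, 100, 200, 100);
  wall.vertex(200, 150, 200, 150);
  wall.vertex(100, 150, 100, 150);
  wall.endShape();
}

void draw() {
  sampler.beginDraw();              // draw video / live feed in here
  sampler.background(frameCount % 255);
  sampler.endDraw();
  shape(wall);                      // bricks pick up the new pixels
}
```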
yes. i was really thinking about sending a PGraphics to the shader, so i can do whatever i want inside it.
If I’m thinking of the same issue, there was a recent discussion about using s.getTessellation() as a workaround for this.