I have been using the HE_Mesh library quite a bit to build models and to modify scanned models so they have contour lines. It’s pretty neat.
I recently bought an AxiDraw to plot some of these out, but I am running into trouble when exporting a viewpoint as an SVG drawing to use in Inkscape (which drives the AxiDraw).
Is there a library that will generate an SVG line drawing of a 3D view of a model with occlusion, so hidden lines disappear? I tried exporting as PDF, but you get ALL the lines, not just the visible ones.
My attempts so far have included bringing the model into Blender and then using Freestyle to export, but it is a problematic workflow that feels like a house of cards, ready to fail at any moment.
So… export as SVG/PDF with occlusion from a 3D scene…?
I don’t have personal experience with this, but it looks like the HE_Mesh GUI had an occlusion shader; there might be some logic in there that could be useful?
I ended up using Blender for this, as it can export with occlusion. In one case I used Python inside Blender to create all those cubes; in another I used openFrameworks (Processing would have been fine too), then loaded the mesh into Blender and exported the SVG.
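For what it’s worth, generating the geometry from Python inside Blender is only a few lines. A minimal sketch, assuming Blender 2.8x or newer (in 2.7x primitive_cube_add took a radius parameter instead of size), not my exact script:

import bpy

# create a 10 x 10 grid of unit cubes, spaced 3 units apart
for i in range(10):
    for j in range(10):
        bpy.ops.mesh.primitive_cube_add(size=1.0, location=(i * 3.0, j * 3.0, 0.0))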
I ended up using Blender as well. It can be a bit daunting to set up Freestyle to export the right things, and slicing can be time consuming and sometimes glitchy depending on the scale and amount of slicing done.
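For reference, Freestyle and the SVG output can also be switched on from Python instead of hunting through the UI. A minimal sketch, assuming Blender 2.8x+ and the bundled Freestyle SVG Exporter add-on (module name render_freestyle_svg; in 2.7x the operator was bpy.ops.wm.addon_enable):

import bpy

# enable the bundled Freestyle SVG Exporter add-on (ships with Blender,
# disabled by default)
bpy.ops.preferences.addon_enable(module='render_freestyle_svg')

# turn on Freestyle line rendering and SVG output for the current scene
scene = bpy.context.scene
scene.render.use_freestyle = True
scene.svg_export.use_svg_export = True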
It is a good solution, though, and thanks for pointing it out.
Hello, sorry to revive this old topic; I’m not sure what the etiquette is here.
I’m trying to achieve exactly what the original poster wants to do, only with 2D shapes. When I export to SVG/PDF, all the vector paths are in the drawing and the AxiDraw will draw them.
I’m looking for a way to export only the paths that are visible. Basically I want to replicate the Pathfinder function in Illustrator.
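One idea, not something I have working yet: since everything is already 2D, the occlusion could be done directly with polygon booleans before exporting, e.g. with the shapely library in Python. A rough sketch, assuming the shapes are filled and already available as polygons in back-to-front draw order:

from shapely.geometry import Polygon
from shapely.ops import unary_union

def visible_outlines(shapes):
    # shapes: shapely Polygons in draw order, back to front.
    # Returns, for each shape, the part of its outline that is not
    # hidden by anything drawn on top of it.
    outlines = []
    for i, shape in enumerate(shapes):
        above = shapes[i + 1:]  # everything drawn later covers this shape
        if above:
            outlines.append(shape.boundary.difference(unary_union(above)))
        else:
            outlines.append(shape.boundary)
    return outlines

# two overlapping squares: the back one loses the covered part of its outline
back = Polygon([(0, 0), (2, 0), (2, 2), (0, 2)])
front = Polygon([(1, 1), (3, 1), (3, 3), (1, 3)])
print(visible_outlines([back, front]))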
Hi. If I remember right, what I once did was add a tiny increasing depth value to all the 2D shapes, then use the Blender approach with an orthographic camera to avoid far-away shapes rendering smaller than closer ones.
I think I actually scripted the depth trick in Blender. I can’t remember exactly… I think I loaded an SVG, added depth to each shape using the Python API, then rendered back to SVG. Something like that.
In the Scripting tab you can see the Python code. It looks like this:
import bpy

OpsObj = bpy.ops.object

OpsObj.select_all(action='DESELECT')
# select all shapes
OpsObj.select_by_type(type='CURVE')
# center shape origins
OpsObj.origin_set(type='ORIGIN_CENTER_OF_VOLUME', center='MEDIAN')

# find the collections created by the SVG import
colls = bpy.data.collections.keys()
svgs = filter(lambda coll: 'svg' in coll, colls)

x = []
y = []
z = 0.0

# give each shape a slightly higher z so nearer shapes occlude the ones
# below, and collect bounding-box extents to center the camera on later
for svg in svgs:
    obs = bpy.data.collections.get(svg).objects
    sources = [o for o in obs]
    for src in sources:
        src.location.z = z
        loc = src.location
        bb = src.bound_box
        # bound_box corners 0 and 6 are opposite (min/max) corners
        x.append(loc.x + bb[0][0])
        x.append(loc.x + bb[6][0])
        y.append(loc.y + bb[0][1])
        y.append(loc.y + bb[6][1])
        z += 0.001

# replace any existing cameras with a single top-down orthographic camera
OpsObj.select_by_type(type='CAMERA')
OpsObj.delete(use_global=False, confirm=False)
OpsObj.select_all(action='DESELECT')
OpsObj.camera_add(enter_editmode=False, align='VIEW', location=(0.0, 0.0, 95.0), rotation=(0.0, 0.0, 0.0))
camdata = bpy.context.object.data
camdata.type = 'ORTHO'
camdata.ortho_scale = 0.3
# center the view on the shapes; camera shift is measured in view units,
# hence the division by ortho_scale
camdata.shift_x = (min(x) + max(x)) / 2.0 / camdata.ortho_scale
camdata.shift_y = (min(y) + max(y)) / 2.0 / camdata.ortho_scale

# enable Freestyle plus the SVG exporter's output, roughly A4 at 96 dpi
bpy.context.scene.svg_export.use_svg_export = True
bpy.context.scene.render.use_freestyle = True
bpy.context.scene.render.resolution_x = 793
bpy.context.scene.render.resolution_y = 1122
bpy.context.scene.render.resolution_percentage = 100
bpy.context.scene.frame_end = 0
# the SVG itself is written when the scene is rendered,
# e.g. bpy.ops.render.render(write_still=True)
I think my plan was to do everything from the command line, so I would never open Blender but just execute a script. I stopped the experiment when I realized that only closed shapes were rendered, not open lines or curves.
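For reference, the headless run would be along these lines, using Blender’s standard command-line flags (the file names here are placeholders):

blender --background scene.blend --python export_svg.py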