Processing's PFont class has an "undocumented" method (not in the main public reference) called .getShape() that will give you a PShape of a glyph from a font object: PFont docs
But there is a catch… it only takes a char as an argument, and emojis are Strings or char[] arrays… so I guess I'm out of luck.
The Font::createGlyphVector() method, which PFont::getShape() invokes internally, is overloaded to accept other datatypes besides just char[], such as int[] and String:
So you can try your luck and ask the devs to add more datatype options to getShape()'s calling signature:
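For reference, a minimal plain-AWT sketch (no Processing needed, using a headless-safe FontRenderContext) shows the overloads side by side. Note the int[] overload takes glyph codes, not Unicode code points, so it is queried from the font here rather than hard-coded:

```java
import java.awt.Font;
import java.awt.font.FontRenderContext;
import java.awt.font.GlyphVector;

public class GlyphOverloads {
    public static void main(String[] args) {
        // No Graphics2D needed: a FontRenderContext can be built directly.
        Font font = new Font("Verdana", Font.BOLD, 106);
        FontRenderContext frc = new FontRenderContext(null, true, true);

        // char[] overload -- what PFont.getShape() effectively relies on.
        GlyphVector fromChars = font.createGlyphVector(frc, new char[] { '1' });

        // String overload -- also handles surrogate pairs (emoji).
        GlyphVector fromString = font.createGlyphVector(frc, "1");

        // int[] overload -- these are GLYPH codes internal to the font,
        // so we look one up instead of inventing it.
        int[] glyphCodes = { fromChars.getGlyphCode(0) };
        GlyphVector fromGlyphs = font.createGlyphVector(frc, glyphCodes);

        System.out.println(fromChars.getNumGlyphs());   // 1
        System.out.println(fromString.getNumGlyphs());  // 1
        System.out.println(fromGlyphs.getNumGlyphs());  // 1
    }
}
```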
I’m interested in this topic, but I don’t know anything about Java. Can I have an example please (if possible)? Let’s say the letter “1”.
The following source code gets the glyph vector coordinates for '1':
/*
This demo uses java code to display the character '1' as
well as the font outline and vectors (x,y coordinates) used
to create it. The outline was initially created offscreen,
so a translation was required to pull it down so that we can
see it. A similar technique was used to reposition the vector
points (marked by small circles) using an AffineTransform. The
points were found using a PathIterator and temporarily held
in an array.
*/
import java.awt.*;
import java.awt.event.WindowAdapter;
import java.awt.event.WindowEvent;
import java.awt.font.FontRenderContext;
import java.awt.font.GlyphVector;
import java.awt.geom.AffineTransform;
import java.awt.geom.PathIterator;
double[] coords = new double[6]; // currentSegment() requires room for up to 6 values
int index = 0;
int _wndW = 300;
int _wndH = 500;

class CanvasForDisplay extends Canvas {
  public void paint(Graphics g) {
    Graphics2D g2D = (Graphics2D) g;
    Font font = new Font("Verdana", Font.BOLD, 106);
    FontRenderContext fontRenderContext = g2D.getFontRenderContext();
    GlyphVector glVector = font.createGlyphVector(fontRenderContext, "1");
    g2D.drawGlyphVector(glVector, 120, 360);
    Shape s = glVector.getOutline();
    g2D.translate(120, 150); // Initially drawn above our window, so move it down.
    g2D.draw(s);
    println("Shape =", s);
    println(s.getBounds());
    AffineTransform translate = new AffineTransform();
    translate.setToTranslation(0, 100);
    // Flatness 0.5 replaces the curves with short line segments.
    PathIterator path = s.getPathIterator(translate, 0.5);
    println("pathIterator =", path);
    println("windingRule =", path.getWindingRule());
    index = 0; // Reset so repaints don't keep counting up.
    while (!path.isDone()) {
      path.currentSegment(coords);
      println("pt[" + index + "] = " + (int) coords[0] + " : " + (int) coords[1]);
      g2D.drawOval((int) coords[0], (int) coords[1], 4, 4);
      path.next();
      index++;
    }
  }
}

void setup() {
  surface.setVisible(false); // Don't show the default Processing window.
  Frame frame = new Frame("FontRenderContextExample");
  frame.setBounds(100, 100, _wndW, _wndH);
  frame.add(new CanvasForDisplay());
  frame.setVisible(true);
  frame.addWindowListener(new WindowAdapter() {
    public void windowClosing(WindowEvent e) {
      System.exit(0);
    }
  });
}
I have played with Geomerative before; it could be a solution since it takes a String and not a char, but I wonder how it goes about it internally!
Processing "as is" can get the contours of "simple" glyphs like '1' fine; the problem arises when they are represented by multiple chars in Java (like emojis).
Ok, I see how this works. When you use new String("012345") it will get the coordinates for each character in succession. If you look at the screenshot of the older Webdings version you can see where these characters came from: it starts at zero on the first bar character on line 2 (I'm not sure what the numbers on the first line are for).
int count = 0x1F310; // U+1F310 GLOBE WITH MERIDIANS
Font font = new Font("Segoe UI Emoji", Font.BOLD, 120);
GlyphVector glVector = font.createGlyphVector(fontRenderContext, Character.toString(count)); // Character.toString(int) requires Java 11+
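This works because the String overload carries the full code point, which a single char cannot. A small standalone check (plain AWT; "Segoe UI Emoji" is an assumption and AWT will silently fall back to the Dialog font if it is not installed) makes the surrogate-pair situation visible:

```java
import java.awt.Font;
import java.awt.font.FontRenderContext;
import java.awt.font.GlyphVector;

public class EmojiGlyph {
    public static void main(String[] args) {
        int cp = 0x1F310; // U+1F310 GLOBE WITH MERIDIANS

        // Character.toChars() works on any JDK; Character.toString(int) needs Java 11+.
        String emoji = new String(Character.toChars(cp));

        // One code point, but TWO chars: this is why getShape(char) can't take it.
        System.out.println(emoji.length());                          // 2
        System.out.println(emoji.codePointCount(0, emoji.length())); // 1

        // The String overload still produces a GlyphVector.
        Font font = new Font("Segoe UI Emoji", Font.BOLD, 120);
        FontRenderContext frc = new FontRenderContext(null, true, true);
        GlyphVector gv = font.createGlyphVector(frc, emoji);
        System.out.println(gv.getNumGlyphs());
    }
}
```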
A while back I worked on a wrapper to handle some font glyph stuff in the CamZup library, since Processing’s methods made PShapes with errors. iirc, there was an issue with the AWT path iterator repeating the same point at the open and close of a shape, and Processing not handling this case.
Supplying a flatness of 0 to the AWT path iterator lets you get the Bezier control points of the curves, rather than line samples. Having the choice may allow you, or your students, to decide where there's an advantage to each.
The tough part is preserving the cut-out shapes. The example code above uses an AWT-based renderer. If the P2D-like renderer is used, all the holes are opaque. Same deal with converting from the curve objects that I use to PShapes.
I can attest the method works fine; you just have to do some checking for when a vertex repeats, because a repeated vertex tells you an outer polygon has closed. So check whether a vertex equals the initial vertex of the current contour, which indicates a closed polygon. If you are drawing directly from the vertices, you can simply repeat that vertex and use endShape() alone, without endShape(CLOSE).
Then the problem is knowing when to switch to beginContour() / endContour() for the holes, and when a contour is a separate filled part of the glyph… One might perhaps use the PShape .contains() method for that decision, but since I was not drawing directly I used shapely to decide and apply the holes.
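On the Java side, the same hole-vs-separate-part decision can be sketched with AWT geometry instead of shapely: test whether a point of one closed contour lies inside another contour. This is an illustration with hand-built rectangles, not the glyph pipeline itself:

```java
import java.awt.geom.Path2D;
import java.awt.geom.Point2D;

public class HoleCheck {
    // Build a closed rectangular contour, standing in for one glyph contour.
    static Path2D.Double ring(double x, double y, double w, double h) {
        Path2D.Double p = new Path2D.Double();
        p.moveTo(x, y);
        p.lineTo(x + w, y);
        p.lineTo(x + w, y + h);
        p.lineTo(x, y + h);
        p.closePath();
        return p;
    }

    public static void main(String[] args) {
        // An outer contour plus two candidates, as in a glyph like 'O' vs 'i'.
        Path2D.Double outer = ring(0, 0, 100, 100);
        Path2D.Double inner = ring(25, 25, 50, 50);
        Path2D.Double apart = ring(200, 0, 50, 50);

        // A contour whose points fall inside another contour is a hole
        // (beginContour()/endContour()); otherwise it's a separate filled part.
        Point2D innerPt = inner.getCurrentPoint(); // the contour's start point
        Point2D apartPt = apart.getCurrentPoint();
        System.out.println(outer.contains(innerPt)); // true  -> hole
        System.out.println(outer.contains(apartPt)); // false -> separate part
    }
}
```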
On py5 I’m doing this (and then using other strategies, with the shapely library) to make polygons with holes:
...
glyph_pt_lists = [[]]
c_shp = font.get_shape(c, 1)
vs3 = [c_shp.get_vertex(i) for i in range(c_shp.get_vertex_count())]
vs = set()
for vx, vy, _ in vs3:  # discarding vz
    x = vx + x_offset
    y = vy + y_offset
    glyph_pt_lists[-1].append((x, y))
    if (x, y) not in vs:
        vs.add((x, y))
    else:
        glyph_pt_lists.append([])  # will leave a trailing empty list