Getting pixel data from a PShape

Hello,
I’m looking into getting pixel data from a PShape. I have successfully imported a .obj file into Processing, but now I want to retrieve the pixel data of the object and eventually put it into an array, so that I can later compare two nearly identical objects for minuscule differences using those arrays. I’d appreciate any help, as I’m very stuck. I’ve thought about taking the 3D .obj file and converting it into a 2D image consisting of all angles of the object flattened out, but from what I’ve researched, that requires a lot of work and extra software to unwrap a .obj.

Sincerely,
Andrew

Hi Andrew!

A PShape is actually vector data, isn’t it? It contains vertices, normals, etc. Do you mean the textures used by an .obj? A PShape does not contain pixels. You CAN render a PShape as an image, but there are infinite ways to render it, depending on translation, rotation and scaling.

Unwrapping a 3D object is certainly doable, but not at all simple. Blender can do this for baking textures.

Do you have an image of what you are trying to achieve?

Hi Hamoid,

I really appreciate your response. Yes, to a degree; however, when drawing a PShape the result does inherit color from the .mtl file that comes with the .obj. And yes, texture is a better way to describe what I’m trying to compare. I don’t have a set object; I was just looking to develop a universal algorithm that would take any two highly similar 3D scans (.obj files), get all the color data into two arrays, and then compare the arrays to find differences.

Thank you,

Andrew

What do you want to compare? The actual data in the PShape or how it looks? If you want to compare the actual data, you can look at https://processing.github.io/processing-javadocs/core/processing/core/PShape.html and use the getters there (so you can get all vertices, normals, etc. and compare them to a different PShape).
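For example, something like this rough, untested sketch compares the raw vertex data. It assumes both .obj files were exported with the same vertex order, and "a.obj" / "b.obj" are just placeholder file names:

```
PShape a, b;

void setup() {
  size(100, 100, P3D);
  a = loadShape("a.obj");
  b = loadShape("b.obj");
  compareShapes(a, b);
}

void compareShapes(PShape s1, PShape s2) {
  // OBJ files usually load as GROUP shapes, so walk the children in parallel
  int children = min(s1.getChildCount(), s2.getChildCount());
  for (int c = 0; c < children; c++) {
    PShape c1 = s1.getChild(c);
    PShape c2 = s2.getChild(c);
    int n = min(c1.getVertexCount(), c2.getVertexCount());
    for (int i = 0; i < n; i++) {
      float d = PVector.dist(c1.getVertex(i), c2.getVertex(i));
      if (d > 0.001) {  // small tolerance for numeric noise
        println("child " + c + ", vertex " + i + " differs by " + d);
      }
    }
  }
}
```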

But if you want to compare the looks, then you could have 2 PGraphics, render one PShape to each, then do pixel subtraction, which should give you a black image if they are identical, but colored pixels wherever there’s a difference. You could render the two PShapes while rotating them and do the subtraction in real time, so you can compare the shapes from every angle… Might this work?
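A rough, untested skeleton of that render-and-compare idea (file names are placeholders again, and anti-aliasing may produce a few stray differing pixels even for identical shapes):

```
PShape a, b;
PGraphics ga, gb;
float angle = 0;

void setup() {
  size(400, 400, P3D);
  a = loadShape("a.obj");
  b = loadShape("b.obj");
  ga = createGraphics(width, height, P3D);
  gb = createGraphics(width, height, P3D);
}

void draw() {
  renderTo(ga, a);
  renderTo(gb, b);
  angle += 0.01;

  // count differing pixels in this view; identical shapes give zero
  ga.loadPixels();
  gb.loadPixels();
  int diff = 0;
  for (int i = 0; i < ga.pixels.length; i++) {
    if (ga.pixels[i] != gb.pixels[i]) diff++;
  }
  image(ga, 0, 0);
  println(diff + " pixels differ in this view");
}

// draw one shape into its own off-screen buffer with the shared rotation
void renderTo(PGraphics g, PShape s) {
  g.beginDraw();
  g.background(0);
  g.lights();
  g.translate(g.width / 2, g.height / 2);
  g.rotateY(angle);
  g.shape(s);
  g.endDraw();
}
```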

I’m looking to compare the actual look of the .obj, specifically the color. If I render a PShape to a PGraphics, how would I gain access to the pixel data or the actual colors? Further, for the subtraction, wouldn’t I have to place that data into an array and then compare in order to subtract?

PGraphics inherits from PImage, so you can get the pixels out of a PGraphics.

But to do the subtraction you wouldn’t need to do that, as you could use blendMode(SUBTRACT); when drawing the second layer on top of the first one.
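Something like this untested sketch, where "viewA.png" and "viewB.png" stand in for two rendered views of the shapes:

```
PImage viewA, viewB;

void setup() {
  size(400, 400, P2D);
  viewA = loadImage("viewA.png");
  viewB = loadImage("viewB.png");
}

void draw() {
  background(0);
  blendMode(BLEND);
  image(viewA, 0, 0);   // first rendered view
  blendMode(SUBTRACT);
  image(viewB, 0, 0);   // second view subtracted on top
  blendMode(BLEND);     // restore the default blend mode
}
```

Identical views cancel to black; any non-black pixels mark a difference.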

I’ll definitely give it a try. However, would this implementation allow me to get the pixels, if necessary, from all around the .obj, or is it a 2D render of just the front? I would like to compare the given .obj from all angles. Thank you.

This would just be a view. That’s why I suggested rotating the object. But there are many ways to do this, and there may be occluded areas that are impossible to see unless you place the camera inside… You didn’t mention what the output of the program should be. Is it to inspect manually? In that case you could use a camera and rotate, pan, and zoom the object with the mouse while seeing the difference. Or should the output be screenshots? Values? Lots of possibilities :)

To start, I’ll probably use the pan, rotate and zoom to visually verify it’s working. However, the plan was to return the 2nd .obj with a circle around the area where the difference is, compared to the first. Do you recommend any way to go about this? Or can you point me in the right direction?

Thanks a ton,

Andrew

The PeasyCam library gives you pan, rotate and zoom in one or two lines of code :)
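For example (assuming PeasyCam is installed through the Contribution Manager; "model.obj" is a placeholder):

```
import peasy.*;

PShape model;
PeasyCam cam;

void setup() {
  size(600, 600, P3D);
  model = loadShape("model.obj");
  cam = new PeasyCam(this, 500);  // 500 is the initial camera distance
}

void draw() {
  background(0);
  lights();
  shape(model);  // drag to rotate, right-drag to pan, wheel to zoom
}
```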

To draw the circle you could then scan the pixels manually and find the center of the non-black-pixel blob. But maybe there are multiple blobs? Something to consider. You can also use OpenCV, which can detect blobs automatically and give you an array with their contours (maybe their centers too?).

You may also need to implement a threshold in the detection, in case there are tiny color variations, to avoid everything being marked as changed. But you’ll figure that out later.
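A rough, untested sketch of that manual scan with a brightness threshold built in – it assumes a single blob and that a PImage called diff holds the subtraction result:

```
// threshold is in 0-255 brightness units
PVector findDiffCenter(PImage diff, float threshold) {
  diff.loadPixels();
  float sumX = 0, sumY = 0;
  int count = 0;
  for (int y = 0; y < diff.height; y++) {
    for (int x = 0; x < diff.width; x++) {
      if (brightness(diff.pixels[y * diff.width + x]) > threshold) {
        sumX += x;
        sumY += y;
        count++;
      }
    }
  }
  if (count == 0) return null;  // nothing above the threshold
  return new PVector(sumX / count, sumY / count);
}
```

You could then draw a noFill() ellipse at that center over the second scan to highlight the area.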

Sounds like a fun project :)

Could you say more about what your final goal is?

For example, say you have 2 identical .obj teapots with different mtl files – there are tiny differences in the colors on the two teapots.

In your ideal world, how would you present that information? Do you want a third teapot that marks the locations of difference, for example with red marks on the surface? Do you want a color histogram of differences as a bar chart, showing which colors changed and which stayed the same? Something else?

Hi Jeremy!
Okay, based on your example: let’s say I have a brand new teapot and I personally 3D scan it, creating a .obj of the object. I then use this teapot for roughly a year, so there may be scratches or dents on it, but the color remains the same. Now I 3D scan the teapot again in its used condition. In Processing, I would like to use the first scan as the reference to compare the second scan against, so I can find the differences, which in this instance could be a scratch, dent, stain, etc. As the end result, I would like Processing to create a 3D circle around the difference found on the second .obj, displaying the second .obj with that circle around/highlighting the difference. However, accessing the pixels for comparison, or using blending to subtract similar pixels, is something I’m struggling with. I want to try these options as I continue to look for an efficient method to reach my goal. Thanks for the response, I really appreciate your time.

Okay, to be sure I understand: for dent detection, you are actually expecting the obj mesh to be different, not the color – is that correct? So you want to register the two meshes and measure their difference?

For measuring the distance between two meshes, see for example MeshLab. There are also publications on the math involved – assuming the second mesh is an altered version of the first, registering is key, and being able to make assumptions about common orientation of the two meshes helps simplify the problem.

https://www.researchgate.net/post/What_kind_of_metrics_can_I_use_to_compare_two_3D_surfaces

Thank you for the response, I will definitely look further into this. However, as far as comparing the meshes for the dent goes, wouldn’t it be simpler to just compare the colors again, since a dent would entail some level of color distortion or difference compared to the original? Any suggestions on which color comparison algorithm to implement for my objective?

So is the color the same, or is it different? Your problem and objective / outcome are not clear to me.

If you scan the same object twice, and there are two files for each (obj and mtl) then you could measure:

  1. differences in the mesh, mathematically
  2. differences in the materials (color), as an inventory (regardless of location)
  3. differences in various 3d-rendered views of the objects

Depending on the lighting of your scan and the location and size of the dent (and resolution of your scanning equipment), it is not necessarily the case that a dent that causes a change in the mesh will also cause a change in color – it may not. However, perhaps this would work with your objects for your use case – have you tested it? You haven’t said anything about what these things are, other than “compare two identical objects for minuscule differences.”

I’m sorry for not being clear when I stated that the color remains the same. I was referring to the general color of the teapot: let’s say it’s gold; the second scan of the teapot would also return gold as its general color, but the scratches would be silver. I mentioned this because I didn’t want the fading of the general color to be taken into consideration. Sorry for the confusion. The actual object I’m doing this on, as odd as it may seem, is a human head, aka a 3D selfie. This is a school assignment, and I’m going beyond the requirements.

The first scan of the head is going to be what I compare against; after a couple of weeks, I would do another 3D scan of the head and look for differences, being, sorry for this, acne, scar tissue, anything that would pose as a difference, then highlight that difference and also return the rotational angle (out of 360) of the pixel area to refer to in the future. Now, I’m not sure what would be the best approach to accomplish this, or whether I should take the mesh difference into account or stick with just the color. I have considered taking multiple 3D rendered views of the object, gathering the pixel data from each angle, and comparing them for differences. However, I wanted to expand my options before settling on one. What would you recommend as the best approach, and is it all achievable in Processing? If so, what libraries would I need? Also, as far as the general skin tone changing slightly, I could implement a threshold to overlook it, so the comparison isn’t so sensitive. Thank you again for your time and response, Jeremy, I greatly appreciate it.

Interesting project! Warning: faces are much, much harder than teapots – even with perfect lighting and good resting expressions there are going to be a lot of variables, even over a few hours (never mind days) – angle of the neck, breathing, flush, hydration, eye dilation, beard stubble, micro-expressions, etc.

Have you determined how the subjects will be recorded (with what lighting / equipment, in what environment), and do you have any samples of your data to work with? Those will help you determine what kind of registration is needed so that you can do comparison. In face detection (e.g. using OpenCV) one simple method is to align the inverted triangle between the centers of the eyes and center of the mouth. More generally, you could try to warp one mesh into the other before generating the images and making your comparisons.
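For example, something like this untested sketch with the OpenCV for Processing library (gab.opencv) finds the face rectangle so each rendered view can be cropped to the same region before comparing – "scanA.png" is just a placeholder:

```
import gab.opencv.*;
import java.awt.Rectangle;

PImage scanA;
OpenCV opencv;

void setup() {
  size(640, 480);
  scanA = loadImage("scanA.png");
  opencv = new OpenCV(this, scanA);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  Rectangle[] faces = opencv.detect();
  if (faces.length > 0) {
    // crop to the detected face so both scans cover the same region
    PImage faceA = scanA.get(faces[0].x, faces[0].y, faces[0].width, faces[0].height);
    image(faceA, 0, 0);
  }
}
```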

I have the lighting handled with a controlled environment, to keep the lighting as close to identical as possible. That is an interesting approach; however, if I align the two 3D scans based on the triangle from OpenCV, what would I do next – blendMode subtraction? I’m currently writing the code to find the mesh difference, using your references, but I’m also thinking about how exactly I would compare the colors. For instance, what if the face gains weight? Would the mesh become different, since the pixel locations would probably change when comparing the pixels? What would you recommend to overcome this issue within Processing? Any libraries or classes you recommend using? I’ve come to the conclusion that I need to compare the meshes for differences, to find any shape distinctions or physical changes over the area, and then compare color changes, to find an accurate difference. Then I should be all good. I’m on a deadline for this project and am trying to find the most accurate, simple way.

This is the heart of the issue – faces are plastic and subject to complex changes. If you are trying to compare things like freckles and moles (or circles under the eyes, differences in makeup, etc.) then you might want a multi-step process, assuming that you already know that Face A and Face B belong to the same person:

  1. detect the outlines of Face A and B
  2. detect interest keypoints in A and B
  3. construct meshes on the keypoints
  4. if the mesh networks are not equivalent (e.g. new mole = new keypoint), make them equivalent
  5. using the meshes, morph image A to target B
  6. measure the differences between the Morph A and Face B

Now, I have no idea if this particular approach will work for you. This is a hard problem; in addition to reviewing the existing literature I would strongly recommend developing anything you do with tests on actual sample images based on your recording setup. Without good sample data to test against you will have no way of knowing what is a good or bad design – there are too many potential variables.

P.S.

Here is an example of a Kaggle task / contest around facial keypoint detection. Notice that one of the listed areas of application is “detecting dysmorphic facial signs for medical diagnosis” which sounds like it might relate to your use case.