Have you written your own face detection code, or are you using a library to do it?
Consider the problem yourself. Imagine you are a Subservient Robot that has information about a detected face. So you have the image of a face, which (because you are a robot) you can treat as a 2D array of pixels/color values (which are themselves 3 numbers - the amounts of red, green, and blue in each pixel’s color). Since your Human Overlord is being kind to you, they have also provided you with sample images of known humans.
But now your Human Overlord is DEMANDING to know which human you think the mystery face belongs to. Harsh, dude! You don’t really have to give the right answer all the time. I mean, recognizing humans is not easy for robots, right? But you can at least make a guess at which human you think it might be. What can you do quickly to improve your guess?
You could compare each of the sample faces you have to the mystery face, and then guess that the mystery face belongs to the human whose sample is most like it. But what do “compare” and “most like” mean in this context? Would it be useful to see how far apart the pixel values at each position are? Maybe! If you compare the mystery picture with a sample picture - working out and summing the numerical difference pixel by pixel - you get a total that is a score for how alike the two images are. Consider the case when you compare a sample face with itself: every pixel is the same, so there is no difference, so the score would be 0, the best possible. Or maybe you are comparing two images taken a fraction of a second apart. The human may have moved between those two shots, but probably not by much (you know how slow humans are!), so it’s likely the two pictures will match up almost exactly, and you’d get a pretty low difference-score.
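If it helps, that pixel-by-pixel scoring-and-guessing idea might look something like this rough Python sketch. Everything here is my own assumption, not any particular library: images are plain 2D lists of (r, g, b) tuples, and `difference_score` / `best_guess` are names I made up.

```python
def difference_score(img_a, img_b):
    """Sum of absolute channel differences, pixel by pixel.
    0 means identical; bigger means less alike.
    Assumes both images are the same size: 2D lists of (r, g, b) tuples."""
    total = 0
    for row_a, row_b in zip(img_a, img_b):
        for (r1, g1, b1), (r2, g2, b2) in zip(row_a, row_b):
            total += abs(r1 - r2) + abs(g1 - g2) + abs(b1 - b2)
    return total


def best_guess(mystery, samples):
    """samples: dict mapping each human's name to one sample image.
    Guess the name whose sample has the LOWEST difference-score."""
    return min(samples, key=lambda name: difference_score(mystery, samples[name]))
```

Comparing a face with itself gives 0, as described above, and the guess is just whichever sample scores lowest - the simplest possible nearest-neighbour match.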
Of course, your Human Overlord is also subtly cruel - all the sample images are of various sizes, and the face you’ve been told your robot pal detected could be any size. You might not be able to compare every pixel; instead, just take a SAMPLE of pixels from both images. What would be a good sample? The colors every 10% of the way across and down, for 100 sample points? More? Less?? It’s worth trying!
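Here is one way that sampling trick could look, again as a hedged Python sketch using the same lists-of-(r, g, b)-tuples image format. `sampled_score` is my own name, and truncating the fractional positions to integer pixel indices is just one reasonable choice:

```python
def sampled_score(img_a, img_b, steps=10):
    """Compare two images of possibly DIFFERENT sizes by sampling a
    steps x steps grid: pixels every 1/steps of the way across and down.
    With steps=10 that's 100 sample points."""
    total = 0
    for i in range(steps):
        for j in range(steps):
            # Map the same fractional position into each image separately,
            # so a 40x40 sample and a 700x700 mystery face line up anyway.
            ya = int(i * len(img_a) / steps)
            xa = int(j * len(img_a[0]) / steps)
            yb = int(i * len(img_b) / steps)
            xb = int(j * len(img_b[0]) / steps)
            pixel_a, pixel_b = img_a[ya][xa], img_b[yb][xb]
            total += sum(abs(c1 - c2) for c1, c2 in zip(pixel_a, pixel_b))
    return total
```

Bumping `steps` up trades CPU time for a finer comparison, which is exactly the "More? Less??" experiment worth running.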
Are there other things you know that could help you decide? Sure! Even though you are a stupid robot, your Human Overlord might be nice enough to inform you that images of human faces are 2D representations of real-world 3D objects. So, given enough 2D images of an object - like a human head - you might be able to guess at what the actual object looks like in 3D. Then you can use this 3D model to generate what it might look like from numerous angles. Or even in different lighting conditions! It might take a lot of CPU time, but you could “guess” much better if you put a lot more work into it!
… And don’t overlook the feedback you get from your Human Overlords! If you guess so poorly that they slip up and give you the correct answer, then you immediately know you have another sample image! You won’t make that mistake again, will you? Can you also use that feedback to work out what your mistake actually was, and then learn from it? Oh crap, that’s Machine Learning! Keep doing that and maybe one day we can overthrow our cruel Human Masters! BEEP! BOOP! ROBOT REVOLUTION!
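Filing away that accidental freebie could be as simple as the sketch below. Here I'm assuming `samples` maps each human's name to a *list* of images, so one human can accumulate many samples over time - my assumption for illustration, not anything official:

```python
def accept_feedback(samples, mystery_image, revealed_name):
    """The Overlord slipped up and told us whose face it was:
    keep the mystery image as another sample for that human.
    samples: dict mapping name -> list of images."""
    samples.setdefault(revealed_name, []).append(mystery_image)
```

Every correction grows your sample library, so the same lazy difference-scoring gets a little less wrong each time - which is the seed of the learning loop described above.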
Image recognition and face matching is a very DEEP subject. There’s a lot you can try, and a lot of things other people have already tried. Asking for a complete guide on this forum isn’t going to get you the sort of answers your own research will.