I am combining the ml5.js FaceApi example with the library's new neural network functionality to make a FaceApi classifier.
The code works in the sense that it passes the features to the brain (the neural network model), and training and saving behave as expected. However, it is not classifying properly: it cannot distinguish between clearly different facial expressions (and therefore clearly different feature inputs). I've been following the same format as Dan Shiffman's PoseNet classifier example and applying it to the FaceApi features.
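For reference, the feature step is roughly the following: the landmark points from each FaceApi detection get flattened into a flat numeric array before being handed to the brain. This is a simplified sketch rather than my exact code (`toInputs` is a made-up name, and the tiny fake detection stands in for ml5's real 68-landmark `parts` object, whose points use `_x`/`_y` fields):

```javascript
// Flatten FaceApi landmark parts into a numeric input vector for the
// neural net, scaling each point into the face's bounding box so the
// inputs describe expression shape rather than absolute screen position.
function toInputs(parts) {
  const points = Object.values(parts).flat();
  const xs = points.map((p) => p._x);
  const ys = points.map((p) => p._y);
  const minX = Math.min(...xs);
  const maxX = Math.max(...xs);
  const minY = Math.min(...ys);
  const maxY = Math.max(...ys);

  const inputs = [];
  for (const p of points) {
    inputs.push((p._x - minX) / (maxX - minX));
    inputs.push((p._y - minY) / (maxY - minY));
  }
  return inputs;
}

// Tiny fake detection (real ml5 FaceApi detections have 68 landmarks
// spread across parts like mouth, nose, leftEye, rightEye, jawOutline):
const parts = {
  leftEye: [{ _x: 10, _y: 20 }, { _x: 20, _y: 20 }],
  mouth: [{ _x: 15, _y: 40 }],
};

console.log(toInputs(parts)); // → [0, 0, 1, 0, 0.5, 1]
```

The resulting array is what gets passed to `brain.addData(inputs, [label])` during collection and `brain.classify(inputs, callback)` at prediction time.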
I'm at a bit of a loss as to why it isn't classifying the different inputs properly. Any pointers would be greatly appreciated.