How to print a label from a neural network model?

I have a Numaker-PFM-M487 device and a .bin file for it. The .bin file contains the weights and the labels (‘cat’, ‘dog’, etc.) of a neural network.
Now, how can I print one of the labels from the .bin file using Processing?
I wrote the code below, but it only shows a number, not the label.
Please help me, many thanks.
Here is my code:

import processing.serial.*;

Serial myPort;
int SIZE = 3072;  // 32 * 32 pixels * 3 color channels
PImage img;

void setup() {
  size(32, 32);
  img = loadImage("image3.png");
  image(img, 0, 0);
  printArray(Serial.list());
  myPort = new Serial(this, Serial.list()[1], 9600);
}

void draw() {
  image(img, 0, 0);
}

void keyPressed() {
  int index = 0;
  String all = "";
  int temp;
  PImage myImage = img;
  myImage.loadPixels();
  // pixels[] holds 32 * 32 = 1024 colors; send all three channels
  // (interleaved R, G, B per pixel) so the board gets SIZE = 3072 values
  for (int i = 0; i < myImage.pixels.length; i++) {
    color c = myImage.pixels[i];
    all = all + (int)red(c) + "," + (int)green(c) + "," + (int)blue(c) + ",";
  }
  myPort.write(all);  // send String-type data to the board
  println(all);
  delay(1000);
  while ((temp = myPort.read()) > 0) {
    if (index % 32 == 0) {
      println();
      index = 0;
    }
    print((char)temp);  // receive labels from the M487 board
    print(" ");
    index++;
  }
}
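For reference, the firmware on the M487 side would have to parse this comma-separated string back into bytes before it can feed the network. A minimal C sketch of that parsing; the function name and buffer handling are my own assumptions, not part of any vendor API:

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define IMG_BYTES 3072  /* 32 x 32 pixels x 3 channels, as sent by the sketch */

/* Parse a comma-separated string of 0-255 pixel values into a signed
 * 8-bit buffer. Returns the number of values parsed. Whether the values
 * also need an offset (e.g. subtracting 128 for q7 input) depends on how
 * the model was quantized. */
static int parse_pixels(const char *csv, int8_t *buf, int max)
{
    int n = 0;
    while (*csv != '\0' && n < max) {
        buf[n++] = (int8_t)strtol(csv, NULL, 10);
        const char *comma = strchr(csv, ',');
        if (comma == NULL)
            break;
        csv = comma + 1;
    }
    return n;
}
```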

Hi, welcome to the forum!

I may be missing something important, but I’m not sure whether that board is running a program you wrote or the firmware that comes with the Numaker-PFM-M487. I know you can flash mbed boards with a .bin file, but is that the file you are referring to? And the issue is technically not related to Processing, because it depends more on the spec of the board and its firmware.

Also, this is a naive guess without knowing the context, but maybe the number you receive actually represents the label, for example 0 is cat and 1 is dog? That would be normal for how neural networks are trained - but again I’m just guessing, and we need further context on what you are working on :slight_smile:
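If that guess is right, turning the number into a name is just a table lookup on whichever side receives it. A sketch in C, assuming the standard CIFAR-10 class order; the actual index-to-name mapping is fixed by how the model was trained, so verify it against your training script:

```c
/* Standard CIFAR-10 class order -- an assumption; check your training setup. */
static const char *CIFAR10_LABELS[10] = {
    "airplane", "automobile", "bird", "cat", "deer",
    "dog", "frog", "horse", "ship", "truck"
};

/* Map a class index to its name, guarding against out-of-range values. */
const char *label_for(int class_index)
{
    if (class_index < 0 || class_index >= 10)
        return "unknown";
    return CIFAR10_LABELS[class_index];
}
```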

I am working with the AlexNet model. I trained this model on the CIFAR-10 database. When I load one picture from that data into this model, it does not show the class index (0 is cat or 1 is dog). Could you help me?

Thanks for the info! But I’m totally lost about the context again. Can you share what steps you took, which tutorials you followed, the code you wrote for the board (if any), and what output you actually get?

Step 1: I trained the AlexNet model on the CIFAR-10 database. (It’s OK.)
Step 2: I got the weights of that model. (It’s OK.)
Step 3: I compiled the weights with mbed. (It’s OK.)
Step 4: I load an image into the mbed device, but it does not show the label of that image.
(I want the mbed device to show the label of the input image; for example, if I load an image of a dog, then the device should show the class ‘dog’.)
Here is the structure of the model:

  // conv0: 32x32x3 in, 3x3 kernel, stride 2 -> 16x16, then ReLU and 2x2 max-pool -> 8x8
  arm_convolve_HWC_q7_basic(img_buffer2, 32, 3, conv0_wt, CONV0_CHANNEL, 3, 1, 2, conv0_bias, 0, CONV0_OUT_SHIFT, img_buffer1, 16, bufferA_conv0, NULL);
  arm_relu_q7(img_buffer1, 1 * CONV0_CHANNEL * 16 * 16);
  arm_maxpool_q7_HWC(img_buffer1, 16, CONV0_CHANNEL, 2, 0, 2, 8, NULL, img_buffer2);

  // conv1: 8x8 in, stride 1 -> 8x8, then ReLU and 2x2 max-pool -> 4x4
  arm_convolve_HWC_q7_basic(img_buffer2, 8, CONV0_CHANNEL, conv1_wt, CONV1_CHANNEL, 3, 1, 1, conv1_bias, 0, CONV1_OUT_SHIFT, img_buffer1, 8, bufferA_conv1, NULL);
  arm_relu_q7(img_buffer1, 1 * CONV1_CHANNEL * 8 * 8);
  arm_maxpool_q7_HWC(img_buffer1, 8, CONV1_CHANNEL, 2, 0, 2, 4, NULL, img_buffer2);

  // conv2: 4x4 -> 4x4, then ReLU
  arm_convolve_HWC_q7_basic(img_buffer2, 4, CONV1_CHANNEL, conv2_wt, CONV2_CHANNEL, 3, 1, 1, conv2_bias, 0, CONV2_OUT_SHIFT, img_buffer1, 4, bufferA_conv2, NULL);
  arm_relu_q7(img_buffer1, 1 * CONV2_CHANNEL * 4 * 4);

  // conv3: 4x4 -> 4x4, then ReLU
  arm_convolve_HWC_q7_basic(img_buffer1, 4, CONV2_CHANNEL, conv3_wt, CONV3_CHANNEL, 3, 1, 1, conv3_bias, 0, CONV3_OUT_SHIFT, img_buffer2, 4, bufferA_conv3, NULL);
  arm_relu_q7(img_buffer2, 1 * CONV3_CHANNEL * 4 * 4);

  // 2x2 average pool -> 2x2, then fully connected layer with 10 outputs (one per class)
  avepool_q7_HWC(img_buffer2, 4, CONV3_CHANNEL, 2, 0, 2, 2, NULL, img_buffer1);
  fc_test(img_buffer1, fc0_wt, FC0_WEIGHT_ALL, 10, 0, FC0_OUT_SHIFT, fc0_bias, img_buffer2, fc0_buffer);
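For what it’s worth, the step that seems to be missing after `fc_test` is picking the highest-scoring of the 10 outputs and printing its name. A hedged sketch, assuming the fc outputs end up in `img_buffer2` as in the snippet above; `argmax_q7` is my own helper, not a CMSIS-NN function:

```c
#include <stdint.h>

typedef int8_t q7_t;  /* CMSIS-NN fixed-point type */

/* Return the index of the largest of `n` q7 scores. */
static int argmax_q7(const q7_t *scores, int n)
{
    int best = 0;
    for (int i = 1; i < n; i++) {
        if (scores[i] > scores[best])
            best = i;
    }
    return best;
}
```

After the `fc_test` call you could then do something like `printf("%s\n", labels[argmax_q7(img_buffer2, 10)]);` (or write the string over the board’s UART), where `labels` is a 10-entry array of CIFAR-10 class names in the same order used during training.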

Thanks, that finally gives a bit more context - so you are using this library, right?

I’m afraid you will get better answers if you ask in their forum, because I believe this issue is not specific to Processing. Generally it’s not good practice to ask questions on GitHub issues, but they seem to have a “Discussions” tab (although no one has posted there) - maybe you can give it a try?

Whether you post there or wait for more answers here, you should elaborate on each step even more, because we don’t know what is happening on your computer and in your brain, just like you don’t know what I had for breakfast :slight_smile: How did you train the model? You wrote “OK” on steps 1-3, but how did you verify them? Do you have an example of the image you loaded in Processing? Are there other ways than Processing to test whether the model is trained properly?

Thanks for your reply. I mean that each of these steps ran successfully, and I tested its result; it is OK.
Starting at step 4, I don’t know how to predict the label of one image from this network. Do you know? Please help me!