Error with tensor

I am trying to redo my mouse-tracking code with TensorFlow.js so the ball can learn to follow, but I am getting an error, I think when I use model.fit. I'm trying to train with the x and y position of the ball as input and predict the acceleration that has to be applied to it. This is the ball class with TensorFlow.js:

class BallTF{
 constructor(){
  this.maxSpeed = 1;
  this.maxForce = 10;
  this.pos = createVector(random(width),random(height));
  this.vel = createVector(1,1);
  this.acc = createVector(0,0);

  this.model = tf.sequential();
  this.model.add(tf.layers.dense({
    inputShape: [2],
    units: 2,
    activation: 'sigmoid'
  }));
  this.model.add(tf.layers.dense({
    units: 2,
    activation: 'sigmoid'
  }));
  //const optimizer = tf.train.adam(0.1);
  this.model.compile({
    optimizer: tf.train.adam(0.1),
    loss: 'meanSquaredError'
  });
 }


 update(mos){
   let mous = createVector(mos[0],mos[1]);
   tf.tidy(() => {
     let predAccArr = this.posTrain(this.track(mous));
     let predAcc = createVector(predAccArr[0],predAccArr[1]);
     this.vel = this.vel.add(predAcc);
     this.pos = this.pos.add(this.vel);
     this.acc = this.acc.mult(0);
   });
 }

posTrain(tar){
  let train_ys = tf.tensor2d([[tar.x,tar.y]]);
  let train_xs = tf.tensor2d([[this.pos.x,this.pos.y]])
  for(let i = 0; i < 100; i++){
        this.model.fit(train_xs, train_ys);
  }
  let result = this.model.predict(train_xs).dataSync();
  return result;
 }

show(){
    fill(250,150,150);
    ellipse(this.pos.x,this.pos.y,50,50);
  }

  boundaries(){
   if(this.pos.x > width) this.pos.x = width;
   if(this.pos.x < 0) this.pos.x = 0;
   if(this.pos.y > height) this.pos.y = height;
   if(this.pos.y < 0) this.pos.y = 0;
  }

  track(tr){
    this.acc = p5.Vector.sub(tr,this.pos);
    this.acc.setMag(this.maxForce);
    let steer = p5.Vector.sub(this.acc,this.vel);
    steer.limit(this.maxSpeed);
    console.log("steer : "+steer);
    return steer;
  }

}

The error I get is
tfjs@0.12.0:2 Uncaught (in promise) TypeError: Cannot read property ‘texture’ of undefined
at tfjs@0.12.0:2
at Array.map ()
at e.compileAndRun (tfjs@0.12.0:2)
at e.gather (tfjs@0.12.0:2)
at ENV.engine.runKernel.$x (tfjs@0.12.0:2)
at e.runKernel (tfjs@0.12.0:2)
at e.gather (tfjs@0.12.0:2)
at tfjs@0.12.0:2
at e.tidy (tfjs@0.12.0:2)
at Function.e.tidy (tfjs@0.12.0:2)
The error happens 100 times each update, which makes me think it happens at model.fit, since I am looping that call 100 times.

Below is the normal mouse-follow code without TensorFlow.

class Ball{
 constructor(){
  this.maxSpeed = 1;
  this.maxForce = 10;
  this.pos = createVector(random(width),random(height));
  this.vel = createVector(1,1);
  this.acc = createVector(0,0);
 }

 update(mos){
  let mous = createVector(mos[0],mos[1]);
   this.vel = this.vel.add(this.track(mous));
   this.pos = this.pos.add(this.vel);
   this.acc = this.acc.mult(0);
 }

show(){
    fill(150,150,250);
    ellipse(this.pos.x,this.pos.y,50,50);
  }

  boundaries(){
   if(this.pos.x > width) this.pos.x = width;
   if(this.pos.x < 0) this.pos.x = 0;
   if(this.pos.y > height) this.pos.y = height;
   if(this.pos.y < 0) this.pos.y = 0;
  }

  track(tr){
    this.acc = p5.Vector.sub(tr,this.pos);
    this.acc.setMag(this.maxForce);
    let steer = p5.Vector.sub(this.acc,this.vel);
    steer.limit(this.maxSpeed);
    return steer;
  }

}

And the sketch:

let mos = [];
function setup() {

  createCanvas(400,400);
  background(0);
  b = new Ball();
  bTF = new BallTF();
  frameRate(.2);
}

function draw() {

  mos = [mouseX, mouseY];

  bTF.show();
  bTF.update(mos);
  b.show();
  b.update(mos);

}

Hi there Sean,
This is going to be a bit tricky. While I do have experience with Tensorflow for Python and Processing, I have little experience with Javascript and even less with Tensorflow.js. However, I will try my best to help you out a bit conceptually. Please let me know how much experience you have with machine learning and Tensorflow.
In the meantime, I will explain what I would do to approach this problem.

Since this is a big enough project, I would start by breaking down the problem. The ultimate goal is that you want to train a ML algorithm-controlled ball to follow your mouse. You could write a program that has a ball follow a mouse completely using vectors and target vectors, but seeing as you started out with Tensorflow, I’ll assume you want to try out using a machine learning algorithm. All right. You will probably want to use some type of reinforcement learning algorithm, since you have no particular good “examples” for how the ball should follow the mouse. If you wanted to write this program in a language without a machine learning library (ex. Processing), I would recommend using a genetic-algorithm optimized neural network, since simple neural networks and genetic algorithms are relatively simple to write from scratch. However, you have the lovely hand of Tensorflow guiding you, so no need to do all that grunt work.

I would start by simplifying the problem. Conceptually, you want to teach something to follow a target. The logical first step would be to write a program that randomizes the path of a target, mimicking, in your case, a mouse movement. Writing a random generator for these paths helps to keep you from monotonously sitting at your desk for hours just moving your mouse.
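
I have little JavaScript experience, so treat this as a rough sketch, but I picture the path generator as something like a Perlin-noise wanderer (the noise offsets and the step size are arbitrary choices of mine):

// Minimal sketch of a randomly generated target path using Perlin noise.
// The noise offsets (0 and 1000) and the step size are arbitrary choices.
let t = 0;

function setup() {
  createCanvas(600, 600);
}

function draw() {
  background(0);
  // noise() returns a smooth pseudo-random value in [0, 1],
  // so the target wanders instead of teleporting around.
  let targetX = noise(t) * width;
  let targetY = noise(t + 1000) * height;
  fill(250, 50, 50);
  ellipse(targetX, targetY, 10, 10);
  t += 0.01;
}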

Once you have set up this path generator, you need to set up what I call a “frame”. A frame is a set period of time that you can think of as a single step in the training process. Let’s give this frame a canvas of about 600 by 600 pixels and a length of about ten seconds. This would mean that every ten seconds the target is reset to the center point, then follows its randomly generated path, bounded within the 600x600 window, until the ten seconds are up and it is reset to the center of the screen again.
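
The ten-second reset could be as simple as a frame counter (assuming roughly 60 fps, so ten seconds is about 600 frames; FRAME_LENGTH and framesLeft are names I made up):

// Sketch of the ten-second "frame": reset the target to the center
// whenever the counter runs out. Assumes roughly 60 fps.
const FRAME_LENGTH = 600;        // 10 seconds * 60 frames per second
let framesLeft = FRAME_LENGTH;
let target;

function setup() {
  createCanvas(600, 600);
  target = createVector(width / 2, height / 2);
}

function draw() {
  background(0);
  // ...move the target along its randomly generated path here...
  framesLeft--;
  if (framesLeft <= 0) {
    // end of the frame: put the target back at the center and start over
    target.set(width / 2, height / 2);
    framesLeft = FRAME_LENGTH;
  }
}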

Thirdly, I would write a small program for the follower, which in this case is the ball. You have already done this. Yay! If I were writing the networks and such from scratch, I would include a “brain” (an instance of a neural network class) in this follower class. But since you have Tensorflow, I wouldn’t worry about that. For now, just write a few functions in the follower class that change the ball’s velocity vector. I wouldn’t worry about acceleration yet; we’ll add that in later if the velocity thing works. If you want to simplify it further, set the velocity to a constant “1”, and just write a function that changes the heading of said velocity vector. Write the regular display and update functions. You will also want a function that outputs the follower’s location on the screen, its velocity, and its heading, as numbers. You will use these later as inputs (confusing! but useful in the future). Another morsel for thought: write a function that constantly judges how close the follower stays to the target. I would use a continuously updating average: have the “fitness” of the follower be determined by the average distance it has been from the target over the course of a frame.
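
To make that concrete, here is the kind of follower class I have in mind, sketched in p5.js to match your other code; every name is a placeholder of mine, and the fitness here is the running average distance to the target (lower is better):

// Rough sketch of a follower with constant speed, a steerable heading,
// and a "fitness" that is the average distance to the target so far.
// All names are placeholders; lower fitness is better here.
class Follower {
  constructor() {
    this.pos = createVector(random(width), random(height));
    this.heading = random(TWO_PI);
    this.speed = 1;               // constant speed, as suggested above
    this.totalDist = 0;
    this.steps = 0;
  }

  setHeading(angle) {             // the part the network will control later
    this.heading = angle;
  }

  update(target) {
    this.pos.add(p5.Vector.fromAngle(this.heading).mult(this.speed));
    this.totalDist += p5.Vector.dist(this.pos, target);
    this.steps++;
  }

  fitness() {                     // average distance over the frame so far
    return this.totalDist / max(this.steps, 1);
  }

  inputs(target) {                // numbers to feed the network later on
    return [this.pos.x, this.pos.y, this.heading, target.x, target.y];
  }

  show() {
    fill(150, 150, 250);
    ellipse(this.pos.x, this.pos.y, 30, 30);
  }
}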

Now that you have all that framework stuff out of the way, test it. Write a function that resets the ball to a location every “frame” and another that constantly feeds the follower a random velocity vector (or in the simple case, a random heading). See how this works. Given some fiddling on your end, you should ultimately end up with a sketch that has a target moving in a random path and a “follower” moving randomly and constantly updating its fitness, resetting to base positions every ten seconds.
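
Wired together, the whole test might look roughly like this. It reuses the Follower class from the sketch above; everything else (the noise path, the frame counter) is declared here so the snippet stands on its own:

// Rough end-to-end test: noise-driven target, a follower that gets a
// random heading every tick (no ML yet), and a reset every "frame".
// Assumes the Follower class from the sketch above is defined.
const FRAME_LENGTH = 600;
let framesLeft = FRAME_LENGTH;
let follower, target;
let t = 0;

function setup() {
  createCanvas(600, 600);
  target = createVector(width / 2, height / 2);
  follower = new Follower();
}

function draw() {
  background(0);
  // move the target along its random path
  target.set(noise(t) * width, noise(t + 1000) * height);
  t += 0.01;
  // random policy standing in for the network, for now
  follower.setHeading(random(TWO_PI));
  follower.update(target);
  follower.show();
  fill(250, 50, 50);
  ellipse(target.x, target.y, 10, 10);
  // end of frame: log the fitness and reset everything
  framesLeft--;
  if (framesLeft <= 0) {
    console.log('average distance this frame: ' + follower.fitness());
    follower = new Follower();
    target.set(width / 2, height / 2);
    framesLeft = FRAME_LENGTH;
  }
}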

Now we get to the juicy part, the part with some real, honest, frustrating thinking and troubleshooting: the ML algorithm. I said earlier that we would be using some kind of reinforcement learning algorithm. That’s true, but here is where you reach a difficult crossroads. You can take your model and easily expand and adapt it to train with a genetic algorithm + neural network combo, save the best neural network’s weights/biases to a file, then use those weights/biases in a neural network that controls a real ball following a target (your mouse) in another program, no longer training and just existing. Let me know if that appeals to you. However, this is a genetic algorithm, and since Tensorflow is used for, well, tensors, GAs are kinda off-topic and not really supported…at all. Alternatively, if you still want to use Tensorflow, you could go the harder (but more interesting) route and implement a self-training deep net.

It’s kinda like going down the steeper slide at the park even though you might break an ankle in the process.

At this point, I warn you. I have much more experience with GAs and simple deep nets than I do with whatever I will talk about from this point on. This problem is much better suited to being optimized by a genetic algorithm + NN combo than by basically any other machine learning algorithm that I know of. And unfortunately, this means that Tensorflow just ain’t gonna cut it. Machine learning is fantastic, and fun, and interesting, but it’s not a panacea. You can’t just take any old problem and throw a deep net at it, or download Tensorflow and try to manhandle it into doing your bidding. Certain algorithms work for specific problems. Imagine, if you will, that I needed to paint a fence and had two options: pay Huck Finn an apple to do it and not worry about it, or program a robot to do it for me. Paying the apple loses me my lunch, bummer, but programming the robot to do something it was never meant to do costs me my entire day, and even then it does the work worse (that’s a really, really crude and poor metaphor, I’m sorry).

Now, back to the subject. If you are still dead-set on using Tensorflow to train your follower, use a simple deep net. Personally, I have found that similar networks need no more than one or two hidden layers, the first with about 2 more neurons than the input layer and the second with about 2 more neurons than the output layer.

Now, let’s talk about your model. If your follower only gets its own location as input, it will output an arbitrary vector every time. It might take a little bit of watching your algorithm to figure this out, and once you do, it will be frustrating to work out what went wrong. I know, because I slipped up here once as well. In order for a network to react to a stimulus (in this case, the target mouse position), it needs to know what the stimulus is and what it is doing. For how “smart” they are, deep nets are actually very silly and kinda dumb. You need to tell them as much about their world as you can. To them, the world around them is just numbers. Remember that deep nets are not so complex in nature: they find and make patterns based on numbers and math, and they know nothing about their world. So we take our models, extract numerical data from them, and then feed it to the network. The network takes these numbers, does some math, and spits out some more numbers. We then take those outputs and manually relate them back to our models. The network cannot reason on its own.

So, with that in mind, you want to feed the net, in your case, the x and y positions of the follower, the x and y positions of the target, and the x and y components of the follower’s velocity vector and the target’s velocity vector, separately. So you end up with a deep net with eight inputs. Following our earlier rule, the first hidden layer has ten neurons. Since you want to output an x and y component for the velocity vector of the follower, the second hidden layer should contain four neurons. However, this is a bit lopsided, so personally, I would fudge the network a bit until you end up with an 8-8-4-2 network: 8 inputs, 8 neurons in H1, 4 in H2, and 2 output neurons. All of this is just the network’s architecture.
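
I’m no TF.js expert (as I said), but in TensorFlow.js terms I imagine that 8-8-4-2 network looks something like this; the tanh activations and the learning rate are just my guesses, picked so the outputs can be negative as well as positive:

// Sketch of the 8-8-4-2 architecture described above.
// Inputs: follower x, y, vx, vy and target x, y, vx, vy (8 numbers).
// Outputs: x and y components of the follower's velocity (2 numbers).
// The tanh activations and the learning rate are assumptions of mine;
// tanh keeps the outputs in [-1, 1], so they can point in any direction.
const followerModel = tf.sequential();
followerModel.add(tf.layers.dense({ inputShape: [8], units: 8, activation: 'tanh' })); // H1
followerModel.add(tf.layers.dense({ units: 4, activation: 'tanh' }));                  // H2
followerModel.add(tf.layers.dense({ units: 2, activation: 'tanh' }));                  // output
followerModel.compile({
  optimizer: tf.train.adam(0.01),
  loss: 'meanSquaredError'
});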

Once you have the structure of the network, focus on the connection to the network. Set it up so that every time the screen updates with your simulated mouse and the follower, the network also updates. The first time you initialize the network, create random weights. Input the inputs into the network, run the network to produce outputs, and then use those outputs to change the velocity vector of your follower.
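
As a sketch, that per-update hookup could be roughly this kind of thing. The layers already start out with random weights when the model is created, so there is no separate initialization step; the scaling of the inputs and outputs here is just an assumption of mine, and I’m treating follower and target as objects with pos and vel vectors, like your Ball class:

// Sketch: run the network once per tick and turn its output into a
// velocity for the follower. Dividing positions by width/height and
// scaling the output by maxSpeed is just one reasonable choice.
function think(model, follower, target, maxSpeed) {
  return tf.tidy(() => {
    const xs = tf.tensor2d([[
      follower.pos.x / width, follower.pos.y / height,
      follower.vel.x,         follower.vel.y,
      target.pos.x / width,   target.pos.y / height,
      target.vel.x,           target.vel.y
    ]]);
    const out = model.predict(xs).dataSync();   // [vx, vy] in [-1, 1]
    return createVector(out[0], out[1]).mult(maxSpeed);
  });
}

Then, somewhere in draw(), you would call something like follower.vel = think(followerModel, follower, target, follower.maxSpeed); before adding the velocity to the position.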

Now comes the fun bit. You are going to fudge the ol’ backpropagation. To do this, you are going to need to do a few things. First of all, disable the “frames” portion. Just have it so that the simulated mouse constantly moves across the screen. Next, you are going to want to define a few variables. You’ll want a previousFitness variable as well as a bestPossible vector. The best possible vector will be the x and y components of a vector that is the vector sum of the target’s velocity vector and the vector connecting the follower ball to the target point. This is the optimal vector for the follower to produce. This is what we will use to train the follower. If you want something to learn, you gotta teach it. In order to teach the follower’s network, we have to compare the vector it suggests the follower should take (based on the inputs) against the optimal vector. Then, you can use Tensorflow to compute an error vector, backpropagate the error through the network, and adjust the weights. Since it performs this gradient descent once a tick, your follower should catch on pretty soon. Theoretically, I think this should work. I also think it’s less interesting than a genetic algorithm simply because you have to provide it with the answer to teach it. Personally, I find that when a subject learns what it is supposed to do given nothing but a fitness, the result is fascinating.
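
The nice part is that Tensorflow does the backpropagation for you: model.fit() performs the gradient step. A rough per-tick version might look like the sketch below. One word of caution: as far as I can tell, fit() returns a Promise, so it should be awaited rather than called inside tf.tidy (which may also be related to the error in your original BallTF code, where fit() runs inside a tidy):

// Sketch of one training tick. The "best possible" vector is the
// target's velocity plus the offset from the follower to the target.
// fit() is asynchronous, so it is awaited and kept outside tf.tidy;
// the input/label scaling by maxSpeed is an assumption of mine.
async function trainTick(model, follower, target, maxSpeed) {
  const best = p5.Vector.add(
    target.vel,
    p5.Vector.sub(target.pos, follower.pos)
  ).limit(maxSpeed);

  const xs = tf.tensor2d([[
    follower.pos.x / width, follower.pos.y / height,
    follower.vel.x,         follower.vel.y,
    target.pos.x / width,   target.pos.y / height,
    target.vel.x,           target.vel.y
  ]]);
  const ys = tf.tensor2d([[best.x / maxSpeed, best.y / maxSpeed]]);

  await model.fit(xs, ys, { epochs: 1 });   // one small gradient-descent step
  xs.dispose();
  ys.dispose();
}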

In summary, I think that you should look some more into how machine learning algorithms work and what they are used for. I love machine learning, and I love that you are interested in it and I want you to continue with it. I realize now I have written a long essay that more or less says that you can do one of two things. I just wanted to let you know that I love being proved wrong, and I am pretty sure I got one or two things wrong in this essay. It sounds like a fun project and I don’t want to come across as pretentious, mean, or snobby. I could have probably written this whole thing in one paragraph, headlined as follows:
You can probably use a genetic algorithm to do this, or you can train the ball using a target vector.

But I didn’t, for better or for worse. I took this opportunity, a little bit selfishly, to try and explain my way of thinking through an ML problem. I apologize for (probably) boring you and making you skip through a lot of the tangential stuff, as well as for the general vagueness of my answers. Thanks for making it this far, and I wish you the best with your project. Most of all, have fun with it. I will be here to help with any problems you have, whichever route you choose to go on, or neither at all. If you want to try something other than what I suggested, go for it. I’m down to learn it with you.

TL;DR: You can use a genetic algorithm to optimize a simple deep net to solve this problem. Otherwise, you can create a model that randomly generates a simulated “mouse movement” continuously, then use a computed target vector to train a deep net that controls the following ball. Use the position and velocity vectors of the follower ball and the target point as your NN inputs, have two hidden layers, then output a velocity vector for the follower ball. Compare this output vector to the target vector, then backpropagate the error to train the network. Have fun!

Best of luck, and best regards. Don’t give up! ML is fun once you get the hang of it, which you will sooner than you think. Don’t hesitate to contact me with questions, concerns, or really anything at all.

  • Trilobyte

Alright, so this is a somewhat late response because I had to reread a few things to fully understand, but I did read everything multiple times. First off, just to let you know, you did not come off badly in any way with this post. I’m glad I got someone enthusiastic about this to respond and help me. Another thing is that I have actually done a lot of the ideas you gave beforehand, maybe in different ways though. I am trying this with TensorFlow, but I have made this code before using a Java-based neural network library that I got while following the toy NN Coding Train videos.
The way I trained the Java-based NN mouse tracker is by having a normal ball that uses the steering algorithm (which I got from a Coding Train video with a title somewhat about steering agents) to follow a point that is moved to a random position after a set amount of time. The normal ball is easily able to go to the target point and has no ML involved. Then I have a set number of NN balls that use a similar algorithm to the normal ball, except that instead of using the steering algorithm to find the acceleration to apply, I use the NN feedforward. I train each NN by passing in the x and y position of the NN ball and making the expected output the acceleration that the normal ball has at each frame. This way I have a best output that I can say the NN should be giving each frame. Any NN ball that does not reach the target point by the time the target resets is deleted, and when there is one NN ball left I save its NN and generate a new set of NN balls that are mutations of the saved NN. So I think I actually had very similar ideas to you.

The only problem is that I thought this program was working, since the NN balls were slowly getting better at solving the problem, but after looking at my code I saw that a lot of stuff was done wrong, and now I’m actually going back and trying to fix the Java-based code before I do the TensorFlow one. Sorry for the switch; I hope you can still help, though, since it is the same problem, just in a different language.
Some things I have changed in the code based on your help: instead of just the x and y pos inputs, I now have the x pos, y pos, x vel, y vel, x tar, and y tar as the inputs. I still generate a random position after a set time to be followed in order to train. The problem I get now is that the output from the feedforward is slowly getting smaller, and the two output values (x acc and y acc) are always positive, so the NN ball is always moving towards the bottom right at a slowing rate.

The blue ball is the steering algorithm ball, the green is the NN ball, and the red dot is the target.
I limited the testing to just one NN ball for now because I want to make sure that the training part is correct; however, based on what I said before and the video, it seems like it is not. I do plan to implement the GA part later, but first I want the NN to be able to do some learning, with the best one taken when all the NN balls get deleted.
Main code:

int size = 1;
int rows = 2;
int cols = 1;
int count = 0;
int limit = 150;

ArrayList<Ball> ball;
ArrayList<NN> n;
Ball2 ball2;
Matrix tar;
Matrix hold;
Matrix hold2;

void setup(){
 size(500,500);
 ball = new ArrayList<Ball>();
 ball2 = new Ball2();
 tar = new Matrix(2,1);
 n = new ArrayList<NN>();
 hold = new Matrix(6,cols);
 hold2 = new Matrix(rows,cols);
 for(int i = 0; i < size; i++){
   ball.add(new Ball());
   n.add(new NN(6,8,2));
 }

}

void draw(){
  background(0);

//Generates new target if the counter reaches max time
  if(count == limit){
    tar.data[0][0] = (random(width));
    tar.data[1][0] = (random(height));
    count = 0;
    NN fake = n.get(0);
    fake.printNN();
    n.remove(0);
    ball.remove(0);
    ball.add(new Ball());
    n.add(fake);
  }
  
  //When 1 ball is left create new balls mutated from the last one
  //if(ball.size() == 1){
  //  NN fake = n.get(0);
  //  fake.printNN();
  //  n.remove(0);
  //  ball.remove(0);
  //  println(n.size());
  //  println(ball.size());
  //  for(int i = 0; i < size; i++){
  //    ball.add(new Ball());
  //    n.add(fake.mutate());
  //   }
  //   limit = 100;
  //   count = 0;
  //}
  
  //loop through each NN ball updating feedingforward and updating position
  for(int i = 0; i < ball.size(); i++){
    //make input matrix
    hold.data[0][0] = ball.get(i).pos.x;
    hold.data[1][0] = ball.get(i).pos.y;
    hold.data[2][0] = ball.get(i).vel.x;
    hold.data[3][0] = ball.get(i).vel.y;
    hold.data[4][0] = tar.data[0][0];
    hold.data[5][0] = tar.data[1][0];
    //make expected output matrix of NN
    hold2.data[0][0] = ball2.acc.x/ball2.maxForce;
    hold2.data[1][0] = ball2.acc.y/ball2.maxForce;
    //train the nn with hold and hold2
    n.get(i).train(hold,hold2);
    //pass through ball's respective nn's prediction as acceleration
    println(" x " +n.get(i).feedforward(hold).data[0][0]);
    println(" y " +n.get(i).feedforward(hold).data[1][0]);
    ball.get(i).update((n.get(i).feedforward(hold).multiply(ball.get(i).maxForce)));
    ball.get(i).show();
    //n.get(i).printNN();
  }
  
  //update and show steering algorithm ball
  ball2.update(new PVector((float)tar.data[0][0],(float)tar.data[1][0]));
  ball2.show();
  fill(150,50,50);
  ellipse((float)tar.data[0][0],(float)tar.data[1][0],10,10);
  //delay(200000);
  count++;
}

NN ball code:

class Ball{
 PVector pos, vel, acc, accT;
 float maxSpeed = 1, maxForce = 5;
 Matrix accOut = new Matrix(2,1);
 boolean tarHit = false;
 Ball(){
    pos = new PVector(random(width),random(height));
    vel = new PVector(1,1);
    acc = new PVector(0,0);
 }
 
 void update(Matrix accel){
   acc = new PVector((float)accel.data[0][0],(float)accel.data[1][0]);
   //println("acc x : "+acc.x + "acc y : " +acc.y);
   vel.add(acc);
   pos.add(vel);
   //acc.mult(0);
   if(this.pos.x > tar.data[0][0]-5 && this.pos.x < tar.data[0][0]+5 &&
      this.pos.y > tar.data[1][0]-5 && this.pos.y < tar.data[1][0]+5){
        this.tarHit = true; 
      }
   //boundaries();
 }

  
  void show(){
    fill(150,250,150);
    ellipse(pos.x,pos.y,30,30); 
  }
  
    void boundaries(){
   if(pos.x > width) pos.x = 0;
   if(pos.x < 0) pos.x = width;
   if(pos.y > height) pos.y = 0;
   if(pos.y < 0) pos.y = height;
  }
  
}

I should let you know that as I am typing this reply I am also changing the code, so my train of thought might seem off. I have tested your idea of just changing the velocity instead of applying the acceleration, so now I have the steering algorithm ball’s velocity x and y as the expected prediction for training, with the same 6 inputs as before. The update function for the NN ball has also been changed to the following:

 void update(Matrix accel){
   acc = new PVector((float)accel.data[0][0],(float)accel.data[1][0]);
   vel = acc;
   pos.add(vel);
   //acc.mult(0);
   if(this.pos.x > tar.data[0][0]-5 && this.pos.x < tar.data[0][0]+5 &&
      this.pos.y > tar.data[1][0]-5 && this.pos.y < tar.data[1][0]+5){
        this.tarHit = true;
      }
   //boundaries();
 }
Now the velocity is changed directly. However, I just noticed that when I am training the NNs it makes no sense to use the acc or vel of the steering algorithm ball as the expected output, because that ball is at a different position, so I need to simulate the steering algorithm at the position of each NN ball (which I now see is probably what you meant by the optimal velocity). If I do this it would be hard to just change the velocity, since the velocity at that position can be many different values, but the steering (acceleration) at a position is always the same no matter the velocity, so I am going to keep changing the acceleration. I tested this out and I got really good results!
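
Roughly, the label I am computing now is something like this (sketched in p5.js terms since I will go back to the TensorFlow.js version later; it is the same math as the track() function from the start of this thread, just evaluated at the NN ball’s own position, and the scaling at the end is just one option):

// Sketch: build the training label by running the steering algorithm at
// the NN ball's own position and velocity, instead of copying the
// steering ball's acceleration. Same math as the track() function above.
function steeringLabel(nnBall, target, maxForce, maxSpeed) {
  let desired = p5.Vector.sub(target, nnBall.pos);
  desired.setMag(maxForce);
  let steer = p5.Vector.sub(desired, nnBall.vel);
  steer.limit(maxSpeed);
  // scale so the label matches the network's output range (just one option)
  return [steer.x / maxSpeed, steer.y / maxSpeed];
}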

(video in below post)

This is without the GA part still. I just set the NN balls back to pos 0,0 when the target point is changed and let them keep their NN. I plan on adding the GA; however, I think the fitness might need some more thought. If I just measured how close each ball gets to the target, I might end up with a bad NN that was lucky enough to spawn close to the target, but I will try it out and see how it goes. I know I pretty much changed the topic of this post, but I will put the TensorFlow part on pause for now and continue after I implement the GA part for the Java mouse-follow code, so that I have a good template of how it should be structured.
I hope I wasn’t too confusing, and I hope you don’t mind me pretty much typing out my thoughts/process instead of just stating my end results. I think it helps to show how I got to where I am now and what I have tried; also, I’m a little too lazy to revise the above. I think I followed most of the advice you gave; hopefully I didn’t miss anything important, though there was quite a bit.
As for my experience with this stuff, I would say I have very little. Most of what I know is from Coding Train and some of the Andrew Ng course (I didn’t finish it). With TensorFlow, all I know is from Coding Train videos, and with JavaScript I am very new; I don’t fully understand how the language is structured, but it is similar to Java so I can somewhat get by.


It wouldn’t let me add this link above, so here is the current NN ball follow:


Hi Sean,
I love it! This is fantastic. Judging by the video you just posted, it seems as though your non-TF implementation is working great! I am excited to see what follows in regards to the GA. Thanks for reading through my essay and especially for documenting your own thought process. I am glad to have found someone so interested in ML! Please, please keep me (and the rest of us) posted throughout this process. For now, I am going to do some reading up on TF.js so I can help better when it comes to that portion! Best of luck with the project!

  • Trilobyte