Why Can't Processing Do Basic Math lol

Hey y'all! I'm building a model that simulates the trading that occurs across a liquidity pool, so I can experiment with using a balancer contract account to tend the pool and keep a stable currency ratio. The issue I'm having is with my math when calculating the pool balances after a transaction; something weird is happening. I think it might have to do with the parsing I'm doing from String to float to generate numbers rounded to a specific decimal place. This is the first time I'm using the nf() function and I think I'm using it correctly, but now some of my math is rounding to weird decimal places where it shouldn't. Anyway, I've included the code below with some context:

The first part of the code works fine. It parses the String data for the corresponding account from my accounts[] array into a float and sets the variable buyLimit to the maximum amount of money that account can spend. Next I calculate a random float less than buyLimit and assign it to the String buy.

The next chunk checks the values of the exchange pools before the transaction. When the sketch starts they are set to 100,000, and the first print line confirms that in the console. My issue is that when I go to change the pool values with poolTokenOne = poolTokenOne - Float.parseFloat(buy);, they end up as the wrong numbers when I println() again to check their values after the transaction.

Like I said, maybe this has to do with Float.parseFloat or my nf() rounding, but I tried printing just the output of Float.parseFloat(buy) with a println() before the transaction, and the value to be subtracted comes out identical to the original buy amount, which is how it should be. To me this indicates the issue isn't in Float.parseFloat, since I have the correct value to subtract, but in the subtraction itself when that value is taken from poolTokenOne.

I'm really confused here because poolTokenOne and poolTokenTwo are declared as floats, so when I parse the buy String into a float I'm subtracting or adding matching data types in a simple equation, yet the result always comes out wrong…

For example, if the buy amount was 328.153, when I call the transaction it should be doing:

Pool A
100,000 - 328.153 = 99,671.847
Pool B
100,000 + 328.153 = 100,328.153

Instead I end up with the final values of:
poolTokenOne = 99,671.844
poolTokenTwo = 100,328.16

It doesn't make any sense to me; they're not even redistributing, it's like they each round to weird values. If I take the 0.003 difference between the wrong and correct output for pool one and add it to pool two, I don't even get the correct pool two value; the ratio is off. In this case it creates 0.004 coins that don't exist.

ex: 99,671.847 - 99,671.844 = 0.003, but 100,328.153 + 0.003 != 100,328.16
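Here's a stripped-down repro of the two wrong values, with the account and random-buy logic removed, assuming a buy of exactly 328.153 (plain Java, using String.format as a stand-in for nf() so it runs outside Processing too):

```java
public class PoolRepro {
    public static void main(String[] args) {
        float poolTokenOne = 100000;
        float poolTokenTwo = 100000;
        float buy = 328.153f;

        poolTokenOne -= buy;
        poolTokenTwo += buy;

        // Same wrong values as in my sketch:
        System.out.println(String.format("%.3f", poolTokenOne)); // 99671.844
        System.out.println(poolTokenTwo);                        // 100328.16
    }
}
```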

Seriously, someone with more experience please help: what's going on here? Why is Processing failing to do basic subtraction and addition? Why am I an idiot? :grinning:

  if (tokenSelector <= 5) {
    println("A Pool Selected");
    println(" ");
    
    // Call a random buy from Token A balance
    float buyLimit = Float.parseFloat(accounts[accountSelector-1]);
    println("Buy Limit: " + buyLimit);
    String buy = nf(random(0,buyLimit),0,decimal);
    println("Buy Amount: " + buy);
    println(" ");
    
    println("Token A Bal Pre Transaction: " + poolTokenOne);
    println("Token B Bal Pre Transaction: " + poolTokenTwo);
    println(Float.parseFloat(buy));
    poolTokenOne = poolTokenOne - Float.parseFloat(buy);
    poolTokenTwo = poolTokenTwo + Float.parseFloat(buy);
    println(" ");
    println("Token A Bal Post Transaction: " + nf(poolTokenOne,0,decimal));
    println("Token B Bal Post Transaction: " + poolTokenTwo);
    
  } else  if (tokenSelector > 5) {

Hey and welcome to the forum!

It's a general computing problem, not specific to Processing.

See https://en.m.wikipedia.org/wiki/Round-off_error

Sooooo, would running my poolTokenOne = 100,000 float through the same nf() function when I declare it fix this error? I also wouldn't have to parse back from a String to a float, since the nf() function outputs a String, not a float.

Meaning change these parts of my code:

// Global Declaration
String poolTokenOne = nf(100000, 0, 3);

// Update of value
poolTokenOne = poolTokenOne - buy;

Or…?


float a = 99671.847;
float b = 99671.844;
println(a, b, a - b); // 99671.84 99671.84 0.0

double c = 99671.847d;
double d = 99671.844d;
println(c, d, c - d); // 99671.847 99671.844 0.0029999999969732016

exit();

So if I'm understanding this correctly, since floats aren't necessarily precise, I should use a double (even though storage-wise it's kind of overkill) and then just throw everything away after the three decimal places that I want with the nf() function?

But I still don't understand the math going on. Why, if you're subtracting 99671.847 - 99671.844, do you not get 0.003 even with the double type and its precision? Why do we get 0.0029999999969732016? I didn't pass it any values past the third decimal place, since I rounded both operands to three decimal places beforehand, so why does the answer come out as if the value subtracted was something like 99671.84499999903921? Those digits shouldn't exist unless the computer is again rounding wrong. These aren't huge numbers I'm subtracting; shouldn't I be able to subtract accurately three decimal places past zero?

Even the primitive datatype double can't deal perfectly w/ fractional precision.

As a workaround you can use the primitive datatype long and treat its 3 rightmost digits as if they were its fractional part:
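A minimal sketch of that idea, assuming 3 decimal places (so every amount is stored multiplied by 1000; names here are illustrative):

```java
public class FixedPointPool {
    static final long SCALE = 1000; // 3 decimal places

    public static void main(String[] args) {
        long poolTokenOne = 100_000 * SCALE; // represents 100000.000
        long buy = 328_153;                  // represents 328.153

        poolTokenOne -= buy; // exact integer subtraction, no round-off

        // Divide by SCALE only when displaying:
        System.out.printf("%d.%03d%n", poolTokenOne / SCALE, poolTokenOne % SCALE); // 99671.847
    }
}
```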

Or, as a more proper solution, go w/ the datatype BigDecimal:
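For example (a sketch; BigDecimal works inside Processing too, since it's plain Java — construct the values from Strings, not doubles, so they're exact):

```java
import java.math.BigDecimal;

public class BigDecimalPool {
    public static void main(String[] args) {
        BigDecimal poolTokenOne = new BigDecimal("100000");
        BigDecimal poolTokenTwo = new BigDecimal("100000");
        BigDecimal buy = new BigDecimal("328.153"); // exact: built from a String

        poolTokenOne = poolTokenOne.subtract(buy);
        poolTokenTwo = poolTokenTwo.add(buy);

        System.out.println(poolTokenOne); // 99671.847
        System.out.println(poolTokenTwo); // 100328.153
    }
}
```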


The double data type uses 8 bytes, that's 64 bits. This means there are 18,446,744,073,709,551,616 different possible bit arrangements, but there are infinitely many floating-point numbers (FPNs). So only a vanishingly small fraction of FPNs can be stored in a computer with 100% accuracy, and as it turns out, 0.003 is not one of them.
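You can see this directly: the BigDecimal(double) constructor exposes the exact binary value a double actually holds (a small sketch):

```java
import java.math.BigDecimal;

public class ExactValue {
    public static void main(String[] args) {
        // The exact binary value the double literal 0.003 really stores
        // (a long decimal expansion that is close to, but not equal to, 0.003):
        System.out.println(new BigDecimal(0.003));
        // versus the exact decimal value, built from a String:
        System.out.println(new BigDecimal("0.003")); // 0.003
    }
}
```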

Processing's choice of float over double as a default was always a mistake in my opinion. The storage concern only really matters when you have lots of data, such as big arrays, and even then doing the calculations on them as doubles can make more sense.

For comparison, all the new functional and stream stuff added in Java 8 supports int, long and double, but not float.

That's life with floating-point values and computers. If you need real precision then you need to work around it. One option is what @GoToLoop suggested: use an int or long and treat the rightmost digits as the decimal part. You'll get the precision you need, but you have to divide those numbers before you show them; the divisor depends on the number of decimal places you use.

Java has a BigDecimal class that can handle arbitrary precision with decimal numbers, but using it is not as straightforward.
