Python Forum

Full Version: Python gives " -0.0 " as solution for an equation
Hi,
I'm learning Python (3.7) and I sometimes get strange results, for example with this code to find the real roots of a univariate quadratic function:

a = -200
b = 600
c = 0
discr = b**2 - 4 * a * c 

if discr < 0:
    print("This equation has no real root")
elif discr == 0:
    print("This equation has one real root:\n"
         f"\t{-b / (2 * a)}")
else:
    print("This equation has two real roots:\n"
         f"\t{(-b + discr**0.5) / (2 * a)}\t and\n"
         f"\t{(-b - discr**0.5) / (2 * a)}")       
returns
Output:
This equation has two real roots:
	-0.0	 and
	3.0
From what I could try, the result is similar in other cases where 0 is a solution and a is negative (it's as if the 0 takes its sign from a); if a is positive, it returns 0.0 instead.

Isn't that a bit weird? Is there an easy/clean/Pythonic way to prevent or correct that kind of output?
Python's floats internally adhere to the IEEE 754 specification for floating-point arithmetic, which uses a signed zero. This means that the numbers 0.0 and -0.0 differ internally by one bit, the sign bit. As Python values they nevertheless compare equal, that is to say 0.0 == -0.0 returns True.
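A quick check makes the hidden sign bit visible; `math.copysign` extracts the sign of a float, including a zero's:

```python
import math

# 0.0 and -0.0 compare equal even though the sign bit differs
print(0.0 == -0.0)               # True

# math.copysign exposes the sign bit that == ignores
print(math.copysign(1.0, 0.0))   # 1.0
print(math.copysign(1.0, -0.0))  # -1.0

# A negative zero arises when a positive zero is divided by a
# negative number, as in the quadratic formula with a < 0
print(0.0 / -400)                # -0.0
```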

If you want to avoid the negative zero, you could do
root = (-b + discr**0.5) / (2 * a) or 0.0
for example.
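The trick works because -0.0 is falsy, so `x or 0.0` yields a positive 0.0 exactly when x is a zero of either sign, and leaves any other value alone. A minimal sketch (`normalize_zero` is just a hypothetical helper name for illustration):

```python
def normalize_zero(x):
    """Return x, replacing a negative zero with a positive one."""
    # -0.0 is falsy, so `or` falls through to 0.0 only for zeros
    return x or 0.0

print(normalize_zero(-0.0))  # 0.0
print(normalize_zero(3.0))   # 3.0

# Adding 0.0 achieves the same under the default rounding mode,
# since -0.0 + 0.0 rounds to +0.0
print(-0.0 + 0.0)            # 0.0
```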

Observe that the two numbers are also considered equal as dictionary keys, for example
>>> {0.0: 'foo', -0.0: 'bar'}
{0.0: 'bar'}
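The same collapsing happens in sets, because Python requires equal values to have equal hashes:

```python
# Equal values must hash equally, so -0.0 collides with 0.0
print(hash(0.0) == hash(-0.0))   # True

# A set keeps only one of the two zeros
print(len({0.0, -0.0}))          # 1
```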
OK, I did not know about the IEEE 754 standard and signed zeros.

Thanks for your answer.
Now I know I'll have to be cautious about that.

"Problem" solved!