Python Forum

Full Version: large ints and floats
What is the largest int that can be converted to float, such that the int one larger cannot be converted to float?

Hint: it is not a power of 2.

Suggestion: don't bother stepping by 1 unless you start very close to the right number.
>>> a
179769313486231580793728971405303415079934132710037826936173778980444968292764750946649017977587207096330286416692887910946555547851940402630657488671505820681908902000708383676273854845817711531764475730270069855571366959622842914819860834936475292719074168444365510704342711559699508093042880177904174497791
>>> float(a)
1.7976931348623157e+308
>>> float(a + 1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
OverflowError: int too large to convert to float
Sad!
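
For anyone curious how to land on that number without stepping by 1, here is a minimal sketch that binary-searches the boundary (assuming CPython, where floats are IEEE 754 doubles, so the answer must lie below 2**1025):

def fits_in_float(n):
    # True if n converts to float without overflowing
    try:
        float(n)
        return True
    except OverflowError:
        return False

lo, hi = 0, 1 << 1025   # invariant: fits_in_float(lo) is True, fits_in_float(hi) is False
while hi - lo > 1:
    mid = (lo + hi) // 2
    if fits_in_float(mid):
        lo = mid
    else:
        hi = mid

print(lo)                                    # the big number quoted above
print(lo == (1 << 1024) - (1 << 970) - 1)    # True: just under the rounding cutoff, not a power of 2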

I was thinking that there were no limitations as long as you have enough memory.
The issue is that float has a limited size, which I understand can vary by implementation. CPython, the most commonly distributed implementation, uses the C type double, so the limit is whatever the hardware does for a C double (not long double).
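
You can inspect those limits from Python itself; sys.float_info reports the properties of the C double that CPython uses (a quick sketch, assuming IEEE 754 doubles):

import sys

print(sys.float_info.max)       # 1.7976931348623157e+308, the largest finite double
print(sys.float_info.max_exp)   # 1024: any float must be strictly below 2.0**1024
print(float((1 << 1024) - (1 << 971)) == sys.float_info.max)   # True: that int is exactly the max double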
One can use the gmpy2 library to handle larger numbers:
>>> import gmpy2
>>> gmpy2.mpfr(1<<10000)
mpfr('1.9950631168807584e+3010')
Play with the context instance to control the mpfr's precision.
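
For example, something along these lines should work (a sketch assuming a recent gmpy2, where the precision attribute of the current context sets how many mantissa bits new mpfr values get; the default is 53, the same as a C double):

import gmpy2

gmpy2.get_context().precision = 200   # give new mpfr results 200 bits of mantissa
print(gmpy2.mpfr(1 << 10000))         # now printed with many more significant digits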
Love gmpy2
Is there a way to do ints with gmp, to make extreme ints faster? I still have to translate extreme-int code to Pike to get the speed (it has unbounded ints like Python but uses gmplib to implement them).
The gmpy2.mpz type is just an interface to gmp's integers, so with it you should get the same performance as any other interface to gmplib. You could write a benchmark program with some heavy large-integer computation to see the difference between Pike and gmpy2.
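
Something like this rough sketch would do as a starting point (the operand sizes and repeat count are arbitrary choices, not from this thread, and it only times one operation, multiplication):

import timeit
import gmpy2

a = (1 << 200000) - 1          # two large odd integers, about 60,000 decimal digits each
b = (1 << 199999) + 7
ga, gb = gmpy2.mpz(a), gmpy2.mpz(b)

print("int:", timeit.timeit(lambda: a * b, number=100))
print("mpz:", timeit.timeit(lambda: ga * gb, number=100))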