Python Forum
math.pi
#1
i am curious where the math module stores the value of pi. i have found where it tests it, but i cannot find where it sets constants like pi and e. the last 3 bits are 000, and they are the first 3 of a run of 5 bits that are 00000, so the precision could be as low as 50 bits and you'd get the same value on a 55-bit system. i am curious how they coded it, and whether their code would work on an architecture with more bits. in my 46 years of programming (36 in C, 20 in assembly) i have seen a few different float sizes, as large as a 112-bit mantissa.
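For anyone who wants to poke at the stored value themselves, the raw bit pattern of math.pi can be read out with just the standard library (a quick sketch):

```python
import math
import struct

# View the 64-bit double as an unsigned integer to read its raw bits
bits = struct.unpack('<Q', struct.pack('<d', math.pi))[0]
mantissa = bits & ((1 << 52) - 1)   # the 52 stored significand bits
print(f"{mantissa:052b}")           # ends in 000, as described above
```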
Tradition is peer pressure from dead people

What do you call someone who speaks three languages? Trilingual. Two languages? Bilingual. One language? American.
#2
I tracked down the entire process for defining math.pi. First some comments:

  1. CPython effectively assumes IEEE-754 64-bit floating point representation. Other floating-point formats were supported many years ago, but I don't believe any platform currently supported by CPython uses something other than IEEE-754.
  2. It is done in C so it does not exist in any .pyc file.
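That assumption is easy to confirm from Python itself:

```python
import math
import sys

# On an IEEE-754 binary64 platform the significand carries 53 bits
print(sys.float_info.mant_dig)  # 53
# float.hex() shows the full 53-bit significand of math.pi
print(math.pi.hex())            # 0x1.921fb54442d18p+1
```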

The original definition is in pymath.h:

#define Py_MATH_PI 3.14159265358979323846
In mathmodule.c, it is converted into a Python "float" and added to the math module's namespace with the name "pi":

PyModule_AddObject(m, "pi", PyFloat_FromDouble(Py_MATH_PI));
When mathmodule.c is compiled, what happens next depends on the operating system.

On Windows, the compiled code is linked into the Python executable. On Linux, it is stored in a separate shared object. To find where an imported module is stored, try "import math; math.__file__". If "__file__" does not exist, the compiled code is already linked into the executable. Otherwise, it will return the location of the file in the file system.
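A small sketch of that check, covering both cases:

```python
import math

# If math is statically linked into the interpreter, __file__ is absent;
# otherwise it names the compiled extension on disk
# (e.g. math.cpython-311-x86_64-linux-gnu.so).
location = getattr(math, "__file__", None)
print(location or "math is built into the executable")
```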

BTW, if you care about higher precision, I maintain the gmpy2 extension which wraps the GMP, MPFR, and MPC arbitrary precision libraries.

casevh
#3
ah! in C code. that's why my search didn't find it. FYI, here is how my C code defines Pi so it can work with some higher precision architectures with long double:

#define Pi (8552228672519733982877442985294966266405.0L/2722258935367507707706996859454145691648.0L)

basically that is Pi scaled up by 2**131, then divided back down by 2**131.

(Jan-08-2018, 04:23 AM)casevh Wrote: BTW, if you care about higher precision, I maintain the gmpy2 extension which wraps the GMP, MPFR, and MPC arbitrary precision libraries.

i am looking at doing some super deep Mandelbrot image zooms. given that most of the CPU time will be spent on high precision calculations, i think Python is virtually as good as C for something like this. so, the quality of your code will matter a lot. my code will be designed as a networked utility to run it on multiple cores on multiple cloud instances to get faster draw times, which will already be very slow due to the extreme depths and extreme precision (dynamically extended in the last version i did in C for just one core).

maybe you can make GPU versions in the future. it would be like having so many more cores, although it is not as simple as running N processes for N GPU cores (some have hundreds of cores).
#4
(Jan-08-2018, 04:23 AM)Skaperen Wrote: ah! in C code. that's why my search didn't find it. FYI, here is how my C code defines Pi so it can work with some higher precision architectures with long double:

#define Pi (8552228672519733982877442985294966266405.0L/2722258935367507707706996859454145691648.0L)

basically that is Pi scaled up by 2**131 divided by 2**131.

Unfortunately, your fraction only provides 53 bits of accuracy, not the 64 bits required by long double.

>>> import math
>>> math.pi
3.141592653589793
>>>
>>> import gmpy2
>>> gmpy2.get_context().precision=200
>>> gmpy2.const_pi()
mpfr('3.1415926535897932384626433832795028841971693993751058209749445',200)
>>> a=gmpy2.mpfr(8552228672519733982877442985294966266405.0)
>>> b=gmpy2.mpfr(2722258935367507707706996859454145691648.0)
>>> a/b
mpfr('3.141592653589793115997963468544185161590576171875',200)
>>> # digits beyond float precision (53 bits) are incorrect
You can check the bit patterns of the mantissa in gmpy2.

>>> a
mpfr('8552228672519733649496873820484995645440.0',200)
>>> b
mpfr('2722258935367507707706996859454145691648.0',200)
>>> (a/b).digits(2)
('11001001000011111101101010100010001000010110100011000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000', 2, 200)
>>> gmpy2.const_pi(precision=64).digits(2)
('1100100100001111110110101010001000100001011010001100001000110101', 2, 64)
>>> gmpy2.const_pi(precision=53).digits(2)
('11001001000011111101101010100010001000010110100011000', 2, 53)
>>> 
casevh
#5
(Jan-08-2018, 04:23 AM)Skaperen Wrote: i am looking at doing some super deep Mandelbrot image zooms. given that most of the CPU time will be spent on high precision calculations, i think Python is virtually as good as C for something like this. so, the quality of your code will matter a lot. my code will be designed as a networked utility to run it on multiple cores on multiple cloud instances to get faster draw times, which will already be very slow due to the extreme depths and extreme precision (dynamically extended in the last version i did in C for just one core).

maybe you can make GPU versions in the future. it would be like having so many more cores, although it is not as simple as running N processes for N GPU cores (some have hundreds of cores).

I ran a quick test: one iteration of z=z**2+c plus the test abs(z)<4. At 1000 bits of precision, it took less than 1.25 microseconds per iteration, i.e. around 800,000 iterations per second. I would expect real-world performance to be around 500,000 iterations per second.
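For reference, the inner loop being timed looks roughly like this. A minimal sketch using the stdlib decimal module as a stand-in for gmpy2's types (gmpy2's mpfr/mpc would be much faster), with the standard escape test |z| >= 2, i.e. |z|**2 >= 4:

```python
from decimal import Decimal, getcontext

getcontext().prec = 300  # ~1000 bits is roughly 300 decimal digits

def mandel_iters(cr, ci, max_iter=100):
    """Count iterations of z = z**2 + c until |z|**2 >= 4 (escape)."""
    zr = zi = Decimal(0)
    for n in range(max_iter):
        # complex square-and-add done on separate real/imaginary parts
        zr, zi = zr * zr - zi * zi + cr, 2 * zr * zi + ci
        if zr * zr + zi * zi >= 4:
            return n
    return max_iter

print(mandel_iters(Decimal(1), Decimal(0)))  # c = 1 escapes after 1 iteration
```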

There isn't a version of GMP designed for a GPU. Until that happens, there won't be a version of MPFR or MPC for GPU.

gmpy2 = https://gmpy2.readthedocs.io/en/latest/
GMP = https://gmplib.org/
MPFR = http://www.mpfr.org/
MPC = http://www.multiprecision.org/index.php?prog=mpc

casevh
#6
Off topic question to @casevh.

Does gmpy2 have a method to calculate tetration (power tower)? Also, the super-logarithm?
"As they say in Mexico 'dosvidaniya'. That makes two vidaniyas."
https://freedns.afraid.org
#7
(Jan-08-2018, 07:06 AM)wavic Wrote: Off topic question to @casevh.

Does gmpy2 have a method to calculate tetration (power tower)? Also, the super-logarithm?

Those functions are not available in gmpy2.

Tetration does grow rather rapidly. What values are you interested in computing?
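For what it's worth, integer tetration for small heights is a few lines of plain Python, since Python's ints are arbitrary precision (a quick sketch, not a gmpy2 function):

```python
def tetration(a, n):
    """Compute a^^n: a right-associated power tower of n copies of a."""
    result = 1
    for _ in range(n):
        result = a ** result
    return result

print(tetration(2, 4))  # 2**(2**(2**2)) = 65536
```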

(If you want to continue this discussion, please start a new topic and I'll reply there.)

casevh
#8
(Jan-08-2018, 05:08 AM)casevh Wrote: Unfortunately, your fraction only provides 53 bits of accuracy, not the 64 bits required by long double.
This computation is not correct: when you write a=gmpy2.mpfr(8552228672519733982877442985294966266405.0), the Python interpreter first converts the literal to a double-precision float, which loses precision. You need to pass the value as a string, "8552228672519733982877442985294966266405.0".

The following code, using the bigfloat library, confirms Skaperen's values:

>>> from bigfloat import *
>>> with precision(133):
...  print(const_pi().as_integer_ratio())
... 
(8552228672519733982877442985294966266405, 2722258935367507707706996859454145691648)
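A cross-check using only the stdlib, with the two integers above: the denominator is exactly 2**131, and rounding the exact fraction to a 53-bit double recovers math.pi.

```python
import math
from fractions import Fraction

num = 8552228672519733982877442985294966266405
den = 2722258935367507707706996859454145691648

assert den == 2 ** 131         # the scaling factor Skaperen described
frac = Fraction(num, den)      # exact 133-bit approximation of pi
print(float(frac) == math.pi)  # True: rounds back to the 53-bit value
```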
#9
(Jan-08-2018, 09:44 AM)Gribouillis Wrote: This computation is not correct, because when you write a=gmpy2.mpfr(8552228672519733982877442985294966266405.0), the python interpreter converts the number to float,

I don't see that:

In [1]: import gmpy2

In [2]: a=gmpy2.mpfr(8552228672519733982877442985294966266405.0)

In [3]: type(a)
Out[3]: mpfr
"As they say in Mexico 'dosvidaniya'. That makes two vidaniyas."
https://freedns.afraid.org
Reply
#10
The conversion occurs before the call to gmpy2.mpfr(), when the literal is converted into a Python float!
>>> import gmpy2
>>> gmpy2.get_context().precision = 200
>>> a=gmpy2.mpfr(8552228672519733982877442985294966266405.0)
>>> A=gmpy2.mpfr("8552228672519733982877442985294966266405.0")
>>> a
mpfr('8552228672519733649496873820484995645440.0',200)
>>> A
mpfr('8552228672519733982877442985294966266405.0',200)
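The same loss can be seen with nothing but the stdlib: the literal is rounded to a 53-bit double before any library is involved.

```python
# The float literal is parsed into a 53-bit double first, so the low
# bits of the intended 133-bit numerator are already gone:
x = 8552228672519733982877442985294966266405.0
print(int(x))  # 8552228672519733649496873820484995645440
```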

