Python Forum
quad floating point
#1
i have a coming need to do floating point arithmetic in quad precision. i'm assuming i can use the decimal.Decimal type to do it. speed is not an issue since there will probably be only about 1000 calculations every few minutes, so decimal.Decimal should be able to handle it. the hard part is reading the numbers and writing them back out. they are in a binary-based quad precision format encoded in hexadecimal: 32 hexadecimal characters that encode 128 bits in the IEEE quad format (which x86_64 does not implement).
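For reference, a value in this format can be unpacked with only the standard library by splitting the 128 bits into sign, exponent, and mantissa fields and rebuilding the value as a Fraction, then a Decimal. A minimal sketch (the helper name quad_to_fraction is mine; normal numbers only, no NaN/inf/subnormal/zero handling):

```python
from decimal import Decimal, getcontext
from fractions import Fraction

def quad_to_fraction(hexstr):
    """Decode a 128-bit IEEE 754 binary128 value given as 32 hex digits.

    Sketch only: handles normal numbers, not NaN/inf/subnormals or zero.
    """
    bits = int(hexstr.replace(' ', ''), 16)
    sign = bits >> 127
    exp = ((bits >> 112) & 0x7fff) - 16383        # remove the exponent bias
    mant = bits & ((1 << 112) - 1)
    # prepend the implicit leading 1, then apply the exponent
    f = (1 + Fraction(mant, 1 << 112)) * Fraction(2) ** exp
    return -f if sign else f

f = quad_to_fraction('4000 921f b544 42d1 8469 898c c517 01b8')
getcontext().prec = 36      # binary128 carries about 36 significant decimal digits
print(Decimal(f.numerator) / Decimal(f.denominator))
```

The conversion through Fraction is exact; the only rounding happens in the final division under the Decimal context's precision.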
Tradition is peer pressure from dead people

What do you call someone who speaks three languages? Trilingual. Two languages? Bilingual. One language? American.
#2
You can play with my anyfloat module, which can handle various IEEE 754 formats, including the one you want:
from anyfloat import anyfloat
import decimal
from decimal import Decimal

def main():    
    size = (15, 112) # the size of exponent and mantissa for 128 bits
    encoded = '4000 921f b544 42d1 8469 898c c517 01b8'
    print('Our number encoded in hexadecimal ieee 754 formats on 128 bits:')
    print('   ', encoded)

    a = anyfloat.from_ieee(int(encoded.replace(' ',''), 16), size)
    print('The same number as a "anyfloat" instance')
    print('   ', a)
    print('The same number encoded in hex format as in float.hex():')
    print('   ', a.hex())
    print('Is anyfloat.from_hex(a.hex()) equal to a?')
    print('   ', anyfloat.from_hex(a.hex()) == a)
    print('What is the number as a float (64 bits)?')
    print('   ', float(a))
    print('Can we retrieve the number as IEEE code?')
    print('   ', hex(a.to_ieee(size)))
    print('Our number as a fraction:')
    print('   ', a.to_fraction())
    print('Our number as a decimal:')
    f = a.to_fraction()
    decimal.getcontext().prec = 40 # set large enough precision
    dec = Decimal(f.numerator)/Decimal(f.denominator)
    print('   ', dec)

if __name__ == '__main__':
    main()
Output:
Our number encoded in hexadecimal ieee 754 formats on 128 bits:
    4000 921f b544 42d1 8469 898c c517 01b8
The same number as a "anyfloat" instance
    anyfloat(sign=0, exponent=1, significand=1019505104126898525104217885171767)
The same number encoded in hex format as in float.hex():
    0x3.243f6a8885a308d313198a2e037p+0
Is anyfloat.from_hex(a.hex()) equal to a?
    True
What is the number as a float (64 bits)?
    3.141592653589793
Can we retrieve the number as IEEE code?
    0x4000921fb54442d18469898cc51701b8
Our number as a fraction:
    1019505104126898525104217885171767/324518553658426726783156020576256
Our number as a decimal:
    3.141592653589793238462643383279502797479
I have not yet written the conversion from decimal format to IEEE 754, but using the anyfloat class it should be no harder than writing a division in base 2. I leave it to you as an exercise for now.
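One possible sketch of that exercise with only the standard library: go through Fraction and do the division in base 2 with integer shifts, rounding half to even via Python's round(). The helper name quad_from_fraction is mine, and it assumes a nonzero value in the normal range (no NaN/inf/subnormal/overflow handling):

```python
from fractions import Fraction

def quad_from_fraction(f):
    """Encode a Fraction as the integer bit pattern of IEEE 754 binary128.

    Sketch only: assumes a nonzero value in the normal range.
    """
    sign = 0
    if f < 0:
        sign, f = 1, -f
    # find e with 2**e <= f < 2**(e+1)
    e = f.numerator.bit_length() - f.denominator.bit_length()
    if f < Fraction(2) ** e:
        e -= 1
    # scale to 112 fractional bits; round() on a Fraction rounds half to even
    sig = round(f / Fraction(2) ** e * (1 << 112))
    if sig >> 113:                # rounding carried into the next binade
        sig >>= 1
        e += 1
    return (sign << 127) | ((e + 16383) << 112) | (sig - (1 << 112))

pi_quad = Fraction(1019505104126898525104217885171767,
                   324518553658426726783156020576256)
print(hex(quad_from_fraction(pi_quad)))   # the thread's 128-bit encoding
```

Since the fraction above came from a binary128 value, the round-trip is exact and reproduces the original hex string.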
#3
Thinking about it again, one of the best solutions would probably be to convert your numbers to gmpy2.mpfr instances on 128 bits. That would immediately give you all the arithmetic operations and more. The anyfloat module can also be used to help convert to and from gmpy2.mpfr.

Here is an example (not sufficiently tested, but it gives the idea):
from anyfloat import anyfloat
import gmpy2

def anyfloat_to_mpfr(a):
    # XXX not sufficiently tested
    g = gmpy2.mpfr(-a.significand if a.sign else a.significand)
    p = 1 + a.exponent - a.significand.bit_length()
    if p > 0:
        g = gmpy2.mul_2exp(g, p)
    elif p < 0:
        g = gmpy2.div_2exp(g, -p)
    return g

def anyfloat_from_mpfr(x):
    # XXX not sufficiently tested
    m, e = x.as_mantissa_exp()
    m, e = abs(int(m)), int(e)
    s = gmpy2.sign(x) == -1
    return anyfloat(s, e + m.bit_length() - 1, m)

def main():    
    ctx = gmpy2.context(precision=113, subnormalize=True)
    old = gmpy2.get_context()
    gmpy2.set_context(ctx)

    size = (15, 112) # the size of exponent and mantissa for 128 bits
    encoded = '4000 921f b544 42d1 8469 898c c517 01b8'
    a = anyfloat.from_ieee(int(encoded.replace(' ',''), 16), size)    
    g = anyfloat_to_mpfr(a)
    print(repr(g))
    aa = anyfloat_from_mpfr(g)
    assert aa == a
    print(hex(aa.to_ieee(size)))
    gmpy2.set_context(old)
    

if __name__ == '__main__':
    main()
Output:
mpfr('3.1415926535897932384626433832795028',113)
0x4000921fb54442d18469898cc51701b8
It seems to me that the mpfr precision must be set to 113 (it is normally 53 for double) for this to work with your quad floating point format. Also, subnormalize=True in the gmpy2 context tells the library to adhere to IEEE 754 rules for subnormal numbers. Now you're free to perform any calculations you want with the numbers and convert them back to IEEE later.
#4
is 113 a special setting? i have worked out that i need at least 95 bits of precision and have been using 96 as a minimum in some scaled int calculations.
Reply
#5
113 bits is the significand precision of the IEEE 754 quadruple-precision format. Only 112 bits are actually stored, because there is an implicit leading 1. If you work with only 95 or 96 bits, you are not using full quadruple precision. That said, if you work with gmpy2, you can choose any precision you want; it won't hurt to choose too many bits. Since you start with IEEE numbers on 128 bits (in hex format), it seems reasonable to choose at least enough bits to allow conversion back and forth without corrupting the hex digits. Experimentally, that means a precision of 113 for me.
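To see concretely why 95 or 96 bits fall short, one can round the thread's pi value to a given number of significand bits with the standard library and check whether the value survives. A minimal sketch (the helper round_to_bits is mine; positive normal-range values only):

```python
from fractions import Fraction

def round_to_bits(f, prec):
    """Round a positive Fraction to `prec` significand bits (half to even).

    Sketch only: no overflow/underflow handling.
    """
    # find e with 2**e <= f < 2**(e+1)
    e = f.numerator.bit_length() - f.denominator.bit_length()
    if f < Fraction(2) ** e:
        e -= 1
    # scale to prec significand bits and round half to even
    sig = round(f / Fraction(2) ** e * (1 << (prec - 1)))
    return sig * Fraction(2) ** (e - prec + 1)

pi_quad = Fraction(1019505104126898525104217885171767,
                   324518553658426726783156020576256)
print(round_to_bits(pi_quad, 113) == pi_quad)   # True: full quad precision
print(round_to_bits(pi_quad, 96) == pi_quad)    # False: low bits are lost
```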

Also note that the examples in the bigfloat package use a precision of 113 bits for quadruple precision. That package is another interface to the MPFR library.

There is a pure Python library on PyPI named doubledouble that claims to compute with 106 bits of precision by using pairs of floats. You could try it if 106 bits suffice, and use my anyfloat code to convert your values into a pair of floats and back.
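As a rough sketch of what the pair-of-floats representation looks like, the split can be done with the standard library alone: the high part is the nearest double to the value, and the low part is the nearest double to what the high part missed (the names hi/lo are mine, not doubledouble's API):

```python
from fractions import Fraction

# the thread's binary128 pi, as the fraction printed by anyfloat
pi_quad = Fraction(1019505104126898525104217885171767,
                   324518553658426726783156020576256)

hi = float(pi_quad)                   # nearest 53-bit double
lo = float(pi_quad - Fraction(hi))    # nearest double to the remainder
print(hi, lo)                         # roughly 53 + 53 = 106 bits of pi
```

The pair (hi, lo) matches the DoubleDouble value shown later in this thread.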
#6
113 or 106 should work for me. there are calculations where extra bits can alter the final results, but in most cases they give more accurate results. chaos calculations can be dramatically impacted, but IMHO those need vastly more bits to arrive at values not affected by the calculation mechanism. doubledouble plus anyfloat sounds like the way to go unless i can work out all the formulas in reciprocal form with scaled ints.
Reply
#7
I updated the anyfloat module in version 2020.02.24 with additional operations to ease the process of converting to double-double. This code is still far from sufficiently tested. Here is the conversion to and from doubledouble:
from anyfloat import anyfloat
from doubledouble import DoubleDouble

def anyfloat_to_doubledouble(a):
    x = float(a)
    b = anyfloat.from_float(x)
    y = float(a-b)
    return DoubleDouble(x, y)

def anyfloat_from_doubledouble(d):
    return anyfloat.from_float(d.x) + anyfloat.from_float(d.y)

def main():
    size = (15, 112) # the size of exponent and mantissa for 128 bits
    encoded = '4000 921f b544 42d1 8469 898c c517 01b7'
    a = anyfloat.from_ieee(int(encoded.replace(' ',''), 16), size)    
    g = anyfloat_to_doubledouble(a)
    print(repr(g))
    aa = anyfloat_from_doubledouble(g)
    print(hex(aa.to_ieee(size)))


if __name__ == '__main__':
    main()
Output:
(3.141592653589793 + 1.2246467991473532e-16)
0x4000921fb54442d18469898cc51701c0
Only the last two hex digits are lost in the conversion, which is consistent with downgrading 113 bits of precision to 106 (7 fewer bits, just under two hex digits).