Jul-18-2023, 08:29 AM
(This post was last modified: Jul-18-2023, 08:33 AM by AlexanderWulf.)
Hello.
I want to port an existing algorithm to Python. The original code is written in C and uses fixed-size integer types such as uint64_t. The computation relies heavily on the "wrapping" behavior of C integer types: if we add or multiply two uint64_t values, we get a uint64_t containing the result of the addition or multiplication truncated to the lowest 64 bits; the "high" bits are discarded implicitly. Without this truncation, which needs to happen in every arithmetic operation (and there are a lot of them!), the result would not be "correct" with respect to the original algorithm.
(Note: Truncating only the final result, while leaving all intermediate values at larger precision, would not give the "expected" outcome.)
So, how can I get this behavior in Python? I understand that Python uses variable-size integers. This means that, for example, multiplying two 64-bit values yields a ~128-bit value. Of course, I could simply apply "modulo 2^64" after every multiplication or addition, but I think this would result in very cluttered code. It would also be highly inefficient: every multiplication would create a ~128-bit intermediate result which then needs to be truncated back to 64 bits, throwing the high bits away again. Also, the modulus operation itself is known to be quite slow. Is there a "better" (less cluttered, more efficient) way to accomplish the required fixed-size (wrapping) integer math in Python? Is there a recommended support library for this?
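For reference, this is the masking approach I would like to avoid (just a minimal sketch; the helper names add64/mul64 are my own, and a bitwise AND with the 64-bit mask is equivalent to "modulo 2^64"):

    MASK64 = (1 << 64) - 1  # 0xFFFFFFFFFFFFFFFF

    def add64(a, b):
        # wrapping 64-bit addition: keep only the lowest 64 bits
        return (a + b) & MASK64

    def mul64(a, b):
        # wrapping 64-bit multiplication: keep only the lowest 64 bits
        return (a * b) & MASK64

    print(add64(0xFFFFFFFFFFFFFFFF, 1))  # 0 -- wraps around like uint64_t in C
    print(mul64(0xFFFFFFFFFFFFFFFF, 2))  # 18446744073709551614

This works, but wrapping every single operation in a helper call (or tacking "& MASK64" onto every expression) is exactly the clutter I want to get rid of.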
I read about the ctypes library. It has fixed-size types, such as c_ulonglong, which seems to be exactly what I need. Unfortunately, I couldn't figure out how to multiply two c_ulonglong values: Python raises an "unsupported operand" TypeError when I try to multiply a c_ulonglong with another c_ulonglong.
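Here is a minimal reproduction of what I tried (the multiplication line is what fails):

    from ctypes import c_ulonglong

    a = c_ulonglong(0xFFFFFFFFFFFFFFFF)
    b = c_ulonglong(2)

    c = a * b
    # TypeError: unsupported operand type(s) for *: 'c_ulonglong' and 'c_ulonglong'

The only workaround I found is going through the .value attribute, but that just gives back plain Python ints, so I would be back to masking the result by hand anyway.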
Thank you!