Dec-24-2021, 07:57 AM
(Dec-23-2021, 10:28 PM)Gribouillis Wrote: There are many issues in your code. Before I talk about them, here is code I wrote to solve this problem, inspired by yours. It finds the solution 55374 in about 8 seconds in my terminal, but I made no attempt to optimize it.
import numpy as np
import itertools as itt

def ipentagonal():
    # generalized pentagonal numbers: triangular numbers divisible by 3, divided by 3
    s = 0
    for c in itt.count(1):
        s += c
        if s % 3 == 0:
            yield s // 3

N = 1000000
#N = 100
gstore = np.fromiter(itt.takewhile(lambda x: x <= 2*N, ipentagonal()), int)
sign = np.fromiter((1 if (i % 4 <= 1) else -1 for i in range(gstore.size)), int)
pstore = np.zeros(N, int)
pstore[0] = 1
print(gstore)
print(sign)
for n in range(1, N):
    idx = itt.takewhile(lambda x: x >= 0, (n - g for g in gstore))
    pstore[n] = sum(pstore[i] * sign[j] for j, i in enumerate(idx)) % 1000000
    if pstore[n] == 0:
        print(n)
        break
The main problems that I see in your code are:
- You are using numpy arrays containing floating-point values, that is, 64-bit numbers. These lose precision if you store large integers in them. This is an integer-valued problem, so you need arrays that contain integer values, which is what I use in my code. Note that Python integers have no size limitation other than the machine's memory, but this is not the case for NumPy integers, which have a fixed size such as 32 or 64 bits.
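A short sketch of the two failure modes mentioned above (the specific values are my own illustration, not from the original post): float64 arrays silently round large integers, and fixed-size NumPy integers wrap around on overflow, while plain Python ints stay exact.

```python
import numpy as np

# A Python int is arbitrary precision, so this value is exact:
big = 10**30 + 1

# Stored in a float64 numpy array, the low-order digits are lost
# (float64 has only 53 bits of mantissa, ~15-16 significant digits):
a = np.array([big], dtype=float)
print(int(a[0]))        # close to 1e30, but no longer ends in ...001

# Fixed-size numpy integers don't round -- they wrap around on overflow:
b = np.array([2**62], dtype=np.int64)
print((b * 4)[0])       # 2**64 wraps to 0 in 64-bit arithmetic
```

This is exactly why partition numbers, which quickly exceed 2**53, cannot be stored in a default (float64) numpy array.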
- The partition numbers grow very rapidly: according to Wikipedia, p(10000) already has 106 decimal digits. This means you must not compute p(n) itself; for this problem it suffices to compute p(n) % 1000000, which is what my code does.
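The recurrence behind both the quoted code and the modular trick is Euler's pentagonal number theorem: p(n) = sum over k = 1, -1, 2, -2, ... of (-1)^(k+1) * p(n - k(3k-1)/2). Here is a plain-Python sketch of it (the function name and structure are mine; the quoted code implements the same idea with numpy). Because every term is reduced mod 1000000, the stored values never grow large.

```python
def partitions_mod(limit, mod=1000000):
    """Return [p(0) % mod, ..., p(limit-1) % mod] via Euler's
    pentagonal-number recurrence."""
    p = [1] + [0] * (limit - 1)
    for n in range(1, limit):
        total, k = 0, 1
        # generalized pentagonal numbers k(3k-1)/2 for k and -k
        while k * (3 * k - 1) // 2 <= n:
            sign = -1 if k % 2 == 0 else 1
            total += sign * p[n - k * (3 * k - 1) // 2]
            g2 = k * (3 * k + 1) // 2   # pentagonal number for -k
            if g2 <= n:
                total += sign * p[n - g2]
            k += 1
        p[n] = total % mod
    return p

print(partitions_mod(11))  # [1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42]
```

For small n the mod never kicks in, so the output matches the well-known partition numbers; for the Project Euler problem one would scan the list for the first n with p(n) % 1000000 == 0.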
Note: By removing the final 'break' statement, the above code found the next solution after 55374, namely 488324.
Thanks for your reply and explanation. The problem was indeed the NumPy arrays containing floating-point values. I have removed all the NumPy arrays and replaced them with "normal" arrays (plain Python lists). Now everything works fine!
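The fix described above can be as small as swapping the array allocation; a minimal sketch, with the variable names borrowed from the quoted code:

```python
# Before (buggy): np.zeros(N) defaults to float64, which rounds
# any partition number larger than 2**53.
#   pstore = np.zeros(N)

# After: a plain Python list holds exact, arbitrary-precision ints.
N = 100
pstore = [0] * N
pstore[0] = 1

# List elements are ordinary Python ints, so even huge values stay exact:
pstore[1] = 10**80 + 7
print(pstore[1] % 10)   # the low digit survives: 7
```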