I thought that it would be more efficient to use numpy arrays and tried with "fibn":
```python
import numpy as np

def fibn(start, length, n):
    if n < 2:
        return []
    seq = np.zeros(length, dtype='uint64')
    seq[n-1] = start
    if length < n:
        return seq
    for i in range(length - n):
        seq[i+n] = sum(seq[i:i+n])
    return seq
```

But the maximum value for 64-bit unsigned integers is 2**64 - 1 = 18446744073709551615, so after fib(93) = 12200160415121876738 things get strange with wrap-around etc., so I changed the dtype to object.
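To see exactly where the uint64 version breaks down, here is a small sketch using Python's arbitrary-precision integers (the helper `fib` is mine, just for illustration): fib(94) is the first value that no longer fits in 64 unsigned bits, so uint64 arithmetic silently reduces it modulo 2**64.

```python
def fib(k):
    # Exact Fibonacci via Python's big ints, for comparison
    a, b = 0, 1
    for _ in range(k):
        a, b = b, a + b
    return a

print(fib(93))          # 12200160415121876738 -- still fits in uint64
print(fib(94))          # 19740274219868223167 -- exceeds 2**64 - 1
print(fib(94) % 2**64)  # 1293530146158671551 -- what wrapped uint64 arithmetic yields
```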
```python
import numpy as np

def fibn(start, length, n):
    if n < 2:
        return []
    seq = np.zeros(length, dtype=object)
    seq[n-1] = start
    if length < n:
        return seq
    for i in range(length - n):
        seq[i+n] = sum(seq[i:i+n])
    return seq
```

Then it works with arbitrarily large numbers, but I am concerned about efficiency. In any case, it ought to be faster than using standard lists, since numpy arrays are consecutive cells in memory, so I'll change all my implementations and then run some kind of time and space check to see which is preferable in each case.
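For reference, a quick usage sketch of the object-dtype version (repeating the definition so the example is self-contained): with n=2 it produces ordinary Fibonacci numbers past the uint64 limit without wrapping, and with n=3 it gives the tribonacci-style generalization.

```python
import numpy as np

def fibn(start, length, n):
    if n < 2:
        return []
    seq = np.zeros(length, dtype=object)  # object dtype holds Python big ints
    seq[n-1] = start
    if length < n:
        return seq
    for i in range(length - n):
        seq[i+n] = sum(seq[i:i+n])
    return seq

print(fibn(1, 10, 3))   # tribonacci-style: [0 0 1 1 2 4 7 13 24 44]
print(fibn(1, 95, 2)[94])  # 19740274219868223167, exact beyond the uint64 limit
```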
A time check shows that the numpy implementation is actually slower than the standard list version: when generating the first 1000 Fibonacci numbers, 0.0002846717834472656 s for the standard list versus 0.0009121894836425781 s for numpy (about 3.2 times longer), and for 100000 numbers, 0.36421632766723633 s versus 0.44461560249328613 s (about 1.22 times longer).
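The list version being compared against is not shown above, so the harness below is only a sketch of how such a comparison might be run: `fibn_list` is a hypothetical list-based counterpart I wrote to mirror the numpy version, and `bench` takes the best of a few repeats with `time.perf_counter`. Absolute numbers will differ by machine; the point is measuring the ratio.

```python
import time
import numpy as np

def fibn_list(start, length, n):
    # Hypothetical list-based counterpart (the original list version is not shown)
    if n < 2:
        return []
    seq = [0] * length
    seq[n-1] = start
    if length < n:
        return seq
    for i in range(length - n):
        seq[i+n] = sum(seq[i:i+n])
    return seq

def fibn_np(start, length, n):
    if n < 2:
        return []
    seq = np.zeros(length, dtype=object)
    seq[n-1] = start
    if length < n:
        return seq
    for i in range(length - n):
        seq[i+n] = sum(seq[i:i+n])
    return seq

def bench(f, *args, repeats=5):
    # Best-of-N wall-clock timing to damp scheduling noise
    best = float('inf')
    for _ in range(repeats):
        t0 = time.perf_counter()
        f(*args)
        best = min(best, time.perf_counter() - t0)
    return best

t_list = bench(fibn_list, 1, 1000, 2)
t_np = bench(fibn_np, 1, 1000, 2)
print(f"list: {t_list:.6f}s  numpy: {t_np:.6f}s  ratio: {t_np / t_list:.2f}")
```

The per-element indexing (`seq[i+n] = ...`) is exactly the access pattern numpy is slow at: each read and write of an object array boxes/unboxes a Python object, so the contiguous-memory advantage never comes into play.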