Jul-13-2018, 09:15 AM
Hi,
I'm working through the NumPy chapter of the Python Data Science Handbook. To show how slow array processing can be, they give an example that takes 2.91 s to run, but on my machine it takes only 3.84 µs, roughly a million times quicker. This is consistent across several environments on my PC: WSL Python 3.4, IPython 3.5, and Anaconda 3.6.5.
Any explanation? Has array processing significantly improved since the book was written?
Thanks for any help
Brett
import numpy as np

np.random.seed(0)

def compute_reciprocals(values):
    output = np.empty(len(values))
    for i in range(len(values)):
        output[i] = 1.0 / values[i]
    return output

big_array = np.random.randint(1, 100, size=1000000)
%timeit compute_reciprocals(big_array)

Their run
Quote: 1 loop, best of 3: 2.91 s per loop
My run
100000 loops, best of 3: 3.84 µs per loop
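In case it helps, here is a minimal self-contained sketch of the same comparison using the standard timeit module instead of IPython's %timeit (the single-call timing and the vectorized `1.0 / big_array` comparison are my additions, not from the book):

```python
import timeit
import numpy as np

np.random.seed(0)

def compute_reciprocals(values):
    # Loop-based reciprocal, element by element (the book's example)
    output = np.empty(len(values))
    for i in range(len(values)):
        output[i] = 1.0 / values[i]
    return output

big_array = np.random.randint(1, 100, size=1000000)

# Time one call of the loop-based version (seconds per call)
loop_time = timeit.timeit(lambda: compute_reciprocals(big_array), number=1)

# Time the vectorized NumPy equivalent for comparison
vec_time = timeit.timeit(lambda: 1.0 / big_array, number=1)

print("loop: %.3f s, vectorized: %.5f s" % (loop_time, vec_time))
```

On my understanding, the loop version should still take on the order of a second per call, while the vectorized form finishes in milliseconds.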