Python Forum

Full Version: List processing speed
Hi,
I'm working through the NumPy chapter of the Python Data Science Handbook. To show how slow array processing can be, they give an example that takes 2.91 s to run, but on my machine it takes only 3.84 µs, about a million times quicker. This is consistent across several environments on my PC: WSB Python 3.4, IPython 3.5, Anaconda 3.6.5.

Any explanation? Has array processing improved that much since the book was written?

Thanks for any help
Brett


import numpy as np
np.random.seed(0)

def compute_reciprocals(values):
    output = np.empty(len(values))
    for i in range(len(values)):
        output[i] = 1.0 / values[i]
        return output


big_array = np.random.randint(1, 100, size=1000000)
%timeit compute_reciprocals(big_array)
Their run:
1 loop, best of 3: 2.91 s per loop

My run:
100000 loops, best of 3: 3.84 µs per loop
I think you don't want to return on the first iteration of the for loop, right?

import numpy as np
np.random.seed(0)
 
def compute_reciprocals(values):
    output = np.empty(len(values))
    for i in range(len(values)):
        output[i] = 1.0 / values[i]
    return output # <- PUT IT OUTSIDE THE FOR LOOP
 
 
big_array = np.random.randint(1, 100, size=1000000)
%timeit compute_reciprocals(big_array)
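With the return inside the loop, only the first element ever gets computed, so %timeit was measuring a single iteration rather than a million of them, which is why the result looked about a million times faster. A quick sanity check (just a sketch to illustrate, the function name is made up, not from the book):

import numpy as np

def broken_reciprocals(values):
    output = np.empty(len(values))
    for i in range(len(values)):
        output[i] = 1.0 / values[i]
        return output  # returns after filling output[0] only

vals = np.array([2.0, 4.0, 8.0])
result = broken_reciprocals(vals)
print(result[0])   # 0.5 -- the only element actually computed
# result[1:] still holds whatever uninitialized values np.empty() allocated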
gontajones wrote:
I think you don't want to return on the first iteration of the for loop, right?

Yes, that's it. I got the indentation wrong when I cut and pasted. It's now 2.2 s, which is about what I expected.

Thank you so much
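
For reference, the comparison the chapter then builds toward is the vectorized ufunc form, which avoids the Python-level loop entirely. A rough sketch, assuming the same big_array as above:

import numpy as np
np.random.seed(0)
big_array = np.random.randint(1, 100, size=1000000)

# Loop version (the corrected compute_reciprocals above): on the order of seconds
# %timeit compute_reciprocals(big_array)

# Vectorized ufunc: NumPy runs the loop in compiled code
# %timeit 1.0 / big_array   # typically milliseconds or less
reciprocals = 1.0 / big_array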