Use the dis module to disassemble a function:
import dis

def func1(s, n):
    if s[n] < 128 or s[n] > 191:
        return -2

def func2(s, n):
    x = s[n]
    if x < 128 or x > 191:
        return -2

print('func1:')
dis.dis(func1)
print('\nfunc2:')
dis.dis(func2)
Output:
func1:
5 0 LOAD_FAST 0 (s)
2 LOAD_FAST 1 (n)
4 BINARY_SUBSCR
6 LOAD_CONST 1 (128)
8 COMPARE_OP 0 (<)
10 POP_JUMP_IF_TRUE 24
12 LOAD_FAST 0 (s)
14 LOAD_FAST 1 (n)
16 BINARY_SUBSCR
18 LOAD_CONST 2 (191)
20 COMPARE_OP 4 (>)
22 POP_JUMP_IF_FALSE 28
6 >> 24 LOAD_CONST 3 (-2)
26 RETURN_VALUE
>> 28 LOAD_CONST 0 (None)
30 RETURN_VALUE
func2:
9 0 LOAD_FAST 0 (s)
2 LOAD_FAST 1 (n)
4 BINARY_SUBSCR
6 STORE_FAST 2 (x)
10 8 LOAD_FAST 2 (x)
10 LOAD_CONST 1 (128)
12 COMPARE_OP 0 (<)
14 POP_JUMP_IF_TRUE 24
16 LOAD_FAST 2 (x)
18 LOAD_CONST 2 (191)
20 COMPARE_OP 4 (>)
22 POP_JUMP_IF_FALSE 28
11 >> 24 LOAD_CONST 3 (-2)
26 RETURN_VALUE
>> 28 LOAD_CONST 0 (None)
30 RETURN_VALUE
In func1 the subscript s[n] is computed twice: LOAD_FAST for s and n plus BINARY_SUBSCR appear once per comparison.
In func2, s and n are accessed only once; the result is stored in the local x, and both comparisons reuse it with a single LOAD_FAST each.
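Whether the cached subscript pays off in wall-clock time can be checked with timeit. A minimal sketch (the sample data and repetition count are arbitrary, and the absolute numbers will differ on your machine):

```python
import timeit

def func1(s, n):
    if s[n] < 128 or s[n] > 191:   # s[n] is computed twice
        return -2

def func2(s, n):
    x = s[n]                       # s[n] is computed once and cached
    if x < 128 or x > 191:
        return -2

data = bytes(range(256))           # arbitrary sample data
t1 = timeit.timeit('func1(data, 200)', globals=globals(), number=200_000)
t2 = timeit.timeit('func2(data, 200)', globals=globals(), number=200_000)
print(f'func1: {t1:.3f} s  func2: {t2:.3f} s')
```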
What usually costs time are lookups of nonlocal and global names; function and method calls were also made faster in Python 3.6 and 3.7.
When you want to micro-optimize, you can benefit from first binding global names to local names.
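The difference also shows up directly in the bytecode. In the sketch below, use_local binds math.sqrt as a default argument at definition time (an illustrative trick, not part of the original snippet), so the call compiles to a cheap LOAD_FAST instead of a LOAD_GLOBAL plus attribute lookup:

```python
import dis
import math

def use_global(i):
    return math.sqrt(i)            # LOAD_GLOBAL math + attribute lookup on every call

def use_local(i, sqrt=math.sqrt):  # sqrt is bound once, when the function is defined
    return sqrt(i)                 # only local (fast) name lookups per call

g_ops = [ins.opname for ins in dis.get_instructions(use_global)]
l_ops = [ins.opname for ins in dis.get_instructions(use_local)]
print('use_global:', g_ops)
print('use_local: ', l_ops)
```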
import timeit
import math

def foo1(y):
    for i in range(1_000_000):
        yield math.sqrt(i) ** y

def foo2(y):
    sqrt = math.sqrt  # one global/attribute lookup instead of one per iteration
    for i in range(1_000_000):
        yield sqrt(i) ** y

run1 = timeit.timeit('list(foo1(10))', globals=globals(), number=50)
run2 = timeit.timeit('list(foo2(10))', globals=globals(), number=50)
print(f'Function with global lookup: {run1:.2f} s\nFunction with local lookup: {run2:.2f} s')
Output:
Function with global lookup: 12.97 s
Function with local lookup: 9.79 s
Test it with the timeit module.
Turn off power-saving features, and if your CPU supports turbo boost, consider disabling that as well.
In my case the CPU frequency is not fixed, so the test results vary from run to run.
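One way to reduce the influence of a varying CPU frequency is timeit.repeat: run the benchmark several times and keep the minimum, since background load and frequency scaling can only slow a run down, never speed it up. A sketch (the loop size and repeat counts are arbitrary):

```python
import math
import timeit

def foo2(y):
    sqrt = math.sqrt
    for i in range(100_000):
        yield sqrt(i) ** y

# Keep the fastest of five measurements: interference from other
# processes or clock scaling only ever makes a run slower.
times = timeit.repeat('list(foo2(10))', globals=globals(), number=5, repeat=5)
print(f'best of 5 runs: {min(times):.3f} s')
```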