(Feb-22-2017, 01:44 AM)Skaperen Wrote: it might come down to the CPU cost of .attribute vs a plain local variable. given that self is a local variable there is already the cost of accessing self. now add on the cost of accessing an attribute within it ... unless python does any performance optimizing for self specifically (because it is so commonly used) or for repeating expressions.
Could you please show some code that demonstrates what you're doing?
(Feb-21-2017, 10:51 AM)wavic Wrote: It doesn't matter. Both references are pointers to one memory address

It matters in terms of performance. self.foo has to do an attribute lookup to get that pointer; foo just gives the pointer.
Now, that performance gain is very slight and may not matter in some applications. But if you're doing it a million times, it will start to matter.
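For instance, here is a rough timing sketch (the class and function names are made up for illustration; exact numbers will vary by machine and Python version):

import timeit

setup = """
class C:
    def __init__(self):
        self.foo = 1

def via_attr(obj, n):
    total = 0
    for _ in range(n):
        total += obj.foo   # attribute lookup on every pass
    return total

def via_local(obj, n):
    total = 0
    foo = obj.foo          # look the attribute up once, cache it in a local
    for _ in range(n):
        total += foo       # plain local variable access
    return total

obj = C()
"""
print(timeit.timeit("via_attr(obj, 1000000)", setup=setup, number=10))
print(timeit.timeit("via_local(obj, 1000000)", setup=setup, number=10))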
true, but what about the steps taken to get there?
1. look up 'self' in the local scope
2. look up 'foo' in the attribute dictionary of the object found in step 1
vs
1. look up 'bar' in the local scope
if i cache self.foo in bar by doing bar = self.foo, i still have to pay for that full two-step lookup once to get the value.
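One way to see those steps concretely is to disassemble the bytecode (a minimal sketch; the class and names are made up). Note that in CPython a function's locals are actually reached by an indexed LOAD_FAST rather than a dictionary lookup, while the attribute access is a separate LOAD_ATTR step:

import dis

class C:
    def use_attr(self):
        return self.foo    # LOAD_FAST self, then LOAD_ATTR foo

    def use_cached(self):
        bar = self.foo     # the two-step lookup happens once, here
        return bar         # after that it is a single LOAD_FAST

dis.dis(C.use_attr)
dis.dis(C.use_cached)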
(Feb-22-2017, 01:44 AM)Skaperen Wrote: it might come down to the CPU cost of .attribute vs a plain local variable. given that self is a local variable there is already the cost of accessing self. now add on the cost of accessing an attribute within it ... unless python does any performance optimizing for self specifically (because it is so commonly used) or for repeating expressions.
It is highly likely that the compiler recognizes the pattern and optimizes around it (assuming it is worth it). Using your own variable may make it less obvious and prevent better optimizations.
i'm trying to think of a way to test it that itself won't be optimized away. for example, putting some code in a loop could end up with the compiler recognizing that nothing changes, so going around 10000046 times wouldn't really measure anything.
(Feb-22-2017, 09:14 AM)Skaperen Wrote: i'm trying to think of a way to test it that itself won't be optimized away. for example, putting some code in a loop could end up with the compiler recognizing that nothing changes, so going around 10000046 times wouldn't really measure anything.
The difficulty is finding something the compiler cannot optimize away and that doesn't itself take much time (because you are possibly looking for a handful of processor cycles).
Draw a random integer (to avoid a literal the compiler could optimize away). Add it to some variable on each iteration, and print the variable at the end (otherwise the compiler may think it's useless). If you think the compiler could just turn that into a multiplication, then use an array of two ints and add the number to one member or the other depending on its parity.
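Something along those lines might look like this (a sketch of the idea; the names and iteration counts are arbitrary, and the random draw itself will dominate the runtime, so the gap only shows up over many iterations):

import random
import time

class C:
    def __init__(self):
        self.foo = 3

def via_attr(obj, n):
    acc = [0, 0]
    for _ in range(n):
        r = random.randrange(10)   # random draw: not a literal the compiler can fold away
        acc[r % 2] += obj.foo      # attribute lookup on every iteration
    return acc

def via_local(obj, n):
    acc = [0, 0]
    foo = obj.foo                  # cached once in a local
    for _ in range(n):
        r = random.randrange(10)
        acc[r % 2] += foo
    return acc

obj = C()
start = time.perf_counter()
print(via_attr(obj, 1000000), time.perf_counter() - start)   # print results so the work can't be judged useless
start = time.perf_counter()
print(via_local(obj, 1000000), time.perf_counter() - start)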
I came across something on the web about such small optimisations, which can make a significant difference when it comes to iterating over hundreds of thousands or millions of objects.
So, basically, Python searches the local scope first for functions and other objects, then the globals. So an assignment like this makes sense:
def squared(big_num):
    results = []
    append = results.append  # cache the bound method in a local name
    for n in big_num:
        append(n ** 2)       # single local lookup instead of an attribute lookup per iteration
    return results
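A rough way to check what that buys you (a sketch; numbers will differ per machine and Python version):

import timeit

setup = """
def squared_plain(big_num):
    results = []
    for n in big_num:
        results.append(n ** 2)   # attribute lookup on every iteration
    return results

def squared_cached(big_num):
    results = []
    append = results.append      # bound method cached in a local
    for n in big_num:
        append(n ** 2)
    return results

data = range(100000)
"""
print(timeit.timeit("squared_plain(data)", setup=setup, number=100))
print(timeit.timeit("squared_cached(data)", setup=setup, number=100))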
(Feb-22-2017, 08:00 AM)Ofnuts Wrote: It is highly likely that the compiler recognizes the pattern and optimizes around it
Do you mean CPython would do that?
Would, I don't know. Could, definitely.