Aug-07-2019, 06:49 PM
If the data is very big (it does not fit in memory, or not even on your hard drive),
frameworks often provide a kind of generator or "magic" iterator that
loads new data in chunks.
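As a minimal sketch of the idea (not tied to any particular framework), such a chunked loader can be a plain generator that reads a file piece by piece instead of loading it all at once:

```python
def read_in_chunks(path, chunk_size=1024 * 1024):
    """Yield a file's contents one chunk at a time.

    Only chunk_size bytes are held in memory at once, so files
    larger than RAM can still be processed.
    """
    with open(path, 'rb') as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:  # empty bytes object means end of file
                return
            yield chunk
```

Framework iterators work on the same principle, just with parsing, batching, or shuffling layered on top.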
from collections import namedtuple

CountIndexResult = namedtuple('CountIndexResult', 'count index')

def count(iterable_or_gen, value):
    """Return how often value occurs and the index of its first occurrence."""
    first_index = None
    count = 0
    for index, element in enumerate(iterable_or_gen):
        if element == value:
            if first_index is None:
                first_index = index
            count += 1
    return CountIndexResult(count, first_index)

count([2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 2, 2], 1)
Output: CountIndexResult(count=6, index=6)
If your tuple/list has fewer than one million elements, you can just use the
built-in methods count and index.
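For the same input as above, the two built-in methods give the same result directly:

```python
data = [2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 2, 2]

data.count(1)  # how many times 1 occurs -> 6
data.index(1)  # index of the first 1 -> 6
```

Note that each call scans the list, so the generator version above is preferable when the data only exists as a one-shot iterator or is too large to materialize.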