Aug-12-2019, 11:10 AM
(Aug-07-2019, 06:49 PM)DeaD_EyE Wrote: If the data is very big (it does not fit in memory or on your hard drive),
frameworks often provide a kind of generator or "magic" iterator that
loads new data in chunks.
```python
from collections import namedtuple

CountIndexResult = namedtuple('CountIndexResult', 'count index')

def count(iterable_or_gen, value):
    """Count occurrences of value and remember the index of the first one."""
    first_index = None
    count = 0
    for index, element in enumerate(iterable_or_gen):
        if element == value:
            if first_index is None:
                first_index = index
            count += 1
    return CountIndexResult(count, first_index)

count([2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 2, 2], 1)
```
Output:
CountIndexResult(count=6, index=6)

If your tuple/list has fewer than one million elements, you can use the methods count and index.
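For example, with the same sample data as above, the built-in list methods give the same answers directly (a minimal sketch):

```python
data = [2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 2, 2]

# count() returns how many times the value occurs
print(data.count(1))  # -> 6

# index() returns the position of the first occurrence
print(data.index(1))  # -> 6
```

Note that index() raises ValueError when the value is absent, unlike the generator-based function above, which returns None for the index in that case.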
Thank you. I will try this as well.