Nov-04-2021, 07:37 AM
Hi,
The nested-loop approach ("brute force") will produce the goods,
but for the (values -100..+100, array length 30) case it takes 24+ seconds (100_000 iterations).
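For context, a minimal sketch of that brute-force baseline as I understand it (the function name is mine; assumes arr is a non-empty list of ints):

# Naive baseline: test every (start, end) slice and keep the best sum.
# O(n^2) slices, and sum() rescans each slice, hence the long
# runtimes quoted above.
def max_subarray_brute(arr):
    best = arr[0]
    for start in range(len(arr)):
        for end in range(start, len(arr)):
            best = max(best, sum(arr[start:end + 1]))
    return best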
I set out to see if this approach could be tuned to produce more acceptable results.
And yes: if you prepare the initial array in such a way that only
"valid" candidate subarrays are tested, the processing time drops to +/- 4.8 secs.
(Doubling the array length: 5-10-20... will roughly double the processing time.)
Looking for inspiration to cut the 4.8 secs further...
# arr is the input list of ints (not shown in the post).
# Seed the result with the best single element, so an all-negative
# array still gets the right answer.
maxArr = [max(arr)]
# A maximum subarray can only start where a run of positives starts
# and end where a run of positives ends; collect those indices.
idxStarts, idxEnds = [], []
for idx, number in enumerate(arr):
    if number > 0:
        if idx == 0 or arr[idx - 1] < 0:
            idxStarts.append(idx)
        if idx == len(arr) - 1 or arr[idx + 1] < 0:
            idxEnds.append(idx)
# Sum only the candidate subarrays bounded by those indices.
# (Pairs with idxto < idxfrom give an empty slice summing to 0;
# that 0 never wins, since max(arr) > 0 whenever any pair exists.)
for idxfrom in idxStarts:
    for idxto in idxEnds:
        maxArr.append(sum(arr[idxfrom:idxto + 1]))
maxArr = [max(maxArr)]
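For reference: the textbook way to cut this further is Kadane's algorithm, which finds the maximum subarray sum in one O(n) pass, with no candidate pairs at all. A minimal sketch (the function name is mine, assuming the same arr of ints):

# Kadane's algorithm: O(n) time, O(1) extra space.
# best_here = best sum of a subarray ending at the current element;
# best = best sum seen anywhere so far.
def max_subarray(arr):
    best = best_here = arr[0]
    for number in arr[1:]:
        # Either extend the running subarray or restart at this element.
        best_here = max(number, best_here + number)
        best = max(best, best_here)
    return best

At 30 elements per array, 100_000 iterations of a single pass should land well under the 4.8 secs, and doubling the array length then only doubles the work.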
It is more important to do the right thing than to do the thing right. (P. Drucker)
Better is the enemy of good. (Montesquieu) = the French version of 'KISS'.