Dec-04-2016, 11:46 PM
I recently interviewed a candidate for a software engineering position who had a master's degree from UCLA with a 3.8 GPA (and a 3.9 in undergrad). They looked good on paper, but after writing the following code, they misidentified the time complexity of their solution (the problem itself is relatively boring; I won't post it unless someone really wants to know).
def result(n):
    nums = [i+1 for i in xrange(n)]
    d = 1
    while len(nums)>1:
        nums.pop(0)
        for j in xrange(d%len(nums)):
            nums.append(nums.pop(0))
        d += 1
    return nums[0]

I realize most of the regulars here don't have a formal computer science background, but I'm curious if the tutorial I wrote is good enough for people to identify the running time of this short algorithm.
So, for those interested in identifying it: what do you think the big-O of this function is? And what source(s) of learning did you use to reach that conclusion?
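(If anyone wants to sanity-check their answer empirically rather than spoil it analytically, here's a minimal timing sketch. It assumes Python 3, so range replaces the candidate's xrange; the input sizes and single-run timing are arbitrary choices of mine, not anything from the original problem.)

import timeit

def result(n):
    nums = [i + 1 for i in range(n)]  # Python 3: range instead of xrange
    d = 1
    while len(nums) > 1:
        nums.pop(0)
        for j in range(d % len(nums)):
            nums.append(nums.pop(0))
        d += 1
    return nums[0]

# Time one run at each doubling of the input size and print the ratio
# between successive timings. As n doubles, a ratio of roughly 4x
# suggests quadratic growth, roughly 8x suggests cubic, and so on.
prev = None
for n in [250, 500, 1000, 2000]:
    t = timeit.timeit(lambda: result(n), number=1)
    ratio = "" if prev is None else "  (%.1fx)" % (t / prev)
    print("n=%5d  %8.4f s%s" % (n, t, ratio))
    prev = t

(Measured growth ratios are noisy at small n, so treat this as a hint to check your reasoning against, not a substitute for it.)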
(I posted this in the Bar since it's not so much a question I'm asking as something I thought would be fun, and it isn't news about Python or anything, even though the candidate happened to write it in (not especially good) Python.)