I am trying to calculate the time it takes to append one element to a list. According to amortized analysis, it should be constant regardless of how long the list is.
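For context, the usual argument (sketched here under the assumption of capacity doubling): the total cost of n appends is at most n writes plus 1 + 2 + 4 + ... + n copies from resizes, which is less than 3n operations in total, so the average cost per append is bounded by a constant, i.e. O(1). CPython actually over-allocates by a smaller factor than 2, but the same geometric argument applies.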
So, here is a little piece of code I've written to test it out, with the corresponding output.
    def calTime(n):
        from time import time
        data = []
        start = time()
        for r in range(n):
            data.append(None)
        end = time()
        return (end - start) / n

    for r in [10 ** x for x in range(15)]:
        print("Size ", r, " time it took ", calTime(r), "sec")
Output:

    Input is being redirected from C:\Projects\2698\input.txt
    Size 1 time it took 0.0 sec
    Size 10 time it took 0.0 sec
    Size 100 time it took 0.0 sec
    Size 1000 time it took 0.0 sec
    Size 10000 time it took 1.0018348693847657e-07 sec
    Size 100000 time it took 1.099705696105957e-07 sec
    Size 1000000 time it took 8.603405952453613e-08 sec
    Size 10000000 time it took 8.960003852844239e-08 sec
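As a side note: time.time() has coarse resolution on some platforms, which is likely why the small sizes report 0.0 above. A minimal variant of the same test using time.perf_counter(), which is typically much finer-grained, might look like the sketch below; the function name calTimePerf and the smaller range(8) are just illustrative choices, not part of the original code.

    from time import perf_counter

    def calTimePerf(n):
        # Same measurement as calTime above, but with a higher-resolution clock.
        data = []
        start = perf_counter()
        for _ in range(n):
            data.append(None)
        end = perf_counter()
        return (end - start) / n

    for r in [10 ** x for x in range(8)]:
        print("Size", r, "time it took", calTimePerf(r), "sec")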
I have two questions:

- Is this the correct way to test whether the amortized cost of insertion in a dynamic array is O(1)? I am assuming it is correct, since the average insertion time is very close to zero for all values of n.
- I understand that the average time it takes to insert an element into the list is rounded to zero when n is small. How do I print that small value? (See the formatting sketch below.)
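On the printing side, a tiny but nonzero float can be forced into a readable form with a format specifier; for example (the value here is just copied from the output above for illustration):

    t = 1.0018348693847657e-07   # example value from the output above
    print(f"{t:.3e} sec")        # scientific notation: 1.002e-07 sec
    print(f"{t:.12f} sec")       # fixed-point with 12 decimal places

Note that if the measured value is exactly 0.0 because the clock's granularity is larger than the elapsed time, no formatting can recover it; a finer clock such as time.perf_counter() (as in the sketch above) or the timeit module is needed to get a nonzero measurement in the first place.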