Python Forum
Counting Duplicates in large Data Set
#1
My students and I were discussing the difference between possibility and probability when it comes to lottery numbers. Is the sequence 1 2 3 4 5 6 just as likely to come up as 76 31 7 54 29 18?

My question is: what would be a good format for recording sets of six random numbers, and what method should I use to count the duplicates?
Reply
#2
A simple simulation would do: run random.sample on the desired range n times, convert each result to a tuple, and feed the tuples to collections.Counter. Then inspect the results. If order doesn't matter, sort each sample before converting it to a tuple. I tried with n = 1,000,000 and there was no 1, 2, 3, 4, 5, 6 in the results (I used range(1, 49)).
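A minimal sketch of that simulation, assuming a 6-of-49 draw (adjust the range to whatever lottery you are modelling):

import random
from collections import Counter

n = 1_000_000  # number of simulated draws

# Sort each sample so order doesn't matter, then count identical draws.
counts = Counter(
    tuple(sorted(random.sample(range(1, 50), 6))) for _ in range(n)
)

print(counts.most_common(5))          # the most frequently repeated draws
print(counts[(1, 2, 3, 4, 5, 6)])     # how often this exact combination appeared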
I'm not 'in'-sane. Indeed, I am so far 'out' of sane that you appear a tiny blip on the distant coast of sanity. Bucky Katt, Get Fuzzy

Da Bishop: There's a dead bishop on the landing. I don't know who keeps bringing them in here. ....but society is to blame.
Reply
#3
Use a set instead of a tuple (it would have to be a frozenset to serve as a Counter key); it should be faster. But I don't think this is a workable solution anyway.

"Simple solution" is not going to work because the numbers are staggering. I am not surprised that 1 million combinations did not produce a single 1, 2, 3, 4, 5, 6. One million is a pretty small sample size when there are almost 14 million combinations. For the kind of test you propose I would suggest 1 billion combinations. and how are you going to store the 320 MB Counter dictionary?
Reply
#4
Hi

When it comes to counting duplicate numbers, I always think of np.unique => here below is an example. Note that NumPy stays fast even for a huge array.

Paul

import numpy as np

# Sample data containing several duplicate values.
MyList = [0, 1, 10, 5, 2, 1, -1, 8, 2, 1, 5, 1, 1, 1, -1]
MyList = np.asarray(MyList)

# With return_index and return_counts, np.unique returns
# (unique values, first indices, counts).
values, first_index, counts = np.unique(MyList, return_index=True, return_counts=True)

for value, count in zip(values, counts):
    print(f"for {value}  => {count} occurrence(s)")
Providing:
Output:
for -1  => 2 occurrence(s)
for 0  => 1 occurrence(s)
for 1  => 6 occurrence(s)
for 2  => 2 occurrence(s)
for 5  => 2 occurrence(s)
for 8  => 1 occurrence(s)
for 10  => 1 occurrence(s)
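For the lottery question itself, a rough sketch along the same lines (assuming 6 numbers from 1 to 49; the argsort trick is just one way to draw without replacement): treat each draw as a row and let np.unique count identical rows with axis=0.

import numpy as np

rng = np.random.default_rng()
n_draws = 100_000

# Each row is one draw of 6 distinct numbers from 1..49: argsort of random
# noise gives a random permutation per row; keep the first 6 entries.
draws = np.argsort(rng.random((n_draws, 49)), axis=1)[:, :6] + 1
draws = np.sort(draws, axis=1)   # sort each row so order doesn't matter

combos, counts = np.unique(draws, axis=0, return_counts=True)
print(f"{(counts > 1).sum()} combinations appeared more than once")
print("most repeated draw:", combos[counts.argmax()], "x", counts.max())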
Reply


