May-29-2021, 12:00 PM
hey community,
I am having some trouble understanding the threading module and would be glad if someone could help me out.

My code currently does the following:
1. Receive messages from clients (there are many clients) and append each message to a dict keyed by IP address, e.g. dct = {client_1: [1, 2, ...], client_2: [3, 4, ...], ...}.

My code should do the following:
1. Receive messages from clients and append them to the dict keyed by IP address, as above.
2. Start a scraper for every client in the dictionary.

My code, reduced to the basics, looks like this:
```python
import socket
import threading
import pprint

SCRAPER = []
CLIENTS = []
dct = {}
total_connections = 0


# Every client as a class
class Receive(threading.Thread):
    def __init__(self, socket, address, id, name, signal):
        threading.Thread.__init__(self)
        self.socket = socket
        self.address = address
        self.id = id
        self.name = name
        self.signal = signal

    def __str__(self):
        return str(self.id) + " " + str(self.address)

    def run(self):
        while self.signal:
            try:
                data = self.socket.recv(90)
                # print(f"New message from {self.address}: {data.decode('utf-8')}")
            except OSError:
                print("Client " + str(self.address) + " has disconnected")
                self.signal = False
                CLIENTS.remove(self)
                break
            if data.decode("utf-8") != "":
                dct[self.address[0]].append(data.decode("utf-8"))
                pprint.pprint(dct)


def newConnections(socket):
    global total_connections
    while True:
        sock, address = socket.accept()
        # Only create a fresh message list for clients we haven't seen yet.
        dct.setdefault(address[0], [])
        client = Receive(sock, address, total_connections, "Name", True)
        CLIENTS.append(client)
        client.start()
        total_connections += 1


def main_recv():
    host = ""
    port = 61207
    # Create new server socket
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind((host, port))
    sock.listen(5)
    newConnectionsThread = threading.Thread(target=newConnections, args=(sock,))
    newConnectionsThread.start()
    # ScraperThread = threading.Thread(target=scraper)


# run the main function
main_recv()
```

I don't know exactly how the threading module works, so it would be nice if you could include some explanation. In my code I store the clients as Receive instances in the CLIENTS list and start them, so the code is waiting for messages from every single client.
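To make sure my mental model is right, here is the Thread pattern I believe I am using, reduced to a toy example (Worker and the results list are made-up names for illustration, not from my server code):

```python
import threading

class Worker(threading.Thread):
    """Minimal Thread subclass: run() executes in its own thread after start()."""
    def __init__(self, label, results):
        super().__init__()
        self.label = label
        self.results = results

    def run(self):
        # This body runs concurrently once start() is called.
        self.results.append(self.label)

results = []
workers = [Worker(f"client_{i}", results) for i in range(3)]
for w in workers:
    w.start()  # spawns the thread; run() begins
for w in workers:
    w.join()   # wait for each thread to finish

print(sorted(results))  # → ['client_0', 'client_1', 'client_2']
```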
The messages are being saved for each client.

Now I would like to run another function separately from all that, but concurrently. The function or class (I really don't know which would be better) should start the web-scraping process for the messages of each single client, but I am failing to do it.
My problem isn't the scraping itself, but running the scrapers for each client concurrently.
I was thinking about starting a scraper.py file for every client in the dictionary, but I assume that wouldn't be an efficient option.
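For the record, what I imagine the per-client scraping could look like is something along these lines, sketched with concurrent.futures.ThreadPoolExecutor; scrape_for is a stand-in for the real scraper, and the sample dct is made up:

```python
from concurrent.futures import ThreadPoolExecutor

def scrape_for(client_ip, messages):
    # Placeholder for the real scraping logic: here we just transform the messages.
    return [m.upper() for m in messages]

# Sample data in the same shape as my dct: IP address -> list of messages.
dct = {"10.0.0.1": ["a", "b"], "10.0.0.2": ["c"]}

# Submit one scraping job per client; the pool runs them concurrently.
with ThreadPoolExecutor() as pool:
    futures = {ip: pool.submit(scrape_for, ip, msgs) for ip, msgs in dct.items()}
    results = {ip: f.result() for ip, f in futures.items()}

print(results)  # → {'10.0.0.1': ['A', 'B'], '10.0.0.2': ['C']}
```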
If you need any more information, please let me know and I'll edit my post here.

**EDIT:** If you see other mistakes in my code that are unrelated to my question or problem, you can let me know too, but it isn't necessary since everything is working.
