Sep-12-2018, 01:08 PM
From Multiprocessing - RTM
Quote: join([timeout])
If the optional argument timeout is None (the default), the method blocks until the process whose join() method is called terminates. If timeout is a positive number, it blocks at most timeout seconds. Note that the method returns None if its process terminates or if the method times out. Check the process’s exitcode to determine if it terminated.
A process can be joined many times.
A process cannot join itself because this would cause a deadlock. It is an error to attempt to join a process before it has been started.
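A small illustration of the timeout behaviour described in the quote (the worker function and its sleep are placeholders for this sketch, not the OP's code):

import multiprocessing as mp
import time

def work():
    time.sleep(1)  # stand-in for something slow

if __name__ == "__main__":
    p = mp.Process(target=work)
    p.start()
    p.join(0.2)              # blocks at most 0.2 s and always returns None
    if p.exitcode is None:   # still None -> the join timed out, child still running
        print("worker not finished yet")
    p.join()                 # joining again is allowed; this time block until it exits
    print("exitcode:", p.exitcode)  # 0 on a clean exit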
You essentially block the parent process till the current child process terminates, making all the multiprocessing effort redundant.
You may wait on each process after you start all the processes. Drop p.join() from the first loop and use p.join(0.2) in the second one (also, drop time.sleep from it - it's redundant).

In general, I believe a process pool could be a better option (haven't done multiprocessing in Python for ages) - but in your case that is not so important. If you apply the fixes I've specified above, your code should work as you expect it to.
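A minimal sketch of that pattern, assuming a placeholder worker function and process count (not the OP's actual code):

import multiprocessing as mp

def worker(n):                  # placeholder target for the sketch
    print(n * n)

if __name__ == "__main__":
    procs = [mp.Process(target=worker, args=(i,)) for i in range(4)]

    # first loop: only start the children, no join here,
    # so they all run concurrently
    for p in procs:
        p.start()

    # second loop: wait on each child with a short timeout
    # instead of sleeping; exitcode stays None until a child finishes
    while any(p.exitcode is None for p in procs):
        for p in procs:
            p.join(0.2)

If a pool fits the problem better, multiprocessing.Pool does the start/wait bookkeeping for you (e.g. pool.map(worker, range(4))).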
Test everything in a Python shell (iPython, Azure Notebook, etc.)
- Someone gave you advice you liked? Test it - maybe the advice was actually bad.
- Someone gave you advice you think is bad? Test it before arguing - maybe it was good.
- You posted a claim that something you did not test works? Be prepared to eat your hat.