Dec-22-2017, 12:41 PM
(Dec-22-2017, 08:20 AM)cyberion1985 Wrote: Thanks for your reply. My functions run like this
def PR01():
    # process file
    ...

PR01()
PR02()
PR03()
PR04()
The reason I am using this approach is that each function is somewhat unique, and each function reports how long it took to run. The largest file takes 11 minutes to generate, for example. To clarify, in each function:
1.) the file might be renamed
2.) the file might be converted to XLSX
3.) the file might receive a header
4.) the file might have its columns autofit
So each file gets processed with PR01 then passed on to PR02, then to PR03, etc.? If so, you have yourself a pipeline. (https://www.cise.ufl.edu/research/Parall...peline.htm)
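If it is a pipeline, a minimal sketch looks like the following: one thread per stage, connected by queues, with a `None` sentinel to signal shutdown. The `rename` and `convert` functions here are hypothetical stand-ins for your real per-file steps, not your actual code.

```python
import queue
import threading

def stage(func, inbox, outbox):
    """Pull items from inbox, apply func, push results to outbox."""
    while True:
        item = inbox.get()
        if item is None:          # sentinel: shut down and pass it on
            outbox.put(None)
            break
        outbox.put(func(item))

# Hypothetical stand-ins for the real per-file steps (rename, convert, ...)
def rename(name):
    return name + ".renamed"

def convert(name):
    return name + ".xlsx"

q0, q1, q2 = queue.Queue(), queue.Queue(), queue.Queue()
threads = [
    threading.Thread(target=stage, args=(rename, q0, q1)),
    threading.Thread(target=stage, args=(convert, q1, q2)),
]
for t in threads:
    t.start()

for f in ["PR01", "PR02", "PR03", "PR04"]:
    q0.put(f)
q0.put(None)                      # signal end of input

results = []
while (item := q2.get()) is not None:
    results.append(item)
for t in threads:
    t.join()

print(results)
```

The advantage of this shape is that while one file is being converted, the next file can already be getting renamed, so the stages overlap in time.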
If you're not doing a pipeline and your processing doesn't rely on the program's global state, you might consider using the subprocess module and doing the work in separate processes instead of threads. You might also look at the multiprocessing module (https://docs.python.org/3/library/multiprocessing.html). I have never used it, so I can't say whether it would suit your needs.