
 Progress Finished Question
Every bit helps me understand. Thanks for the link; I will be sure to read through it. Hopefully I can get something working.
Well, I figured I'd follow up on this and post my results after some trial and error and Google-fu. The shutil module just doesn't support reporting progress. Even threading couldn't make it happen: the interpreter wouldn't process another line until the copy or move function had completed. I've learned that multithreading doesn't truly perform tasks in parallel; it rapidly switches back and forth between tasks. In my tests this didn't help with shutil.copy()/shutil.move(), because the switching paused until the copy or move finished, just like the standard, single-threaded way. I've read a topic or two on multiprocessing, but wth...

I found that I could write my own copy function from scratch (which I may end up doing eventually); shutil uses its own copyfileobj function that I could rewrite to include some progress feedback, but wth. If any more experienced Pythonistas want to try out some code and find my conclusions incorrect, please don't hesitate to share--it's a learning experience for me.
Yes, shutil.copy doesn't tell you its progress. You can check the size of the file using os.stat

import os

file_size = os.stat('original_big_file.big').st_size

# do this periodically (e.g. every second) while the copy runs in another thread or process
new_size = os.stat('new_big_file.big').st_size
print(new_size / file_size * 100)  # percentage of the original file copied so far
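To see whether this polling approach works in practice, here is a minimal, self-contained sketch (the file names and the 5 MB size are made up for the demo): the blocking shutil.copy runs in a worker thread while the main thread polls the destination size. Python releases the GIL during file I/O, so the monitor thread does get scheduled.

```python
import os
import shutil
import tempfile
import threading

# Hypothetical demo files: create a ~5 MB source to copy.
src = os.path.join(tempfile.gettempdir(), 'original_big_file.big')
dst = os.path.join(tempfile.gettempdir(), 'new_big_file.big')
with open(src, 'wb') as f:
    f.write(b'\0' * (5 * 1024 * 1024))

file_size = os.stat(src).st_size

# Run the blocking copy in a worker thread.
worker = threading.Thread(target=shutil.copy, args=(src, dst))
worker.start()

# Poll the destination size while the copy is in flight.
while worker.is_alive():
    if os.path.exists(dst):
        new_size = os.stat(dst).st_size
        print(f'\r{new_size / file_size * 100:.1f}%', end='')
    worker.join(timeout=0.1)  # wait briefly, then poll again

worker.join()
print(f'\rcopied {os.stat(dst).st_size} of {file_size} bytes')
```

On a small file the copy may finish before the first poll, so the intermediate percentages only show up for transfers that take longer than the polling interval.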
"As they say in Mexico 'dosvidaniya'. That makes two vidaniyas."
Yeah, I tried that when I was experimenting. Like I said in my previous post, threading doesn't use truly parallel, concurrent threads; rather, it switches back and forth rapidly. Even with quick switches, shutil.copy() pauses the interpreter until the copy is complete--it will not switch to other threads while copying. That's what my testing has led me to believe, at least. If someone wants to show me otherwise, please do.

Thanks for your input, @wavic. I may look into multiprocessing because it offers true parallel tasking. Another option would be writing my own copy function.
As I said, I've never used threading. Maybe I have to look at it one of these days. What you want to do is not a complicated task. I may try to do the same to see what is going on with these threads.
It will be better if you show us the modified code.
Well, I've come back to this because, well, why not. I've decided to go the custom copy function route and have come up with this (very rough code--just proof of concept for myself):
import os

src = '397.64-desktop.exe'
dst = r'C:\Users\Mark\Downloads\Ordenador\driver.exe'

size = os.stat(src).st_size
with open(src, 'rb') as f:
    with open(dst, 'wb') as g:
        while True:
            chunk = f.read(16384)
            if not chunk:
                break
            g.write(chunk)
            g.flush()  # so os.stat sees the bytes written so far
            size2 = os.stat(dst).st_size
            print('\r', size2 / size * 100, end='')
It works, but needs refinement (BIG time). I've noticed, however, that it is nowhere near as fast as shutil or native OS copying. How can I improve the transfer speed? I know, of course, that you're limited by the drive's I/O, but in my testing (NVMe SSD) it's waaay slower than the drive can handle.

@wavic - I'd love to see code examples. Helps me learn, for one. But in my testing (code is deleted at this point), built-in Python functions halt at the copy/move line until the transfer is complete. Proving me wrong is always welcome.
Since this is an SSD you can increase the buffer size--for example, 10M.
Also, because of that buffer, you know exactly how many bytes have been copied. You can do the math yourself instead of calling os.stat on every iteration.
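Counting the bytes as they are written makes the per-iteration os.stat calls unnecessary. A small sketch with made-up demo files and a 10 MB buffer:

```python
import os
import tempfile

# Hypothetical demo files: a ~3 MB source.
src = os.path.join(tempfile.gettempdir(), 'buf_src.bin')
dst = os.path.join(tempfile.gettempdir(), 'buf_dst.bin')
with open(src, 'wb') as f:
    f.write(b'x' * (3 * 1024 * 1024))

buff = 10 * 1024 * 1024   # 10M buffer; large buffers suit fast SSDs
total = os.stat(src).st_size
copied = 0

with open(src, 'rb') as src_f, open(dst, 'wb') as dst_f:
    while True:
        chunk = src_f.read(buff)
        if not chunk:
            break
        dst_f.write(chunk)
        copied += len(chunk)   # track progress arithmetically, no os.stat needed
        print(f'\r{copied / total * 100:.1f}%', end='')
print()
```

The running `copied` counter is exact, whereas os.stat on the destination can lag behind buffered writes that have not been flushed yet.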

Or you can use tqdm.
Something like this:

import os
from tqdm import tqdm

src = './source/big_file.big'
dest = './path/big_file.big'

f_size = os.stat(src).st_size
buff = 10485760  # 1024**2 * 10 = 10M

num_chunks = f_size // buff + 1

try:
    with open(src, 'rb') as src_f, open(dest, 'wb') as dest_f:
        for _ in tqdm(range(num_chunks)):
            chunk = src_f.read(buff)
            dest_f.write(chunk)
except IOError as e:
    print(f'Copy failed: {e}')
else:
    print(f'Done! Copied {f_size} bytes.')
It will give you a nice progress bar.

I haven't tested the script.
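For byte-accurate progress, tqdm also supports manual updates: pass total= the file size and advance the bar by each chunk's length. A sketch along the same lines (tqdm is a third-party package, pip install tqdm; the file names and 2 MB size are made up for the demo):

```python
import os
import tempfile
from tqdm import tqdm  # third-party: pip install tqdm

# Hypothetical demo files.
src = os.path.join(tempfile.gettempdir(), 'tqdm_src.bin')
dst = os.path.join(tempfile.gettempdir(), 'tqdm_dst.bin')
with open(src, 'wb') as f:
    f.write(b'y' * (2 * 1024 * 1024))

buff = 1024 * 1024
f_size = os.stat(src).st_size

# total=f_size with unit scaling shows progress in bytes (KB/MB), not chunks.
with open(src, 'rb') as src_f, open(dst, 'wb') as dest_f, \
        tqdm(total=f_size, unit='B', unit_scale=True) as bar:
    while True:
        chunk = src_f.read(buff)
        if not chunk:
            break
        dest_f.write(chunk)
        bar.update(len(chunk))  # advance the bar by bytes actually copied
```

Counting chunks works, but the last chunk is usually short, so a byte-based bar reaches exactly 100% where a chunk-based one can be slightly off.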
Great, wavic. I will look into this code further. I wonder what "tqdm" is (I'll be Googling it too)? Does it require installing a third-party library? Also, I have seen but never read about "try", "except" and "finally"... maybe a little explanation of the code? I'm quite new to this.
In my post, tqdm is a link. Just go and see it.
The try/except construction lets you catch errors, or one specific error. A simple example will explain it better than I can; my English is not as good as I'd like.

try:
    num = float(input('Input a number: '))
    num2 = float(input('A second please: '))
    result = num / num2
    print(f'{num} / {num2} = {result}')
except ValueError:
    print('Incorrect input. Abort!')
except ZeroDivisionError:
    print('Zero division not allowed. Abort!')
else:
    print('We did some nice math. Bye!')
So you try to run a block of code, and if there is an error you catch it and can take some action.
This can replace if/elif/else in most cases.
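Since "finally" was asked about too: a finally clause runs whether or not an exception was raised, which makes it the natural place for cleanup. A tiny sketch (the function name is made up for illustration):

```python
def safe_divide(a, b):
    """Divide a by b; the finally clause runs on success and on error alike."""
    try:
        result = a / b
    except ZeroDivisionError:
        print('Zero division not allowed.')
        result = None
    finally:
        print('This line runs no matter what.')
    return result


safe_divide(10, 2)   # prints the finally line, returns 5.0
safe_divide(1, 0)    # prints both messages, returns None
```

With files you rarely need this by hand, because the with statement already guarantees the file is closed, exception or not.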
