Hello everyone,
I'm trying to use:
with concurrent.futures.ProcessPoolExecutor() as executor:
but the code just keeps proceeding before the parallel processing finishes. I've searched online, but the solution wasn't clear to me.
Any hints would be highly appreciated.
Thanks
I can give you a working example that I have used in a tutorial before.
This downloads 200 images from xkcd in about 10 seconds.
Without concurrent.futures the same task takes about 1.15 minutes.
import requests
from bs4 import BeautifulSoup
import concurrent.futures
import os

def image_down(url):
    url_get = requests.get(url)
    soup = BeautifulSoup(url_get.content, 'lxml')
    link = soup.find('div', id='comic').find('img').get('src')
    link = link.replace('//', 'http://')
    img_name = os.path.basename(link)
    img = requests.get(link)
    with open(img_name, 'wb') as f_out:
        f_out.write(img.content)

if __name__ == '__main__':
    start_img = 1
    stop_img = 200
    with concurrent.futures.ProcessPoolExecutor(max_workers=10) as executor:
        for numb in range(start_img, stop_img):
            url = f'http://xkcd.com/{numb}/'
            executor.submit(image_down, url)
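A point worth stressing for the original question: the with-block does not exit until every submitted task has finished, so code after it never runs early. Here is a minimal runnable sketch of that behavior — work() is just a placeholder I made up to stand in for the real download function:

```python
import concurrent.futures
import time

def work(n):
    # Placeholder for real work such as downloading an image.
    time.sleep(0.1)
    return n * n

if __name__ == '__main__':
    with concurrent.futures.ProcessPoolExecutor(max_workers=4) as executor:
        for n in range(8):
            # submit() returns immediately; the task runs in a worker process.
            executor.submit(work, n)
    # Exiting the with-block calls executor.shutdown(wait=True),
    # so this line only runs after all 8 tasks are done.
    print('all tasks finished')
```

If code still seems to run before the tasks finish, it is usually because that code sits inside the with-block rather than after it.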
I'm using the following:
if __name__ == '__main__':
    with concurrent.futures.ProcessPoolExecutor() as executor:
        f_map = executor.map(functionname, [input])
(Sep-01-2021, 04:49 PM)thunderspeed Wrote: I'm using the following:
That is not enough info.
With just [input] as the map argument, what do you think this will do?
If I were to use map() in my code, it could be done like this.
if __name__ == '__main__':
    with concurrent.futures.ProcessPoolExecutor(max_workers=10) as executor:
        executor.map(image_down, [f'http://xkcd.com/{numb}/' for numb in range(1, 200)])
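One thing map() gives you that submit() does not: it yields results back in the same order as the inputs. A small sketch, using a made-up double() function in place of the real worker:

```python
import concurrent.futures

def double(n):
    # Placeholder for the real worker function.
    return n * 2

if __name__ == '__main__':
    with concurrent.futures.ProcessPoolExecutor(max_workers=4) as executor:
        # map() spreads the iterable across the workers and
        # returns results in input order, like the built-in map().
        results = list(executor.map(double, range(5)))
    print(results)  # [0, 2, 4, 6, 8]
```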
So now a list comprehension is used to make a list with URLs.
I prefer submit() as in the first code.
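The reason I lean toward submit() is that its Future objects pair well with as_completed(), which hands you each result as soon as it is ready instead of in input order. A minimal sketch, again with a placeholder work() function:

```python
import concurrent.futures

def work(n):
    # Placeholder for the real worker function.
    return n * n

if __name__ == '__main__':
    with concurrent.futures.ProcessPoolExecutor(max_workers=4) as executor:
        # Map each Future back to its input so results can be identified.
        futures = {executor.submit(work, n): n for n in range(5)}
        # as_completed() yields each Future the moment it finishes,
        # regardless of submission order.
        for fut in concurrent.futures.as_completed(futures):
            print(f'{futures[fut]} -> {fut.result()}')
```

This also makes per-task error handling easy: fut.result() re-raises any exception the task hit, so you can catch it per URL.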