Jan-05-2021, 04:46 PM
Holy crap dude. haha. seriously, holy crap. hahaha. It's beautiful. You must just laugh at how easy that is. You're good man. Mind Blown.
Since I got the other one to (mostly) work, I started looking at fixing the multi-threaded one of the same model. I wasn't as much help on this one. Take a look?
Original code:
##########################################
####### This is section for the main imports
import requests
import os
from bs4 import BeautifulSoup
from tqdm import tqdm
from multiprocessing.pool import ThreadPool

def save_image(tag):
    dlthis = ('https:' + tag['href'])
    print(dlthis)
    path = os.path.join(folder, tag['download'])
    myfile = requests.get(dlthis, allow_redirects=True, stream=True)
    ##########################################
    ####### Section for Saving Files, both work
    # with open(path, 'wb') as f:
    #     f.write(myfile.content)
    open(path, 'wb').write(myfile.content)
    ##########################################

if __name__ == '__main__':
    ##########################################
    ####### This is section for choosing site and save folder
    url = ''
    folder = ''
    url = input("Website:")
    folder = input("Folder:")
    if not os.path.isdir(folder):
        os.makedirs(folder)
    ##########################################
    ####### This section I have NO idea what it does. :) Sets parser for sure
    r = requests.get(url, stream=True)
    data = r.text
    soup = BeautifulSoup(data, features="lxml")
    ##########################################
    ####### This section grabs all pictures tagged download and makes folders
    images = soup.select('a.parent[download]')
    ThreadPool().map(save_image, images)
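About the section you said you have no idea about: requests.get pulls down the raw HTML, and BeautifulSoup parses it into a tree you can query with CSS selectors. A minimal sketch of what that selector is doing (the HTML snippet here is made up for illustration, and it uses the stdlib html.parser so it runs even without lxml installed):

```python
from bs4 import BeautifulSoup

# A tiny stand-in for the HTML that requests.get(url).text would return
data = """
<html><body>
  <a class="parent" download="cat.jpg" href="//example.com/img/cat.jpg">cat</a>
  <a class="parent" href="//example.com/page2">next page</a>
  <a class="other" download="dog.jpg" href="//example.com/img/dog.jpg">dog</a>
</body></html>
"""

soup = BeautifulSoup(data, features="html.parser")

# 'a.parent[download]' means: <a> tags with class "parent" AND a download
# attribute, so the plain "next page" link and the wrong-class link are skipped
images = soup.select('a.parent[download]')
for tag in images:
    print(tag['download'], '->', 'https:' + tag['href'])
```

So `images` ends up holding only the download links, and each tag still carries its `href` and `download` attributes for save_image to use.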
And my bastardized way of trying to get your fix to work on it.
##########################################
####### This is section for the main imports
import requests
import os
from bs4 import BeautifulSoup
from tqdm import tqdm
from multiprocessing.pool import ThreadPool

def save_image(item):
    # .map() hands this function one element per call; each element is a
    # (number, tag) pair, so unpack it here
    number, tag = item
    dlthis = tag.get('href')
    namestr = name + " " + str(number) + ".jpg"
    path = os.path.join(folder, namestr)
    myfile = requests.get(dlthis, allow_redirects=True, stream=True)
    ##########################################
    ####### Section for Saving Files, both work
    # with open(path, 'wb') as f:
    #     f.write(myfile.content)
    open(path, 'wb').write(myfile.content)
    ##########################################

if __name__ == '__main__':
    ##########################################
    ####### This is section for choosing site and save folder
    url = input("Website:")
    folder = input("Folder:")
    name = input("Name:")
    headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.75 Safari/537.36'}
    if not os.path.isdir(folder):
        os.makedirs(folder)
    ##########################################
    ####### Fetch the page and hand the HTML to the lxml parser
    response = requests.get(url, headers=headers)
    soup = BeautifulSoup(response.content, 'lxml')
    ##########################################
    ####### This section grabs every download link inside the image boxes
    images = soup.select('div.thread_image_box > a')
    # enumerate() pairs each tag with a unique running number, so the
    # threads don't all build the same filename
    ThreadPool().map(save_image, enumerate(images, start=1))

I think there is an issue with the ".map" and then the "img.get" in the function.
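Your instinct about .map and img.get is the right place to look: map() calls the function once per element of the iterable, so the function has to work with exactly that element, and any per-file number has to travel inside it. Here is a stdlib-only sketch of that pattern with the download swapped out for a fake_save stand-in (all names here are made up for illustration):

```python
from multiprocessing.pool import ThreadPool

saved = []

def fake_save(item):
    # .map() calls this once per element; each element is a (number, tag) pair
    number, tag = item
    saved.append(tag + " " + str(number) + ".jpg")

# Stand-ins for the <a> tags that soup.select() would return
tags = ['alpha', 'beta', 'gamma']

# enumerate() attaches a unique running number to each tag before the pool
# fans the pairs out to worker threads, so no two threads share a filename
with ThreadPool() as pool:
    pool.map(fake_save, enumerate(tags, start=1))

print(sorted(saved))
```

The threads may finish in any order, but each one got its own (number, tag) pair, which is why the shared `number` variable from the original version isn't needed.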