May-16-2018, 03:20 PM
All right! Lars60+ - I've made your change - awesome!
wavic - I've made your change - awesome!
Now I've got a traceback error.
Error: RESTART: C:\Users\toliver\AppData\Local\Programs\Python\Python36\WOGCC_File_Downloads.py
Traceback (most recent call last):
File "C:\Users\toliver\AppData\Local\Programs\Python\Python36\WOGCC_File_Downloads.py", line 71, in <module>
GetCompletions('api.txt')
File "C:\Users\toliver\AppData\Local\Programs\Python\Python36\WOGCC_File_Downloads.py", line 12, in __init__
self.log.pdfpath = self.homepath / 'comppdf'
AttributeError: 'GetCompletions' object has no attribute 'log'
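(The error is not about missing folders or the 'Add Path' option; self.log is an attribute that is never created before it is used. A minimal, self-contained reproduction of the same AttributeError looks like this — Demo is a made-up class name, not part of the real script:)

```python
# Minimal reproduction of the AttributeError in the traceback above.
class Demo:
    def __init__(self):
        # self.log was never assigned, so Python cannot look up an
        # attribute on it; folders and PATH are not involved at all.
        self.log.pdfpath = 'comppdf'

try:
    Demo()
except AttributeError as err:
    print(err)  # 'Demo' object has no attribute 'log'
```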
Does this mean I need to make the folders? When I installed Python, I selected the 'Add Path' feature. Does this have something to do with this error?

Here is the completely changed code -
import requests
from bs4 import BeautifulSoup
from pathlib import Path


class GetCompletions:
    def __init__(self, infile):
        """Create the comppdf, geocorepdf, and text folders wherever
        WOGCC_File_Downloads.py is run from, as well as a text file for
        my api file to reside in.
        """
        self.homepath = Path('.')
        # Was self.log.pdfpath: there is no self.log attribute, which is
        # what raised the AttributeError. Plain attributes work, and the
        # two PDF folders need distinct names or the second assignment
        # clobbers the first.
        self.comppdfpath = self.homepath / 'comppdf'
        self.comppdfpath.mkdir(exist_ok=True)
        self.geocorepdfpath = self.homepath / 'geocorepdf'
        self.geocorepdfpath.mkdir(exist_ok=True)
        self.textpath = self.homepath / 'text'
        self.textpath.mkdir(exist_ok=True)  # was self.text.mkdir(...)

        self.infile = self.textpath / infile
        self.apis = []  # was self.api, but get_url iterates self.apis
        self.parse_and_save(getpdfs=True)

    def get_url(self):
        """Yield the URLs that match my API numbers."""
        for entry in self.apis:
            # Placeholders were '[]'; str.format only fills '{}'.
            yield (entry, 'http://wogcc.state.wy.us/wyocomp.cfm?nAPI={}'.format(entry[3:10]))
            yield (entry, 'http://wogcc.state.wy.us/whatupcores.cfm?autonum={}'.format(entry[3:10]))

    def parse_and_save(self, getpdfs=False):
        # NOTE: filelist and self.fields are not defined anywhere in the
        # code posted here, so this method will still fail until they are.
        for file in filelist:
            with file.open('r') as f:
                soup = BeautifulSoup(f.read(), 'lxml')
            if getpdfs:
                links = soup.find_all('a')
                for link in links:
                    url = link['href']  # was 'url in link['href']', a SyntaxError
                    if 'www' in url:
                        continue
                    print('downloading pdf at: {}'.format(url))
                    p = url.index('=')
                    response = requests.get(url, stream=True, allow_redirects=False)
                    if response.status_code == 200:
                        try:
                            header_info = response.headers['Content-Disposition']
                            idx = header_info.index('filename')
                            filename = self.comppdfpath / header_info[idx + 9:]
                        except ValueError:
                            filename = self.comppdfpath / 'comp{}'.format(url[p + 1:])
                            print("couldn't locate filename for {} will use: {}".format(file, filename))
                        except KeyError:
                            filename = self.comppdfpath / 'comp{}.pdf'.format(url[p + 1:])
                            print('got KeyError on {}, response.headers = {}'.format(file, response.headers))
                            print('will use name: {}'.format(filename))
                        print(response.headers)  # was 'repsonse'
                        with filename.open('wb') as f:
                            f.write(response.content)  # was 'respnse'
            sfname = self.textpath / 'summary_{}.txt'.format((file.name.split('_'))[1].split('.')[0][3:10])
            tds = soup.find_all('td')
            with sfname.open('w') as f:
                for td in tds:
                    if td.text:
                        if any(field in td.text for field in self.fields):
                            f.write('{}\n'.format(td.text))


if __name__ == '__main__':
    GetCompletions('api.txt')
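One detail worth flagging in the URL templates: str.format substitutes only into '{}' placeholders, so square brackets pass through as literal text and the API digits never make it into the URL. A quick illustration (the API number below is made up):

```python
api = '4900512345'  # made-up API number, for illustration only
bad = 'http://wogcc.state.wy.us/wyocomp.cfm?nAPI=[]'.format(api[3:10])
good = 'http://wogcc.state.wy.us/wyocomp.cfm?nAPI={}'.format(api[3:10])
print(bad)   # http://wogcc.state.wy.us/wyocomp.cfm?nAPI=[]
print(good)  # http://wogcc.state.wy.us/wyocomp.cfm?nAPI=0512345
```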