web crawler problems - Printable Version

+- Python Forum (https://python-forum.io)
+-- Forum: Python Coding (https://python-forum.io/forum-7.html)
+--- Forum: Web Scraping & Web Development (https://python-forum.io/forum-13.html)
+--- Thread: web crawler problems (/thread-19147.html)
web crawler problems - kid_with_polio - Jun-14-2019

Hi, I followed along with a web crawler tutorial on YouTube, but I can't get the code to work even though I used an updated website. I have all the required packages installed. At first I got a traceback from BeautifulSoup() saying it needed an additional argument, features="html.parser". I added that, and now I get no traceback at all, but also no output. Does anyone have a solution to this issue, or know what causes it?

RE: web crawler problems - micseydel - Jun-14-2019

Please post code in code tags; we pretty much never want to see images of text.

RE: web crawler problems - kid_with_polio - Jun-14-2019

import requests
from bs4 import BeautifulSoup


def trade_spider(max_pages):
    page = 1
    while page <= max_pages:
        url = 'http://books.toscrape.com/catalogue/page-' + str(page) + '.html'
        source_code = requests.get(url)
        plain_text = source_code.text
        soup = BeautifulSoup(plain_text, features="html.parser")
        for link in soup.findAll('a', {'class': 'title'}):
            href = link.get('href')
            print(href)
        page += 1


trade_spider(1)

Here's the traceback:

"C:\Users\Jake\PycharmProjects\practice baby\venv\Scripts\python.exe" "C:/Users/Jake/PycharmProjects/practice baby/web crawler 2.py"

Process finished with exit code 0

RE: web crawler problems - metulburr - Jun-15-2019

That is not an error; it just means that nothing is being printed at all. Your find_all() didn't match any hrefs: there are no <a> tags with class "title" on that site, so the loop body never runs. Try this code:

import requests
from bs4 import BeautifulSoup


def trade_spider(max_pages):
    page = 1
    while page <= max_pages:
        url = 'http://books.toscrape.com/catalogue/page-{}.html'.format(page)
        source_code = requests.get(url)
        plain_text = source_code.text
        soup = BeautifulSoup(plain_text, "html.parser")
        books = soup.find_all('li', {'class': 'col-xs-6 col-sm-4 col-md-3 col-lg-3'})
        for book in books:
            a = book.find('a')
            link = a['href']
            title = a.find('img')['alt']
            print(link)
            print(title)
        page += 1


trade_spider(1)
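A quick way to see the difference between the two selectors, without hitting the network, is to run both against a small snippet of HTML. The snippet below is a hand-written approximation of the books.toscrape.com catalogue markup (an assumption, not a live fetch), but it is enough to show why the original call returned an empty list while the answer's version matches:

```python
from bs4 import BeautifulSoup

# Hand-written approximation of the books.toscrape.com catalogue structure
# (assumption: each book sits in an <li> containing image and title links).
html = """
<ul>
  <li class="col-xs-6 col-sm-4 col-md-3 col-lg-3">
    <article class="product_pod">
      <a href="a-light-in-the-attic_1000/index.html">
        <img src="thumb.jpg" alt="A Light in the Attic">
      </a>
      <h3><a href="a-light-in-the-attic_1000/index.html">A Light in ...</a></h3>
    </article>
  </li>
</ul>
"""

soup = BeautifulSoup(html, "html.parser")

# The original selector: no <a> tag here has class="title",
# so find_all() returns an empty list and a loop over it does nothing.
print(soup.find_all('a', {'class': 'title'}))   # []

# Selecting the <li> containers (as in the working answer) does match;
# BeautifulSoup treats the space-separated string as the exact class value.
books = soup.find_all('li', {'class': 'col-xs-6 col-sm-4 col-md-3 col-lg-3'})
for book in books:
    a = book.find('a')               # first link inside the <li>
    print(a['href'])                 # a-light-in-the-attic_1000/index.html
    print(a.find('img')['alt'])      # A Light in the Attic
```

The lesson generalizes: an empty result from find_all() is silent, so when a scraper prints nothing, print the result of the search itself (or its length) before looping over it.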