Haven't run your code just yet... doing some much-needed cleaning, but at a quick look before I reboot and run it: you did not open the string in url = '...' -- there's a closing quote but no opening one.
(Jan-24-2017, 12:54 PM)chrisdas Wrote: Hi All, Not sure why my crawler isn't working. It simply pulls out the href, the brand, and the fit of t-shirts from a website. It gets the fit correct, but the href and the brand just repeat the same values for every output. Can't find the error. Thanks, Chris
I've had to remove the http and www from in front of 'theiconic' as it wouldn't let me post with web links.
import requests
from bs4 import BeautifulSoup

def iconic_spider(max_pages):
    page = 1
    while page <= max_pages:
        url = theiconic.com.au/mens-clothing-tshirts-singlets/?page=' + str(page)
        source_code = requests.get(url)
        plain_text = source_code.text
        soup = BeautifulSoup(plain_text, "html.parser")
        for link in soup.findAll('a', {'class': 'product-details'}):
            href = theiconic.com.au/' + link.get('href')
        for link in soup.findAll('span', {'class': 'brand'}):
            brand = link.string
        for link in soup.findAll('span', {'class': 'name'}):
            fit = link.string
            print(href)
            print(brand)
            print(fit)
        page += 1

iconic_spider(2)

OKAY! I got your code to work with a couple of edits... simple mistakes, really. But before I point them out, I'd ask you to run your script and read the error: in my experience, 97% of the time the immediate underlying error is stated at the very beginning or end of the stack trace...
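Beyond the missing opening quotes on the two strings, the repeating output comes from the loop structure itself: the first two loops run to completion before anything is printed, so href and brand are left holding only their *last* values while the print statements inside the third loop reuse them for every fit. One way to pair each product's href, brand, and fit together is to zip the three result lists. Here is a minimal sketch of that fix using inline sample HTML instead of a live request (the class names and markup are taken from the posted code; the real page structure on theiconic.com.au may differ):

```python
from bs4 import BeautifulSoup

# Stand-in HTML for one page of results -- structure assumed from the
# selectors in the original script, not fetched from the live site.
html = """
<a class="product-details" href="/shirt-a">link</a>
<span class="brand">BrandA</span><span class="name">Slim Fit</span>
<a class="product-details" href="/shirt-b">link</a>
<span class="brand">BrandB</span><span class="name">Regular Fit</span>
"""

soup = BeautifulSoup(html, "html.parser")

# Collect the three lists once, then zip them so each href is printed
# alongside its own brand and fit, rather than after the loops finish.
links = soup.find_all('a', {'class': 'product-details'})
brands = soup.find_all('span', {'class': 'brand'})
names = soup.find_all('span', {'class': 'name'})

for link, brand, name in zip(links, brands, names):
    href = 'theiconic.com.au' + link.get('href')
    print(href, brand.string, name.string)
```

In the real script the same zip loop would replace the three separate for loops inside iconic_spider, with the page HTML coming from requests.get(url).text as before. Note that zip silently pairs by position, so it assumes the page lists one brand and one name span per product link.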