Nov-13-2018, 12:49 AM
I'm reading Web Scraping with Python by Ryan Mitchell and trying to use requests instead of urllib. When dealing with exceptions, if the issue is that the page doesn't exist, this code works well:
I may run into more issues with exceptions in the future, so this thread will be a good place for them...
import requests
from bs4 import BeautifulSoup

try:
    html = requests.get("http://pythonscraping.com/pages/p1.html")
    html.raise_for_status()
except requests.exceptions.HTTPError as e:
    print(e)
Output: 404 Client Error: Not Found for url: http://pythonscraping.com/pages/p1.html
But what to do when the server can't be reached at all? urllib has the URLError exception, but the requests module doesn't provide it.
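For what it's worth, a minimal sketch of how this could be handled with requests: the rough equivalent of urllib's URLError is requests.exceptions.ConnectionError, and every exception requests raises inherits from requests.exceptions.RequestException, so that base class can serve as a catch-all. The fetch helper name and the specific URL are just assumptions for illustration.

```python
import requests

def fetch(url):
    """Return the response, or None if the request failed for any reason."""
    try:
        resp = requests.get(url, timeout=10)
        resp.raise_for_status()  # raises HTTPError for 4xx/5xx statuses
        return resp
    except requests.exceptions.ConnectionError as e:
        # server unreachable: DNS failure, connection refused, etc.
        print("Server could not be reached:", e)
    except requests.exceptions.Timeout as e:
        print("Request timed out:", e)
    except requests.exceptions.HTTPError as e:
        print("Bad status code:", e)
    except requests.exceptions.RequestException as e:
        # base class for everything else requests can raise
        print("Other request failure:", e)
    return None

# hypothetical example: port 1 is almost never listening, so this
# should fail with ConnectionError and return None
fetch("http://localhost:1")
```

Ordering matters here: the more specific exception classes have to be caught before RequestException, since they all subclass it.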