Python Forum
requests - handling exceptions - Printable Version

+- Python Forum (https://python-forum.io)
+-- Forum: Python Coding (https://python-forum.io/forum-7.html)
+--- Forum: Web Scraping & Web Development (https://python-forum.io/forum-13.html)
+--- Thread: requests - handling exceptions (/thread-14054.html)



requests - handling exceptions - Truman - Nov-13-2018

I'm reading Web Scraping with Python by Ryan Mitchell and trying to use requests instead of urllib. When dealing with exceptions, if the issue is that the page doesn't exist, this code works well:
import requests
from bs4 import BeautifulSoup
try:
    html = requests.get("http://pythonscraping.com/pages/p1.html")
    html.raise_for_status()
except requests.exceptions.HTTPError as e:
    print(e)
Output:
404 Client Error: Not Found for url: http://pythonscraping.com/pages/p1.html
but what should I do when the server can't be reached at all? urllib has the URLError exception, but the requests module doesn't accept it.

I may have more issues with exceptions in the future, and this topic will be a suitable place for them...


RE: requests - handling exceptions - snippsat - Nov-13-2018

Look at A Python guide to handling HTTP request failures.


RE: requests - handling exceptions - Truman - Nov-13-2018

Thank you, I assume that the answer is ConnectionError.
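
For the record, here's a minimal sketch of catching an unreachable server separately from a bad status code. The `fetch` helper and the URL are illustrative, not from the book; `.invalid` is a reserved top-level domain that never resolves, so it reliably triggers ConnectionError:

```python
import requests

def fetch(url):
    """Return page text, or None if the request fails.

    ConnectionError covers DNS failures and refused connections;
    HTTPError covers 4xx/5xx status codes raised by raise_for_status().
    """
    try:
        resp = requests.get(url, timeout=5)
        resp.raise_for_status()
    except requests.exceptions.ConnectionError as e:
        print("Server could not be reached:", e)
        return None
    except requests.exceptions.HTTPError as e:
        print("HTTP error:", e)
        return None
    return resp.text

# A hostname in the reserved .invalid TLD never resolves:
fetch("http://nonexistent.example.invalid/")
```

Both exceptions inherit from requests.exceptions.RequestException, so you can also catch that base class if you don't need to distinguish the two cases.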