 requests - handling exceptions
#1
I'm reading Web Scraping with Python by Ryan Mitchell and trying to use requests instead of urllib. When dealing with exceptions, this code works well if the problem is that the page doesn't exist:
import requests
from bs4 import BeautifulSoup

try:
    html = requests.get("http://pythonscraping.com/pages/p1.html")
    html.raise_for_status()  # raises HTTPError for 4xx/5xx status codes
except requests.exceptions.HTTPError as e:
    print(e)
Output:
404 Client Error: Not Found for url: http://pythonscraping.com/pages/p1.html
but what should I do when the server can't be reached at all? urllib has the URLError exception, but the requests module doesn't use it.

I may run into more exception issues in the future, and this thread will be a suitable place for them...
#2
Look at "A Python guide to handling HTTP request failures".
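The short answer: when the server can't be reached, requests raises requests.exceptions.ConnectionError. All of its exceptions (ConnectionError, Timeout, HTTPError, ...) inherit from requests.exceptions.RequestException, so you can catch them individually or all at once. A rough sketch (the unreachable URL below is just a made-up placeholder, not a real site):

import requests

try:
    # a hostname that should fail to resolve, to trigger the connection failure
    html = requests.get("http://pythonscraping-does-not-exist.example/", timeout=5)
    html.raise_for_status()
except requests.exceptions.ConnectionError as e:
    print("Server could not be reached:", e)
except requests.exceptions.Timeout as e:
    print("Request timed out:", e)
except requests.exceptions.HTTPError as e:
    print("Page returned a bad status code:", e)
except requests.exceptions.RequestException as e:
    # base class for everything requests raises, as a catch-all
    print("Some other request failure:", e)

Note that RequestException has to come last: Python checks the except clauses in order, so the more specific classes must be listed before their base class.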
#3
Thank you. I assume the answer is requests.exceptions.ConnectionError.
