Python Forum

Full Version: How to check HTTP error 500 and bypass
Hi

I want to catch data from the web, but sometimes the page reloads forever (without any error), and sometimes it hits a firewall issue (HTTP error 500).

With the code below, when an HTTP error 500 occurs it still reads the data, but I want to skip it. I only want to read data when there is no error or exception.

from urllib.request import Request, urlopen
from urllib.error import URLError, HTTPError
import requests

webinput='https://www.google.com/'

try:
    f = requests.get(webinput, timeout=10).text
    if len(f) > 0:
        print('Web connection is fine')
        a = f  # keep the page text
    else:
        print('Web server issue')
except HTTPError as e:
    print('Error code:', e.code)
except URLError as e:
    print('We failed to reach the server')
    print('Reason:', e.reason)
No need to use the urllib exceptions; requests has its own errors and exceptions, and requests.get() will never raise urllib's HTTPError or URLError, so those except blocks never fire.
Quote: All exceptions that Requests explicitly raises inherit from requests.exceptions.RequestException.
So a catch-all error scenario could be:
import requests
from requests.exceptions import RequestException

url1 = "http://www.not_exist.com/"       # DNS failure -> ConnectionError
url2 = "https://httpbin.org/status/500"  # HTTP 500 -> HTTPError from raise_for_status()
url3 = "https://httpbin.org/delay/15"    # slower than the 10s timeout -> Timeout
url4 = 'https://www.google.com/'         # ok
try:
    response = requests.get(url4, timeout=10)
    response.raise_for_status()
    print(response.status_code) # All ok 200
except RequestException as error:
    print(error)
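If you still want the numeric error code, as in your urllib version: requests.exceptions.HTTPError (what raise_for_status() raises for 4xx/5xx responses) also inherits from RequestException, so you can catch it first. A minimal sketch, reusing the httpbin 500 URL from above:

import requests
from requests.exceptions import HTTPError, RequestException

url2 = "https://httpbin.org/status/500"
try:
    response = requests.get(url2, timeout=10)
    response.raise_for_status()  # turns a 4xx/5xx status into HTTPError
    print(response.status_code)
except HTTPError as error:
    # raise_for_status() attaches the failed response to the exception
    print('Error code:', error.response.status_code)
except RequestException as error:
    # everything else: DNS failures, timeouts, connection resets, ...
    print('Reason:', error)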
I want to bypass all exceptions and read data only when all is OK (200).
You can pass on them all and only do a check for 200.
import requests
from requests.exceptions import RequestException

url1 = "http://www.not_exist.com/"
url2 = "https://httpbin.org/status/500"
url3 = "https://httpbin.org/delay/15"
url4 = 'https://www.google.com/' # ok
try:
    response = requests.get(url4, timeout=10)
    if response.status_code == 200:
        print(response.status_code) # All ok 200
except RequestException:
    pass  # connection errors, timeouts, etc. are silently skipped
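Tying this back to actually reading the data, not just printing the status code: you could wrap the check in a small helper that returns the page text only on 200 and None otherwise. A minimal sketch; fetch is just an illustrative name:

import requests
from requests.exceptions import RequestException

def fetch(url):
    """Return the page text on HTTP 200, else None."""
    try:
        response = requests.get(url, timeout=10)
        if response.status_code == 200:
            return response.text
    except RequestException:
        pass  # connection errors, timeouts, etc. fall through to None
    return None

data = fetch('https://www.google.com/')
if data is not None:
    print('Web connection is fine,', len(data), 'characters read')
else:
    print('Web server issue or no connection')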