Python Forum

Full Version: Can't open Amazon page

Here is a simple piece of code that gets an error:
from urllib.request import urlopen
from bs4 import BeautifulSoup

url = ''

html = urlopen(url)
============ RESTART: /home/pavel/python_code/ ============
Traceback (most recent call last):
File "/home/pavel/python_code/", line 6, in <module>
html = urlopen(url)

Where is the problem?

You're going to need to post the entire traceback, as the piece you've shown doesn't say what the problem is.
Use Requests rather than urllib; you also need a user agent, or you'll get a 503.
You will also need Selenium, as Amazon uses a lot of JavaScript.

Here is a demo with Requests:
import requests
from bs4 import BeautifulSoup

url = ''
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.75 Safari/537.36'}
response = requests.get(url, headers=headers)
soup = BeautifulSoup(response.content, 'lxml')
Output:
>>> response
<Response [200]>
>>> soup.p
<p class="a-last">Sorry, we just need to make sure you're not a robot. For best results, please make sure your browser is accepting cookies.</p>
So now we get a 200, but as you can see, we now need a browser and cookies.
This is where Selenium comes into the picture; search the forum for it, and you can also look at web-scraping part-2.
You can also send cookies with requests. That being said, Selenium may well be the best option.

If you haven't used requests-html, I would recommend looking at that for anything JavaScript related. It's a great tool and sits in between Requests and Selenium.