How do I avoid Beautiful Soup redirects? - Printable Version

+- Python Forum (https://python-forum.io)
+-- Forum: Python Coding (https://python-forum.io/forum-7.html)
+--- Forum: Web Scraping & Web Development (https://python-forum.io/forum-13.html)
+--- Thread: How do I avoid Beautiful Soup redirects? (/thread-6566.html)
How do I avoid Beautiful Soup redirects? - HiImNew - Nov-29-2017

```python
import bs4 as bs
import urllib.request

sauce = urllib.request.urlopen('https://globenewswire.com/Search/NewsSearch?lang=en&exchange=NYSE').read()
soup = bs.BeautifulSoup(sauce, 'lxml')
list = []  # note: this name shadows the built-in list
for div in soup.find_all('div', class_='results-link', limit=10):
    initialglobenewsnyseurls = 'https://globenewswire.com' + div.h1.a['href']
    list.append(initialglobenewsnyseurls)
a, b, c, d, e, f, g, h, i, j = list
```

So far this works. The only problem is that I have the exchange set to NYSE, but when I enter the URL as such, NYSE is removed from it: the URL is automatically redirected to https://globenewswire.com/NewsRoom. (If you copy and paste the original URL from the code into Chrome, it will redirect you to the main newsroom and remove any criteria you previously selected.) How can I keep this from happening?

RE: How do I avoid Beautiful Soup redirects? - metulburr - Nov-29-2017

What is the procedure of clicks that gets you to the search URL? In other words, how would I replicate how you got that URL? Also, do you have to be logged in to use their search? You should also use the requests module instead of the standard library. When I tried this:

```python
import requests

r = requests.get('https://globenewswire.com/Search/NewsSearch?lang=en&exchange=NYSE', allow_redirects=False)
r.content
```

the response was:

```
>>> r.content
b'<html><head><title>Object moved</title></head><body>\r\n<h2>Object moved to <a href="/NewsRoom">here</a>.</h2>\r\n</body></html>\r\n'
```

So it looks like the URL you have is old, or you must be logged in. Try using keyword: https://globenewswire.com/Search/NewsSearch?keyword=exchange

RE: How do I avoid Beautiful Soup redirects? - HiImNew - Nov-29-2017

You don't need to be logged in to access that URL. All you have to do is select 'NYSE' as one of your options. I tried searching with keyword; that isn't being redirected, and it works.
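As background to the URLs being traded back and forth above: the `?lang=en&exchange=NYSE` part is an ordinary query string, and the standard library can take it apart (or rebuild it). A minimal sketch, using only stdlib names; nothing here is from the thread itself:

```python
from urllib.parse import urlsplit, parse_qs

# The search URL from the original post.
url = 'https://globenewswire.com/Search/NewsSearch?lang=en&exchange=NYSE'

# parse_qs maps each query parameter to a list of its values.
params = parse_qs(urlsplit(url).query)
print(params)  # {'lang': ['en'], 'exchange': ['NYSE']}
```

Inspecting the parameters this way makes it easy to see exactly which criteria a given search URL carries before handing it to a scraper.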
However, searching with keyword won't give me all of the results, and it gives me some extraneous ones. Is there any way I can keep BeautifulSoup from redirecting URLs? Or perhaps go to the main site and then select 'NYSE'?

RE: How do I avoid Beautiful Soup redirects? - metulburr - Nov-29-2017

Quote: All you have to do is select 'NYSE' as one of your options.

What option? Be specific. I don't see that option anywhere.

RE: How do I avoid Beautiful Soup redirects? - HiImNew - Nov-29-2017

On the left side of the webpage there is a column, under the words 'Narrow By:', for selecting categories. When you scroll to the bottom of the page you reach the end of that column, where there is an option called 'Stock Market'. Clicking it reveals many options for selecting which stock market you specifically want. Clicking 'NYSE' selects it as part of your search criteria, reloads the page, and changes your URL. This is the webpage URL: https://globenewswire.com/NewsRoom

RE: How do I avoid Beautiful Soup redirects? - metulburr - Nov-30-2017

I am not getting redirected anymore: https://globenewswire.com/Search/NewsSearch?exchange=NYSE

RE: How do I avoid Beautiful Soup redirects? - wavic - Nov-30-2017

The address isn't changing because you choose an option from the menu; the address *is* the request you are sending, and the page you see is the answer. You can build your requests according to the web address schema. Look closely at the address bar and you will see how the URL is built.

RE: How do I avoid Beautiful Soup redirects? - HiImNew - Dec-01-2017

The redirect no longer happens for me when I paste the URL into my search bar. However, BeautifulSoup is still being redirected, because the results I get from it do not all match the selected criterion of 'NYSE'. Let me show you what I mean.
This is my input code:

```python
import bs4 as bs
import urllib.request

sauce = urllib.request.urlopen('http://globenewswire.com/Search/NewsSearch?exchange=NYSE').read()
soup = bs.BeautifulSoup(sauce, 'lxml')
list = []
for div in soup.find_all('div', class_='results-link', limit=10):
    initialglobenewsnasdaqurls = 'https://globenewswire.com' + div.h1.a['href']
    list.append(initialglobenewsnasdaqurls)
a, b, c, d, e, f, g, h, i, j = list

saucea = urllib.request.urlopen(a).read()
soupa = bs.BeautifulSoup(saucea, 'lxml')
sauceb = urllib.request.urlopen(b).read()
soupb = bs.BeautifulSoup(sauceb, 'lxml')
saucec = urllib.request.urlopen(c).read()
soupc = bs.BeautifulSoup(saucec, 'lxml')
sauced = urllib.request.urlopen(d).read()
soupd = bs.BeautifulSoup(sauced, 'lxml')
saucee = urllib.request.urlopen(e).read()
soupe = bs.BeautifulSoup(saucee, 'lxml')
saucef = urllib.request.urlopen(f).read()
soupf = bs.BeautifulSoup(saucef, 'lxml')
sauceg = urllib.request.urlopen(g).read()
soupg = bs.BeautifulSoup(sauceg, 'lxml')
sauceh = urllib.request.urlopen(h).read()
souph = bs.BeautifulSoup(sauceh, 'lxml')
saucei = urllib.request.urlopen(i).read()
soupi = bs.BeautifulSoup(saucei, 'lxml')
saucej = urllib.request.urlopen(j).read()
soupj = bs.BeautifulSoup(saucej, 'lxml')

desca = soupa.find_all(attrs={"name": "ticker"}, limit=1)
decodedtickera = desca[0]['content'].encode('utf-8').decode('utf-8')
soupatitle = soupa.title.text
descb = soupb.find_all(attrs={"name": "ticker"}, limit=1)
decodedtickerb = descb[0]['content'].encode('utf-8').decode('utf-8')
soupbtitle = soupb.title.text
descc = soupc.find_all(attrs={"name": "ticker"}, limit=1)
decodedtickerc = descc[0]['content'].encode('utf-8').decode('utf-8')
soupctitle = soupc.title.text
descd = soupd.find_all(attrs={"name": "ticker"}, limit=1)
decodedtickerd = descd[0]['content'].encode('utf-8').decode('utf-8')
soupdtitle = soupd.title.text
desce = soupe.find_all(attrs={"name": "ticker"}, limit=1)
decodedtickere = desce[0]['content'].encode('utf-8').decode('utf-8')
soupetitle = soupe.title.text
descf = soupf.find_all(attrs={"name": "ticker"}, limit=1)
decodedtickerf = descf[0]['content'].encode('utf-8').decode('utf-8')
soupftitle = soupf.title.text
descg = soupg.find_all(attrs={"name": "ticker"}, limit=1)
decodedtickerg = descg[0]['content'].encode('utf-8').decode('utf-8')
soupgtitle = soupg.title.text
desch = souph.find_all(attrs={"name": "ticker"}, limit=1)
decodedtickerh = desch[0]['content'].encode('utf-8').decode('utf-8')
souphtitle = souph.title.text
desci = soupi.find_all(attrs={"name": "ticker"}, limit=1)
decodedtickeri = desci[0]['content'].encode('utf-8').decode('utf-8')
soupititle = soupi.title.text
descj = soupj.find_all(attrs={"name": "ticker"}, limit=1)
decodedtickerj = descj[0]['content'].encode('utf-8').decode('utf-8')
soupjtitle = soupj.title.text
```

Then I go to the results of what I parsed. I print the stock ticker, which also prints the stock exchange. They should all be listed on the NYSE, because that is my search criterion, yet these are my results:

```
>>> print(decodedtickera)
NYSE:PGH, TSX:PGF
>>> print(decodedtickerb)
TSX-V:TIC
>>> print(decodedtickerc)

>>> print(decodedtickerd)

>>> print(decodedtickere)
NYSE:BSCI, NYSE:BSCJ, NYSE:BSCK, NYSE:BSCH, NYSE:GSY, NYSE:BSCL, NYSE:BSCM, NYSE:BSCN, NYSE:BSCO, NYSE:BSCP, NYSE:GTO, NYSE:BSCQ
>>> print(decodedtickerf)

>>> print(decodedtickerg)
Nasdaq:BMTC, Nasdaq:RBPAA
>>> print(decodedtickerh)
Nasdaq:VBTX
>>> print(decodedtickeri)
TSX:XAU, TSX-V:AGX-H.V
>>> print(decodedtickerj)

```

I know that BeautifulSoup is being redirected from the URL with the search criteria (http://globenewswire.com/Search/NewsSearch?exchange=NYSE) to the main page (http://globenewswire.com/NewsRoom).
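A workaround for mixed results like these, whatever their cause, is to filter the parsed ticker fields after the fact. A minimal sketch; the helper name `nyse_only` is made up for illustration and is not from the thread:

```python
def nyse_only(ticker_field):
    """Given a ticker meta field like 'NYSE:PGH, TSX:PGF',
    return only the entries listed on the NYSE."""
    # Split the comma-separated field and drop empty pieces.
    parts = [t.strip() for t in ticker_field.split(',') if t.strip()]
    # Keep only tickers whose exchange prefix is NYSE.
    return [t for t in parts if t.startswith('NYSE:')]

print(nyse_only('NYSE:PGH, TSX:PGF'))          # ['NYSE:PGH']
print(nyse_only('Nasdaq:BMTC, Nasdaq:RBPAA'))  # []
```

Applied to each `decodedticker*` value above, this would discard the TSX, TSX-V, and Nasdaq entries that should not have matched the search.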
I know this because not all of my results have 'NYSE' in the ticker. This search is returning some stocks from the Nasdaq exchange and others from the TSX and TSX-V exchanges. The redirect stopped happening in my browser, but BeautifulSoup is still being redirected.

RE: How do I avoid Beautiful Soup redirects? - wavic - Dec-02-2017

Did you try changing the User-Agent?

```python
headers = {'User-Agent': "Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1b3pre) Gecko/20090109 Shiretoko/3.1b3pre"}
response = requests.get(url, headers=headers)
```

RE: How do I avoid Beautiful Soup redirects? - HiImNew - Dec-02-2017

Quote: response = requests.get(url, headers=headers)

I have never used curly brackets in Python before, and I do not know what response and requests are. Could you show me how that line would fit into my original code?
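To answer the last question: the curly brackets build a dict (a mapping from header names to values), requests is a third-party HTTP library (installed with pip install requests), and response is the object it returns. A minimal sketch of how wavic's two lines could slot into the original script, with the function name `fetch_html` made up for illustration; the User-Agent string is the one wavic posted:

```python
import requests

# The curly brackets build a dict: keys are header names, values are header values.
headers = {
    'User-Agent': ('Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1b3pre) '
                   'Gecko/20090109 Shiretoko/3.1b3pre'),
}

def fetch_html(url):
    """Fetch url with a browser-like User-Agent and return the page HTML.

    The return value can replace urllib.request.urlopen(url).read() in the
    original code, e.g. bs.BeautifulSoup(fetch_html(url), 'lxml').
    """
    response = requests.get(url, headers=headers)
    response.raise_for_status()  # raise on HTTP errors instead of parsing an error page
    return response.text
```

With requests one can also check response.history afterwards: a non-empty list means the request was redirected along the way.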