 Project: Opening all links within a web page
I'm trying to write code that opens every link on a web page, each in its own tab.
import webbrowser
import bs4
import requests

def my_funct(search_url):
    print(f'Downloading page {search_url}...')
    res = requests.get(search_url)
    soup = bs4.BeautifulSoup(res.text, "html.parser")
    urls ='#href')
    if urls == []:
        print("Page doesn't have links!")
    else:
        for link in urls:
  'href'), new=2)

if __name__ == '__main__':
    url = input("Please add full url address of desired web site: ")
    my_funct(url)
Whatever website I pass in as the URL, it prints the message that the page doesn't have links. I suspect there is something wrong with the selector I chose.


