Python Forum
urllib request urlopen?
#1
Hi,

I have built a small program with Python 3. I'm using urllib.request.urlopen() to download CSV files from an open API.
I save the files with file.write(), and after the download the content is imported into a database (sqlite).
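Roughly like this, a simplified sketch of what I do now (the URL and filename are placeholders):

import urllib.request

# Placeholder URL and filename, just to show the approach
url = 'https://example.com/api/data.csv'
with urllib.request.urlopen(url) as response:
    data = response.read()
with open('data.csv', 'wb') as f:
    f.write(data)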

It works well, but I wonder if this is a Pythonic way to do it. Is it possible to save the content of the CSV to a list instead of a file?
The files are maybe around 1 MB.

Should I keep it as it is, or do you guys have a better idea?

I just want to learn the right way.
#2
I wrote this a while back and it works for me:

GetUrl.py
import requests
import socket


class GetUrl:
    def __init__(self):
        self.ok_status = 200
        self.response = None

    def check_availability(self):
        # Rough connectivity check: if the hostname resolves to something
        # other than the loopback address, assume the network is up
        self.internet_available = False
        if socket.gethostbyname(socket.gethostname()) != '127.0.0.1':
            self.internet_available = True
        return self.internet_available

    def fetch_url(self, url, binary=False):
        # binary=True streams the response, which suits large file downloads
        self.response = None
        if self.check_availability():
            try:
                if binary:
                    self.response = requests.get(url, stream=True, allow_redirects=False, timeout=3)
                else:
                    self.response = requests.get(url, allow_redirects=False, timeout=3)
                self.response.raise_for_status()
            except requests.exceptions.HTTPError as errh:
                print("Http Error:", errh)
            except requests.exceptions.ConnectionError as errc:
                print("Error Connecting:", errc)
            except requests.exceptions.Timeout as errt:
                print("Timeout Error:", errt)
            except requests.exceptions.RequestException as err:
                print("Oops, something else:", err)
        else:
            print("Please check network connection and try again")
        return self.response


def testit():
    gu = GetUrl()
    url = 'https://www.google.com/'
    page = gu.fetch_url(url)
    if page is not None:
        if page.status_code == gu.ok_status:
            print(page.text)
        else:
            print("Problem downloading page")


if __name__ == '__main__':
    testit()
So to use in another program:
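Something like this, a minimal sketch assuming GetUrl.py is on the import path (the URL and filename are placeholders):

from GetUrl import GetUrl

gu = GetUrl()
response = gu.fetch_url('https://example.com/data.csv')
if response is not None and response.status_code == gu.ok_status:
    # Write the body to disk, e.g. for a csv download
    with open('data.csv', 'wb') as f_out:
        f_out.write(response.content)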

#3
Thanks Larz60+,
Well, I guess it's good for fetching a web page. My URL starts a download (a csv file), so I can't fetch it the way you do.
So maybe I'm doing it the right way.

Would be fun to hear what others think about the solution.


(Mar-24-2018, 08:53 PM)Larz60+ Wrote: I wrote this a while back and it works for me: ...
#4
(Mar-26-2018, 11:39 AM)nutgut Wrote: Would be fun to hear what others think about the solution.
The solution is okay, but he does a lot of error checking, which can be confusing.
In its simplest form, here is how to download a CSV from the web.
Always use Requests and not urllib.
import requests

url = 'http://www.sample-videos.com/csv/Sample-Spreadsheet-10-rows.csv'
url_get = requests.get(url)
# Download csv
with open('sample.csv', 'wb') as f_out:
    f_out.write(url_get.content)
An example that parses that link out of the website and then uses it:
import requests
from bs4 import BeautifulSoup

url_csv = 'http://www.sample-videos.com/download-sample-csv.php'
url = requests.get(url_csv)
soup = BeautifulSoup(url.content, 'lxml')
h1 = soup.find('h1')
print(h1.text)
print('------------')
# Build the full download address from the site link and the csv link
site = soup.find('a', class_="navbar-brand")
link = soup.find('a', class_="download_csv")
address_csv = f"{site.get('href')}/{link.get('href')}"
print(address_csv)

# Download csv
download_link = requests.get(address_csv)
csv_url_name = address_csv.split('/')[-1]
print(csv_url_name)
with open(csv_url_name, 'wb') as f_out:
    f_out.write(download_link.content)
Output:
Download Sample CSV
------------
http://www.sample-videos.com/csv/Sample-Spreadsheet-10-rows.csv
Sample-Spreadsheet-10-rows.csv
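And since the original question asked about a list instead of a file, a minimal sketch that parses the same csv in memory with the standard csv module:

import csv
import io

import requests

url = 'http://www.sample-videos.com/csv/Sample-Spreadsheet-10-rows.csv'
url_get = requests.get(url)
# Each row of the csv becomes a list of strings, no file written
rows = list(csv.reader(io.StringIO(url_get.text)))
print(rows[0])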
#5
Thanks,
I will check out Requests.
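If it works out, the whole pipeline could look something like this; a rough sketch, where the URL, database name, and table layout are just placeholders for my real setup:

import csv
import io
import sqlite3

import requests

url = 'https://example.com/api/data.csv'  # placeholder API address
rows = list(csv.reader(io.StringIO(requests.get(url).text)))

con = sqlite3.connect('mydata.db')
con.execute('CREATE TABLE IF NOT EXISTS data (col1 TEXT, col2 TEXT)')
# Assumes every row has exactly two columns
con.executemany('INSERT INTO data VALUES (?, ?)', rows)
con.commit()
con.close()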


(Mar-26-2018, 05:15 PM)snippsat Wrote: ...

