
Read URLs from CSV and scrape websites

I have written code to read data from a CSV file and scrape each page, but whenever I run it I receive an HTTP error 400.

I have around 16k URLs, but here I am posting only 5.

My CSV file has two columns: the first is ID and the second is URL.

Whenever I run this code it prints an invalid URL / HTTP error 400.

This is my code:
import csv
from bs4 import BeautifulSoup
import requests
import time
import os

data_obj = {}
final_data = []

def readfile():
    global data_obj
    file = "BOOK.CSV"
    f = open("./"+ file, "r")
    for row in f.readlines():
        lst = row.split(",")
        data_obj[lst[0]] = lst[1]  # build the ID -> URL dictionary

def writedata(alldata1, filename):
    print(" >>>> FINAL PRINTING DATA >>>> ")
    #import pdb; pdb.set_trace()
    with open("./"+filename,'w') as csvfile:
        csvfile = csv.writer(csvfile, delimiter=',')
        for i in range(0, len( alldata1 )):
            csvfile.writerow( alldata1[i]  )

def parsedata():
    global data_obj, final_data
    for sublist in data_obj.keys():
        url = data_obj[sublist]
        data = getdata(url,{})
        soup = BeautifulSoup(data, "html.parser")
def getdata(url, values):
    r = requests.post(url, data=values, timeout=10)
    text = r.text
    return text

def main():

This is the error I received:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"""> <html><head><title>Bad Request</title> <meta content="text/html; charset=utf-8" http-equiv="Content-Type"/></head> <body><h2>Bad Request - Invalid URL</h2> <hr/><p>HTTP Error 400. The request URL is invalid.</p> </body></html>
I am attaching the file; can someone please tell me what I should do?

Attached Files
.csv   BOOK.csv (Size: 1,004 bytes / Downloads: 521)
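A likely cause of the 400 here: `readlines()` keeps the trailing newline on each row, so every URL stored in the dictionary ends with `\n`, which the server rejects as an invalid URL. A minimal sketch of the reading step (assuming the two-column ID,URL layout of the attached BOOK.CSV) that uses `csv.reader` and strips stray whitespace:

```python
import csv

def read_urls(filename="BOOK.CSV"):
    """Read ID,URL rows into a dict, stripping the trailing
    newline/whitespace that readlines() would leave on each URL."""
    data = {}
    with open(filename, newline="") as f:
        for row in csv.reader(f):
            if len(row) >= 2:
                data[row[0].strip()] = row[1].strip()
    return data
```

With the URLs stripped, a well-formed link should no longer trigger the "Invalid URL" 400 response.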
def getdata(url, values=None):
    r = requests.post(url, data=values, timeout=10)
    text = r.text
    return text
Your code is overcomplicated.

import csv
from bs4 import BeautifulSoup
import requests

def get(urls):
    for url in urls:
        yield requests.get(url).content.decode('utf-8')

with open('BOOK.csv') as csv_:
    reader = csv.reader(csv_)
    urls = [line[1] for line in urls if line]
    webpages = list(get(urls))
    for html in webpages:
        soup = BeautifulSoup(html, 'lxml')
You will be able to put together the rest.
"As they say in Mexico 'dosvidaniya'. That makes two vidaniyas."
Thank you so much, sir; this is a great place to learn.
I get a "name 'urls' is not defined" error while running this code.
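The NameError comes from the comprehension `[line[1] for line in urls if line]`, which iterates over `urls` before it exists; it should iterate over the `reader` object instead. A corrected sketch of the same approach (with `'html.parser'` substituted for `'lxml'` so no extra parser install is needed):

```python
import csv
import requests
from bs4 import BeautifulSoup

def read_urls(path='BOOK.csv'):
    # The fix: iterate over the csv.reader object, not the not-yet-defined urls list
    with open(path, newline='') as csv_:
        return [line[1].strip() for line in csv.reader(csv_) if line]

def get(urls):
    # Yield decoded page bodies one at a time, as in the original generator
    for url in urls:
        yield requests.get(url, timeout=10).content.decode('utf-8')

def scrape():
    for html in get(read_urls()):
        soup = BeautifulSoup(html, 'html.parser')  # built-in parser, no lxml needed
        print(soup.title.string if soup.title else '(no title)')
```

`scrape()` still fetches the pages eagerly one by one; for 16k URLs you would also want error handling around `requests.get`.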


