Python Forum
Error object has no attribute text
#1
Hi all,

I am new to Python and am following the examples here
https://medium.freecodecamp.org/how-to-s...46935d93fe
to try to scrape text and prices from a certain website.

It returned an error even though I followed the steps. Not sure what happened.

My code is below:

# import libraries
import requests
import urllib.request
from bs4 import BeautifulSoup

# specify the url
quote_page='http://www.bloomberg.com/quote/SPX:IND'

# query the website and return the html to the variable ‘page’
page = urllib.request.urlopen(quote_page)

# parse the html using beautiful soup and store in variable `soup`
soup = BeautifulSoup(page,'html.parser')

# Take out the <div> of name and get its value
name_box = soup.find('h1', attrs={'class':'companyName__99a4824b'})

name = name_box.text.strip() # strip() is used to remove leading and trailing whitespace
print(name)
This is the text to be scraped:
<h1 class="companyName__99a4824b">S&P 500 Index</h1>

Traceback:
name = name_box.text.strip() # strip() is used to remove leading and trailing whitespace
AttributeError: 'NoneType' object has no attribute 'text'
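For reference, soup.find() returns None when no matching element exists, and it is the .text access on that None that raises. A minimal guard, sketched against a local HTML string (since the live page may not serve the expected markup):

```python
from bs4 import BeautifulSoup

# Stand-in HTML mimicking the element from the tutorial
html = '<h1 class="companyName__99a4824b">S&amp;P 500 Index</h1>'
soup = BeautifulSoup(html, 'html.parser')

name_box = soup.find('h1', attrs={'class': 'companyName__99a4824b'})
if name_box is None:
    # find() returns None when the element is missing, e.g. when the
    # site serves a block page instead of the quote page
    print('element not found - inspect soup.prettify() to see what came back')
else:
    name = name_box.text.strip()
    print(name)  # S&P 500 Index
```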

Appreciate your guidance.

Thank you.

HC
#2
Look at what soup gives you back: no stock info is returned, which is why you get the NoneType error.
One of the lines:
<p>Your usage has been flagged as a violation of our <a href="http://www.bloomberg.com/tos" rel="noopener noreferrer" target="_blank">terms of service</a>
From the ToS (Terms of Service):
Quote:You shall not use or attempt to use any “scraper,” “robot,” “bot,” “spider,” “data mining,” “computer code,”
or any other automated device, program, tool, algorithm, process or methodology to access, acquire, copy, or monitor any portion of the Service
So find a site that is more friendly than this; there may be a way around it, but it's not worth the effort.
It's just stock info, and a lot of sites offer this for free with an API and JSON output.
ALPHA VANTAGE | List of sites.
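As a sketch of the API-plus-JSON route, here is how a request to Alpha Vantage's GLOBAL_QUOTE endpoint could be built. The endpoint and parameter names are taken from their public docs, but verify them against the current documentation before relying on this:

```python
import urllib.parse

def build_quote_url(symbol, api_key):
    # Alpha Vantage's GLOBAL_QUOTE endpoint; parameter names as per
    # their docs - double-check before use
    params = {'function': 'GLOBAL_QUOTE', 'symbol': symbol, 'apikey': api_key}
    return 'https://www.alphavantage.co/query?' + urllib.parse.urlencode(params)

url = build_quote_url('IBM', 'YOUR_API_KEY')
print(url)

# To actually fetch (needs a free API key and network access):
# import requests
# print(requests.get(url).json())
```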
#3
If you need quotes, you can use IEX:

#!/usr/bin/python3
import requests, json

url = "https://api.iextrading.com/1.0/stock/ibm/quote"
rsp = requests.get(url)
if rsp.status_code in (200,):
    str = rsp.text.strip() # json
    print(str)
    print()
else:
    print("ERROR:", rsp.status_code)
    print(rsp.text + "\n\n")
#4
@heiner55:
  • Don't use str as a variable name. It's a built-in function and you shadow it.
  • rsp.json() will give you parsed json response
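To illustrate the rsp.json() point with a stand-in payload (the field names here are assumptions modelled on IEX's quote response; check the API you actually use):

```python
import json

# Made-up JSON text standing in for rsp.text from the quote endpoint
sample = '{"symbol": "IBM", "companyName": "International Business Machines", "latestPrice": 145.3}'

# rsp.json() does this parsing for you and returns a dict,
# instead of the raw string that rsp.text gives
quote = json.loads(sample)
print(quote['symbol'], quote['latestPrice'])  # IBM 145.3
```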
"If you can't explain it to a six-year-old, you don't understand it yourself." - Albert Einstein
How to Ask Questions The Smart Way: link and another link
Create MCV example
Debug small programs
