Python Forum
Too many queries?
#1
I have a script that runs every evening, fetches the current crypto prices, and multiplies each by my current holding. It then writes the data out, appending a new row to an existing txt file. I use BeautifulSoup for the price acquisition; there are 9 queries in total. Everything worked as it should, but over the past few weeks I've noticed evenings when the script failed to finish.

I did some more testing and noticed that if I run the script again under PyCharm, I also get an error from BeautifulSoup when fetching one particular crypto's price. But if I copy that one query into a separate Python script that does not include the other queries, the query (which caused a problem in the primary script) works and the value is picked up. I have now removed that one query from the original script, and the script works again.

Is it possible that running multiple queries causes a problem, so that one of them fails when it is part of a bigger batch? All the queries pick up values from the same webpage. I tried playing around with timers and inserting a pause after each query, but that does not seem to help.

Is there something I can do to get all the queries to run in a single script, or is it best to break the original script into two smaller scripts?
#2
Please show your code.
#3
First of all, excuse the code itself; I've only been working with Python for the past 3 months. I know I should write a single function and reuse it for every asset, but I never got around to it ;)

I ran the full code (including the now-commented Telos part) for two months without any problems. Then I got an error and had to comment the Telos part out; the code now works. If I copy the Telos code out and run it in a separate Python script, I get the data without a problem.

from amount import aua, aga, xmra, eosa, lbrya, arrra, deroa, tlosa, vrsca
from bs4 import BeautifulSoup
import requests, datetime
from lxml import etree


goldurl = "https://goldswitzerland.com/charts/gold-price/eur/"
gold1 = requests.get(goldurl)
goldsoup = BeautifulSoup(gold1.text, "html.parser")
gold4 = etree.HTML(str(goldsoup)).xpath("/html/body/main/div[1]/div[1]/div/div/div[1]")[0].text[2:]
gold3 = gold4.strip()
gold = gold3.replace(",","")
golds = format(float(gold)*aua,'.2f')
goldss = golds.replace(".", ",")


silverurl = "https://goldswitzerland.com/charts/silver-price/eur/"
silver1 = requests.get(silverurl)
silversoup = BeautifulSoup(silver1.text, "html.parser")
silver4 = etree.HTML(str(silversoup)).xpath("/html/body/main/div[1]/div[1]/div/div/div[1]")[0].text[2:]
silver3 = silver4.strip()
silver = silver3.replace(",","")
silvers = format(float(silver)*aga,'.2f')
silverss = silvers.replace(".", ",")


xmrurl = "https://coinmarketcap.com/currencies/monero/xmr/eur/"
xmr1 = requests.get(xmrurl)
xmrsoup = BeautifulSoup(xmr1.text, "html.parser")
xmr4 = etree.HTML(str(xmrsoup)).xpath("/html/body/div[1]/div[2]/div[1]/div[2]/div/div[1]/div[2]/div/div[2]/div[1]/div/span")[0].text[1:]
xmr3 = xmr4.strip("€")
xmr = xmr3.replace(",","")
xmrs = format(float(xmr)*xmra,'.2f')
xmrss = xmrs.replace(".", ",")


eosurl = "https://coinmarketcap.com/currencies/eos/eos/eur/"
eos1 = requests.get(eosurl)
eossoup = BeautifulSoup(eos1.text, "html.parser")
eos4 = etree.HTML(str(eossoup)).xpath("/html/body/div[1]/div/div[1]/div[2]/div/div[1]/div[2]/div/div[2]/div[1]/div/span")[0].text[1:]
eos3 = eos4.strip("€")
eos = eos3.replace(",","")
eoss = format(float(eos)*eosa,'.2f')
eosss = eoss.replace(".", ",")


lbryurl = "https://coinmarketcap.com/currencies/library-credits/lbc/eur/"
lbry1 = requests.get(lbryurl)
lbrysoup = BeautifulSoup(lbry1.text, "html.parser")
lbry4 = etree.HTML(str(lbrysoup)).xpath("/html/body/div[1]/div/div[1]/div[2]/div/div[1]/div[2]/div/div[2]/div[1]/div/span")[0].text[1:]
lbry3 = lbry4.strip("€")
lbry = lbry3.replace(",","")
lbrys = format(float(lbry)*lbrya,'.2f')
lbryss = lbrys.replace(".", ",")


arrrurl = "https://coinmarketcap.com/currencies/pirate-chain/arrr/eur/"
arrr1 = requests.get(arrrurl)
arrrsoup = BeautifulSoup(arrr1.text, "html.parser")
arrr4 = etree.HTML(str(arrrsoup)).xpath("/html/body/div[1]/div/div[1]/div[2]/div/div[1]/div[2]/div/div[2]/div[1]/div/span")[0].text[1:]
arrr3 = arrr4.strip("€")
arrr = arrr3.replace(",","")
arrrs = format(float(arrr)*arrra,'.2f')
arrrss = arrrs.replace(".", ",")


derourl = "https://coinmarketcap.com/currencies/dero/dero/eur/"
dero1 = requests.get(derourl)
derosoup = BeautifulSoup(dero1.text, "html.parser")
dero4 = etree.HTML(str(derosoup)).xpath("/html/body/div[1]/div/div[1]/div[2]/div/div[1]/div[2]/div/div[2]/div[1]/div/span")[0].text[1:]
dero3 = dero4.strip("€")
dero = dero3.replace(",","")
deros = format(float(dero)*deroa,'.2f')
deross = deros.replace(".", ",")


# telosurl = "https://coinmarketcap.com/currencies/telos/tlos/eur/"
# telos1 = requests.get(telosurl)
# telossoup = BeautifulSoup(telos1.text, "html.parser")
# telos4 = etree.HTML(str(telossoup)).xpath("/html/body/div[1]/div/div[1]/div[2]/div/div[1]/div[2]/div/div[2]/div[1]/div/span")[0].text[1:]
# telos3 = telos4.strip("€")
# telos = telos3.replace(",","")
# teloss = format(float(telos)*tlosa,'.2f')
# telosss = teloss.replace(".", ",")
teloss=100
telosss="100"


verusurl = "https://coinmarketcap.com/currencies/veruscoin/vrsc/eur/"
verus1 = requests.get(verusurl)
verussoup = BeautifulSoup(verus1.text, "html.parser")
verus4 = etree.HTML(str(verussoup)).xpath("/html/body/div[1]/div/div[1]/div[2]/div/div[1]/div[2]/div/div[2]/div[1]/div/span")[0].text[1:]
verus3 = verus4.strip("€")
verus = verus3.replace(",","")
veruss = format(float(verus)*vrsca,'.2f')  # note: the name imported from amount is vrsca, not verusa
verusss = veruss.replace(".", ",")


together=float(golds)+float(silvers)+float(xmrs)+float(eoss)+float(lbrys)+float(arrrs)+float(deros)+teloss+float(veruss)
togetherr = format(together,'.2f')
togetherrr = togetherr.replace(".", ",")


date = datetime.datetime.now()
dat = f"{date.day}.{date.month}.{date.year} {date.hour}:{date.minute}:{date.second}"
with open("diary.txt", 'a') as log:
    log.write(dat + "   AU: " + goldss + " , AG: " + silverss + ", Monero: " + xmrss + " , EOS: " + eosss + " , LBRY: " + lbryss + " , ARRR: " + arrrss + " , DERO: " + deross + " , Telos: " + telosss + " , Verus: " + verusss + "   SUM: " + togetherrr + "\n")

with open("excel.txt", 'a') as excel:
    excel.write(dat + "    COPY FROM HERE -->" + goldss + "\t" + silverss + "\t" + xmrss + "\t" + "\t" + eosss + "\t" + lbryss + "\t" + arrrss + "\t" + deross + "\t" + "\t" + "\t" + telosss + "\t" + verusss + "\t" + "\t" + "\t" + togetherrr + "<-- TO HERE\n")
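For reference, the per-asset repetition above could be collapsed into one helper. A minimal sketch, assuming the same requests/BeautifulSoup/lxml approach as the script (the function names are my own, and the XPath still has to come from inspecting each page):

```python
import requests
from bs4 import BeautifulSoup
from lxml import etree


def clean_price(raw):
    """Strip whitespace, leading currency symbols, and thousands
    separators from a scraped price string and return a float."""
    cleaned = raw.strip().lstrip("€$").replace(",", "")
    return float(cleaned)


def fetch_price(url, xpath):
    """Fetch one page and pull a single price out of it via XPath."""
    page = requests.get(url, timeout=10)
    page.raise_for_status()  # fail loudly instead of parsing an error page
    soup = BeautifulSoup(page.text, "html.parser")
    node = etree.HTML(str(soup)).xpath(xpath)[0]
    return clean_price(node.text)
```

With that in place each asset becomes one line, e.g. `xmrs = format(fetch_price(xmrurl, xmr_xpath) * xmra, '.2f')`, and `raise_for_status()` makes it obvious when a request (rather than the parsing) is what failed.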
#4
Not sure what your amount.py file looks like, but is it possible that the names you use could be interfering with the actual currency codes?
#5
It looks like this:

aua=x.x
aga=x.x
xmra=x.x
eosa=x.x
lbrya=x.x
arrra=x.x
deroa=x.x
tlosa=x.x
vrsca=x.x
#6
There are many packages available for fetching forex data.
I have not tried these (I use yahooquery for all financials);
there's also yahoofinance, which is more popular (I still like yahooquery).
You can get forex prices from either of the above.

To find others, google: fetch forex metal prices python site:pypi.org

If you still want to use BeautifulSoup, I'd recommend working through a
BeautifulSoup tutorial (it doesn't take long at all to do); BeautifulSoup is much easier to use than what you are trying in your code.

see: (written by snippsat)
Web-Scraping part-1
Web-Scraping part-2
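For example, a CSS selector aimed at a class name is far less brittle than an absolute `/html/body/...` XPath, which breaks whenever the page layout shifts. A sketch against a stand-in HTML snippet (the `priceValue` class is made up for illustration; you would inspect the real page for the actual one):

```python
from bs4 import BeautifulSoup

# Stand-in for a downloaded page; on the real site, inspect the
# price element in your browser and use its actual class name.
html = '<div class="priceValue"><span>€141.23</span></div>'

soup = BeautifulSoup(html, "html.parser")
span = soup.select_one("div.priceValue span")  # CSS selector, no absolute path
price = float(span.text.lstrip("€").replace(",", ""))
print(price)  # 141.23
```

`select_one` returns the first match (or `None`), so a layout change shows up as an obvious failure rather than a wrong node.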
#7
Thanks! I will take a look at the tutorial. ;)

(Jul-04-2023, 12:05 AM)Larz60+ Wrote: There are many packages available for fetching forex data. [...]