Bottom Page

Thread Rating:
  • 0 Vote(s) - 0 Average
  • 1
  • 2
  • 3
  • 4
  • 5
Download multiple large json files at once
#1
I'm trying to get the auction data from the world of warcraft API for all realms to be able to scan for certain items posted across all realms.

import webbrowser
import urllib
import urllib.request
import urllib.error
import urllib, json
import re
import os.path
import time
import threading
from multiprocessing import Pool
from multiprocessing.pool import ThreadPool
from concurrent.futures.thread import ThreadPoolExecutor
from queue import Queue
from threading import Thread
import concurrent.futures
import multiprocessing

q = Queue()
url = ""
apiUrls = []
apiRealms = []
curApi = 0
realms = []
realm = ""
items = [141582, 141583, 141585, 141581, 141580, 141590, 141589, 141588, 141587, 141564, 141565, 141566, 141571, 141567,
         141576, 141577, 141578, 141570, 141569, 141568, 141572, 141573, 141574, 141575, 141579]
itemnames = ["Fran's Intractable Loop", "Sameed's Vision Ring", "Six-Feather Fan", "Demar's Band of Amore",
             "Vastly Oversized Ring", "Cloak of Martayl Oceanstrider", "Treia's Handcrafted Shroud",
             "Talisman of Jaimil Lightheart", "Queen Yh'saerie's Pendant", "Telubis Binding of Patience",
             "Mir's Enthralling Grasp", "Serrinne's Maleficent Habit", "Mavanah's Shifting Wristguards",
             "Cyno's Mantle of Sin", "Aethrynn's Everwarm Chestplate", "Fists of Thane Kray-Tan",
             "Claud's War-Ravaged Boots", "Cainen's Preeminent Chestguard", "Samnoh's Exceptional Leggings",
             "Boughs of Archdruid Van-yali", "Geta of Tay'shute", "Shokell's Grim Cinch", "Ulfgor's Greaves of Bravery",
             "Gorrog's Serene Gaze", "Welded Hardskin Helmet"]
print(len(items))

# Import realms
with open('C:/test/RealmList.txt') as f:
    realms = f.read().splitlines()
    print("Realms: " + str(realms))
    f.close()


# realms = ["Shu'halo", "Eitrigg", "Stormrage", "Moonguard"]



def scanRealm(realmb):
    startTime = time.time()
    print("Scanning " + realmb)
    url = 'https://us.api.blizzard.com/wow/auction/data/' + realmb + '?locale=en_US&access_token=hidden'
    print(url)
    # Get auction json url
    with urllib.request.urlopen(url) as response:
        html = response.read()
        html = html.decode('utf-8')

    # Get url from string
    urls = re.findall('http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+', html)
    print("Got Auction Api url for realm " + realmb + ": ", urls[0])
    jsonurl = urllib.request.urlopen(urls[0])
    print("request")
    text = json.load(jsonurl.read())
    print("load")
    # write to txt
    with open('c:/Test/' + realmb + '.txt', 'w') as f:
        f.write(str(text).replace("{'auc", "\n{'auc"))
    print("Completed scanning " + realmb + " in " + str(time.time() - startTime) + " seconds.")
    f.close()
    return 1


def getJsonUrl(realmb):
    startTime = time.time()
    print("Scanning " + realmb)
    url = 'https://us.api.blizzard.com/wow/auction/data/' + realmb + '?locale=en_US&access_token=hidden'
    print(url)
    # Get auction json url
    with urllib.request.urlopen(url) as response:
        html = response.read()
        print(html)
        html = html.decode('utf-8')

    # Get url from string
    urls = re.findall('http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+', html)
    apiUrls.append(urls[0])
    apiRealms.append(realmb)
    print("Got Auction Api url for realm " + realmb + ": ", urls[0])

def downloadJson(realmb):
    print("request")
    response = urllib.request.urlopen(realmb)
    # print(response.read())
    text = json.load(response.read())
    print("load")
    # write to txt
    with open('c:/Test/' + apiRealms[curApi] + '.txt', 'w') as f:
      f.write(str(text).replace("{'auc", "\n{'auc"))
    print("Completed scanning " + realmb)
    f.close()
    curApi = curApi + 1
    return 1

starttime = time.time()
#Get the url's for the json files
with concurrent.futures.ThreadPoolExecutor(max_workers=20) as executor:
    pages = executor.map(getJsonUrl, realms)
print("Completed in " + str(time.time() - starttime) + " seconds.")

#download json
with concurrent.futures.ThreadPoolExecutor(max_workers=8) as executor:
    pages = executor.map(downloadJson, apiUrls)
print("Completed in " + str(time.time() - starttime) + " seconds.")
Initially I had it downloading one realm at a time, which worked fine but took almost 2 hours to complete. I'm trying to use threading to be able to scan multiple realms at once to speed it up a lot. The scanRealm function is the one that'll do one realm at a time and work fine. getJsonUrl appears to work fine and outputs urls to the json files that I need, for example http://auction-api-us.worldofwarcraft.co...tions.json. The downloadJson function is where I believe things are going wrong. It never seems to get to the point where it prints "load". No files are ever created or anything and after fiddling around looking for solutions for the past few hours I'm stumped.

Sorry for the mess of a code, I'm no professional at python and am mostly just trying to scrape up something functional to improve over time.
Quote

Top Page

Possibly Related Threads...
Thread Author Replies Views Last Post
  Download article without photo caption Helene_python 2 71 Feb-14-2019, 01:13 PM
Last Post: snippsat
  Import Library but Download first? OscarBoots 15 445 Feb-07-2019, 03:07 AM
Last Post: snippsat
  Download data from webpage after POST request AlDe 0 58 Feb-02-2019, 06:26 AM
Last Post: AlDe
  how to load large data into dataframe. sandy 0 75 Feb-01-2019, 06:19 PM
Last Post: sandy
  XML file to multiple txt files in PYTHON ayjay516 1 55 Jan-31-2019, 10:21 PM
Last Post: Larz60+
  Delete multiple files using file extention sumncguy 2 118 Jan-23-2019, 03:05 AM
Last Post: snippsat
  Large Number Math fibonacci 2 191 Nov-20-2018, 08:04 PM
Last Post: Gribouillis
  Problem with CSV download imtiazu 2 208 Nov-12-2018, 02:03 AM
Last Post: snippsat
  I wan't to Download all .zip Files From A Website (Project AI) eddywinch82 68 3,288 Oct-28-2018, 02:13 PM
Last Post: eddywinch82
  Working with large volume of data (RAM is not enough) evonevo 6 305 Oct-21-2018, 09:24 PM
Last Post: Larz60+

Forum Jump:


Users browsing this thread: 1 Guest(s)