May-19-2018, 03:21 PM
(May-19-2018, 10:55 AM)eddywinch82 Wrote: I used the following written code in Python 2.79, found on a video on Youtube, inserting my info to achieve my aim, but it unsurprisingly didn't work when I ran it timeouts and errors etc, probably due to it's simplicity :-

Yes, I guess there are some gaps in understanding this topic, and maybe Python in general.
The wget "get all files" method may or may not work. If it doesn't work, go back to the site source and look for another method.
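For reference, the wget approach would look something like this. This is only a sketch: it assumes wget is installed and that the files end in .utu, and it builds the command as a list so you can inspect it before actually running it with subprocess.

```python
# Hedged sketch of the "wget get all files" idea (assumes wget is on PATH
# and the target files use a .utu suffix).
url = ('http://web.archive.org/web/20070611232047/'
       'http://ultimatetraffic.flight1.net:80/utfiles.asp?mode=1&index=0')
cmd = ['wget',
       '--recursive',        # follow links on the page
       '--level=1',          # only one level deep
       '--accept', 'utu,UTU',  # keep only .utu files
       '--no-parent',        # don't climb above the start URL
       url]
print(' '.join(cmd))
# To actually run it: subprocess.run(cmd, check=True)
```

Whether this works depends on how the archived page exposes its links, which is exactly why inspecting the site source is the fallback.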
I took a look, and it's not so difficult to get all the .UTU files.
from bs4 import BeautifulSoup
import requests

url = 'http://web.archive.org/web/20070611232047/http://ultimatetraffic.flight1.net:80/utfiles.asp?mode=1&index=0'
url_get = requests.get(url)
soup = BeautifulSoup(url_get.content, 'lxml')
b_tag = soup.find_all('b')
for a in b_tag:
    # Each <b> tag on the page wraps a link to one .UTU file
    link = a.find('a')['href']
    #print(link)
    # Use the id part of the URL as the local file name
    f_name = link.split('id=')[-1]
    with open(f_name, 'wb') as f:
        f.write(requests.get(link).content)
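The parsing step above can be isolated into a small function and checked against an inline HTML sample before hitting the network. This is just a sketch, assuming the real page keeps each download link inside a <b> tag as the snippet does; the sample markup below is hypothetical.

```python
from bs4 import BeautifulSoup

def utu_links(html):
    """Collect hrefs from <a> tags wrapped in <b> tags, as on the page above."""
    soup = BeautifulSoup(html, 'html.parser')
    return [b.find('a')['href'] for b in soup.find_all('b') if b.find('a')]

# Small inline sample standing in for the real page (hypothetical markup):
sample = '<b><a href="utfiles.asp?mode=2&id=1.utu">file</a></b>'
print(utu_links(sample))  # ['utfiles.asp?mode=2&id=1.utu']
```

Testing the selector on a saved or inline sample like this makes it easier to tell a parsing bug apart from a network or archive problem.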