Hi, could someone post this thread in the Jobs section of the forum for me? I tried earlier but wasn't able to.
Here is the text:
Hi there,
I am currently struggling to get my Python code to download all the .zip files from the URLs I obtain from a path on www.flightsim.com. There are 253 pages, with zip files on all of them. When I type my password, after a few seconds the message "Login Unsuccessful" is displayed.
Can anyone help me? I would be willing to pay a small fee if someone can.
Here is the Python code:
import sys
import getpass
import hashlib
import requests

BASE_URL = 'https://www.flightsim.com/'
LOGIN_PAGE = 'https://www.flightsim.com/vbfs/login.php?do=login'


def do_login(credentials):
    session = requests.Session()
    session.get(BASE_URL)  # pick up the initial cookies
    # The original posted to BASE_URL + LOGIN_PAGE, which concatenates two
    # absolute URLs into an invalid one; post to the login page directly.
    req = session.post(LOGIN_PAGE, params={'do': 'login'}, data=credentials)
    if req.status_code != 200:
        print('Login not successful')
        sys.exit(1)
    # session is now logged in
    return session


def get_credentials():
    username = input('Username: ')
    password = getpass.getpass()
    password_md5 = hashlib.md5(password.encode()).hexdigest()
    return {
        'cookieuser': 1,
        'do': 'login',
        's': '',
        'securitytoken': 'guest',
        'vb_login_md5_password': password_md5,
        'vb_login_md5_password_utf': password_md5,
        'vb_login_password': '',
        'vb_login_password_hint': 'Password',
        'vb_login_username': username,
    }


credentials = get_credentials()
session = do_login(credentials)

import Fspaths  # local helper module holding the file paths
from bs4 import BeautifulSoup
import requests


class ScrapeUrlList:
    def __init__(self):
        self.fpath = Fspaths.Fspaths()
        self.ziplinks = []

    def get_url(self, url):
        page = None
        response = requests.get(url)
        if response.status_code == 200:
            page = response.content
        else:
            print(f'Cannot load URL: {url}')
        return page

    def get_catalog(self):
        base_url = 'https://www.flightsim.com/vbfs'
        with self.fpath.links.open('w') as fp:
            for pageno in range(1, 254):
                url = f'https://www.flightsim.com/vbfs/fslib.php?searchid=65893537&page={pageno}'
                print(f'url: {url}')
                # The original fetched self.fpath.base_catalog_url here
                # instead of the per-page url built above.
                page = self.get_url(url)
                if page:
                    soup = BeautifulSoup(page, 'lxml')
                    zip_links = soup.find_all('div', class_='fsc_details')
                    for link in zip_links:
                        fp.write(f"{link.find('a').text}, {base_url}/{link.find('a').get('href')}\n")
                    input()  # pause between pages; press Enter to continue
                else:
                    print(f'No page: {url}')


def main():
    sul = ScrapeUrlList()
    sul.get_catalog()


if __name__ == '__main__':
    main()

Regards
Eddie Winch
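
PS: Two things I am unsure about in the code above. First, the scraper calls requests.get directly rather than reusing the logged-in session returned by do_login. Second, checking req.status_code may not be enough: vBulletin forums typically return HTTP 200 even when the login fails, with the error shown in the page body. A minimal sketch of a body-based check (the exact success phrase is an assumption about the site's template, not something I have confirmed):

```python
def login_succeeded(html):
    # vBulletin usually answers a failed login with HTTP 200 and an
    # error page, so inspect the response body instead of the status.
    # 'Thank you for logging in' is the stock vBulletin success message;
    # the exact wording on flightsim.com is an assumption.
    return 'Thank you for logging in' in html

# Intended use after posting the credentials:
# if not login_succeeded(req.text):
#     print('Login not successful')
#     sys.exit(1)
```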