
 Python Code Help Needed
Hi, could someone post this thread in the Jobs section of the forum for me? I tried, but was unable to before.

Here is the text:

Hi there,

I am currently struggling to get my Python code to download all the .zip files from all the URLs that I obtain from a path; there are 253 pages, with zip files on all of them. When I type my password, after a few seconds the message "Login Unsuccessful" displays.

Can anyone help me? I would be willing to pay a small fee if someone can help me.

Here is the Python code:

import getpass
import hashlib
import requests

# NOTE: the forum URL constants were missing from the original post;
# BASE_URL and LOGIN_PAGE are placeholders for the target vBulletin forum.
BASE_URL = ''
LOGIN_PAGE = '/login.php'


def do_login(credentials):
    session = requests.Session()
    req = session.post(BASE_URL + LOGIN_PAGE,
                       params={'do': 'login'}, data=credentials)
    if req.status_code != 200:
        print('Login not successful')
    # session is now logged in
    return session


def get_credentials():
    username = input('Username: ')
    password = getpass.getpass()
    # vBulletin submits an MD5 hash of the password, not the plain text
    password_md5 = hashlib.md5(password.encode()).hexdigest()
    return {
        'cookieuser': 1,
        'do': 'login',
        's': '',
        'securitytoken': 'guest',
        'vb_login_md5_password': password_md5,
        'vb_login_md5_password_utf': password_md5,
        'vb_login_password': '',
        'vb_login_password_hint': 'Password',
        'vb_login_username': username,
    }


credentials = get_credentials()
session = do_login(credentials)
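One likely cause of the "Login Unsuccessful" message is that vBulletin answers HTTP 200 even when the credentials are rejected, so checking `status_code` alone cannot detect a failed login; the response body has to be inspected. Below is a minimal sketch of such a check. The success and failure phrases are assumptions based on the stock vBulletin skin and may differ on a given forum:

```python
def login_succeeded(html):
    """Heuristic check of a vBulletin login response body.

    vBulletin returns HTTP 200 for both good and bad credentials, so we
    look at the page text instead. The phrases below are assumed from the
    stock vBulletin skin; adjust them to match the actual forum.
    """
    text = html.lower()
    if 'thank you for logging in' in text:
        return True
    if 'you have entered an invalid username or password' in text:
        return False
    # Fallback: a logged-in page normally carries a non-guest security token
    return 'securitytoken = "guest"' not in text
```

This could replace the `status_code` test in `do_login`, e.g. `if not login_succeeded(req.text): print('Login not successful')`.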

import Fspaths  # the OP's own helper module holding paths and URLs
from bs4 import BeautifulSoup
import requests


class ScrapeUrlList:
    def __init__(self):
        self.fpath = Fspaths.Fspaths()
        self.ziplinks = []

    def get_url(self, url):
        page = None
        response = requests.get(url)
        if response.status_code == 200:
            page = response.content
        else:
            print(f'Cannot load URL: {url}')
        return page

    def get_catalog(self):
        # NOTE: the base URL and the output-file path were missing from the
        # original post; 'links_file' is a placeholder attribute assumed to
        # live in the Fspaths helper.
        base_url = ''
        with open(self.fpath.links_file, 'w') as fp:
            for pageno in range(1, 254):
                url = f'{self.fpath.base_catalog_url}{pageno}'
                print(f'url: {url}')
                page = self.get_url(url)
                if page:
                    soup = BeautifulSoup(page, 'lxml')
                    zip_links = soup.find_all('div', class_="fsc_details")
                    for link in zip_links:
                        fp.write(f"{link.find('a').text}, "
                                 f"{base_url}/{link.find('a').get('href')}\n")
                else:
                    print(f'No page: {url}')


def main():
    sul = ScrapeUrlList()
    sul.get_catalog()


if __name__ == '__main__':
    main()
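The scraper above only collects the links; actually fetching the archives still needs a download step. Here is a minimal sketch of one, assuming the logged-in `requests.Session` from the first script is reused so the downloads are authenticated (the `zip_filename` helper is hypothetical, not part of the original code):

```python
import os
from urllib.parse import urlsplit, unquote


def zip_filename(url):
    """Derive a local filename from a download URL (hypothetical helper)."""
    name = unquote(os.path.basename(urlsplit(url).path))
    return name or 'download.zip'


def download_zip(session, url, dest_dir='.'):
    """Stream one archive to disk; returns the saved path or None on failure."""
    response = session.get(url, stream=True)
    if response.status_code != 200:
        print(f'Failed: {url} (HTTP {response.status_code})')
        return None
    target = os.path.join(dest_dir, zip_filename(url))
    with open(target, 'wb') as fh:
        # iter_content avoids loading a whole zip into memory at once
        for chunk in response.iter_content(chunk_size=65536):
            fh.write(chunk)
    return target
```

Streaming with `iter_content` matters here because 253 pages of zip files can add up to far more than will comfortably fit in memory.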


Eddie Winch
buran wrote Aug-30-2018, 11:12 AM:
Moved to Jobs section at OP request.
Hello there,
Hope you are doing well.

I have checked the requirements and I can easily accomplish the mentioned project at minimal cost, as I have 5+ years of experience in the same domain.

You can add me on Skype: live:richard_25370

Please check your PM.

Looking forward to your reply.
I am ready to start work right now.

Best regards
Also, I forgot to mention that the search ID changes after each session.
The section of the File Library on the www. flightsim .com website that I want to download all the ZIP files from is: PAI: PAI Aircraft
Hello sir,

Greetings for the day.

I have 7+ years of industry experience, including 2 years working with Python. I have completed 2 big projects and worked on some existing projects. I can do this task. Kindly add me on Skype at "". My hourly rate is very reasonable. Please add me and we will discuss further.

