Python Webscraping with a Login Website
#1
Looking for some help scraping a website that requires a login. Essentially, the website provides trading card prices (which I believe come from eBay), but in a format that allows searching beyond the 90-day limit on eBay's own site. The login URL is https://members.pwccmarketplace.com/login and the URL I search from is https://members.pwccmarketplace.com/. I searched previous posts and found one I thought I could replicate, but had no success. Below is the code; any feedback on whether it could work would be appreciated.
#https://stackoverflow.com/questions/47438699/scraping-a-website-with-python-3-that-requires-login
import requests
from bs4 import BeautifulSoup
import pandas as pd
import numpy as np
from time import sleep
from random import randint
from urllib.parse import quote

Product_name = []
Price = []
Date_sold = []

url = "https://www.pwccmarketplace.com/login"
values = {"email": "[email protected]",
          "password": "password"}

session = requests.Session()

r = session.post(url, data=values)
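# Quick sanity check that the login worked (a sketch; the exact check is
# site-specific). requests follows redirects, so if the final URL is still the
# login page the form field names in `values` are probably wrong and should be
# read from the login page's <input> elements. Also note the post above gives
# https://members.pwccmarketplace.com/login as the login URL, while this code
# posts to www.pwccmarketplace.com/login -- worth double-checking which is right.
if r.status_code != 200 or "login" in r.url:
    print("Warning: login may not have succeeded:", r.status_code, r.url)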

Search_name = input("Search for: ")
Exclude_terms = input("Exclude these terms (- in front of each, no spaces): ")
qstr = quote(Search_name)
qstrr = quote(Exclude_terms)
Number_pages = int(input("Number of pages you want searched: "))

# np.arange excludes the stop value, so add 1 to include the last page
pages = np.arange(1, Number_pages + 1)

for page in pages:

    # the params dict from the StackOverflow example ({"Category": 6, ...}) was
    # specific to that site; the query string below already carries everything
    url = ("https://www.pwccmarketplace.com/market-price-research?q=" + qstr + "+" + qstrr
           + "&year_min=2004&year_max=2020&price_min=0&price_max=10000"
           + "&sort_by=date_desc&sale_type=auction&items_per_page=250&page=" + str(page))

    result = session.get(url)

    soup = BeautifulSoup(result.text, "lxml")

    search = soup.find_all('tr')

    # pause between pages so requests are not fired too quickly
    sleep(randint(2, 10))

    for container in search:
        cells = container.find_all('td')
        # NOTE: the column order here is an assumption -- inspect the results
        # table in the browser to confirm which cell holds the title, the price
        # and the sale date, and adjust the indexes to match.
        if len(cells) >= 3:
            Product_name.append(cells[0].get_text(strip=True))
            Price.append(cells[1].get_text(strip=True))
            Date_sold.append(cells[2].get_text(strip=True))
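Once the loop fills the three lists, saving them is the easy part. A minimal sketch of that last step, using the pandas import already at the top of the script (the output filename is just an example, and it assumes the three lists end up the same length):

results = pd.DataFrame({
    "product_name": Product_name,
    "price": Price,
    "date_sold": Date_sold,
})
results.to_csv("pwcc_results.csv", index=False)  # example filename
print(results.head())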
Any help appreciated