I think you need a Session. No problem, requests has this built in.
First create a session object, then get the login page:
import requests

session = requests.Session()
response = session.get('http://192.168.192.1/cgi-bin/status.cgi')  # in my case

# Then look in response.text for a form; it has an "action" attribute whose
# value is the target URL, relative to the current path. In my case I used
# BeautifulSoup to find it, but you can also use a web browser:
#
# <form action="sendResult.cgi?section=login" id="mainForm" method="post">
#   <input name="loginDelay" type="hidden" value="0"/>
#   <input name="page" type="hidden" value="login"/>
#   <div>Benutzername</div>
#   <input name="username" tabindex="1" type="text"/>
#   <div>Passwort</div>
#   <input name="password" tabindex="2" type="password"/>
# </form>

# Now we need to prepare the data for the form. All fields should be
# submitted, including the hidden ones. With requests you can just use a
# dict for this task.
login = {'loginDelay': '0',
         'page': 'login',
         'username': 'admin',
         'password': 'admin'}

# Then use the post method with the target (action URL) and the login data.
response = session.post('http://192.168.192.1/cgi-bin/sendResult.cgi?section=login',
                        data=login)

if response.status_code == 200:
    print('Success')
    print(response.text)

For further web scraping you should use Beautiful Soup or Scrapy.
You can also use a regex, but that is more complicated.
For full help you'd have to post the whole HTML code; without it we can only guess. I think it's still a plain form on your side, wrapped in JavaScript for the fancy stuff.
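The regex route can be sketched like this. The HTML snippet is a shortened copy of the form shown above (on a real page the attribute order and whitespace can differ, which is why BeautifulSoup is the more robust choice):

import re

# Shortened sample of the login page HTML (from the form above).
html = '''
<form action="sendResult.cgi?section=login" id="mainForm" method="post">
<input name="loginDelay" type="hidden" value="0"/>
<input name="page" type="hidden" value="login"/>
<input name="username" tabindex="1" type="text"/>
<input name="password" tabindex="2" type="password"/>
</form>
'''

# Pull the action URL out of the form tag.
action = re.search(r'<form[^>]*\baction="([^"]+)"', html).group(1)

# Collect the hidden fields so they can be merged into the login dict.
hidden = dict(re.findall(r'<input name="([^"]+)" type="hidden" value="([^"]*)"', html))

print(action)  # sendResult.cgi?section=login
print(hidden)  # {'loginDelay': '0', 'page': 'login'}

Note that this only works as long as the attributes appear in exactly this order; a parser does not have that limitation.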