Python Forum
working with lxml and requests
#1
Hello. I am trying to pull XML data from a webpage, grab only the two XML tags or data pieces I need, and then put them in a CSV file. The webpage is a Cisco Call Manager Unity voicemail server. I want to pull each user's "Alias" and their phone extension "DtmfAccessId". Each webpage shows up to 2000 users, and eventually I'd like to build a while loop so it goes through each page until there are no more.

Below is all the code I've gotten so far. I can't even see the XML data. I'm a beginner in Python, so be patient.

from lxml import html
import requests

page = 1

# grab one page of users from the Unity server
response = requests.get('https://10.10.10.1/vmrest/users?rowsPerPage=2000\&pageNumber=' + str(page),
                        verify=False, auth=('user', 'pass'))

xml = response.content
data = html.document_fromstring(xml)
print(data)
Any help is greatly appreciated.
Reply
#2
If it's an HTML page, then using Beautiful Soup would probably be the easiest way. If you can identify what the tags are (a class, or how they're nested, or something), then you can use one line to get all the users as a list.
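For example, a minimal sketch, assuming the users sit in tags with a known class (the markup and class name here are made up for illustration):

from bs4 import BeautifulSoup

# hypothetical markup; the real page would come from response.content
html = '<div class="user">jsmith</div><div class="user">bdoe</div>'

soup = BeautifulSoup(html, 'html.parser')
users = [tag.text for tag in soup.find_all(class_='user')]  # all users as a list
print(users)  # ['jsmith', 'bdoe']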
Reply
#3
That's the issue I'm also having. How can I get the XML code of the page I'm trying to access? I can't just use a web browser and go to that link I gave you. If I do, I get an error 405 (method not supported). So I believe I have to use some sort of GET method. I just don't know where to look or how to find what I am looking for. Anything I've found online is just a dead end. Been at this for 3 weeks and I'm running out of steam. haha
Reply
#4
Quote:That's the issue I'm also having. How can I get the XML code of the page I'm trying to access?
As nilamo points out, it's most likely HTML and not XML.
You should be using Beautiful Soup.
There's an excellent two-part tutorial on this forum by snippsat on scraping with Beautiful Soup:
part1 here: https://python-forum.io/Thread-Web-Scraping-part-1
part2 here: https://python-forum.io/Thread-Web-scraping-part-2
Reply
#5
OK, I will give it a try! Thanks, guys! Appreciate it!
Reply
#6
If it doesn't support GET, then you can use something else to craft a different type of request. On Windows, Fiddler is very good. Otherwise, curl is very good.
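If you'd rather stay in Python than reach for curl or Fiddler, one way to check what the endpoint accepts is an OPTIONS probe; a small sketch, using the address from this thread (not every server implements OPTIONS, so treat a missing Allow header as inconclusive):

import requests

# Ask the server which methods it accepts; many servers answer an
# OPTIONS request with an Allow header listing them.
response = requests.options('https://10.10.10.1/vmrest/users',
                            verify=False, auth=('user', 'pass'))
print(response.status_code, response.headers.get('Allow'))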
Reply
#7
So after looking over your links, here is what I have now...

import requests
from bs4 import BeautifulSoup

response = requests.get('https://10.10.10.1/vmrest/users?rowsPerPage=2000\&pageNumber=1', verify=False, auth=('user', 'pass'))

soup = BeautifulSoup(response.content, 'html.parser')
print(soup.find('title').text)
and I get the following output...

Error:
Warning (from warnings module):
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 845
    InsecureRequestWarning)
InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
Cisco System - Error report
It does support GET. There is an old Perl script that uses GET to do the same thing; however, the old script doesn't work on a new Debian server, and none of us know Perl, just basic Python for scripting access to routers and switches. The Perl script uses the following modules...
LWP::UserAgent
XML::Simple

The code in Perl looks like this...

use LWP::Simple;
use XML::Simple;

my $xml = new XML::Simple;
my @userdata;

$page = 1;

while(1)
{
  my $url = "https://USER:pass\@10.10.10.1/vmrest/users?rowsPerPage=2000\&pageNumber=$page";
  my $content = get($url);
  die "error getting $url" unless defined $content;

  my $data = $xml->XMLin($content);

  # if we don't get at least one user, end the loop
  if(@{$data->{User}} < 1)
  {
    last;
  }

  # build the userdata array, each entry contains "username,extension"
  $start = (($page - 1) * 2000);
  for ($i = $start; $i <= $start + @{$data->{User}} - 1; $i++)
  {
    push(@userdata, "$data->{User}->[$i-$start]->{Alias},$data->{User}->[$i-$start]->{DtmfAccessId}");
  }
  $page++;
}

# Dump the results to a file
open(UNITY, ">/usr/scripts/unityLDAP/$timestamp-unity.csv");
for(@userdata)
{
  print UNITY "$_\n";
}
close(UNITY);
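For comparison, a rough Python equivalent of that loop might look like the sketch below. It assumes the endpoint returns an XML document whose root holds <User> elements with <Alias> and <DtmfAccessId> children (tag names taken from the Perl) and no XML namespace; the output path is simplified, since the Perl snippet's $timestamp isn't defined there.

import csv
import requests
import xml.etree.ElementTree as ET

userdata = []
page = 1

while True:
    # fetch one page of users, same endpoint and credentials as the Perl
    response = requests.get('https://10.10.10.1/vmrest/users',
                            params={'rowsPerPage': 2000, 'pageNumber': page},
                            verify=False, auth=('user', 'pass'))
    response.raise_for_status()  # rough equivalent of the Perl die

    root = ET.fromstring(response.content)
    users = root.findall('User')  # assumes <User> children, no namespace
    if not users:                 # no users on this page: end the loop
        break

    # each entry holds (username, extension), like the Perl @userdata
    for user in users:
        userdata.append((user.findtext('Alias'), user.findtext('DtmfAccessId')))
    page += 1

# dump the results to a CSV file
with open('unity.csv', 'w', newline='') as f:
    csv.writer(f).writerows(userdata)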
Reply
#8
(Apr-18-2018, 06:16 PM)gentoobob Wrote: my $url = "https://USER:pass\@10.10.10.1/vmrest/
That's different from the URL you're using. Maybe the switch doesn't understand authentication headers, and it needs to be sent as part of the URL?

Also, backslashes (this: \) are almost never used in URLs, so my guess is that they're in that Perl string to prevent Perl from parsing the string somehow. So maybe this: ?rowsPerPage=2000\&pageNumber=1 should be this: ?rowsPerPage=2000&pageNumber=1
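One way to sidestep the escaping question entirely is to let requests build the query string and send the credentials itself; a small sketch:

import requests

# requests builds the query string and sends HTTP basic auth itself,
# so there are no backslashes to worry about
response = requests.get('https://10.10.10.1/vmrest/users',
                        params={'rowsPerPage': 2000, 'pageNumber': 1},
                        verify=False, auth=('user', 'pass'))
print(response.url)  # the exact URL that was requested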
Reply
#9
Well, that's the Perl version of what needs to be done. That's just an example. I'm trying to do that in Python.
Reply
#10
A URL is a URL; they're not different in Perl or Python. So changing the URL could be why you're getting different results.
Reply

