(Apr-19-2018, 04:41 PM)gentoobob Wrote: Because the url at the end has a page number. I need a loop that starts at page one then goes to page two, three, etc until no more pages are left.
That setup works fine with a for loop to generate the URLs.
start = 1
stop = 5
for page in range(start, stop):
    url = 'https://10.10.10.0/vmrest/users?rowsPerPage=2000&pageNumber={}'.format(page)
    print(url)
Output:
https://10.10.10.0/vmrest/users?rowsPerPage=2000&pageNumber=1
https://10.10.10.0/vmrest/users?rowsPerPage=2000&pageNumber=2
https://10.10.10.0/vmrest/users?rowsPerPage=2000&pageNumber=3
https://10.10.10.0/vmrest/users?rowsPerPage=2000&pageNumber=4
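If you don't know the page count up front ("until no more pages are left"), a while loop that stops on an empty page is the usual pattern. A minimal sketch, with the HTTP call stubbed out by a hypothetical fetch_page function so it runs standalone; in real use that stub would be a requests.get() against your vmrest URL:

```python
def fetch_page(page):
    # Stand-in for the real HTTP call, e.g.:
    #   requests.get(url.format(page), auth=..., verify=False)
    # Here a fake dataset simulates a server with 5 records, 2 per page.
    records = list(range(1, 6))
    per_page = 2
    start = (page - 1) * per_page
    return records[start:start + per_page]

def all_pages():
    page = 1
    results = []
    while True:
        batch = fetch_page(page)
        if not batch:  # empty page -> no more data, stop looping
            break
        results.extend(batch)
        page += 1
    return results

print(all_pages())  # [1, 2, 3, 4, 5]
```

The key point is the break condition: keep incrementing pageNumber until the server returns an empty result set.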
There is a similar example in Web-scraping part-2, and also a demo with concurrent.futures to speed it up.
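As a rough sketch of the concurrent.futures approach: executor.map() can fetch several pages in parallel. The fetch function below is a placeholder that just returns the URL instead of making a real request, so the example runs offline:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(page):
    # Placeholder for a real requests.get() call; returns the URL built for each page.
    return 'https://10.10.10.0/vmrest/users?rowsPerPage=2000&pageNumber={}'.format(page)

# Threads suit this I/O-bound task; results come back in page order.
with ThreadPoolExecutor(max_workers=4) as executor:
    results = list(executor.map(fetch, range(1, 5)))

for url in results:
    print(url)
```

Note this only helps when the total page count is known in advance; the sequential while-loop approach is needed when you discover the last page by hitting an empty response.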