You lost me; I will try to use it.
Thank you.
renny
Well, I have been at this for about 14 hours today. I am going to hit the sack.
This is what I have so far:
import requests
from bs4 import BeautifulSoup
from html.parser import HTMLParser

baseurl = requests.get('https://www.usa.gov/federal-agencies/')
valid_pages = 'abcdefghijlmnoprstuvw'
for n in range(len(valid_pages)):
    url = f'{baseurl}{valid_pages[n]}'
    print(url)
    page = soup = BeautifulSoup(url, 'html.parser')
    for page in soup.find_all('ul', {'class' : 'one_column_bullet'}):
        print(page)

This is what I get:
<Response [200]>a
<Response [200]>b
<Response [200]>c
<Response [200]>d
<Response [200]>e
<Response [200]>f
<Response [200]>g
<Response [200]>h
<Response [200]>i
<Response [200]>j
<Response [200]>l
<Response [200]>m
<Response [200]>n
<Response [200]>o
<Response [200]>p
<Response [200]>r
<Response [200]>s
<Response [200]>t
<Response [200]>u
<Response [200]>v
<Response [200]>w
I do get to all the pages, but soup does not work.
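The output hints at the bug: `baseurl` is a `requests.Response` object, so the f-string produces `<Response [200]>a` rather than a URL, and `BeautifulSoup` is then handed that string instead of the page's HTML. A minimal sketch of a fix (assuming the `one_column_bullet` class is what the site actually serves): keep the base URL as a plain string, fetch each letter's page, and parse `response.text`.

```python
import requests
from bs4 import BeautifulSoup

BASEURL = 'https://www.usa.gov/federal-agencies/'
VALID_PAGES = 'abcdefghijlmnoprstuvw'

def agency_lists(html):
    """Return every <ul class="one_column_bullet"> element found in the HTML."""
    soup = BeautifulSoup(html, 'html.parser')
    return soup.find_all('ul', {'class': 'one_column_bullet'})

def fetch_all():
    # Fetch each letter's index page and parse the HTML text,
    # not the URL string and not the Response object itself.
    for letter in VALID_PAGES:
        response = requests.get(f'{BASEURL}{letter}')
        for ul in agency_lists(response.text):
            print(ul)

# fetch_all()  # uncomment to run against the live site
```

The parsing step is split into `agency_lists()` so it can be tried on a saved page first, without hitting the network each time.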
I want to thank you, Larz60+, for your help. I will start back on it tomorrow.
renny