Feb-17-2018, 06:50 PM
Most pages have multiple h2's, for example.
I can manually add [0], [1], and [2], which works, but what if a page had 20 h2's? How do I scrape all of them?
I tried a couple of things. I thought doing [0:] would work, but that's not the case. I'm not really sure what to type into Google, or into the forum search, to find an answer either.
Current code, which manually gets as many tags as I specify:
from bs4 import BeautifulSoup
import requests

url = 'https://python.org'
url_get = requests.get(url)
soup = BeautifulSoup(url_get.content, 'lxml')
print(soup.select('h2')[0].text)
print(soup.select('h2')[1].text)
print(soup.select('h2')[2].text)
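For reference, soup.select('h2') returns a list, so one way to cover any number of h2's is to loop over that list instead of indexing it one entry at a time. A minimal sketch, using an inline HTML string so it runs without a network request (the tag contents here are made up for illustration):

```python
from bs4 import BeautifulSoup

# Stand-in HTML with several h2 tags; a fetched page works the same way
html = "<html><body><h2>One</h2><h2>Two</h2><h2>Three</h2></body></html>"
soup = BeautifulSoup(html, 'html.parser')

# select() returns a list of matching tags, so a for loop
# handles every h2 on the page, however many there are
for h2 in soup.select('h2'):
    print(h2.text)
```

The [0:] slice likely failed because it returns the whole list, and a list has no .text attribute; the missing piece is iterating over the elements rather than slicing.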