Another "lazier" approach is to use Selenium.
Selenium executes JavaScript, so we get the whole page source back rather than a warning that JavaScript must be turned on.
Example:
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from bs4 import BeautifulSoup

#--| Setup
options = Options()
options.add_argument("--headless")
browser = webdriver.Chrome(executable_path=r'C:\cmder\bin\chromedriver.exe', options=options)

#--| Parse or automation
browser.get('https://ucr.gov/enforcement/1712583')
soup = BeautifulSoup(browser.page_source, 'lxml')
status = soup.find('div', class_="sc-epnACN fEroud")
print(status.text)
browser.quit()
Output: UNREGISTERED
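A word of warning: class names like "sc-epnACN fEroud" are auto-generated (styled-components), so they can change on any redeploy of the site. A slightly more robust sketch is to match on the stable "sc-" prefix with a CSS attribute selector; the HTML snippet below is hypothetical, just mimicking the structure of the page.

from bs4 import BeautifulSoup

# Hypothetical markup standing in for browser.page_source; the real page may differ.
html = '<div class="sc-epnACN fEroud">UNREGISTERED</div>'
soup = BeautifulSoup(html, 'html.parser')

# Match the stable "sc-" prefix instead of the full generated class name.
status = soup.select_one('div[class^="sc-"]')
print(status.text)

The same select_one() call works unchanged on browser.page_source from the Selenium example above.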
For more about this, see Web-scraping part-2.