Python Forum

Full Version: WebScraping using Selenium library
Hi there, does anyone know how to properly handle a WebDriverWait TimeoutException? I'm not sure I'm doing everything right.

Let's suppose we have the following task: we need to parse information from a dynamic website whose main page has a "show more" button.


import time

from selenium import webdriver
from selenium.common.exceptions import TimeoutException
from selenium.webdriver import ActionChains
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Chrome()


def get_source_page(uri: str) -> None:  # function for getting the page source
    driver.get(uri)

    while True:  # keep clicking "show more" until every dynamic page has been loaded
        try:
            find_more_element = WebDriverWait(driver=driver, timeout=2).until(
                expected_conditions.element_to_be_clickable((By.CLASS_NAME, "button-show-more"))
            )

            # until() either returns the element or raises TimeoutException,
            # so no extra truthiness check is needed here
            actions = ActionChains(driver)
            actions.move_to_element(find_more_element).click().perform()
            time.sleep(2.5)
        except TimeoutException:
            # WebDriverWait raises TimeoutException when no clickable
            # "button-show-more" element appears within the timeout
            with open('index.html', 'w') as file:
                file.write(driver.page_source)

            break

The idea is that once we have gone through all the pages and there is no "show more" button left, a TimeoutException is raised, and we catch it to write the fully loaded page source into the index.html file using a with context manager.
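Using TimeoutException as the loop terminator is a legitimate pattern here, since WebDriverWait.until() either returns the element or raises. The control flow can be sketched without a real browser; the FakeDriver class below is a hypothetical stand-in for illustration only, not part of Selenium:

```python
class TimeoutException(Exception):
    """Stand-in for selenium.common.exceptions.TimeoutException."""


class FakeDriver:
    """Hypothetical driver: allows `pages` clicks on 'show more', then times out."""

    def __init__(self, pages: int):
        self.pages_left = pages
        self.clicks = 0
        self.page_source = "<html>page 0</html>"

    def find_show_more(self):
        # Mimics WebDriverWait(...).until(...): raise when nothing is clickable
        if self.pages_left == 0:
            raise TimeoutException("no clickable 'button-show-more'")
        self.pages_left -= 1
        self.clicks += 1
        self.page_source = f"<html>page {self.clicks}</html>"


def load_all_pages(driver: FakeDriver) -> str:
    """Click 'show more' until TimeoutException, then return the final source."""
    while True:
        try:
            driver.find_show_more()
        except TimeoutException:
            # Same shape as the original loop: the exception signals
            # "no more pages", so we stop and keep the accumulated source
            return driver.page_source


final_source = load_all_pages(FakeDriver(pages=3))
print(final_source)  # -> <html>page 3</html>
```

The point of the sketch: the try/except is the loop's only exit, exactly as in the original get_source_page, so nothing after the click needs to check whether the button was found.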