Python Forum

selenium click in iframe fails
Hi guys,

What I want: to click the 'sign in' button on this website: https://sites.google.com/site/ActiveRumblers/deck
I can get the input fields populated with data, but the click action ... nope. What am I doing wrong?


from selenium import webdriver
from selenium.webdriver.common.keys import Keys
driver = webdriver.Chrome("C:/Python27/include/chromedriver.exe")
import time

URL = "https://sites.google.com/site/ActiveRumblers/deck"
driver.get(URL)

# ensure the page is loaded
time.sleep(3)

# the login form is 4 iframes deep
driver.switch_to.frame(0)
driver.switch_to.frame(0)
driver.switch_to.frame(0)
driver.switch_to.frame(0)

# got the XPath using Google Chrome: Inspect -> Copy XPath
driver.find_element_by_xpath('/html/body/form/table/tbody/tr/td[1]/table/tbody/tr[2]/td[1]/div/input[3]').click()
driver.switch_to.default_content()

time.sleep(2)
print("done")
Change line 19 (temporarily):
driver.find_element_by_xpath('/html/body/form/table/tbody/tr/td[1]/table/tbody/tr[2]/td[1]/div/input[3]').click()
to:
element = driver.find_element_by_xpath('/html/body/form/table/tbody/tr/td[1]/table/tbody/tr[2]/td[1]/div/input[3]')
print(element)
Is element empty?
Hmmm, yes, it is empty, so that explains the non-click action.

Strange though, since the XPath is correct (looking at the HTML of that nested iframe).

Altering the XPath to something like:
element = driver.find_element_by_xpath('//input[@value="SIGN IN"]')
# element by xpath = <selenium.webdriver.remote.webelement.WebElement (session="74feb924af78ffa24ec43105c5c80ffc", element="602b289d-b6ff-45e4-aeb7-3ed415cd942b")>
However, when I add the click() to it ... nothing happens.
element = driver.find_element_by_xpath('//input[@value="SIGN IN"]')
element.click()
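When find_element() succeeds but .click() appears to do nothing, the element is often not yet interactable (overlapped, off-screen, or still loading). A few things worth trying are sketched below; this assumes the driver is already switched into the correct iframe, and none of them is a guaranteed fix.

from selenium.webdriver.common.by import By
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# 1) wait until Selenium considers the element clickable
element = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.XPATH, '//input[@value="SIGN IN"]'))
)

# 2) scroll it into view and click via ActionChains
driver.execute_script("arguments[0].scrollIntoView(true);", element)
ActionChains(driver).move_to_element(element).click().perform()

# 3) last resort: trigger the click directly with JavaScript
driver.execute_script("arguments[0].click();", element)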
(Bump)

Anybody have a clue?
How did you get the XPath? Was it using Inspect?
(Apr-27-2020, 03:55 PM) Larz60+ Wrote: How did you get the XPath? Was it using Inspect?

Quote:'/html/body/form/table/tbody/tr/td[1]/table/tbody/tr[2]/td[1]/div/input[3]'
Was obtained using inspect.

Quote:'//input[@value="SIGN IN"]'
was written by myself, using info I found online.
Quote:Was obtained using inspect.
I use Mozilla Firefox as my browser, and have encountered issues from time to time where the XPath appeared to be wrong.
What I do as a workaround is to get the XPath of a major node that occurs just prior to the one I want, and then traverse down the chain until I reach the tag that I want.
For example, if I want a specific tr tag in a table, I'll find the table tag, then switch over to BeautifulSoup (using 'browser.page_source' to get the starting source); see the BeautifulSoup sketch after the example below. Not a perfect way to do it, but it works.
Are you using Firefox?

Here's a complete example where I do this:
from selenium import webdriver
from selenium.webdriver.common.by import By
from bs4 import BeautifulSoup
# BusinessPaths, PrettifyPage and CreateDict are local helper modules
# from the poster's own project, not third-party packages
import BusinessPaths
import time
import PrettifyPage
import CreateDict


class PreviewSearchPage:
    def __init__(self):
        self.bpath = BusinessPaths.BusinessPaths()
        self.pp = PrettifyPage.PrettifyPage()
        self.cd = CreateDict.CreateDict()

        self.analyze_page()

    def start_browser(self):
        caps = webdriver.DesiredCapabilities().FIREFOX
        caps["marionette"] = True
        self.browser = webdriver.Firefox(capabilities=caps)

    def stop_browser(self):
        self.browser.close()

    def save_page(self, filename):
        soup = BeautifulSoup(self.browser.page_source, "lxml")
        with filename.open('w') as fp:
            fp.write(self.pp.prettify(soup, 2))
    
    def analyze_page(self):
        self.start_browser()
        self.get_search_page('Andover')
        self.stop_browser()
    
    def get_search_page(self, searchitem):
        # pick city with multiple pages
        url = self.bpath.base_url
        self.browser.get(url)
        time.sleep(2)
        print(f'Main Page URL: {self.browser.current_url}')
        self.browser.find_element(By.XPATH, '/html/body/div[2]/div[4]/div/form/div/div/span[1]/select/option[3]').click()
        searchbox = self.browser.find_element(By.XPATH, '//*[@id="query"]')
        searchbox.clear()
        searchbox.send_keys(searchitem)
        self.browser.find_element(By.XPATH, '/html/body/div[2]/div[4]/div/form/div/div/span[3]/button').click()
        time.sleep(2)
        print(f'Results Page 1 URL: {self.browser.current_url}')
        # get page 2
        # find next page button and click
        self.browser.find_element(By.XPATH, '/html/body/div[2]/div/div[2]/div[3]/div[2]/div/span[1]/a/icon').click()
        time.sleep(2)
        print(f'Results Page 2 URL: {self.browser.current_url}')
        # Get url of a detail page
        self.browser.find_element(By.XPATH, '/html/body/div[2]/div/div[2]/table/tbody/tr[1]/td[1]/a').click()
        time.sleep(2)
        print(f'Detail Page URL: {self.browser.current_url}')
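The example above stops short of the "switch over to BeautifulSoup" step described earlier, so here is a minimal sketch of that idea. It assumes a Selenium browser already positioned on a page that contains the table; the helper name and the "first table" anchor are just for illustration.

from bs4 import BeautifulSoup

def rows_from_first_table(browser):
    # parse the rendered page and walk down from a major node (here the
    # first table) instead of relying on one long absolute XPath
    soup = BeautifulSoup(browser.page_source, "lxml")
    table = soup.find("table")
    if table is None:
        return []

    rows = []
    for tr in table.find_all("tr"):
        cells = [td.get_text(strip=True) for td in tr.find_all("td")]
        if cells:
            rows.append(cells)
    return rows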