Python Forum
[Tkinter] Does anybody know how to edit a Treeview in tkinter?
#1
How do I edit specific entries in a Treeview? Does anybody know?
Reply
#2
It makes it twice as hard when you split threads.
I took a look at your code from this post: https://python-forum.io/Thread-Tkinter-T...side-frame
So I assume you are talking about that. You use pack for geometry in that code. As I posted before, it's very difficult to control geometry in tkinter. Grid gives you much more control than pack, but even grid can be difficult to use.
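For what it's worth, here is a tiny sketch of the difference; grid places widgets by explicit row and column instead of packing them in order (the labels and entries are just for illustration):

import tkinter as tk

root = tk.Tk()
# grid positions each widget in an explicit row/column cell
tk.Label(root, text="Name:").grid(row=0, column=0, sticky="e")
tk.Entry(root).grid(row=0, column=1, sticky="ew")
tk.Label(root, text="City:").grid(row=1, column=0, sticky="e")
tk.Entry(root).grid(row=1, column=1, sticky="ew")
root.columnconfigure(1, weight=1)   # let the entry column stretch with the window
root.mainloop()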

As far as accessing the contents of a Treeview goes, what control do you have as it stands now?
  • Can you detect single and/or double mouse clicks?
  • Are you having trouble determining the index of the entry being clicked?
You need to bind the mouse clicks to an event handler (a service routine).
The Listbox and Treeview operate almost identically as far as item selection, clearing, scrolling, etc. are concerned, so examples for either one will show you how it operates.
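Here is a minimal sketch of that idea for a plain ttk.Treeview; the column name and the "edited" value are made up for illustration:

import tkinter as tk
from tkinter import ttk

def on_double_click(event):
    # service routine: find the row that was double-clicked
    item_id = tree.identify_row(event.y)
    if not item_id:
        return
    # read the current values, change one, and write them back
    values = list(tree.item(item_id, "values"))
    values[0] = "edited"
    tree.item(item_id, values=values)

root = tk.Tk()
tree = ttk.Treeview(root, columns=("name",), show="headings")
tree.heading("name", text="Name")
for n in ("alpha", "beta", "gamma"):
    tree.insert("", "end", values=(n,))
tree.pack(fill="both", expand=True)

tree.bind("<Double-1>", on_double_click)   # bind the double-click event
root.mainloop()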

You should read:
http://infohost.nmt.edu/tcc/help/pubs/tk...eview.html
and
http://infohost.nmt.edu/tcc/help/pubs/tk...vents.html
Also see: http://nullege.com/codes/search?cq=tkinter.ttk.Treeview
for example code.
The example that I referenced in the previous thread (it used a Listbox, but the operation is the same) contains code for handling events, highlighting, and clearing that would almost certainly be the same for a Treeview.
Reply
#3
Yes sir, so I dropped that plan and went with a new one. I found this library: http://pandastable.readthedocs.io/en/lat...mples.html


I inserted my database code into it:

from tkinter import *
import tkinter as tk
from tkinter import ttk
from pandastable import Table, TableModel
import sqlite3
import pandas as pd
import Backend

root = tk.Tk()
root.geometry("1250x650+0+0")
root.title("MAYA")
root.configure(background="black")

f = Frame(root)
f.pack(fill=BOTH,expand=1)
conn = sqlite3.connect("99_data_increment.db")
df = pd.read_sql_query("SELECT * FROM crawled",conn)
pt = Table(f, dataframe=df, showtoolbar=True, showstatusbar=True)
pt.show()

State = StringVar()
XID = StringVar()
Project_Name = StringVar()
City = StringVar()
Main_City = StringVar()
Registration_Number = StringVar()
Promoter_Name = StringVar()
Rera_URL = StringVar()
PDF_text = StringVar()
Crawled_Date = StringVar()
Status = StringVar()
Names = StringVar()
Transaction_Date = StringVar()
Comments = StringVar()
Call_Contact_Number = StringVar()
Creation_Type = StringVar()
Builder_Website = StringVar()
# Note: these StringVars are not attached to any Entry widgets, so
# Backend.insert() receives empty StringVar objects rather than plain strings.
Backend.insert(State, XID, Project_Name, City, Main_City, Registration_Number, Promoter_Name, Rera_URL, PDF_text, Crawled_Date, Status, Names, Transaction_Date, Comments, Call_Contact_Number, Creation_Type, Builder_Website)

root.mainloop()
backend code:

import sqlite3

def connect():
    conn=sqlite3.connect("99_data_increment.db")
    cur=conn.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS crawled (id INTEGER PRIMARY KEY, State , XID , Project_Name , City , Main_City , Registration_Number , Promoter_Name , Rera_URL , PDF_text, Crawled_Date , Status, Names, Transaction_Date, Comments, Call_Contact_Number, Creation_Type, Builder_Website)")
    conn.commit()
    conn.close()
    
def insert(State, XID, Project_Name, City, Main_City, Registration_Number, Promoter_Name, Rera_URL, PDF_text, Crawled_Date, Status, Names, Transaction_Date, Comments, Call_Contact_Number, Creation_Type, Builder_Website):
    conn=sqlite3.connect("99_data_increment.db")
    cur=conn.cursor()
    cur.execute("INSERT INTO crawled VALUES (NULL,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)",(State, XID, Project_Name, City, Main_City, Registration_Number, Promoter_Name, Rera_URL, PDF_text, Crawled_Date, Status, Names, Transaction_Date, Comments, Call_Contact_Number, Creation_Type, Builder_Website))
    conn.commit()
    conn.close()
    view()    # note: view() is not defined in this module

connect()
But how do I update the database now?

Please check this minimal example from the library's documentation:

http://pandastable.readthedocs.io/en/lat...mples.html
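(For reference, one way to change an existing row would be an UPDATE keyed on the id column; the update_status() helper below is only a sketch and is not part of the posted Backend module.)

import sqlite3

def update_status(row_id, new_status):
    # change one column of a single row, matched by its primary key
    conn = sqlite3.connect("99_data_increment.db")
    cur = conn.cursor()
    cur.execute("UPDATE crawled SET Status=? WHERE id=?", (new_status, row_id))
    conn.commit()
    conn.close()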
Reply
#4
Please don't PM. Share with the community.
I will take a look at this later today, and if your post hasn't been answered, I'll give it a go.
Reply
#5
Hi sir,

I have reached this point:

#C:/Users/prince.bhatia/Desktop/projects/Rera_App/project-3.py
import tkinter as tk
from tkinter import ttk

from pandastable import Table, TableModel

import sqlite3

import pandas as pd
import Backend

# --- classes ---

class MyTable(Table):

    # handleCellEntry() runs when a cell edit is committed, so overriding it
    # gives a hook for reacting to changes (e.g. updating the database later)
    def handleCellEntry(self, row, col):
        super().handleCellEntry(row, col)
        print('changed:', row, col, "(TODO: update database)")

def save():
    # write the (possibly edited) DataFrame back to the "crawled" table
    print(df)
    result = df.to_sql("crawled", conn, if_exists="replace")
    print(result)

# --- main ---

root = tk.Tk()
#root.geometry("1250x650+0+0")
#root.title("MAYA")
#root.configure(background="black")

f = tk.Frame(root)
f.pack(fill="both", expand=True)

conn = sqlite3.connect("99_data_increment.db")
df = pd.read_sql_query("SELECT * FROM crawled", conn)

pt = MyTable(f, dataframe=df, showtoolbar=True, showstatusbar=True) # <-- MyTable
pt.show()

save_button = tk.Button(root, text="Save", command=save)
save_button.pack(fill="both", expand=True)

root.mainloop()
Right now it is creating one extra index column automatically when I run the code.


I am also attaching my backend code:

import sqlite3

def connect():
    conn=sqlite3.connect("99_data_increment.db")
    cur=conn.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS crawled (id INTEGER PRIMARY KEY, State , XID , Project_Name , City , Main_City , Registration_Number , Promoter_Name , Rera_URL , PDF_text, Crawled_Date , Status, Names, Transaction_Date, Comments, Call_Contact_Number, Creation_Type, Builder_Website)")
    conn.commit()
    conn.close()
    
def insert(State, XID, Project_Name, City, Main_City, Registration_Number, Promoter_Name, Rera_URL, PDF_text, Crawled_Date, Status, Names, Transaction_Date, Comments, Call_Contact_Number, Creation_Type, Builder_Website):
    conn=sqlite3.connect("99_data_increment.db")
    cur=conn.cursor()
    cur.execute("INSERT INTO crawled VALUES (NULL,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)",(State, XID, Project_Name, City, Main_City, Registration_Number, Promoter_Name, Rera_URL, PDF_text, Crawled_Date, Status, Names, Transaction_Date, Comments, Call_Contact_Number, Creation_Type, Builder_Website))
    conn.commit()
    conn.close()

connect()
Right now I have not received any errors. I think it is also adding columns when adding rows, but I am not sure.
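The extra index column most likely comes from pandas writing the DataFrame index when save() calls to_sql(); a sketch of a save that leaves it out, assuming the same df and conn as above:

def save():
    # if_exists="replace" rewrites the table; index=False stops pandas from
    # adding the DataFrame index as an extra column
    df.to_sql("crawled", conn, if_exists="replace", index=False)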
Reply
#6
Hi sir,

Did you check that?
Reply
#7
Hi sir,

Did you check that?
Reply
#8
Couldn't get it to run; it needs the dataset.

Error:
cur.execute("INSERT INTO crawled VALUES (NULL,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)",(State, XID, Project_Name, City, Main_City, Registration_Number, Promoter_Name, Rera_URL, PDF_text, Crawled_Date, Status, Names, Transaction_Date, Comments, Call_Contact_Number, Creation_Type, Builder_Website)) sqlite3.InterfaceError: Error binding parameter 0 - probably unsupported type. cur.execute("INSERT INTO crawled VALUES (NULL,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)",(State, XID, Project_Name, City, Main_City, Registration_Number, Promoter_Name, Rera_URL, PDF_text, Crawled_Date, Status, Names, Transaction_Date, Comments, Call_Contact_Number, Creation_Type, Builder_Website)) sqlite3.InterfaceError: Error binding parameter 0 - probably unsupported type.
Reply
#9
Hi sir, I have a crawler that will help you get the dataset; the code below will generate it.
import csv
from bs4 import BeautifulSoup
import requests
import time
import pdb
import sqlite3
import datetime
from datetime import date

url = "http://up-rera.in/projects"
url1 = "http://up-rera.in"
final_data = []
dct = {}


def writefiles(alldata, filename):
    with open ("./"+ filename, "w") as csvfile:
        csvfile = csv.writer(csvfile, delimiter=",")
        csvfile.writerow("")
        for i in range(0, len(alldata)):
            csvfile.writerow(alldata[i])


def getbyGet(url, values):
    res = requests.get(url, data=values)
    text = res.text
    return text


def readHeaders():
    global url, url1
    html = getbyGet(url, {})
    soup  = BeautifulSoup(html, "html.parser")
    EVENTTARGET = soup.select("#__VIEWSTATE")[0]['value']   # note: this selects __VIEWSTATE, not __EVENTTARGET
    EVENTVALIDATION = soup.select("#__EVENTVALIDATION")[0]['value']
    VIEWSTATE = soup.select("#__VIEWSTATE")[0]['value']
    #VIEWSTATEGENERATOR = soup.select("#__VIEWSTATEGENERATOR")[0]["value"]
    headers= {'Accept':'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
              'Content-Type':'application/x-www-form-urlencoded',
              'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:59.0) Gecko/20100101 Firefox/59.0'}

    formfields =  {'__EVENTARGUMENT':'',
                  '__EVENTVALIDATION':EVENTVALIDATION,
                  '__EVENTTARGET':EVENTTARGET,
                  '__VIEWSTATE':VIEWSTATE,
                  "__VIEWSTATEGENERATOR": "4F1A7E70",
                  'ctl00$ContentPlaceHolder1$btnSearch':'Search',
                  'ctl00$ContentPlaceHolder1$DdlprojectDistrict':0, #this is where your city name changes in each iteration
                  'ctl00$ContentPlaceHolder1$txt_regid':'',
                  'ctl00$ContentPlaceHolder1$txtProject':''}
    s = requests.session()
    conn = sqlite3.connect("99_data_increment.db", timeout=10)
    fdate = date.today()
    #cur = conn.cursor()
    conn.execute("""CREATE TABLE IF NOT EXISTS crawled
                      (id INTEGER PRIMARY KEY, State text, XID text, Project_Name text, City text, Main_City text, Registration_Number text, Promoter_Name text, Rera_URL text, PDF_text, Crawled_Date text, Status text, Names text, Transaction_Date text, Comments text, Call_Contact_Number text, Creation_Type text, Builder_Website text,
                      CONSTRAINT number_unique UNIQUE (Registration_Number))
                      """)
#data = State text, XID text, Project Name text, City text, Main City text, Registration Number text, Promoter Name text, Rera URL text, PDF text, Crawled Date text, Status text, Names text, Transaction Date text, Comments text, Call Contact Number text, Creation Type text, Builder Website text
    cur = conn.cursor()
    res = s.post(url, data=formfields, headers=headers).text
    soup = BeautifulSoup(res, "html.parser")
    get_details = soup.find_all(id="ctl00_ContentPlaceHolder1_GridView1")
    for details in get_details:
        gettr = details.find_all("tr")[1:]
        for tds in gettr:
            td = tds.find_all("td")[1]
            rera = td.find_all("span")
            rnumber = ""
            data1 = []
            blank = ""
            for num in rera:
                rnumber = num.text
                data1.append(rnumber)
                sublist = []
                sublist.append(rnumber)
            name = tds.find_all("td")[2]
            prj_name = name.find_all("span")
            prj = ""
            for prjname in prj_name:
                prj = prjname.text
                sublist.append(prj)
            promoter_name = tds.find_all("td")[3]
            promoter = promoter_name.find_all("span")
            prom = ""
            for promname in promoter:
                prom = promname.text
                sublist.append(prom)
            district = tds.find_all("td")[4]
            dist = district.find_all("span")
            district_name = ""
            for districtname in dist:
                district_name = districtname.text
                sublist.append(district_name)
            protype = tds.find_all("td")[5]
            project = protype.find_all("span")
            projectype = ""
            for prjtype in project:
                projectype = prjtype.text
                sublist.append(projectype)
            link = "http://up-rera.in/View_Registration_Details.aspx?binid="
            for i in data1:
                a = i.split("J")
                links = link+a[1]+"&hfFlag=edit&ddlPRJ=0&txtPRJ="
                sublist.append(links)
            final_data.append(sublist)
#data = State text, XID text, Project Name text, City text, Main City text, Registration Number text, Promoter Name text, Rera URL text, PDF text, Crawled Date text, Status text, Names text, Transaction Date text, Comments text, Call Contact Number text, Creation Type text, Builder Website text

            cur.execute("INSERT OR IGNORE INTO crawled VALUES (NULL,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)",("Uttar Pradesh",blank,prj,district_name,blank,rnumber, prom , links, blank,fdate, blank, blank, blank, blank, blank, blank, blank ))
    conn.commit()
        #print(final_data)
    return final_data


def main():
    datas = readHeaders()
    writefiles(datas, "Up-new.csv")
main()
Reply
#10
Sorry, but I don't want to be scraping a website I know nothing about.
Reply


