Nov-17-2022, 08:23 AM
Hi Team,
I need your help improving my existing code using multithreading or multiprocessing for the situation below.
I have 10 SQL tables in a single database.
I want to extract all of the tables in parallel,
without causing memory pressure or slowness for other team members on the shared server.
Suppose I have the 10 tables below in the Customer database.
I will accept the database name as an argument from the user.
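For taking the database name from the user, a minimal sketch with the standard-library argparse module could look like this (the argument name and description are my own assumptions, not from the post):

```python
import argparse

def parse_args(argv=None):
    # Hypothetical CLI: the database name is a single positional argument,
    # e.g.  python extract.py Customer
    parser = argparse.ArgumentParser(
        description="Extract all tables from the given SQL Server database"
    )
    parser.add_argument("database", help="database name, e.g. Customer")
    return parser.parse_args(argv)

if __name__ == "__main__":
    args = parse_args()
    print(args.database)
```

The parsed name can then be substituted into the DATABASE= part of the pyodbc connection string.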
SQL table list: Table1, Table2, Table3, Table4, Table5, Table6, Table7, Table8, Table9, Table10.

My current code (it extracts one table, writing each row to its own CSV file):

import pyodbc
import csv
import os

connection = pyodbc.connect(
    'DRIVER={ODBC Driver 17 for SQL Server};'
    'SERVER=DESKTOP-GQK64O6;DATABASE=Customer;'
    'Trusted_Connection=yes;'
)
cursor = connection.cursor()

tbl = "Table1"  # table to extract (this was missing before, so folderPath raised a NameError)
qry = f"SELECT * FROM {tbl}"
cursor.execute(qry)
data = cursor.fetchall()
print(len(data))

folderPath = f"C:\\Users\\malle\\OneDrive\\Desktop\\C\\test_data\\output{tbl}"
os.makedirs(folderPath, exist_ok=True)  # make sure the output folder exists

count = 0
for x in data:
    count = count + 1
    fname = "Row" + str(count) + ".csv"
    fullpath = os.path.join(folderPath, fname)
    with open(fullpath, "w", newline="") as outfile:
        writer = csv.writer(outfile, delimiter="|", quoting=csv.QUOTE_NONNUMERIC)
        writer.writerow(col[0] for col in cursor.description)  # header row
        writer.writerow(x)                                     # data row
    print(f"I am row {count}", x)

cursor.close()
connection.close()
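Since this work is I/O-bound (database reads and file writes), one common pattern is a ThreadPoolExecutor with a capped number of workers, giving each worker its own pyodbc connection (sharing one connection across threads is not safe in general). The sketch below simulates the per-table work so the parallel pattern itself is runnable; the worker body is where the real pyodbc query and CSV writing would go, and the table names and worker count are assumptions, not tested code:

```python
from concurrent.futures import ThreadPoolExecutor

TABLES = [f"Table{i}" for i in range(1, 11)]  # Table1 .. Table10

def extract_table(table_name):
    # Real job (sketch): open a pyodbc connection here, one per worker,
    # run f"SELECT * FROM {table_name}", and write the rows out as CSV.
    # Simulated with a plain return value so the pattern runs anywhere.
    return f"{table_name}: done"

def extract_all(tables, max_workers=4):
    # max_workers caps concurrency, which limits memory use and the
    # load placed on the shared database server for other team members.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(extract_table, tables))

if __name__ == "__main__":
    for result in extract_all(TABLES):
        print(result)
```

If the per-row CSV writing turns out to be CPU-heavy rather than I/O-bound, the same structure works with ProcessPoolExecutor instead, at the cost of one connection per process.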