Sep-29-2022, 05:14 AM
Hi Buran,
Superb! Thank you so much, I liked your code.
I have a similar situation here: extracting all records into a single CSV file.
Using fetchmany, I extracted 60 GB of data.
My code is working; can we make any improvements to the code below, please?
import csv
import time

import pyodbc

initial = time.time()

connection = pyodbc.connect(
    'DRIVER={ODBC Driver 17 for SQL Server};'
    'SERVER=DESKTOP-GQK64O6;DATABASE=Customer;Trusted_Connection=yes;'
)
cursor = connection.cursor()

qry = "select * from employee"
cursor.execute(qry)

x = 0  # running count of rows written
with open("E:\\backup\\output.csv", "w", newline="") as outfile:
    writer = csv.writer(outfile, quoting=csv.QUOTE_NONNUMERIC)
    # header row taken from the cursor metadata
    writer.writerow([col[0] for col in cursor.description])
    while True:
        # rows come back in chunks of 10,000, so the full 60 GB
        # never has to sit in memory at once
        rows = cursor.fetchmany(10000)
        if len(rows) == 0:
            print("no more records")
            break
        x = x + len(rows)
        print(x)
        writer.writerows(rows)

connection.close()
print("success")
print("time taken", time.time() - initial, "Seconds")
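One simplification I have been wondering about, just a rough sketch assuming the same connection string, query, and output path as above: a pyodbc cursor is itself iterable, so the rows could perhaps be streamed straight into writerows without the explicit fetchmany loop, at the cost of losing the running row count.

import csv
import time

import pyodbc

initial = time.time()

# same connection string, query and output path as above (assumed)
connection = pyodbc.connect(
    'DRIVER={ODBC Driver 17 for SQL Server};'
    'SERVER=DESKTOP-GQK64O6;DATABASE=Customer;Trusted_Connection=yes;'
)
cursor = connection.cursor()
cursor.execute("select * from employee")

with open("E:\\backup\\output.csv", "w", newline="") as outfile:
    writer = csv.writer(outfile, quoting=csv.QUOTE_NONNUMERIC)
    writer.writerow([col[0] for col in cursor.description])
    # the cursor yields rows one at a time, so the csv writer can
    # consume it directly without the rows piling up in memory
    writer.writerows(cursor)

connection.close()
print("time taken", time.time() - initial, "Seconds")

Not sure whether iterating the cursor this way is any faster than fetchmany for a dataset this size, so it may only be a readability win.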