Loading HUGE data from Python into SQL SERVER
Hi All,

I have used the Python code below to insert a data frame from Python into a SQL Server database. But when I insert one lakh (100,000) rows, the operation takes more than an hour. Could I get optimized Python code for this task?


import time
import urllib

import pyodbc  # DBAPI driver used by the mssql+pyodbc dialect
from sqlalchemy import create_engine

start_time = time.time()

# Windows-authenticated connection to the PADB database on ROSQC50
params = urllib.parse.quote_plus(r'DRIVER={SQL Server};SERVER=ROSQC50;DATABASE=PADB;Trusted_Connection=yes')
conn_str = 'mssql+pyodbc:///?odbc_connect={}'.format(params)
engine = create_engine(conn_str)

# df is the pandas DataFrame built earlier in the script
df.to_sql(name='DummyTodaynow', con=engine, if_exists='append', index=False)

print("--- %s seconds ---" % (time.time() - start_time))
I would appreciate your help with this!

Thanks,
Sandeep
Obviously df.to_sql (and SQLAlchemy?) does not do a bulk import, but executes an individual INSERT for each row.
see https://github.com/pandas-dev/pandas/issues/8953
and also https://stackoverflow.com/questions/3381...r-database
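In practice, the usual remedy is to turn on pyodbc's fast_executemany so the rows are sent to the server in batches instead of one INSERT round trip per row. Below is only a minimal sketch reusing the connection details from the first post; it assumes SQLAlchemy 1.3 or newer, that the "ODBC Driver 17 for SQL Server" driver is installed (adjust the name to whatever driver you actually have), and that df is the same DataFrame as above.

import urllib
from sqlalchemy import create_engine

# Same server and database as in the original post; the newer ODBC
# driver name is an assumption -- change it to match your installation.
params = urllib.parse.quote_plus(
    r'DRIVER={ODBC Driver 17 for SQL Server};'
    r'SERVER=ROSQC50;DATABASE=PADB;Trusted_Connection=yes'
)

# fast_executemany=True (SQLAlchemy 1.3+) makes pyodbc send parameter
# batches instead of issuing one INSERT per row.
engine = create_engine(
    'mssql+pyodbc:///?odbc_connect={}'.format(params),
    fast_executemany=True,
)

# df is assumed to be the DataFrame prepared earlier in the script.
df.to_sql(
    name='DummyTodaynow',
    con=engine,
    if_exists='append',
    index=False,
    chunksize=10000,  # write in chunks so the parameter buffer stays bounded
)

The chunksize argument is optional; without it pandas hands all rows to one executemany() call, which is usually fine but can use a lot of memory for very wide tables.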
Thanks for the reply, Buran. I am not able to understand the code in the links you mentioned.
Is there any other code that would help?

Thanks,
Sandeep
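
If the installed SQLAlchemy is older than 1.3 and create_engine does not accept the fast_executemany argument, a commonly used alternative is to set the flag on each cursor through an event listener. Again this is just a sketch built on the connection string from the first post, with df standing for the same DataFrame as before.

import urllib
from sqlalchemy import create_engine, event

params = urllib.parse.quote_plus(r'DRIVER={SQL Server};SERVER=ROSQC50;DATABASE=PADB;Trusted_Connection=yes')
engine = create_engine('mssql+pyodbc:///?odbc_connect={}'.format(params))

# df.to_sql() issues executemany() calls under the hood; flipping
# pyodbc's fast_executemany flag there batches the inserts.
@event.listens_for(engine, 'before_cursor_execute')
def set_fast_executemany(conn, cursor, statement, parameters, context, executemany):
    if executemany:
        cursor.fast_executemany = True

# df is assumed to be the same DataFrame as in the original post.
df.to_sql(name='DummyTodaynow', con=engine, if_exists='append', index=False)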