
Full Version: Insert 10 GB CSV files into SQL table via Python
Hi Team,

I want to insert CSV records into a SQL table via Python. My CSV file is 10 GB in size.
I found this code on Google. Can you suggest a better way if someone has done this before?



import pyodbc
import pandas as pd
import csv
import time

# Define the SQL Server login and password
sql_login = 'XXXX\\XXX-XXX-XXX'
sql_password = 'xxxxxxx'

# Connect to the SQL Server database
connection = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL Server};SERVER=DESKTOP-GQK64O6;DATABASE=Customer;UID=' + sql_login + ';PWD=' + sql_password + ';')
cursor = connection.cursor()

# Define the file path and chunk size
file_path = 'abc.csv'
chunk_size = 1000

# Start the timer
start_time = time.time()

# Create a pandas dataframe iterator to read the CSV file in chunks
csv_iterator = pd.read_csv(file_path, chunksize=chunk_size, sep="|")

# Loop through the dataframe iterator
for chunk in csv_iterator:
    # Get the field names for the chunk
    field_names = list(chunk.columns)

    # Concatenate the field names into a string
    field_names_concatenated = ",".join(field_names)

    # Create a list of values for the chunk
    values = chunk.values.tolist()

    # Create a string of percents based on the number of values in each row
    percents_concatenated = ",".join(len(field_names) * ["?"])

    # Create the SQL statement
    sql = f"INSERT INTO employee ({field_names_concatenated}) VALUES ({percents_concatenated})"

    # Execute the SQL statement for the chunk
    cursor.executemany(sql, values)
    connection.commit()

    print(cursor.rowcount, "details inserted")

# Calculate the elapsed time
elapsed_time = time.time() - start_time
print(f"Elapsed time: {elapsed_time:.2f} seconds")
You can load a CSV file from pandas directly into the database.
The CSV data may need to be 'normalized'; see https://python-forum.io/thread-38263-pos...#pid161978
This uses an SQLite model, which is constructed like the example here: https://python-forum.io/thread-24127.html
Search for 'Database Model'.
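
If you have not built a model before, a minimal SQLAlchemy sketch of one could look like this; the Employee table and its columns are made up for illustration, and SQLite is used only to keep the sketch self-contained:

# Minimal SQLAlchemy model sketch; table and column names are hypothetical.
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Employee(Base):
    __tablename__ = 'employee'
    id = Column(Integer, primary_key=True)
    name = Column(String(100))
    department = Column(String(100))

# SQLite keeps the sketch self-contained; swap the URL for your own database.
engine = create_engine('sqlite:///customer.db')
Base.metadata.create_all(engine)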

Once you have a model, you can load the table something like this:

df = pd.read_csv(fn)
df.to_sql(tablename, con=self.Model.engine, if_exists='replace')
The code above was taken from the tutorial mentioned below.
The DB model can easily be constructed by hand, or, if you want to get fancy, the tutorial below shows how to automate everything from model generation to database load.

See the tutorial here if interested.
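
For a 10 GB file you would not read everything in one go before calling to_sql; a chunked variant of the same load might look like the sketch below. The file name, separator, table name and connection URL are placeholders, and 'append' is used so each chunk adds to the table instead of replacing it.

import pandas as pd
from sqlalchemy import create_engine

# Placeholder connection URL; point it at your own database.
engine = create_engine('sqlite:///customer.db')

# Read the big CSV in chunks and append each chunk to the table.
for i, chunk in enumerate(pd.read_csv('abc.csv', sep='|', chunksize=100_000)):
    chunk.to_sql('employee', con=engine, if_exists='append', index=False)
    print(f"chunk {i} loaded")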
(Apr-28-2023, 09:33 AM)mg24 Wrote: my csv file size is 10 gb,
For data this big, Dask or Polars is a better fit.
Dask DataFrame copies the pandas DataFrame API, so it will work much the same as Pandas.

Example with timing.
# pip install "dask[complete]"

import time
from dask import dataframe as dd

start = time.time()
df = dd.read_csv('large.csv')
end = time.time()
print(f"Total Time: {(end-start)} sec")
Just by doing this, Dask already does a lot: e.g. with a medium-size .csv of 230 MB, Dask reads it in 0.01 sec while Pandas reads it in 6.2 sec.
Dask utilizes multiple CPU cores by internally chunking the dataframe and processing the chunks in parallel.
For example, say you want to import 10 GB of data on a machine with e.g. 6 GB of RAM.
This can't be achieved with Pandas, since the whole dataset doesn't fit into memory in a single shot (without chunking it up), but Dask can do it.
Instead of computing right away, Dask creates a graph of tasks that describes how to perform the work.
It's lazy computation, which means that Dask's task scheduler builds the graph first and only computes it when the result is requested.
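
A small sketch of that laziness; the 'salary' column is just an assumed column in the CSV:

import dask.dataframe as dd

# Nothing is read yet; this only builds the task graph.
df = dd.read_csv('large.csv')

# Still lazy: mean() just adds more tasks to the graph.
mean_salary = df['salary'].mean()

# compute() actually reads the file and runs the graph in parallel.
print(mean_salary.compute())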