Python Forum

Full Version: Newbie question for bulk insert into SQL Server database
I am using SQL Server 2017 to store people's information. I created a simple table, [People], in the [mydb] database:
CREATE TABLE [dbo].[People] (
[ID]         [int]          NOT NULL,
[Name]       [nvarchar](40) NOT NULL,
[Age]        [int]          NOT NULL,
[Height]     [int]          NOT NULL,
[Weight]     [int]          NOT NULL,
CONSTRAINT PK_People PRIMARY KEY(ID))
GO
import collections

Person = collections.namedtuple('Person', ['Name', 'Age', 'Height', 'Weight'])

person1 = Person(Name='Bob', Age=55, Height=165, Weight=70)
person2 = Person(Name='Doe', Age=45, Height=185, Weight=90)
person3 = Person(Name='John', Age=50, Height=175, Weight=85)
people = [person1, person2, person3]
I will have around 30,000 records of people like the above.
I want to use pyodbc, or whatever package is appropriate, to insert all of the people records into the [People] table.
I also have a primary key, [ID]; I want each record's position in the list to be its primary key, so for the example above the IDs would be 1, 2, 3, and so on.
I can't figure out how to bulk insert into a SQL Server 2017 database. Since SQL Server 2017 has native support for Python, I assume this should be possible.
Please advise!
Thanks,
I think you are going to have to explain what you want help with a bit more explicitly. The task you are asking about can be fairly complex, and you mostly posted a snippet of SQL code, not Python. I don't know how much help you will get with that little information.
Sorry, I put the T-SQL statements inside Python code.
Here is my question again:
I am using SQL Server 2017 to store people's information. I created a simple table, [People], in the [mydb] database:
CREATE TABLE [dbo].[People] (
[ID] [int] NOT NULL,
[Name] [nvarchar](40) NOT NULL,
[Age] [int] NOT NULL,
[Height] [int] NOT NULL,
[Weight] [int] NOT NULL,
CONSTRAINT PK_People PRIMARY KEY(ID))
GO
I have the following Python code to generate records of people, and I want to save all of those records into the SQL Server 2017 database. The database name is mydb, and the table name is [People].
import collections
 
Person = collections.namedtuple('Person', ['Name', 'Age', 'Height', 'Weight'])
 
person1 = Person(Name='Bob', Age=55, Height=165, Weight=70)
person2 = Person(Name='Doe', Age=45, Height=185, Weight=90)
person3 = Person(Name='John', Age=50, Height=175, Weight=85)
......
person30000 = Person(Name='XYZ', Age=30, Height=160, Weight=65)
people = [person1, person2, person3, ... person30000]
I will have 30,000 records of people like the above, held in a list of namedtuples.
I want to use pyodbc, or whatever package is appropriate, to insert all of the people records into the [People] table.
I also have a primary key, [ID]; I want each record's position in the list to be its primary key, so for the example above the IDs would be 1, 2, 3, and so on.
I can't figure out how to bulk insert into a SQL Server 2017 database. Since SQL Server 2017 has native support for Python, I assume this should be possible.
Please advise!
Thanks,
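One way to approach this is the DB-API `executemany` pattern, generating each ID from the record's position in the list with `enumerate`. The sketch below is illustrative, not tested against SQL Server: it uses the standard-library sqlite3 module so it runs anywhere. With pyodbc against SQL Server the code has the same shape, but you would build the connection from your own driver/server/database details (placeholders here) and the `?` parameter style is the same.

```python
import collections
import sqlite3

Person = collections.namedtuple('Person', ['Name', 'Age', 'Height', 'Weight'])

people = [
    Person(Name='Bob', Age=55, Height=165, Weight=70),
    Person(Name='Doe', Age=45, Height=185, Weight=90),
    Person(Name='John', Age=50, Height=175, Weight=85),
]

# Derive the primary key from each record's position in the list: 1, 2, 3, ...
rows = [(i, p.Name, p.Age, p.Height, p.Weight)
        for i, p in enumerate(people, start=1)]

# Stand-in for a pyodbc connection, e.g.:
#   conn = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL Server};'
#                         'SERVER=your_server;DATABASE=mydb;Trusted_Connection=yes;')
conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE People (ID INTEGER PRIMARY KEY, Name TEXT, "
             "Age INTEGER, Height INTEGER, Weight INTEGER)")

# executemany runs one parameterized INSERT per row in a single call.
conn.executemany("INSERT INTO People VALUES (?, ?, ?, ?, ?)", rows)
conn.commit()  # persist the rows

print(conn.execute("SELECT COUNT(*) FROM People").fetchone()[0])  # → 3
```

With pyodbc specifically, setting `cursor.fast_executemany = True` before the `executemany` call batches the parameters into far fewer round trips, which matters at 30,000 rows.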
Hello:
Thanks for the help. I read the document at:
http://www.pymssql.org/en/stable/pymssql...compliance
but I can't find a bulk insert example there.
Please advise, or post some code that bulk inserts the records into a SQL Server database.
Thanks,
My link points exactly at that example. See the code:
 
cursor.executemany(
    "INSERT INTO persons VALUES (%d, %s, %s)",
    [(1, 'John Smith', 'John Doe'),
     (2, 'Jane Doe', 'Joe Dog'),
     (3, 'Mike T.', 'Sarah H.')])
# you must call commit() to persist your data if you don't set autocommit to True
conn.commit()
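At 30,000 rows, it can also help to insert in batches and commit per batch rather than in one giant call. A small sketch of that idea (the `chunked` helper and batch size are my own, and sqlite3 again stands in for a pymssql/pyodbc connection; with pymssql the placeholders would be `%d`/`%s` as in the example above):

```python
import sqlite3

def chunked(rows, size):
    """Yield successive lists of at most `size` rows."""
    for start in range(0, len(rows), size):
        yield rows[start:start + size]

# 30,000 synthetic rows: (ID, Name, Age, Height, Weight)
rows = [(i, 'Name%d' % i, 30, 170, 70) for i in range(1, 30001)]

conn = sqlite3.connect(':memory:')  # stand-in for the real DB connection
conn.execute("CREATE TABLE People (ID INTEGER PRIMARY KEY, Name TEXT, "
             "Age INTEGER, Height INTEGER, Weight INTEGER)")

for batch in chunked(rows, 1000):
    conn.executemany("INSERT INTO People VALUES (?, ?, ?, ?, ?)", batch)
    conn.commit()  # commit per batch, so progress survives a mid-run failure

print(conn.execute("SELECT COUNT(*) FROM People").fetchone()[0])  # → 30000
```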
There is a Microsoft-supplied bulk loader for SQL Server; see: https://docs.microsoft.com/en-us/sql/rel...sql-server
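For reference, one server-side option in that family is the T-SQL BULK INSERT statement, which loads rows from a file the server machine can read. A sketch, assuming you first export the records to a CSV (the file path and delimiters are placeholders to adjust):

```sql
-- Load People.csv (one "ID,Name,Age,Height,Weight" line per record).
-- Note: the path is resolved on the SQL Server machine, not the client.
BULK INSERT [mydb].[dbo].[People]
FROM 'C:\data\People.csv'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR   = '\n',
    FIRSTROW        = 1    -- use 2 if the file has a header row
);
```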