Python Forum

Flask export/upload database table in csv/xlsx format
I have a download button on my Flask app, and I am trying to add functionality that will allow the user to download all the data from the books table locally in CSV or Excel format.

Another thing I would like to do is upload an Excel or CSV file and import the data into the books table.

For the download I have this:

@admin_role.route('/download')
@login_required
def post():
    si = StringIO()
    cw = csv.writer(si)
    for book in Book.query.all():
        cw.writerows(book)
    output = make_response(si.getvalue())
    output.headers["Content-Disposition"] = "attachment; filename=export.csv"
    output.headers["Content-type"] = "text/csv"
    return output
But I get this error:
TypeError: writerows() argument must be iterable
This is the model:

class Book(db.Model):
    """
    Create a Books table
    """

    __tablename__ = 'books'

    id = db.Column(db.Integer, primary_key=True)
    book_name = db.Column(db.String(60), index=True, unique=True)
    author = db.Column(db.String(200), index=True)
    quantity = db.Column(db.Integer)
    department_id = db.Column(db.Integer, db.ForeignKey('departments.id'))
    employees_id = db.Column(db.Integer, db.ForeignKey('employees.id'))
    publisher = db.Column(db.String(200))
    no_of_pgs = db.Column(db.Integer)
    pbs_year = db.Column(db.Integer)
    genre_id = db.Column(db.Integer, db.ForeignKey('genres.id'), nullable=False)
    read = db.Column(db.Enum('NO', 'YES'), default='NO')

    borrows = db.relationship('Borrow', backref='book',
                                lazy='dynamic')
Use cw.writerows on the query.
If you iterate manually with a for loop, you must add the rows one by one with cw.writerow (without the s at the end).
@admin_role.route('/download')
@login_required
def post():
    si = StringIO()
    cw = csv.writer(si)
    cw.writerows(Book.query.all())
    output = make_response(si.getvalue())
    output.headers["Content-Disposition"] = "attachment; filename=export.csv"
    output.headers["Content-type"] = "text/csv"
    return output
For safety reasons you should limit the query and use a paginate function (Flask-SQLAlchemy's paginate(), for example).
Use a cache if you want to speed up requests.
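As a rough sketch of that row-by-row approach, assuming the column names from the Book model posted above (the header row, the column selection, and the imports are my additions, not from the thread):

import csv
from io import StringIO

from flask import make_response
from flask_login import login_required


@admin_role.route('/download')
@login_required
def download_books():
    si = StringIO()
    cw = csv.writer(si)
    # csv.writer only accepts sequences, so build a plain list per book
    cw.writerow(['id', 'book_name', 'author', 'quantity', 'publisher',
                 'no_of_pgs', 'pbs_year', 'read'])
    for book in Book.query.all():
        cw.writerow([book.id, book.book_name, book.author, book.quantity,
                     book.publisher, book.no_of_pgs, book.pbs_year, book.read])
    output = make_response(si.getvalue())
    output.headers["Content-Disposition"] = "attachment; filename=export.csv"
    output.headers["Content-type"] = "text/csv"
    return output
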
Thank you, DeaD_EyE. I tried the solution you proposed and I still get an error.

This is the error I get:

 cw.writerows(Book.query.all())
_csv.Error: iterable expected, not Book
Then use a list to consume the iterator.

cw.writerows(list(Book.query.all()))
If you have many books, then chunking is better.
In that case, use sqlalchemy.orm.query.Query.yield_per.

Quote: Yield only count rows at a time.

The purpose of this method is when fetching very large result sets (> 10K rows), to batch results in sub-collections and yield them out partially, so that the Python interpreter doesn’t need to declare very large areas of memory which is both time consuming and leads to excessive memory use. The performance from fetching hundreds of thousands of rows can often double when a suitable yield-per setting (e.g. approximately 1000) is used, even with DBAPIs that buffer rows (which are most).

for chunks in Book.query.yield_per(100):
    cw.writerows(chunks)
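A caveat on that last snippet: when a single entity is queried, Query.yield_per(count) batches the database fetch but still yields individual Book objects as you iterate, so each object still has to be converted into a sequence of values before csv.writer will accept it. A minimal sketch (the column selection is my own):

# yield_per(100) fetches rows in batches of 100 but yields one Book at a time
for book in Book.query.yield_per(100):
    cw.writerow([book.id, book.book_name, book.author, book.quantity])
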
@DeaD_EyE Unfortunately, the same error appears

    cw.writerows(list(Book.query.all()))
_csv.Error: iterable expected, not Book
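The import half of the original question was not answered in the thread. As a minimal sketch only, assuming an HTML file input named file, a CSV whose header row matches the model's column names (book_name, author, quantity, genre_id), and the app's usual db session, the upload route could look roughly like this:

import csv
from io import TextIOWrapper

from flask import request
from flask_login import login_required


@admin_role.route('/upload', methods=['POST'])
@login_required
def upload_books():
    # 'file' is an assumed form field name for the uploaded CSV
    uploaded = request.files['file']
    # Wrap the binary upload stream so the csv module can read it as text
    reader = csv.DictReader(TextIOWrapper(uploaded.stream, encoding='utf-8'))
    count = 0
    for row in reader:
        db.session.add(Book(
            book_name=row['book_name'],
            author=row['author'],
            quantity=int(row['quantity'] or 0),
            # genre_id is NOT NULL in the model, so the CSV must supply it
            genre_id=int(row['genre_id']),
        ))
        count += 1
    db.session.commit()
    return f"Imported {count} books"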