Jun-18-2020, 11:48 AM
Then use a list to consume the iterator.
cw.writerows(list(Book.query.all()))
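As a rough sketch of the list-based approach, assuming a Flask-SQLAlchemy Book model with hypothetical title and author columns and a csv writer over an in-memory buffer (none of these names come from the original post), the full export could look like this:

import csv
import io

# Book is assumed to be a Flask-SQLAlchemy model defined elsewhere,
# with hypothetical title and author columns.
def export_books_csv():
    buffer = io.StringIO()
    cw = csv.writer(buffer)
    cw.writerow(["title", "author"])  # header row
    # consume the whole result set into a list, then write it in one call
    rows = [(b.title, b.author) for b in Book.query.all()]
    cw.writerows(rows)
    return buffer.getvalue()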
If you have many books, then chunking is better; use sqlalchemy.orm.query.Query.yield_per.
Quote: Yield only count rows at a time.
The purpose of this method is when fetching very large result sets (> 10K rows), to batch results in sub-collections and yield them out partially, so that the Python interpreter doesn’t need to declare very large areas of memory which is both time consuming and leads to excessive memory use. The performance from fetching hundreds of thousands of rows can often double when a suitable yield-per setting (e.g. approximately 1000) is used, even with DBAPIs that buffer rows (which are most).
for row in Book.query.yield_per(100):
    cw.writerow(row)
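A minimal chunked sketch under the same assumptions (hypothetical Book model with title and author columns): yield_per(100) still yields one object per iteration, but the underlying cursor is drained in batches of 100, so only a small window of rows is held in memory at a time.

import csv
import io

# Same hypothetical Book model as in the sketch above.
def export_books_csv_chunked():
    buffer = io.StringIO()
    cw = csv.writer(buffer)
    cw.writerow(["title", "author"])  # header row
    # rows are fetched from the DBAPI cursor 100 at a time,
    # but the loop still receives one Book object per iteration
    for book in Book.query.yield_per(100):
        cw.writerow((book.title, book.author))
    return buffer.getvalue()

If the CSV is served from a Flask view, the same loop can feed a streamed response instead of a StringIO buffer, so the file never has to be built fully in memory.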
Almost dead, but too lazy to die: https://sourceserver.info
All humans together. We don't need politicians!