Little Help with Design of a Pooling Application
So this will take some explanation. I'll try to get to the point as quickly as possible.

I'm new to Python, but I've been programming for a while. We use a database that isn't a typical one, and the company just released a Python package to connect to it. We basically get one connection at a time. Of course we could spin up 20 connections, but creating them is slow. I know many companies that use this database have built a pool to manage the connections, and I'm wondering about the best way to set that up.

I created a basic object pool with 5 sessions. It works fine, but I suspect things may break down once we get many users.

Here's the gist of it.

On startup, we open 5 connections to the db and start a gRPC server
A request comes in to the gRPC server
We grab a free connection, use it to retrieve the data from the db, and send the result back
If no connection is available, we wait a second and try again, or possibly spin up a couple of new sessions (rough sketch after this list)
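To make it concrete, here's roughly what my pool looks like, simplified. The vendor_db.connect() call is just a stand-in for whatever the real package exposes, and I picked queue.Queue because it's already thread-safe, so I don't have to manage my own lock around the free list.

import queue

import vendor_db  # stand-in for the vendor's actual package

POOL_SIZE = 5

class ConnectionPool:
    """Fixed-size pool backed by queue.Queue, which is thread-safe."""

    def __init__(self, size=POOL_SIZE):
        self._free = queue.Queue()
        for _ in range(size):
            # The slow part -- done once at startup instead of per request.
            self._free.put(vendor_db.connect())

    def acquire(self, timeout=1.0):
        # Blocks until a connection is free; raises queue.Empty after timeout.
        return self._free.get(timeout=timeout)

    def release(self, conn):
        # Hand the connection back so another request can reuse it.
        self._free.put(conn)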

I worry about a few things.

If we do nothing special, does that mean every time a request comes in we aren't really using the pool, just the same session over and over, because Python is synchronous?

1. What if we get 100 gRPC requests at one time? Should we be using async, threads, or some other kind of parallelism?
2. Do we need some type of lock when we retrieve the next session from the pool to prevent race conditions, or is everything synchronous? If it's all synchronous by default, that seems like a bottleneck, which is why I think we need some type of concurrency or parallelism. (Roughly what I'm picturing is sketched below.)
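Here's roughly how I'm picturing the gRPC side if the handlers run on threads. The generated stubs (data_pb2 / data_pb2_grpc), the DataService/GetData names, and conn.query() are all placeholders, and it assumes the pool sketch above; the grpc.server(ThreadPoolExecutor(...)) part is the stock synchronous server, which as I understand it runs each handler on a worker thread. Is this the sort of thing that needs an explicit lock, or does queue.Queue already cover it?

from concurrent import futures
import contextlib

import grpc

# Placeholders for whatever the generated stubs and service are actually called.
import data_pb2
import data_pb2_grpc

pool = ConnectionPool()  # the pool sketch from above

@contextlib.contextmanager
def checked_out(timeout=1.0):
    conn = pool.acquire(timeout=timeout)  # blocks if all 5 sessions are busy
    try:
        yield conn
    finally:
        pool.release(conn)  # always return it, even if the query raised

class DataService(data_pb2_grpc.DataServiceServicer):
    def GetData(self, request, context):
        with checked_out() as conn:
            rows = conn.query(request.key)  # conn.query() is made up -- whatever the package exposes
        return data_pb2.DataReply(rows=rows)

# The stock (non-async) gRPC server runs each handler on a worker thread,
# so up to max_workers requests can be in flight at once.
server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
data_pb2_grpc.add_DataServiceServicer_to_server(DataService(), server)
server.add_insecure_port("[::]:50051")
server.start()
server.wait_for_termination()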

Hope this is clear. If not, please let me know.