Python Forum

Full Version: How to timeout a task using the ThreadpoolExecutor?
Hi all.

I'm wondering what the possible approaches are to handle the timeout of a task submitted to a ThreadPoolExecutor.
I know I can time out while getting the result, like so: future.result(timeout=x). But I don't understand how I can time out a task, in a non-blocking way, when I'm in "fire and forget" mode.

Example:

# Send the task to the pool
future = self.executor.submit(mytask, arg)
future.add_done_callback(...)

# By this time, I can `get` the result with timeout
future.result(timeout=xxx)
But if I call result(), I'll block, and I won't be able to send more tasks to the pool.
What I'm trying to do is something like:

# Notice the timeout would be defined when sending the task, and not when getting the result
future = self.executor.submit(mytask, arg, timeout=10)

# Assign the callback normally
future.add_done_callback(...)

# Here, I don't care about the result, I want to be able to submit another task
?
I thought about opening a new "maintenance" thread that only waits for the results of all tasks, but again, wouldn't I be blocking on each call?
Say I have 100 tasks periodically being sent to the executor, each just running time.sleep(10). How can I make sure that none of those tasks runs longer than 10 seconds?
If it helps, the context is a task queue, hence the "fire and forget" logic.

Any help/suggestion is welcome.
Thanks a lot!
Hi,

I'm not sure if I understand your question correctly, but the concurrent.futures module is not meant for a "fire and forget" mode. The executors wait until tasks are finished. As the documentation for ThreadPoolExecutor says: "All threads enqueued to ThreadPoolExecutor will be joined before the interpreter can exit." This means your program will block at some point.
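A minimal sketch to illustrate: even if you never call result(), shutting down the executor (here via the with-block) still waits for the submitted task to finish:

```python
import time
from concurrent.futures import ThreadPoolExecutor

start = time.monotonic()
with ThreadPoolExecutor(max_workers=1) as executor:
    executor.submit(time.sleep, 1)   # "fire and forget"...
# ...but leaving the with-block (executor shutdown) still
# waits for the sleeping task, so this line runs ~1 s later
elapsed = time.monotonic() - start
```

So the submit itself doesn't block, but the program can't exit past the executor until every task has run to completion.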

If you are looking for background processing where new tasks can be fed at any time, you could build something yourself using two threads or processes connected by a queue, or use an asynchronous task queue such as Celery (or one of the lighter options).
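A minimal sketch of the two-threads-plus-queue idea (the worker function and the None sentinel are my own conventions here, not a library API):

```python
import queue
import threading

def worker(task_queue):
    # Pull (func, arg) pairs until the sentinel None arrives
    while True:
        item = task_queue.get()
        if item is None:
            break
        func, arg = item
        try:
            func(arg)
        finally:
            task_queue.task_done()

task_queue = queue.Queue()
t = threading.Thread(target=worker, args=(task_queue,), daemon=True)
t.start()

# "Fire and forget": putting a task on the queue never blocks the caller
results = []
task_queue.put((results.append, 1))
task_queue.put((results.append, 2))

task_queue.put(None)   # sentinel: tell the worker to exit
t.join()
```

The caller only ever enqueues; the worker thread does the waiting, so submitting stays non-blocking.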

Gruß, noisefloor
I think you may be confused about what the result timeout does. It is a limit on how long the caller waits for the task to return a result. It does not set a maximum lifetime for the task.
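A small sketch to show the difference: the timeout makes the caller's result() call raise TimeoutError, but the task itself keeps running to completion:

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError

def slow_task():
    time.sleep(2)          # pretend this is real work
    return "done"

start = time.monotonic()
with ThreadPoolExecutor(max_workers=1) as executor:
    future = executor.submit(slow_task)
    caller_timed_out = False
    try:
        # The caller gives up waiting after 0.5 s...
        future.result(timeout=0.5)
    except TimeoutError:
        caller_timed_out = True
    # ...but the task was never interrupted; it still completes
    result = future.result()   # blocks until slow_task() finishes
elapsed = time.monotonic() - start
```

The worker thread sleeps the full 2 seconds regardless of the 0.5-second timeout; there is no built-in way to kill it from the outside.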