Python Forum

Full Version: prometheus in multiprocess code
Battling with Prometheus: in the code below, my Prometheus counters are not incrementing.

I tracked it down to the worker being run in a multiprocessing pool...

Neither FILE_GESTER_TIME nor FILE_GESTER_LINE_COUNT is incrementing when viewed at port 8000.
# ...
import multiprocessing

from prometheus_client import start_http_server, Summary, Counter

# Prometheus metrics
FILE_GESTER_TIME       = Summary('BSA_file_gester_worker', 'Time spent loading a file')
FILE_GESTER_LINE_COUNT = Counter('BSA_file_gester_line_count', 'Running counter of lines loaded')

@FILE_GESTER_TIME.time()
def worker(filename, config_params):    # pooled process to attack each database async

    # --- Do work ---
    # loop ...
    for file in files:
        # --- Do some more work ---
        FILE_GESTER_LINE_COUNT.inc()

    # end for file in files

# end worker

def main():

    start_http_server(8000)

    result = []
    pool = multiprocessing.Pool(multiprocessing.cpu_count())

    for file in files:
        result.append(pool.apply_async(worker, (file, config_params)))

    pool.close()
    pool.join()     # Sleep here until all workers are done
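For anyone hitting the same wall: each Pool worker is a separate OS process with its own copy of the metric objects, so increments made in the children never reach the parent process that start_http_server(8000) is serving from. A stdlib-only sketch of the same effect (no Prometheus involved; a module-level counter stands in for an in-memory metric):

```python
import multiprocessing

counter = 0  # stands in for an in-memory metric in the default registry

def worker(_):
    # Increments this process's own copy of the module-level counter.
    global counter
    counter += 1
    return counter

def run_pool(jobs=4):
    # "fork" start method: children begin as copies of the parent,
    # mirroring what a Linux multiprocessing.Pool does by default.
    ctx = multiprocessing.get_context("fork")
    with ctx.Pool(2) as pool:
        child_values = pool.map(worker, range(jobs))
    # Each child saw its own increments, but the parent's counter is
    # untouched -- which is why the /metrics page on port 8000 shows nothing.
    return counter, child_values
```

Calling run_pool() returns 0 for the parent's counter no matter how many jobs the children ran, which is exactly the behaviour seen with the Prometheus counters above.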
Anyone?

G
Going to bump this thread again...
I've got a multiprocess program as per the above. It does not have a Flask or Django web interface, so the examples out there do not make sense. Anyone able to assist? I'd like to instrument my app using Prometheus.

G
No experience with that package, but did you look at https://github.com/prometheus/client_pyt...e-gunicorn
and the limitations listed there?
I did, but previously it did not look like it fit my use case. Must have been late at night and brain fried; revisiting it now, it looks like it might work... will give it a try.
Am wondering what file this refers to: "Put the following in the config file:"

from prometheus_client import multiprocess

def child_exit(server, worker):
    multiprocess.mark_process_dead(worker.pid)
I see the code they show to expose the metrics, which makes me think the previous method of using start_http_server(port) is no longer used, and I don't see a port number being specified here for the Prometheus server to come scrape the metrics.

Looks like for counters and summaries, "multiprocess_mode='livesum'" is what I add.

Would be great if someone that's done this could post more examples than what's on this URL.

G
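For reference (inferring from the gunicorn side of the docs, so treat this as a sketch): the "config file" the client_python README means is gunicorn's own Python config file, the one passed with the -c flag. child_exit is a standard gunicorn server hook that runs in the master after a worker dies. Something like:

```python
# gunicorn.conf.py -- loaded with: gunicorn -c gunicorn.conf.py myapp:app
# (the filename is conventional; any path given to -c works)
from prometheus_client import multiprocess

def child_exit(server, worker):
    # gunicorn server hook, called in the master when a worker exits,
    # letting the client clean up the dead worker's metric shard files.
    multiprocess.mark_process_dead(worker.pid)
```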

... not sure if the following bit shown in the above link is just an implementation example or actual code I need to include. They also refer to a file that has to be defined; I assume that's where the metrics are now stored, a common place for the disconnected processes to get to... but I can't figure out how or where this file is defined.

# Expose metrics.
from prometheus_client import CollectorRegistry, Gauge
from prometheus_client import generate_latest, CONTENT_TYPE_LATEST, multiprocess

IN_PROGRESS = Gauge('inprogress_requests', 'Requests in progress',
                    multiprocess_mode='livesum')

@IN_PROGRESS.track_inprogress()
def app(environ, start_response):
    registry = CollectorRegistry()
    multiprocess.MultiProcessCollector(registry)
    data = generate_latest(registry)
    status = '200 OK'
    response_headers = [
        ('Content-type', CONTENT_TYPE_LATEST),
        ('Content-Length', str(len(data)))
    ]
    start_response(status, response_headers)
    return iter([data])
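To close the loop on the earlier questions, a sketch under my own assumptions (not a tested drop-in): the "file" is actually a directory named by the PROMETHEUS_MULTIPROC_DIR environment variable, which must be set before prometheus_client is imported in any process and emptied between runs; and the port is still yours to choose, because start_http_server() accepts a registry argument, so a non-web app can serve the multiprocess registry itself. Note also that multiprocess_mode='livesum' is a Gauge-only option; Counters and Summaries aggregate across processes automatically once the directory is set.

```python
# Sketch under the assumptions above -- directory path is an example.
import os

# Must name an existing, writable, *empty* directory, and must be set
# before prometheus_client is imported anywhere (parent and pool workers).
os.environ.setdefault("PROMETHEUS_MULTIPROC_DIR", "/tmp/bsa_metrics")

from prometheus_client import CollectorRegistry, start_http_server
from prometheus_client import multiprocess

def serve_multiprocess_metrics(port=8000):
    # Instead of the default in-memory registry, scrape a registry that
    # reads the per-process shard files under PROMETHEUS_MULTIPROC_DIR.
    registry = CollectorRegistry()
    multiprocess.MultiProcessCollector(registry)
    start_http_server(port, registry=registry)  # the port is still chosen here
```

With this, the original pool-based program keeps its start-up shape: main() creates the directory, calls serve_multiprocess_metrics(8000), then launches the Pool as before.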