 prometheus in multiprocess code
#1
Battling with Prometheus: in the code below, my Prometheus counters are not incrementing.

I tracked it down to the worker running in a multiprocessing pool...

Neither FILE_GESTER_TIME nor FILE_GESTER_LINE_COUNT increments when viewed on port 8000.

# ...
import multiprocessing
from prometheus_client import start_http_server, Summary, Counter

# Prometheus metrics
FILE_GESTER_TIME       = Summary('BSA_file_gester_worker', 'Time spent loading a file')
FILE_GESTER_LINE_COUNT = Counter('BSA_file_gester_line_count', 'Running counter of lines loaded')

@FILE_GESTER_TIME.time()
def worker(filename, config_params):    # pool worker; processes one file asynchronously

    # --- Do work ---
    # loop ...
    for file in files:
        # --- Do some more work ---
        FILE_GESTER_LINE_COUNT.inc()
    # end for file in files

# end worker

def main():

    start_http_server(8000)

    pool = multiprocessing.Pool(multiprocessing.cpu_count())

    results = []
    for file in files:
        results.append(pool.apply_async(worker, (file, config_params)))

    pool.close()
    pool.join()     # block here until all workers are done
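The symptom can be reproduced in isolation: each process owns its own copy of the default registry, so increments made inside a pool worker never reach the parent process that start_http_server() is serving from. A minimal sketch (the metric name is illustrative):

```python
import multiprocessing
from prometheus_client import Counter, REGISTRY

LINES = Counter('demo_lines', 'lines processed')

def worker(_):
    LINES.inc()     # increments the child process's copy only

if __name__ == '__main__':
    with multiprocessing.Pool(2) as pool:
        pool.map(worker, range(10))
    # the parent's counter is untouched -- this is what port 8000 serves
    print(REGISTRY.get_sample_value('demo_lines_total'))   # 0.0
```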


#2
anyone?

G
#3
Going to bump this thread again...
I've got a multiprocess program as per the above. It does not have a Flask or Django web interface, so the examples out there don't make sense. Anyone able to assist? I'd like to instrument my app using Prometheus.

G
#4
No experience with that package, but did you look at https://github.com/prometheus/client_pyt...e-gunicorn
and the limitations listed there?
#5
I did, but previously it did not look like it fit my use case. Must have been late at night and brain fried; looking at it again now, it looks like it might work... will give it a try.
Am wondering what file this refers to: "Put the following in the config file:"

from prometheus_client import multiprocess

def child_exit(server, worker):
    multiprocess.mark_process_dead(worker.pid)
I see the code they show to expose the metrics, which makes me think the previous method of using start_http_server(port) is not used anymore, and I don't see a port number being specified where the Prometheus server will come to scrape the metrics.

Looks like for counters and summaries the "multiprocess_mode='livesum'" is what I add.

Would be great if someone that's done this could post more examples than what's on that URL.

G

... not sure if the following bit shown in the above link is just an implementation example or code I actually need to include. They also refer to a file that has to be defined; I assume that's where the metrics are now stored, a common place for the disconnected processes to get to... but I can't figure out where this file is defined.

# Expose metrics.
@IN_PROGRESS.track_inprogress()
def app(environ, start_response):
    registry = CollectorRegistry()
    multiprocess.MultiProcessCollector(registry)
    data = generate_latest(registry)
    status = '200 OK'
    response_headers = [
        ('Content-type', CONTENT_TYPE_LATEST),
        ('Content-Length', str(len(data)))
    ]
    start_response(status, response_headers)
    return iter([data])
