Aug-22-2024, 09:57 AM
Hi everyone,
I’m working on a project where I need to handle multiple cameras for vehicle tracking. Each camera has two distinct processes:
Capturing Process: Responsible for capturing images.
Tracking Process: Responsible for tracking vehicles in the captured images (read from the capturing queue); a minimal sketch of this pair of processes is shown below.
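To make the per-camera layout concrete, here is a minimal sketch of the two processes connected by a queue (the function bodies are placeholders, not my real capturing/tracking code):

import multiprocessing as mp
import time
from queue import Empty

def capture(frame_queue, stop_event):
    # Producer: grab a frame and hand it to the tracker through the queue.
    while not stop_event.is_set():
        frame = "frame-bytes"                 # placeholder for a real captured image
        frame_queue.put((frame, time.time()))
        time.sleep(0.1)

def track(frame_queue, stop_event):
    # Consumer: read frames from the capturing queue and run tracking on them.
    while not stop_event.is_set():
        try:
            frame, captured_at = frame_queue.get(timeout=1)
        except Empty:
            continue
        print(f"tracking a frame captured at {captured_at:.2f}")

if __name__ == "__main__":
    frame_queue = mp.Queue()
    stop_event = mp.Event()
    processes = [
        mp.Process(target=capture, args=(frame_queue, stop_event)),
        mp.Process(target=track, args=(frame_queue, stop_event)),
    ]
    for p in processes:
        p.start()
    time.sleep(2)
    stop_event.set()
    for p in processes:
        p.join()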
Here’s what I want to achieve:
Separate Cores: Each of these processes (capturing and tracking) should run on its own dedicated core. For instance, if a camera has a capturing process and a tracking process, each should utilize separate cores.
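For the "dedicated core" part, what I have in mind is pinning each process to one core. On Linux I believe this can be done with os.sched_setaffinity (this is just a sketch with example core numbers, not my actual code):

import multiprocessing as mp
import os

def pinned_worker(core: int, role: str) -> None:
    # Restrict this process to a single CPU core (Linux-only API);
    # pid 0 means "the calling process".
    os.sched_setaffinity(0, {core})
    print(f"{role} (pid {os.getpid()}) allowed on cores {os.sched_getaffinity(0)}")
    # ... the real capturing or tracking loop would run here ...

if __name__ == "__main__":
    # One camera: capturing pinned to core 0, tracking pinned to core 1.
    capturing = mp.Process(target=pinned_worker, args=(0, "capturing"))
    tracking = mp.Process(target=pinned_worker, args=(1, "tracking"))
    capturing.start()
    tracking.start()
    capturing.join()
    tracking.join()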
Unified Management Explanation:
I am working with multiple cameras, and I want to manage their capturing and tracking processes efficiently using Python’s multiprocessing Pool. Here’s how I envision the management of these processes:
Core Utilization: In my setup, I have a total of 4 CPU cores available. Each camera requires 2 cores: one core for the capturing process and another core for the tracking process. Therefore, I can manage up to 2 cameras concurrently, utilizing all 4 cores (2 cores for Camera 1 and 2 cores for Camera 2).
Exceeding Core Limits: If I attempt to add a third camera (Camera 3), this would exceed the available core limit, as it requires another 2 cores. In this scenario, the Pool should handle the core limitation by queuing Camera 3's processes until cores become free.
Process Management: The Pool should manage the processes in the following way:
Process 1: The first camera’s capturing and tracking processes (Camera 1) run simultaneously on Core 1 and Core 2.
Process 2: The second camera’s capturing and tracking processes (Camera 2) run on Core 3 and Core 4.
At this point, all cores are fully utilized (100%).
Once the processes for Camera 1 finish (both capturing and tracking), the Pool should automatically start the processes for Camera 3, utilizing the freed-up cores. The process would then look like this:
After Process 1: Once Camera 1's capturing and tracking processes are complete, the Pool will allocate the available cores to Camera 3's processes (capturing and tracking).
Repeat the Cycle: This management approach continues, allowing the Pool to always utilize the available cores efficiently, starting new camera processes as previous ones complete.
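As a toy illustration of the scheduling I expect from the Pool (dummy functions and made-up durations, not my real workers):

import multiprocessing as mp
import os
import time

def capture(camera_id: int, seconds: int) -> str:
    # Stand-in for a long-running capturing task that occupies one pool worker.
    time.sleep(seconds)
    return f"camera {camera_id}: capturing finished in pid {os.getpid()}"

def track(camera_id: int, seconds: int) -> str:
    # Stand-in for a long-running tracking task that occupies a second worker.
    time.sleep(seconds)
    return f"camera {camera_id}: tracking finished in pid {os.getpid()}"

if __name__ == "__main__":
    # 4 workers -> at most two cameras (two tasks each) run at the same time.
    with mp.Pool(processes=4) as pool:
        results = []
        for camera_id, seconds in [(1, 2), (2, 3), (3, 1)]:
            results.append(pool.apply_async(capture, (camera_id, seconds)))
            results.append(pool.apply_async(track, (camera_id, seconds)))
        # Camera 3's two tasks wait in the pool's internal queue until
        # camera 1's workers become free, which is the behaviour I am after.
        for r in results:
            print(r.get())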
This is my code:
# Imports used by this snippet
from multiprocessing.managers import SyncManager
from multiprocessing.pool import Pool

multiprocessing_manager = SyncManager(
    address=(
        SETTINGS.MULTIPREPROCESSING_MANAGER.ADDRESS,
        SETTINGS.MULTIPREPROCESSING_MANAGER.PORT,
    )
)
self.pool: Pool = multiprocessing_manager.Pool(processes=max_processes)
self.capturing_manager = CameraCapturingManager(
    camera_id=self.camera_identity,
    camera_ip=self.ip,
    capture_quque=self.capture_queue,
    stop_event=self.stop_event,
    is_capturing_started=self.is_capturing_started,
)
self.capturing_process = self.pool.apply_async(self.capturing_manager.capture_images)

self.camera_tracker = CameraTracker(
    camera_id=self.camera_identity,
    camera_ip=self.ip,
    areas=self.bays,
    event_handler=self.event_handler,
    stop_event=self.stop_event,
    is_tracking_started=self.is_tracking_started,
)
self.tracking_process = self.pool.apply_async(
    self.camera_tracker.run, (self.visualization_queue, self.capture_queue)
)

This is an example of how I do the capturing:
def capture_images(self, verbose: bool = True) -> None:
    self.is_capturing_started.set()
    while not self.stop_event.is_set():
        try:
            current_time = time.time()
            if current_time - self.last_capture_time >= self.capture_interval:
                frame = self._capture_frame()
                if frame is not None:
                    self._put_frame_in_queue(frame, current_time)
                self.last_capture_time = current_time
            time.sleep(0.01)
        except KeyboardInterrupt:
            # LOGGER.info(f"{_LOG_PREFIX}: KeyboardInterrupt received in capture_images for camera {self.camera_id}. Exiting gracefully.")
            pass
        except Exception as e:
            LOGGER.error(f"{_LOG_PREFIX}: Error in capture loop for camera {self.camera_id}: {e}")
            self.stop_event.set()
            break
    self._release_camera()
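For completeness, this is roughly how the tracking side consumes the capture queue (simplified sketch; _track_frame is a placeholder for my real detection/tracking step, not the actual method):

from queue import Empty

def run(self, visualization_queue, capture_queue) -> None:
    # Simplified tracker loop: pull frames from the capturing queue until stopped.
    self.is_tracking_started.set()
    while not self.stop_event.is_set():
        try:
            frame, captured_at = capture_queue.get(timeout=1)
        except Empty:
            continue
        detections = self._track_frame(frame)          # placeholder helper
        visualization_queue.put((frame, detections))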