Python Forum

Full Version: What is the best way to search for a pixel in a screenshot?
I'm trying to find the most efficient way to iterate over a numpy array representing an OpenCV image and to return the location of the first pixel of a given color.
I tried two implementations of the same function.
One in Python:
import numpy

def get_location(screenshot):
    height = screenshot.shape[0]
    width = screenshot.shape[1]
    for pixel_height in range(height):
        for pixel_width in range(width):
            pixel = screenshot[pixel_height, pixel_width]
            if numpy.array_equal(pixel, WINNER_COLOR):
                # return the location of the first match, as described above
                return (pixel_height, pixel_width)
    return None
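As an aside, the double loop can be replaced with a fully vectorized NumPy search, which is usually far faster than either version below; a minimal sketch, assuming WINNER_COLOR is a length-3 BGR array (the color value here is made up for illustration):

```python
import numpy as np

WINNER_COLOR = np.array([0, 0, 255], dtype=np.uint8)  # assumed BGR target color

def get_location_vectorized(screenshot):
    """Return (row, col) of the first matching pixel, or None."""
    # compare every pixel's three channels at once, then list matching coordinates
    matches = np.argwhere((screenshot == WINNER_COLOR).all(axis=-1))
    return tuple(int(v) for v in matches[0]) if matches.size else None

img = np.zeros((3, 4, 3), dtype=np.uint8)
img[1, 2] = WINNER_COLOR
print(get_location_vectorized(img))  # (1, 2)
```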
One in Cython:
import numpy
cimport cython

@cython.boundscheck(False)
cpdef tuple get_location(unsigned char [:, :, :] screenshot):
    cdef int height, width, pixel_height, pixel_width
    height = screenshot.shape[0]
    width = screenshot.shape[1]
    for pixel_height in range(height):
        for pixel_width in range(width):
            pixel = screenshot[pixel_height, pixel_width]
            if numpy.array_equal(pixel, WINNER_COLOR):
                # the function is declared to return a tuple, so return the
                # location rather than True
                return (pixel_height, pixel_width)
    return ()
I cut the screenshot into parts and dispatched the cuts to a process pool:
screen = cv2.cvtColor(numpy.array(getRectAsImage(SCREEN_RECT)), cv2.COLOR_RGB2BGR)
height, width, depth = screen.shape
screen_cuts = []
for height_cut_index in range(CUTS):
    start_height = int(height / CUTS) * height_cut_index
    end_height = int(height / CUTS) * (height_cut_index + 1)
    for width_cut_index in range(CUTS):
        start_width = int(width / CUTS) * width_cut_index
        end_width = int(width / CUTS) * (width_cut_index + 1)
        # NumPy images are indexed rows (height) first, then columns (width)
        cut = screen[start_height:end_height, start_width:end_width]
        screen_cuts.append(cut)
pool.map(get_location, screen_cuts)
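Note that each tile also needs its offset recorded if a hit inside a tile is to be mapped back to full-screen coordinates. A small sketch of that bookkeeping (CUTS and the array shape here are made up for illustration; reassembling the tiles confirms the rows-first axis order):

```python
import numpy as np

CUTS = 2
img = np.arange(4 * 6).reshape(4, 6)

tile_h, tile_w = img.shape[0] // CUTS, img.shape[1] // CUTS
tiles = []
for i in range(CUTS):
    for j in range(CUTS):
        # store (row_offset, col_offset) alongside the tile so a match at
        # (r, c) inside the tile is (r + row_offset, c + col_offset) globally
        tiles.append(((i * tile_h, j * tile_w),
                      img[i * tile_h:(i + 1) * tile_h,
                          j * tile_w:(j + 1) * tile_w]))

# stitching the tiles back together reproduces the original image exactly
top = np.hstack([tiles[0][1], tiles[1][1]])
bottom = np.hstack([tiles[2][1], tiles[3][1]])
assert np.array_equal(np.vstack([top, bottom]), img)
```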
I have two main problems. First, for some reason the plain Python code is faster (about 1 second) than the Cython code (about 3 seconds). Second, it is still too slow; is there a way to speed it up?
I would recommend the following approach instead of looping over pixels directly:
# image.shape == (height, width, colors)
# pixel = np.array([R, G, B]), i.e. a numpy array of length <number of colors>
# threshold -- scalar that controls how similar a color must be to count as a match
np.where((np.abs(pixel - image) < threshold).all(axis=-1))
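Putting that together, a complete example (names and values here are hypothetical; a threshold of 1 makes it an exact match) that turns the np.where result into the first matching location:

```python
import numpy as np

target = np.array([0, 255, 0])               # hypothetical target color
image = np.zeros((4, 5, 3), dtype=np.uint8)  # blank test image
image[2, 3] = target                         # plant one matching pixel

threshold = 1  # 1 means exact match; larger values accept similar colors
# per-pixel channel differences, reduced to a 2-D boolean match mask
mask = (np.abs(target - image) < threshold).all(axis=-1)
rows, cols = np.where(mask)
location = (int(rows[0]), int(cols[0])) if rows.size else None
print(location)  # (2, 3)
```

Because target is a default integer array, the subtraction promotes the uint8 image to a wider integer type, so there is no wraparound in the difference.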