Jan-12-2022, 05:15 PM
I'm using a tool where an image is adjusted with a zoom function. Because of this, the image gets wrapped in an object, which means I can't apply glTexImage2D to it. I was wondering how I could achieve this.
This question is related to the issue: https://python-forum.io/thread-36043.html
I'm new to this forum, so if this is forbidden, my apologies; please let me know and I'll delete it. I'm keeping it there in case someone has another way of approaching the problem. In this question I'm just asking how to make the output of this function usable with glTexImage2D.
Class that creates img with zoom function:
class Frame:
    boxIsVisible = False

    def __init__(self, img, box):
        self.zoom = 0.4
        self.img = img
        self.box = box
        x, y, w, h = box.dim
        self.postFilterBox = BoundingBox(x, y, w, h)

    def setZoom(self, amount):
        self.zoom = min(max(amount, 0.01), 0.99)

    def filter(self):
        # Declare basic variables
        screenHeight = self.img.shape[0]
        screenWidth = self.img.shape[1]
        screenRatio = float(screenWidth) / screenHeight
        (boxX, boxY, boxW, boxH) = self.box.dim
        distX1 = boxX
        distY1 = boxY                          # dist refers to the distances in front of and
        distX2 = screenWidth - distX1 - boxW   # behind the face detection box
        distY2 = screenHeight - distY1 - boxH  # EX: |---distX1----[ :) ]--distX2--|

        # Equalize x's and y's to shortest length
        if distX1 > distX2:
            distX1 = distX2
        if distY1 > distY2:
            distY1 = distY2
        distX = distX1  # Set to an equal distance value
        distY = distY1

        # Trim sides to match original aspect ratio
        centerX = distX + (boxW / 2.0)
        centerY = distY + (boxH / 2.0)
        distsRatio = centerX / centerY
        if screenRatio < distsRatio:
            offset = centerX - (centerY * screenRatio)
            distX -= offset
        elif screenRatio > distsRatio:
            offset = centerY - (centerX / screenRatio)
            distY -= offset

        # Make screen to box ratio constant
        # (constant can be changed as ZOOM in main.py)
        if screenWidth > screenHeight:
            distX = min(0.5 * ((boxW / self.zoom) - boxW), distX)
            distY = min(((1.0 / screenRatio) * (distX + (boxW / 2.0))) - (boxH / 2.0), distY)
        else:
            distY = min(0.5 * ((boxH / self.zoom) - boxH), distY)
            distX = min((screenRatio * (distY + (boxH / 2.0))) - (boxW / 2.0), distX)

        # Crop image to match distance values
        newX = int(boxX - distX)
        newY = int(boxY - distY)
        newW = int(2 * distX + boxW)
        newH = int(2 * distY + boxH)
        self.crop([newX, newY, newW, newH])

        # Resize image to fit original resolution
        resizePercentage = float(screenWidth) / newW
        self.img = cv2.resize(self.img, (screenWidth, screenHeight))
        for i in range(4):
            self.postFilterBox.dim[i] = int(self.postFilterBox.dim[i] * resizePercentage)

        # Flip filtered image on y-axis
        self.img = cv2.flip(self.img, 2)

    def drawBox(self):
        (x, y, w, h) = self.postFilterBox.dim
        if x > 0:
            cv2.rectangle(self.img, (x, y), (x + w, y + h), (255, 255, 255), 2)

    def crop(self, dim):
        x, y, w, h = dim
        self.img = self.img[y:y + h, x:x + w]
        self.postFilterBox.dim[0] -= x
        self.postFilterBox.dim[1] -= y

    def show(self):
        if self.boxIsVisible:
            self.drawBox()
        cv2.imshow("Dolly Zoom", self.img)
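As a sanity check on the zoom math in filter(): the line distX = min(0.5 * ((boxW / self.zoom) - boxW), distX) caps the crop so the detection box takes up a fixed fraction (zoom) of the cropped width, since 2*distX + boxW works out to boxW / zoom. A quick standalone check of that arithmetic (plain Python, no cv2 needed; the numbers are just illustrative):

```python
# Check the zoom relation used in Frame.filter():
# distX = 0.5 * ((boxW / zoom) - boxW)  =>  crop width = 2*distX + boxW = boxW / zoom
zoom = 0.4   # example zoom factor
boxW = 200   # example detection-box width in pixels

distX = 0.5 * ((boxW / zoom) - boxW)  # margin on each side of the box
cropW = 2 * distX + boxW              # resulting crop width

print(cropW)         # boxW / zoom = 500.0
print(boxW / cropW)  # the box-to-crop ratio equals zoom: 0.4
```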
My attempt:
box = BoundingBox(-1, -1, -1, -1)

# loop
while True:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            pygame.quit()
            quit()

    # zoom filters
    ZOOM = 0.75
    SHOW_BOX = True  # Show detection box around the largest detected face
    SCALE_FACTOR = 1.2
    MIN_NEIGHBORS = 8
    MINSIZE = (60, 60)
    face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    # capture & zoom (on face)
    ret, img = cap.read(0)
    img = cv2.flip(img, 1)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    face = face_cascade.detectMultiScale(
        img,
        scaleFactor=SCALE_FACTOR,
        minNeighbors=MIN_NEIGHBORS,
        minSize=MINSIZE,
    )
    boxes = np.array(face)

    # Linear interpolate bounding box to dimensions of largest detected box
    if boxes.size > 0:
        boxLrg = largestBox(boxes)
        if box.dim[0] == -1:
            box = boxLrg
        else:
            box.lerpShape(boxLrg)

    # Setup frame properties and perform filter
    frame = Frame(img, box)
    frame.boxIsVisible = SHOW_BOX
    frame.setZoom(ZOOM)
    frame.filter()
    box = frame.box
    print(frame)

    # Copy the output from frame function into the sender texture
    glBindTexture(GL_TEXTURE_2D, senderTextureID)
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, frame)

It only shows something when I use img instead of frame, but then it just shows the raw capture and not the zoom:
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, img)
Print output with img assigned:
[ 44 42 56] [ 44 42 57] [ 45 43 58]] [[ 32 41 72] [ 35 44 75] [ 35 45 74] ... [ 44 42 55] [ 44 42 56] [ 46 44 59]]]

Print output with frame assigned:
<Frame.Frame object at 0x0000012574EB4BA8> dict_keys([<class 'numpy.ndarray'>])
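That second print shows what glTexImage2D is receiving: a Frame instance, not pixel data. The zoomed pixels live in the frame.img ndarray, so my current thinking is to pass that instead, taking width/height from the array's own shape (since filter() crops and resizes). A minimal sketch of what I mean, using a stand-in array in place of the real cv2 capture:

```python
import numpy as np

# Stand-in for frame.img after filter(); the real one comes from cv2.
img = np.zeros((480, 640, 3), dtype=np.uint8)

# Take width/height from the array itself, and force a contiguous buffer:
# slicing like img[y:y + h, x:x + w] can leave a non-contiguous view,
# which may upload garbage or crash when handed to OpenGL.
height, width = img.shape[:2]
data = np.ascontiguousarray(img)

# Then upload the ndarray, not the Frame object:
# glBindTexture(GL_TEXTURE_2D, senderTextureID)
# glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
#              GL_RGB, GL_UNSIGNED_BYTE, data)
```

Is passing frame.img like this the right approach, or is there something else the Frame object needs to expose?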