Python Forum
Creating a bot for online website game or computer games
#1
I recently learned how to automate some games. For this example I am automating "powerline.io". This can be applied to many games, and it looks really cool to see a bot playing for you. One problem you may encounter, though, is reaction speed. For example, my bot for powerline.io sometimes can't turn quickly enough. Anyway, let's get started. The imports needed for this are...
PIL.ImageGrab
time
cv2
pynput.keyboard.Controller

Note: If your game requires mouse input, scroll to the bottom to see how to do that

First let's import
from PIL import ImageGrab
import time, cv2
from pynput.keyboard import Controller
Next we set up our controls
keyboard = Controller()
Alright, simple enough. Next put in an empty docstring
'''

'''
We'll need it later to store our coordinates.
Let's define our first function
def imageGrab():
    boundary = ()
    img = ImageGrab.grab()
    img.save('ScreenShot.png', 'PNG')
Don't mind the boundary variable; we'll fill that in later. On line 3 we grab a snapshot of our screen and store it in the variable img. Finally we save it to our computer under a fixed name. By keeping the name constant we don't end up with a billion snapshots in our directory; each new grab just replaces the file already there.

Next up we have our main function
def main():
    direction = 'up'
    imageGrab()

if __name__ == '__main__':
    main()
Here I tested out imageGrab() and it worked. The direction variable is one I will need later for my game.
If you look at the screenshot, it probably captured some of the webpage too, so we need to narrow it down to the game. Open up a picture editor with a pixel-measuring tool; for this tutorial I will be using Krita. Find the coordinates of the box you want, and I suggest putting them in the docstring we made earlier for safekeeping. For some games, such as powerline.io, we need to act fast, so we only grab the part of the screen the player is facing, which is also why I need the direction variable in main. So I created this -
detectionCoords = {'up' : (950, 530, 970, 545), 'left' : (975, 570, 990, 590),
                   'down' : (950, 570, 970, 585), 'right' : (945, 570, 960, 590)}

def imageGrab(direction):
    boundary = detectionCoords[direction]
    img = ImageGrab.grab(boundary)
    img.save('Powerline.ioScreenshot.png', 'PNG')
If you are trying to use my coords, they may not work for you because computer screens differ in size and resolution.
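If you want to adapt my coords rather than re-measure everything, you can rescale the boxes to your own resolution. This is just a sketch under the assumption that the original boxes were measured on a 1920x1080 screen (BASE_W, BASE_H, and scale_box are names I made up; adjust the base values if needed):

```python
# ASSUMPTION: the original coords were measured on a 1920x1080 screen.
BASE_W, BASE_H = 1920, 1080

def scale_box(box, screen_w, screen_h):
    """Scale an (x1, y1, x2, y2) box from the base resolution
    to a screen_w x screen_h screen."""
    x1, y1, x2, y2 = box
    sx, sy = screen_w / BASE_W, screen_h / BASE_H
    return (round(x1 * sx), round(y1 * sy), round(x2 * sx), round(y2 * sy))

# Example: the 'up' box rescaled for a 2560x1440 screen
print(scale_box((950, 530, 970, 545), 2560, 1440))  # (1267, 707, 1293, 727)
```

Even after scaling, spot-check a few boxes in your picture editor, since the game may be positioned differently in your browser window.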

Now that we have our pictures, we want action. Some of you need mouse action (scroll to the bottom) and others keyboard action. Below I defined three more functions: we need to move, and we also need to see what's in front of us.
def mainDetect(direction):
    danger = detection(direction)
    direction = move(direction, danger)
    return direction

def detection(direction):
    pass

def move(direction, danger):
    pass
mainDetect will be our function for running the detection and movement. detection will return a bool telling us whether or not we are in danger. danger is then passed on to move. move will pass back the new direction, which is then passed back to main. So far we have -
from PIL import ImageGrab
import time, cv2
from pynput.keyboard import Controller

keyboard = Controller()

'''
Up - 548, 950, 540, 970
Left - 550, 940, 570, 950
Down - 584, 950, 590, 970
Right - 548, 975, 568, 985
(Don't mind that they're out of order)
'''

detectionCoords = {'up' : (950, 530, 970, 545), 'left' : (975, 570, 990, 590),
                   'down' : (950, 570, 970, 585), 'right' : (945, 570, 960, 590)}


def mainDetect(direction):
    danger = detection(direction)
    direction = move(direction, danger)
    return direction

        
def detection(direction):
    pass

def move(direction, danger):
    pass

def main():
    time.sleep(3) #Gives us time to open the game
    currentDirection = 'up'
    while True:
        imageGrab(currentDirection)
        currentDirection = mainDetect(currentDirection)
Now let's first fill in the detection function. This is where cv2 comes into play.
def detection(direction):
    img = cv2.imread('Powerline.ioScreenshot.png')
img comes out as a nested list (technically a numpy array); its size depends on the size of your screenshot. For me it is 20x15 or 15x20, depending on the direction. When printed, the img variable looks something like this -
Output:
[[[0, 0, 0], [0, 0, 0]], [[0, 0 ,0], [0, 0, 0]]]
This would be a black 2x2 picture. If you still don't see how it works, I'll break it down. The outer list is y: for every y (row going down) there is a list in the outer list. Inside each of those row lists, every entry is an x (column going right). Then there are the little lists with three numbers inside; that is the color of the pixel (note that cv2 stores the channels in BGR order, not RGB). We'll need to know this later.
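To see the indexing in practice, here is a tiny sketch built directly with numpy (which is what cv2.imread returns) rather than a real screenshot:

```python
import numpy as np

# A black 2x2 "image": y (rows) on the outside, x (columns) inside,
# and three channel values per pixel. cv2 stores channels as BGR.
img = np.zeros((2, 2, 3), dtype=np.uint8)

# Paint the pixel at row 0, column 1 pure red (B=0, G=0, R=255)
img[0][1] = (0, 0, 255)

print(img[0][1])     # the whole pixel
print(img[0][1][2])  # just the red channel: 255
```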
Here is the next part of the code -
def detection(direction):
    img = cv2.imread('Powerline.ioScreenshot.png')
    if direction == 'up' or direction == 'down':
        BigList = (img[6], img[7], img[8], (5 ,6, 7))
    else:
        BigList = (img[5], img[6], img[7], (6 ,7, 8))
Allow me to explain. Looking up or down gives a different x, y box than looking left or right, hence the if. BigList (even though it's a tuple) contains the coords. First we have three rows: we grabbed rows 7, 8, and 9 from the image. The tuple at the end holds the x values, one per row, so we are sampling three pixels spread across a 3x3 area. To check them, the next part consists of a for loop containing an if statement -
def detection(direction):
    img = cv2.imread('Powerline.ioScreenshot.png')
    if direction == 'up' or direction == 'down':
        BigList = (img[6], img[7], img[8], (5 ,6, 7))
    else:
        BigList = (img[5], img[6], img[7], (6 ,7, 8))
    for x in range(0, 3):
        pixel = BigList[x][BigList[3][x]]
        if pixel[0] not in range(38, 41) and pixel[1] not in range(27, 29) and pixel[2] not in range(0, 5):
            print('danger')
            return True
    return False
The if statement may look a little confusing, but it grabs each of the pixels we specified and checks the colors one by one. It checks whether the first channel is outside a specific range and does the same for the other two (keep in mind cv2's BGR order: index 0 is blue, not red). I chose ranges matching the background colors of "powerline.io". If at any point it senses a color outside the given ranges, it returns True to mainDetect, which is then given to move. If it gets through the for loop without sensing anything, it returns False.
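The same condition can be written as a small helper, which makes the logic easier to see. This is just a restatement of the big if above (is_danger is a name I made up), checking one pixel at a time:

```python
def is_danger(pixel):
    """True when every channel of a BGR pixel falls outside the
    background ranges used above (danger needs all three to miss)."""
    ranges = (range(38, 41), range(27, 29), range(0, 5))
    return all(channel not in rng for channel, rng in zip(pixel, ranges))

print(is_danger((39, 27, 2)))     # False - matches the background
print(is_danger((200, 180, 90)))  # True  - something else is there
```

Note the logic only flags danger when all three channels miss their ranges; if even one channel happens to land inside its range, the pixel is treated as background.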
Now for our last but not least function, move. First off, we have this danger variable that has been given to us; in the move function let's do this -
def move(direction, danger):
    if danger:
        #runcode
Next we will need to define a couple dictionaries.
def move(direction, danger):
    if danger:
        rightDict = {'up' : 'right', 'right' : 'down', 'down' : 'left', 'left' : 'up'}
        detectDict = {'up' : 'w', 'right' : 'd', 'down' : 's', 'left' : 'a'}
rightDict gives us the direction to our right; detectDict gives us the keypress for a direction. Now, you know how your parents always said "Look both ways before crossing the street"? We may not be looking both ways, but before we move we are going to look to the right. It wouldn't work to just turn randomly when we sense danger in front of us; we might end up crashing into something when we turn. So next we do an imageGrab to our right and then run detection with the direction to our right. If we see something in front and to the right, we turn left; otherwise we just turn right.
def move(direction, danger):
    if danger:
        print('danger')
        rightDict = {'up' : 'right', 'right' : 'down', 'down' : 'left', 'left' : 'up'}
        detectDict = {'up' : 'w', 'right' : 'd', 'down' : 's', 'left' : 'a'}
        ImageGrab.grab(detectionCoords[rightDict[direction]]).save('Powerline.ioScreenshot.png', 'PNG')
        if detection(rightDict[direction]):
            keyboard.press(detectDict[rightDict[rightDict[rightDict[direction]]]])
            return rightDict[rightDict[rightDict[direction]]]
        else:
            keyboard.press(detectDict[rightDict[direction]])
            return rightDict[direction]
    return direction
The complicated triple rightDict lookup is how we find our left: three right turns make a left. After pressing the key we turn left and return left as the new direction. If there is no danger to our right, we turn right and send that back as our new direction. In the end my code was too slow, so I narrowed it down to -
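The triple lookup works because four right turns bring you back to where you started, so three rights equal one left. If you'd rather avoid it, a leftDict (hypothetical name, not in the original code) is just rightDict inverted:

```python
rightDict = {'up' : 'right', 'right' : 'down', 'down' : 'left', 'left' : 'up'}

def left_of(direction):
    # Three right turns make a left turn
    return rightDict[rightDict[rightDict[direction]]]

# Inverting rightDict gives the same answer in one lookup
leftDict = {v: k for k, v in rightDict.items()}

for d in rightDict:
    assert left_of(d) == leftDict[d]
print(left_of('up'))  # left
```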
from PIL import ImageGrab
import time, cv2
from pynput.keyboard import Controller

keyboard = Controller()

'''
Up - 548, 950, 540, 970
Left - 550, 940, 570, 950
Down - 584, 950, 590, 970
Right - 548, 975, 568, 985
'''

detectionCoords = {'up' : (950, 530, 970, 545), 'left' : (975, 570, 990, 590),
                   'down' : (950, 570, 970, 585), 'right' : (945, 570, 960, 590)}


def mainDetect(direction):
    danger = detection(direction)
    direction = move(direction, danger)
    return direction

        
def detection(direction):
    img = cv2.imread('Powerline.ioScreenshot.png')
    if direction == 'up' or direction == 'down':
        BigList = (img[6], img[7], img[8], (5 ,6, 7))
    else:
        BigList = (img[5], img[6], img[7], (6 ,7, 8))
    for x in range(0, 3):
        pixel = BigList[x][BigList[3][x]]
        if pixel[0] not in range(38, 41) and pixel[1] not in range(27, 29) and pixel[2] not in range(0, 5):
            print('danger')
            return True
    return False
    

def move(direction, danger):
    if danger:
        rightDict = {'up' : 'right', 'right' : 'down', 'down' : 'left', 'left' : 'up'}
        detectDict = {'up' : 'w', 'right' : 'd', 'down' : 's', 'left' : 'a'}
        ImageGrab.grab(detectionCoords[rightDict[direction]]).save('Powerline.ioScreenshot.png', 'PNG')
        if detection(rightDict[direction]):
            keyboard.press(detectDict[rightDict[rightDict[rightDict[direction]]]])
            return rightDict[rightDict[rightDict[direction]]]
        else:
            keyboard.press(detectDict[rightDict[direction]])
            return rightDict[direction]
    return direction
    

def main():
    time.sleep(3)
    currentDirection = 'up'
    while True:
        ImageGrab.grab(detectionCoords[currentDirection]).save('Powerline.ioScreenshot.png', 'PNG')
        currentDirection = mainDetect(currentDirection)

if __name__ == '__main__':
    main()
For mouse input (WARNING: please don't be like me and give your computer control of your mouse without a way to shut it down... it didn't end well for me) -
import win32api, win32con, win32gui
def click(x,y):
    win32api.SetCursorPos((x,y))
    win32api.mouse_event(win32con.MOUSEEVENTF_LEFTDOWN,x,y,0,0)
    win32api.mouse_event(win32con.MOUSEEVENTF_LEFTUP,x,y,0,0)

#Get mouse pos (useful for finding the coords of a button so you know where to tell the mouse to click)
_, _, mousePos = win32gui.GetCursorInfo()
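About that warning: the simplest fix is to have the bot loop check a stop flag on every iteration and wire that flag to a panic hotkey (e.g. a pynput keyboard.Listener) or just Ctrl+C. Here is a minimal, platform-neutral sketch of the pattern; bot_loop, stop_flag, and the iteration counter are stand-ins I made up, not code from this post:

```python
import threading
import time

# Flag the bot checks every iteration; set it from anywhere to stop the bot.
stop_flag = threading.Event()
iterations = 0

def bot_loop():
    global iterations
    while not stop_flag.is_set():
        iterations += 1      # stand-in for: grab screen, detect, click(x, y)
        time.sleep(0.01)

worker = threading.Thread(target=bot_loop)
worker.start()
time.sleep(0.1)   # let the "bot" run briefly
stop_flag.set()   # panic! the loop exits on its next check
worker.join()
print('stopped cleanly after', iterations, 'iterations')
```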
Thanks for following along, I will answer any questions below
#2
Another example - https://www.youtube.com/watch?v=DNo451FWvKs - where I made a piano tiles bot. Just watch the beginning, where I show the ImageGrab, and the end, after some debugging.