##### Minimax algorithm
**freethrownucleus** (May-24-2023, 05:12 PM)

Is there any way I can make my minimax algorithm stronger? (For a tic-tac-toe game.)

```python
def minimax(mark, square, alpha=-1000, beta=1000, depth=0):
    """Minimax algorithm with alpha/beta pruning."""
    # Place the mark and check for a win
    square.mark = mark
    if square.check_win():
        # Give extra weight to earlier wins/losses
        score = 10 - depth if mark is ROBOT else depth - 10
        board.depth = min(board.depth, depth)
    elif len(empty_squares := board.empty_squares()) == 0:
        # No plays left. Draw.
        score = 0
    elif mark is PLAYER:
        # Pick best move for robot
        score = -1000
        for s in empty_squares:
            score = max(score, minimax(ROBOT, s, alpha, beta, depth + 1))
            alpha = max(alpha, score)
            if alpha > beta:
                break
    else:
        # Guess what move player will make
        score = 1000
        for s in empty_squares:
            score = min(score, minimax(PLAYER, s, alpha, beta, depth + 1))
            beta = min(beta, score)
            if alpha > beta:
                break
    # Remove mark and return score for the square
    square.mark = EMPTY
    return score
```

**deanhystad** (May-24-2023, 08:16 PM)

If you let minimax do an exhaustive search (no depth limit, testing all possible moves), it will win or draw every time. You cannot get stronger than that. I don't see any problem with the posted code (other than the indentation). This line does nothing:

```python
board.depth = min(board.depth, depth)
```

Maybe you are using it wrong.

**freethrownucleus** (May-29-2023, 06:56 PM)

Okay, but if I wanted to ease up on the algorithm (for instance, if I would like to win in at most 3-4 moves), could I just reduce the depth? Let's say I put `depth = 10`.

**deanhystad** (Jun-01-2023, 07:04 PM)
I don't understand what you mean by this:

> if I would like to win in at most 3-4 moves, could I just reduce the depth?

You cannot win in at most 3-4 moves without a cooperative or inept opponent. Minimax assumes the opponent will make the best possible move, not stupid moves that lead to an early defeat.

You can use a "depth" in minimax to give preference to wins that occur in fewer moves, or to limit how many moves are tested before a solution is chosen. The max depth of a 3x3 tic-tac-toe board is 9 for the first move, 8 for the second, 7 for the third, and so on. Limiting depth to 9 or greater will have no effect on the algorithm: it will search all possible moves before it ever runs into the depth limit.

Limiting depth only affects early moves. Limiting depth to 7 seems like it should affect two moves, but it only affects the robot's first move (either move 1 or 2). By move 3 there are only 7 moves remaining, so the "max depth" of the board is already 7; for the robot's second move it will try every possible combination before choosing which move to make.

Limiting the depth does not have much effect on how the robot plays. I modified your minimax to count how many moves the robot attempts. When the robot plays first with no depth limit, it tries 34,202 moves. If you limit the max depth to 7, this is reduced to 25,766 moves. You could think of this as meaning the robot is only 75% as effective as it is with no depth limit, but according to the minimax algorithm, all squares are equally good choices for the first move, so you may as well choose the first square randomly.

Limiting the search depth to 5 reduces the number of moves searched to around 5,200, only 15% as many as when the search depth was 9. An exact number cannot be given because the count changes based on what plays have been made. With a depth limit of 5, the robot's second, or second and third, moves are affected by the limit.

You might notice a difference in how the robot plays, but it really depends on what moves were made in earlier rounds.
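The scoring scheme and the depth-limit behaviour discussed in this thread can be checked with a small self-contained sketch. The board representation, the `winner` helper, and the `positions_searched` counter below are stand-ins of my own, not the poster's actual Board/Square classes; the recursion mirrors the posted minimax (earlier wins weighted via `10 - depth`, alpha/beta pruning, plus an optional `max_depth` cutoff).

```python
# Minimal tic-tac-toe minimax sketch with a node counter, for measuring
# the effect of a depth limit.  Board is a flat list of 9 cells; the
# names below are illustrative stand-ins, not the poster's classes.

ROBOT, PLAYER, EMPTY = "X", "O", " "

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return the mark that has three in a row, or None."""
    for a, b, c in WIN_LINES:
        if board[a] != EMPTY and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, mark, alpha=-1000, beta=1000, depth=0, max_depth=9,
            counter=None):
    """Score the position with `mark` to move; earlier wins score higher."""
    if counter is not None:
        counter[0] += 1                  # count every position visited
    won = winner(board)
    if won == ROBOT:
        return 10 - depth                # quick robot wins are worth more
    if won == PLAYER:
        return depth - 10                # late losses hurt less
    empty = [i for i, m in enumerate(board) if m == EMPTY]
    if not empty or depth >= max_depth:
        return 0                         # draw, or cut off by the depth limit
    if mark == ROBOT:                    # robot maximizes
        score = -1000
        for i in empty:
            board[i] = ROBOT
            score = max(score, minimax(board, PLAYER, alpha, beta,
                                       depth + 1, max_depth, counter))
            board[i] = EMPTY
            alpha = max(alpha, score)
            if alpha > beta:
                break
    else:                                # player minimizes
        score = 1000
        for i in empty:
            board[i] = PLAYER
            score = min(score, minimax(board, ROBOT, alpha, beta,
                                       depth + 1, max_depth, counter))
            board[i] = EMPTY
            beta = min(beta, score)
            if alpha > beta:
                break
    return score

def positions_searched(max_depth):
    """How many positions the robot's first move examines at this limit."""
    counter = [0]
    minimax([EMPTY] * 9, ROBOT, max_depth=max_depth, counter=counter)
    return counter[0]
```

With perfect play from both sides the root score of an empty board is 0 (a draw), and any limit of 9 or greater visits exactly the same positions, which matches the observation above that such limits have no effect. The exact node counts will not match the thread's figures, since this sketch's bookkeeping and move ordering differ from the poster's.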
