Algorithms

Summarized using AI

Shall We Play A Game?

Randy Coulman • November 21, 2015 • San Antonio, TX

In his talk "Shall We Play A Game?" at RubyConf 2015, Randy Coulman explores the fascinating intersection of computer science and game playing, emphasizing the algorithms and data structures that enable computers to play complex games effectively. Coulman introduces the Ricochet Robots board game as a case study to demonstrate how computer algorithms can be applied to game mechanics, while also illustrating broader applications of these concepts in computer science.

Key points discussed include:

  • Historical Context: Game playing has significantly influenced advancements in computing, from simple games like tic-tac-toe to powerful examples like IBM's Deep Blue and Watson.
  • Game Representation: To teach computers to play games, it’s essential to represent game states accurately, particularly for Ricochet Robots, which consists of a grid with static and dynamic elements.
  • Algorithmic Foundations: Coulman explains various algorithms such as depth-first search, breadth-first search, and best-first search, detailing their applications in navigating game states and finding optimal solutions.
  • State Management: With approximately 976 billion possible game states, efficient state management is crucial. Key strategies in simplifying problem-solving include minimizing storage, using trees for data representation, and avoiding redundant state visits through tracking.
  • Optimizations: Several optimizations are outlined, such as heuristic methods to enhance search efficiency, pre-computed charts for robot movements, and using sorted arrays for quicker equivalence checks. These greatly improve the processing speed of the computer game player.
  • Advanced Search Techniques: Best-first search is presented as a more sophisticated approach that prioritizes exploring promising nodes based on calculated distances to goals, illustrated with A* algorithm examples.

Coulman concludes by emphasizing the potential for further improvements in game-playing algorithms, notably in collision detection and robot movement strategies. He also expresses gratitude to collaborators and invites engagement with his code posted on GitHub, highlighting practical implications of these algorithms in real-world hierarchical data problems. The talk encapsulates not just the complexity of teaching computers to play games, but also the rich foundation of computer science that arises from this endeavor.


Shall We Play A Game? by Randy Coulman

Teaching computers to play games has been a pursuit and passion for many programmers. Game playing has led to many advances in computing over the years, and the best computerized game players have gained a lot of attention from the general public (think Deep Blue and Watson).

Using the Ricochet Robots board game as an example, let's talk about what's involved in teaching a computer to play games. Along the way, we'll touch on graph search techniques, data representation, algorithms, heuristics, pruning, and optimization.


RubyConf 2015

00:00:14.900 Let's go ahead and get started. My name is Randy Coulman, and I work at a company called Zeal. Today, we are going to talk about teaching computers to play games.
00:00:21.270 Computer game playing has been a fascination for years. In fact, many of the things we know today in computing came from teaching computers to play games.
00:00:26.430 Algorithms, data structures, and search techniques have all been influenced by game playing. There can be games as simple as tic-tac-toe, which I’m sure we’ve all played. Teaching a computer to play tic-tac-toe is not particularly challenging anymore.
00:00:39.540 As you progress in complexity, you encounter more advanced games. You might recognize this screenshot from the 1983 movie "WarGames," featuring Matthew Broderick and Ally Sheedy. Games have been a research area for quite some time.
00:00:57.810 You might recognize a more recent example: they taught the AI called Watson to play Jeopardy and have since applied it to many other fields. Today, however, my focus will be on really simple game concepts to get started.
00:01:16.020 Much of what game playing involves are graphs, trees, and algorithms. So, why do these matter? Sure, they are great for games, but their applications extend beyond that. For instance, we might need to represent hierarchical structures in our applications.
00:01:34.229 A project I worked on six or eight months ago required us to represent a hierarchical structure. I was able to suggest one of these algorithms to solve a problem that arose with relationship mappings, such as those you might see on LinkedIn.
00:01:45.240 For example, you are only two connections away from someone on LinkedIn; that's a graph, and there are algorithms for traversing these graphs. But let's not focus on boring business problems. Instead, let's dive into game playing.
00:02:03.869 For my example, I will use a board game called Ricochet Robots. I was introduced to this game a couple of years ago, and as soon as I played it, I thought, 'I bet I could write a program to play this game for me.' Like any side project, I let the idea sit for a while until I proposed a talk on it. This left me with six weeks to write the game player and prepare the talk.
00:02:40.810 In Ricochet Robots, you have a 16 by 16 grid, and the center four squares are marked as places you cannot go. The grid includes walls, and the colored areas represent goal targets. In a game of Ricochet Robots, players must move the robots around the board and achieve different goals depending on the current turn.
00:03:09.700 The robots are distributed randomly on the board at the start. You then turn over a disk showing one of the goal symbols, which tells you what you must do next. This game can be played by any number of people, which is really cool.
00:03:40.610 Unlike most games, where each player has their own token to move, here, everyone looks at the board and tries to calculate how many moves it takes to get a particular robot into the correct square. The robots cannot stop moving once they start, unless they hit a wall or another robot, which makes the gameplay interesting.
00:04:06.579 For example, to get the green robot into the green square, I can devise a seven-move solution: navigate the blue robot into position, chase it with the green one, bounce off the blue robot, and land in the goal.
00:04:40.509 The game consists of playing through all 17 disks, aiming to find the shortest solution. Players have about a minute to determine their moves. If two players come up with the same-length solution, there's a tiebreaker rule: the player further behind in the game gets to claim the win.
00:04:57.169 It's a lot of fun and somewhat exciting because, even if you're not the fastest player, you still have a shot at winning. To teach a computer to play this game, we need to understand the scope of the problem at hand. In total, there are about 976.5 billion possible states that the board can exist in.
00:05:31.029 This is due to the number of cells, walls, robots, and target squares. After computing all this, it becomes clear that analyzing each state within a reasonable timeframe is impractical because at each level, you can have between nine and twenty possible next states, depending on which robot you choose to move.
00:05:51.590 Thus, you need to devise some methods for simplifying the problem, and that's what I will discuss today. The first thing to consider when writing a computer game player is how to represent the game and the structure of data.
00:06:02.199 We need to begin by representing the board. The game board configuration doesn’t change during play. It consists of the fixed 16 by 16 grid with a static center island, plus walls and targets. The arrangement of these elements remains the same throughout each game.
00:06:31.629 However, the goal cell will change each turn, albeit less frequently than the movement of the robots. Every time a robot moves, though, its position changes, leading to a combination of static and dynamic information to track.
00:06:50.229 In managing the states, we want to minimize the storage needed for the data and categorize what is static and what is dynamic. This distinction helps reduce storage use, which is crucial when solving larger problems, as storage efficiency often competes with time efficiency.
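As a hedged sketch of this static/dynamic split (all names are hypothetical, not Coulman's actual code), the immutable board can be shared by every search node while each state carries only the robot positions:

```ruby
# Hypothetical sketch: the immutable Board is shared by every search node,
# while each State carries only the data that changes each move --
# the robot positions.
Board = Struct.new(:size, :walls, :target) do
  def inside?(cell)
    cell.all? { |coord| (0...size).cover?(coord) }
  end
end

State = Struct.new(:robots) do
  # Moving a robot yields a new State; the Board itself is never copied.
  def move(color, new_cell)
    State.new(robots.merge(color => new_cell))
  end
end

board = Board.new(16, [], [7, 7])
state = State.new({ green: [0, 0], blue: [3, 5] })
moved = state.move(:green, [0, 5])
```

Because only the small, dynamic part is duplicated per state, memory use stays proportional to the number of robots rather than the size of the board.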
00:07:19.489 Next, we need to represent robot movement. We can think of board states as boxes connected by lines illustrating the movements made by robots to reach new states. This data structure is a tree, a classic concept in computer science.
00:07:55.490 In a tree, the topmost node is called the root, with each other node having exactly one parent. The leaf nodes, located at the bottom, do not have any children. Working with trees requires searching through them effectively, which can be done using various algorithms.
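The tree vocabulary above can be made concrete with a toy Ruby struct (an illustration only, not code from the talk):

```ruby
# Toy tree: the root is the only node without a parent,
# and leaves are the nodes without children.
Node = Struct.new(:value, :children) do
  def leaf?
    children.empty?
  end
end

leaf_a = Node.new(:a, [])
leaf_b = Node.new(:b, [])
root   = Node.new(:root, [leaf_a, leaf_b])
```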
00:08:18.870 Trees can also be useful in solving mazes, as demonstrated in a book by Jamis Buck, entitled "Basil and Fabian: Three Ways Through." This book is an accessible introduction to these algorithms by having characters navigate mazes to reach a castle.
00:08:44.909 The first algorithm I'll discuss is depth-first search, which is relatively straightforward. In depth-first search, you start at the root and follow one branch all the way to the end, then backtrack before exploring another branch. This approach can be implemented through recursion.
00:09:25.610 In code, the solve method generates the initial path and recursively solves from there. As you generate possible paths, you check each to see if it's the shortest. If you don't find any, you return 'no solution.' However, with depth-first search, there are potential issues like getting stuck in cycles.
00:09:50.660 To improve on this, it's necessary to implement cycle detection to prevent infinite loops during search. For instance, if the green robot goes around in circles at the top, a naive depth-first search will keep cycling without halting. You can address this with a mechanism that tracks visited states.
00:10:22.230 For every possible move, you track visited states so you never revisit the same state along the current path. Each path explored must carry its own list of visited states; sharing one list across all paths in depth-first search can cause you to miss shorter solutions that reach the same state by a different route.
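A minimal Ruby sketch of recursive depth-first search with per-path visited tracking, using a toy successor function in place of the real move generator (names are hypothetical, not Coulman's code):

```ruby
# Recursive depth-first search. Each recursive branch gets its own copy of
# `visited` (via `visited += [state]`), so the same state may still be
# reached along a different, possibly shorter, path.
def depth_first(state, successors, goal, visited = [])
  return [state] if state == goal
  return nil if visited.include?(state)

  visited += [state] # new array: siblings do not share this addition
  successors.call(state).each do |next_state|
    if (rest = depth_first(next_state, successors, goal, visited))
      return [state] + rest
    end
  end
  nil
end

# Tiny toy graph standing in for the game: states are symbols.
edges = { a: [:b, :c], b: [:a, :d], c: [:d], d: [] }
succ  = ->(s) { edges[s] }
path  = depth_first(:a, succ, :d)
```

Note the cycle a ↔ b in the toy graph: without the visited check, the search would recurse forever.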
00:11:09.520 In reality, what may seem like a tree structure is actually a graph, where a node can be reached from multiple parents. Graphs are more complex than trees because they allow interconnected paths, but the same depth-first search that solves mazes works on graphs too, as long as you track visited nodes.
00:11:49.640 Ricochet Robots adds complexities of its own: a legal move requires your robot to ricochet at least once, a layer of challenge absent from simpler games like tic-tac-toe. Some seemingly optimal solutions turn out to be illegal for exactly this reason, which makes the problem more intriguing.
00:12:07.790 Next, let's consider depth-first search's downfall: to guarantee the shortest path, it must explore entire branches before it can compare them. Given the 976 billion potential states, that's not feasible, so a more efficient algorithm is needed.
00:12:49.620 The alternative is breadth-first search, which explores the levels of the tree instead of going deep into a single branch. You examine all elements at one level before moving deeper, ensuring that when you find the first solution, it is guaranteed to be the shortest. The method employs a queue.
00:13:45.620 In breadth-first search, the first solution found is assured to be the shortest because you’ve evaluated all possible shorter paths first. If a state has been seen from previous exploration, you can use a global visited list to omit any state that has already been processed.
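The queue-plus-global-visited-set idea can be sketched in Ruby like this (a toy illustration with a stand-in successor function, not the talk's code):

```ruby
require "set"

# Breadth-first search: a FIFO queue of paths plus one global visited set.
# Because every shorter path is dequeued and checked first, the first path
# that reaches the goal is guaranteed to be a shortest one.
def breadth_first(start, successors, goal)
  queue   = [[start]]
  visited = Set[start]

  until queue.empty?
    path = queue.shift # FIFO: oldest (shortest) path first
    return path if path.last == goal

    successors.call(path.last).each do |next_state|
      next if visited.include?(next_state)
      visited << next_state
      queue << (path + [next_state])
    end
  end
  nil
end

edges = { a: [:b, :c], b: [:d], c: [:d], d: [:e], e: [] }
succ  = ->(s) { edges[s] }
path  = breadth_first(:a, succ, :e)
```

Unlike depth-first search, one global visited set is safe here: by the time a state is first reached, no shorter route to it can still be pending in the queue.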
00:14:35.240 While breadth-first search is effective, it can still be slow with the large number of states. Therefore, optimizations are necessary by either performing fewer actions or executing them more quickly. Heuristic methods can help guide decisions regarding efficiency.
00:15:06.040 To process fewer states, I exploited heuristics, rules of thumb that guide the search. For instance, in Ricochet Robots the last move of any solution is always the one that moves the goal robot into the target cell.
00:15:54.490 By prioritizing this primary move in the search algorithm, I found significant improvements in processing times. I analyzed performance data from various sample games, and these new methods consistently allowed for greater efficiency in playing through the turns.
00:16:20.490 Another optimization involved checking for solutions earlier in the process. When exploring paths, I began to check if any of the successors solved the game right as they were generated, potentially discarding countless nodes before fully exploring them.
00:17:10.140 Subsequent profiling helped me identify further inefficiencies in how I was managing robot movement. Creating a pre-computed chart of each robot's stopping cells led to faster evaluations in moving robots on the grid.
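One way such a pre-computed chart might look, reduced to a single row of cells for illustration (a hypothetical sketch, not the talk's actual data structure): for each cell, store where a rightward slide stops on a robot-free board, then at move time only the blocking robots need checking.

```ruby
SIZE = 8
# wall_after[i] means there is a wall between cell i and cell i + 1.
wall_after = Array.new(SIZE, false)
wall_after[4] = true

# Pre-compute, right to left, where a rightward slide from each cell
# stops on a board with no robots on it.
stop_right = Array.new(SIZE)
stop_right[SIZE - 1] = SIZE - 1
(SIZE - 2).downto(0) { |i| stop_right[i] = wall_after[i] ? i : stop_right[i + 1] }

# At move time the chart is a lookup; only a robot inside the slide's span
# can shorten it, stopping the mover one cell short of the nearest blocker.
def slide_right(from, stop_right, robot_cells)
  stop    = stop_right[from]
  blocker = robot_cells.select { |c| c > from && c <= stop }.min
  blocker ? blocker - 1 : stop
end
```

A real board needs four such charts (one per direction) over a 16 by 16 grid, but the lookup-then-check-blockers shape is the same.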
00:17:34.230 I recognized that I could treat states with interchangeable robot colors as equivalent states, allowing me to save processing time and ultimately speed up state evaluations. Thus, my focus shifted to utilizing properties of equivalence classes.
00:18:14.490 This involved hashing sets of robot positions, enabling fast comparisons that yield performance gains in analyzing board states. I also discovered that using sorted arrays instead of sets for equivalence checking could enhance speed due to the way Ruby handles these data structures.
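The sorted-array equivalence key might look like this in Ruby (a hedged sketch; the real solver's key may differ):

```ruby
# Two board states are equivalent when the goal robot matches and the
# remaining robots occupy the same cells, whatever their colors.
# Sorting the positions gives a canonical, hashable key, and Ruby hashes
# equal arrays to equal values, so the keys work directly in a Hash or Set.
def equivalence_key(goal_robot_cell, other_robot_cells)
  [goal_robot_cell, other_robot_cells.sort]
end

a = equivalence_key([3, 3], [[0, 1], [5, 2]])
b = equivalence_key([3, 3], [[5, 2], [0, 1]]) # same cells, colors swapped
```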
00:18:54.370 Additionally, I trimmed down the number of objects created during execution. By avoiding unnecessary state duplication or unnecessary checks for object equivalence, I enhanced overall performance.
00:19:29.269 In summary, these optimizations significantly streamlined my solver's processing speed. My testing showed remarkable improvements, with longer game simulations reducing total processing time.
00:20:10.129 There's still room for improvement, as there are certain cases where the computer could not find solutions as quickly as a human.
00:20:29.289 Moving on to another powerful algorithm, let’s discuss best-first search. Rather than strictly adhering to depth or breadth, this algorithm evaluates which node to explore next, focusing on nodes likely leading to an optimal solution.
00:21:02.870 The key is using a priority queue—elements get sorted according to priority, allowing the algorithm to explore the most promising nodes first. For example, the A* algorithm computes scores for each state based on both distance traveled and an estimate of remaining distance.
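The scored search loop can be sketched as follows (an illustration with hypothetical names; a production solver would use a binary heap for the priority queue, while a linear scan keeps this short):

```ruby
# Best-first search scored A*-style: f = moves taken so far plus an
# estimate of moves remaining. The lowest-scored path is expanded first.
def best_first(start, successors, goal, heuristic)
  frontier = [[heuristic.call(start), [start]]]
  visited  = { start => true }

  until frontier.empty?
    entry = frontier.min_by { |score, _| score } # stand-in for a heap pop
    frontier.delete(entry)
    path = entry.last
    return path if path.last == goal

    successors.call(path.last).each do |next_state|
      next if visited[next_state]
      visited[next_state] = true
      score = path.length + heuristic.call(next_state)
      frontier << [score, path + [next_state]]
    end
  end
  nil
end

# Toy example: states 0..5 on a line, moving one step at a time.
succ = ->(s) { [s - 1, s + 1].select { |n| (0..5).cover?(n) } }
h    = ->(s) { (5 - s).abs }
path = best_first(0, succ, 5, h)
```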
00:21:43.890 This requires an accurate estimating function, designed so that it can never overestimate the remaining distance (such a function is called admissible). I learned many strategies from Michael Fogleman's work, which led to specific methods for creating these estimating functions.
00:22:00.490 By simulating robots' ability to move without walls—predicting their optimal paths to the goal—I was able to score each state effectively.
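A simpler admissible estimate in the same spirit (my illustration, not the talk's exact function): since a robot moves only in straight lines, it needs at least one move if it shares a row or column with the target and at least two otherwise. Never overestimating is what keeps A* optimal.

```ruby
# Lower bound on moves for the goal robot to reach the target cell.
# 0 if already there; 1 if a single straight slide could conceivably land
# on it (same row or column); otherwise at least 2 moves are required.
def estimated_moves(robot, target)
  return 0 if robot == target
  robot[0] == target[0] || robot[1] == target[1] ? 1 : 2
end
```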
00:22:35.639 Upon implementing this with the existing code, I hoped to see an increase in efficiency. Yet, I found that many states yield similar estimates due to the nature of movement patterns, making it challenging to accelerate performance.
00:23:00.680 Moving forward, different heuristics are needed to refine targeting, such as utilizing recent movements to focus search efforts where they matter most and assessing directions that facilitate more direct routes towards the goal.
00:23:44.380 To conclude, I still see potential for enhancements, especially regarding collision detection during robot movement. Identifying stopping points would benefit from a more sophisticated algorithm to effectively manage the interactions of overlapping movements.
00:24:21.550 I want to thank a few key contributors to this project. Trevor Year, who designed these slides, did an extraordinary job with the graphics and animations that enhanced my presentation.
00:24:48.210 To others at Zeal who collaborated with me, provided advice, or assisted with coding and debugging: I'm grateful for your contributions. Special thanks to Michael Fogleman, whose instructive talks often inspired me to dig deeper.
00:25:21.130 You can check out my code on Github, and shortly I will be posting my slides to Speaker Deck as well. I welcome any feedback on my presentation skills, so don’t hesitate to share your thoughts!
00:26:05.210 Finally, I work at a web and mobile application consultancy called Zeal. We are currently hiring and have launched a paid apprenticeship program; if that piques your interest, come talk to me!
00:26:26.390 I also have some Zeal stickers along with Stellar High Five stickers, which you can use to recognize contributions you find meaningful—give someone a virtual high five on Twitter!
00:27:06.100 I appreciate your time and interest. Thank you! If you have any questions, feel free to ask!
00:37:02.310 I used ruby-prof, an excellent profiler that can generate HTML output for analysis.
00:37:05.540 Asked how today's concepts apply to everyday tasks, I pointed to a project that involved hierarchical data structures, where breadth-first search proved effective in practice for finding common ancestors and descendants.
00:37:26.550 Lastly, it's notable how some optimizations, particularly around object creation, can drastically speed things up, by as much as 550 times in my measurements.