
Solving Ricochet Robots

by Randy Coulman

In this video, Randy Coulman presents a detailed walkthrough of building a solver for the board game Ricochet Robots, applying computer science concepts such as graphs and search algorithms to automate gameplay. The primary aim is a computerized player that can efficiently find optimal move sequences.

Key points discussed throughout the presentation include:

- Introduction to Ricochet Robots: A brief overview of the game, in which players navigate robots to specific colored target cells on a 16x16 grid; robots slide in straight lines and stop only when they hit an obstacle.

- Problem Space Characterization: Coulman outlines the staggering number of possible board states (over 976 billion) and a branching factor of up to twenty possible robot moves per state.

- Data Representation: He discusses how to represent board states effectively, combining the goal cell and robot positions into a manageable format.

- Graph vs. Tree Structures: The presentation distinguishes between tree structures and graphs, highlighting the implications of cycles in state spaces when searching for solutions.

- Search Algorithms: Coulman introduces basic algorithms such as Depth-First Search (DFS) and Breadth-First Search (BFS), explaining their implementation, advantages, and limitations.

- Heuristics and Optimizations: He emphasizes the importance of heuristics in improving the solver's performance, such as trying the active robot's moves first and pre-computing stopping points for rapid lookups.

- Performance Testing and Results: The solver's speed improvements are validated through performance testing, which shows a significant reduction in solving time.

- Potential Future Enhancements: Plans for refining the solver include exploring the A* algorithm, leveraging heuristics for better estimation, and optimizing object handling to improve processing times.

- Community Engagement: Coulman encourages community involvement through the availability of the solver’s code on GitHub, inviting collaboration and further experimentation.

In conclusion, the talk illustrates not only the complexities of programming a game solver but also the lessons learned along the way, showing how graph search techniques apply to both games and real-world problems. With insights into potential future directions, Coulman underscores the importance of continuous improvement in algorithm performance while fostering community involvement in the solver's development.

00:00:22.960 Okay, so today I'm going to talk about writing a solver for a game called Ricochet Robots. This is a board game. I've got a couple of copies if you want to come and play afterward; maybe Holly will set up a table somewhere. It's a super fun game. So why would I want to talk about writing a solver for this game? Really, I wanted a solver because we play at work all the time and we never know if we've got the best solutions. And of course, the best way to do a side project is to propose a conference talk about it, get accepted, and then madly write the solver while preparing the talk at the same time. That's the best way to get a side project done, right?
00:01:00.079 But really, what I'm going to be talking about is some computer science concepts: graph search algorithms and things like that. I learned this stuff back in college, but not everybody knows it. We had a local Ruby users group meetup where we worked on maze solvers, which use very similar algorithms. Not many people have encountered these algorithms, so I thought they were worth talking about, and the game is a fun way to do that.
00:01:22.280 So why do we care about graphs and graph search algorithms? Graphs are actually a very common data structure. In a Rails app I'm working on, we just had a problem modeling business organizations where we needed graphs and a couple of these algorithms. Relationship maps, like your LinkedIn or Facebook connections, are graph structures; I'm sure you're all part of some social network. Travel planning, trip planning, network routing: all of these use these kinds of data structures and algorithms. Obviously, they're much more complex than the ones I'm going to talk about, but they're still important to know about.
00:01:49.720 So what is Ricochet Robots? As I said, it's a board game with a 16 by 16 grid of cells, some of which are colored target cells, and five robots distributed around the board. The idea is to navigate the robots so that the robot matching the target's color lands on the goal cell. In this case, we're looking for the green square target, and we need to somehow move the green robot into that cell. The robots can only move in straight lines and cannot stop unless they hit something, like a wall or another robot.
00:02:05.450 Players can move any number of robots, and everyone stares at the board trying to figure out how many moves it would take to solve the puzzle. The player who calls out the best answer within the time limit gets to demonstrate it. Here's an example of a solution: we move the blue robot first, and then the green robot, which ricochets off the blue robot and stops in the goal cell. That's how the game works. And if you're like me, you may immediately think, 'Hey, I want to write a solver for this, because it seems like you should be able to!'
00:02:28.830 But before you embark on writing a solver, you need to characterize the problem and understand what you're dealing with. First of all, what's the size of the problem space? There are 256 cells, with four of them always occupied by a center island, leaving 252 cells. Since there are five robots, this means there are 976.5 billion possible board states—different positions of all the robots. That's a pretty big search space!
00:03:01.580 The other complicating factor is the branching factor from each board state: anywhere from nine to twenty possible moves, depending on the configuration. If the robots are in corners, there may be as few as nine possible moves, but in general each of the five robots can move in up to four directions. Given these numbers, you have to think about how to represent the board state in memory without taking up too much space. I studied the board a little and determined that the grid itself is fixed: it's always the same grid with a static center island, so you only need to represent it once. The walls and targets change between games but remain constant throughout a single game, so you can represent them just once as well.
00:03:41.290 The more variable elements are the goal and the robot positions. The goal can change every turn, but the overall structure is static, so it’s manageable. Therefore, when representing a board state in the solver, I combined the goal cell and the robot positions.
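As a rough illustration of that representation, here is a minimal Ruby sketch. `BoardState` and the [row, column] position encoding are assumptions made for illustration, not the solver's actual classes:

```ruby
# Only the goal cell and the robot positions vary while solving; the
# fixed grid, walls, and targets can live in a separate, shared Board
# object. Struct gives value equality and hashing for free, which
# matters later when states go into a visited set.
BoardState = Struct.new(:goal_cell, :robots)

# Hypothetical example: goal at row 9, column 4; robots keyed by color.
state = BoardState.new([9, 4], { green: [0, 3], blue: [12, 7] })
```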
00:04:00.440 You also have to represent how the robots move. Here, the board states are the nodes, and each edge represents the movement of one robot in one direction. A tree is a common data structure in computer science: nodes connected by edges, where each node has exactly one parent, except the root node, which has none. That parent structure is what makes it a tree.
00:04:13.560 When you have trees, you generally need to search through them, and that's where search algorithms come in. If you're looking for a good introduction to search algorithms, I recommend Jamis Buck's book Mazes for Programmers, which I read during its development. He discusses maze solvers, but the algorithms are quite similar; highly recommended. Tomorrow, Jamis will be speaking, and I think he'll cover some of the same material.
00:04:49.650 The first search algorithm I'm going to discuss is one of the most basic: depth-first search. In a depth-first search, you explore all the way down one branch of the tree, then back up and try a different branch. It looks like this: you go all the way down to the bottom of the tree, back up, then over again—a typical representation of depth-first search.
00:05:20.790 This algorithm is very simple to program; it's recursive. I have a solve method that takes the initial state and solves recursively. As I solve, I build a list of candidate solutions and ultimately return the shortest one as my answer. For each path, if it's a solution, I add it to the list of candidates; if not, I generate all of its successor paths and recurse.
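A minimal sketch of that recursive search, assuming hypothetical `Path#solution?`, `Path#successors`, and `Path#length` methods rather than the solver's real API:

```ruby
# Depth-first search: recurse down each branch, collecting every path
# that solves the puzzle, then return the shortest one.
def solve(initial_state)
  @candidates = []
  visit(Path.new([initial_state]))
  @candidates.min_by(&:length)
end

def visit(path)
  if path.solution?
    @candidates << path
  else
    # Without a guard against cycles (discussed next), this recursion
    # can revisit the same states forever.
    path.successors.each { |successor| visit(successor) }
  end
end
```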
00:05:41.130 Now, in this solver, I started out solving for a single robot because that was an easier problem to work with. The first challenge you encounter is cycles. You might get a robot stuck in a corner, going around and around, and you need to guard against that to avoid an infinite loop that would prevent you from ever finding a solution. You might think of keeping track of every board state seen so far and never processing one again, but the problem is that you can reach the same cell through paths of different lengths. For example, one path might reach a cell in four moves while another reaches the same cell in only three; if the shorter path is discarded because the cell was already visited, the shortest path to the goal may be missed.
00:06:41.330 This leads to the conclusion that this structure, which initially seems like a tree, isn't one at all. It behaves more like a graph where nodes can have multiple parents. There are paths of various lengths leading to the same node, which complicates the search. Also, while the algorithms I'm discussing aim to find optimal shortest paths, the rules of Ricochet Robots can yield short solutions that might not be legal. For instance, if a robot starts on a goal cell, that’s illegal; it must leave and return or ricochet off the walls before reaching the goal cell.
00:07:12.330 Therefore, an optimal solution may require a move sequence that looks like a cycle but is perfectly legal: the robot leaves the goal cell and comes back, revisiting a position that a naïve cycle check would prune. These complications illustrate why exhaustively searching the entire state space becomes unmanageable, especially given the vast number of possible configurations.
00:07:56.610 Thus, we need a better algorithm. The next algorithm I will discuss is breadth-first search. This algorithm explores the tree level by level, rather than diving deep into branches: effectively, it evaluates all zero-move solutions first, then all one-move solutions, then two, and so forth. The advantage is that it is guaranteed to find the shortest solution first, because you always exhaust all one-move paths before trying two, two before three, and so on.
00:08:41.980 The implementation of this algorithm uses a queue. You put the initial path into the queue, then repeatedly pull the first path off. If it's a solution, you return it and you're done; otherwise, you put its successor paths at the end of the queue and move on. Furthermore, with this algorithm we can use a global visited list, which didn't work with depth-first search: if we arrive at a state we've already seen, the first visit is guaranteed to have taken no more moves than this one, so we never need to process a state twice. That's a valuable optimization.
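Sketched in Ruby, with `Path` and a hypothetical `last_state` accessor standing in for the real classes:

```ruby
require "set"

# Breadth-first search with a global visited set. The solution check
# happens when a path is dequeued; that detail matters for the
# optimization discussed below.
def solve(initial_state)
  queue   = [Path.new([initial_state])]
  visited = Set.new([initial_state])

  until queue.empty?
    path = queue.shift
    return path if path.solution? # the first hit is a shortest solution

    path.successors.each do |successor|
      next if visited.include?(successor.last_state) # reached earlier in <= moves
      visited << successor.last_state
      queue << successor
    end
  end
end
```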
00:09:17.160 Even so, breadth-first search might still not be fast enough. The global visited list is one optimization, but in general there are three kinds of optimization you can pursue. The first is to do fewer things: reduce the size of the search space. The second is to do things faster, which is what most people mean by performance optimization: speeding up the code, often with the help of profilers. The third is to introduce heuristics, rules of thumb that guide the search toward a solution sooner.
00:09:44.020 In my solver, the first optimization I adopted was a heuristic based on the nature of the game. Crucially, the last move you make is usually to direct the active robot into the goal cell, so it makes sense to always try the active robot's moves first. If we do this consistently, when we reach the level of the tree that contains the solution, we find it sooner. I verified this by charting solve times over a sequence of turns, with and without the heuristic; the two lines mostly coincide, but the heuristic wins more often than it loses.
00:10:33.290 From my research, I found that most solutions only involve two or maybe three robots, and the heuristic proved useful. However, there were outliers where the performance was significantly worse, demonstrating that heuristics do not always work as expected. I decided to maintain this heuristic since it seems beneficial overall, but I recognized the need for further analysis to ensure it truly provides value.
00:11:21.180 As I continued optimizing, I realized that I was processing far too many states. In the algorithm I showed earlier, I check whether a path is a solution only when I pull it off the queue. But what if I check before I even put it in the queue? If I generate successors and check them immediately, I find the solution a full level sooner, significantly reducing the total number of nodes I process. This change yielded almost a factor-of-three reduction in the number of states processed.
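In terms of the earlier sketch, the change amounts to moving the solution check into the successor loop:

```ruby
# Check successors as they are generated instead of when dequeued,
# finding the answer a full tree level earlier.
path.successors.each do |successor|
  return successor if successor.solution? # check before enqueueing
  next if visited.include?(successor.last_state)
  visited << successor.last_state
  queue << successor
end
```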
00:12:23.120 Next, I ran a profiler and discovered that most of my time was spent determining where the robots would stop. My code went through a convoluted process just to decide whether a robot could move at all. To address this, I pre-computed the stopping cells: for every cell on the board and every direction, I determined where a robot starting there would stop, initially ignoring other robots for simplicity. This trades space for time, a common and effective optimization technique.
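A sketch of that pre-computation, assuming hypothetical `Board#cells`, `Board#next_cell`, and `Board#wall_between?` helpers, with `wall_between?` returning true at the board's edge:

```ruby
DIRECTIONS = %i[up down left right]

# For every cell and direction, slide until a wall blocks further
# movement and record the stopping cell. Robots are deliberately
# ignored here and accounted for at solve time.
def precompute_stops(board)
  stops = {}
  board.cells.each do |cell|
    DIRECTIONS.each do |direction|
      current = cell
      current = board.next_cell(current, direction) until board.wall_between?(current, direction)
      stops[[cell, direction]] = current
    end
  end
  stops # space traded for time: one hash lookup per move when solving
end
```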
00:13:08.280 The pre-computation helped greatly; the solver processed significantly more states per second afterward. After implementing these changes and exploring additional strategies, I learned from others, like Michael Fogelman, about recognizing equivalent board states. Two states in which the active robot is in the same place and the remaining robots occupy the same set of cells behave identically, regardless of the colors of those other robots, because they matter only as obstacles. Grouping states into these equivalence classes reduced the number of distinct states to handle.
00:14:18.300 At the same time, I refined the representation of board-state equivalence to make comparisons quicker; small adjustments like switching from set comparisons to sorted arrays helped. I also realized that I was creating unnecessary objects every time I tried to move a robot. If a robot couldn't move, I now skip creating a new state, which produced another significant performance gain.
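A sketch of such an equivalence key, assuming `state.robots` is a hash from color to [row, column] position and `state.active_color` names the robot that must reach the goal:

```ruby
# The active robot's position matters exactly; the other robots matter
# only as interchangeable obstacles, so their positions are reduced to
# a sorted array (cheaper to build and compare than a set).
def equivalence_key(state)
  others = state.robots
                .reject { |color, _| color == state.active_color }
                .values
                .sort
  [state.robots[state.active_color], others]
end
```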
00:15:07.830 In the quest for optimization, I also looked for opportunities to compare objects by reference rather than performing deep equality checks, streamlining operations along the way. The solver now works quite well, but it's still not perfect; certain positions remain slow. At some point, though, I had to pause development and prepare the slides for this talk to showcase my progress so far.
00:15:53.790 After applying optimizations, I tracked the total solving time for an example game and saw notable speed improvements. The algorithms I initially used were naïve and slowed down performance. Thanks to the optimizations I’ve discussed, I’ve seen a substantial acceleration of processing rates. Let me show you a quick demonstration of the solver in action; it's text-based, but I'm running an example that I know the timing for.
00:16:43.520 I'll print out the random number seed so that I can reproduce the same game reliably, which has proven valuable for testing and optimization. Now I'll run the solver; it's programmed to play an entire game of 17 targets autonomously. I can guarantee that I am not capable of playing this game as quickly as the solver can!
00:18:09.470 The blue square one is a little slower; it's actually a nine-move solution. There we go: it completed in about 30 seconds, which is quite impressive for a full game of Ricochet Robots. The solver is effective, but some positions still take noticeably longer to solve than others.
00:18:54.600 So how else might I improve this solver? Certainly, I could explore better algorithms. A well-known option is the A* algorithm, a best-first search. It orders candidate paths by the distance traveled so far plus an estimate of the distance remaining. As long as the estimate is conservative and never overestimates the remaining distance, A* is guaranteed to find optimal solutions.
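A sketch of A* under those assumptions, reusing the hypothetical `Path` from earlier. A real implementation would use a priority queue; `min_by` keeps the sketch short:

```ruby
require "set"

# Best-first search: expand the path with the lowest moves-so-far plus
# estimated moves remaining. With an estimate that never overestimates,
# the first solution found is optimal.
def a_star(initial_state, estimate)
  frontier = [Path.new([initial_state])]
  visited  = Set.new([initial_state])

  until frontier.empty?
    path = frontier.min_by { |p| p.length + estimate.call(p.last_state) }
    frontier.delete(path)
    return path if path.solution?

    path.successors.each do |successor|
      next if visited.include?(successor.last_state)
      visited << successor.last_state
      frontier << successor
    end
  end
end
```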
00:19:37.550 To implement this, I'd have to devise an effective method of estimation. Building on Michael Fogelman's insights, I can estimate how many moves it would take to reach the goal cell from each cell on the board, accounting for the walls. In effect, I'd pre-compute a map of estimated move counts based on the structural constraints of the board.
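One way to build such a map, assuming a hypothetical `Board#cells_that_can_reach` helper that returns every cell from which a wall-only slide passes through or stops at a given cell:

```ruby
# Flood outward from the goal one "move" at a time, using only the
# walls. distances[cell] is then a lower bound on the moves a robot
# needs from that cell, suitable as an A* estimate.
def estimate_map(board, goal)
  distances = { goal => 0 }
  frontier  = [goal]
  moves     = 0

  until frontier.empty?
    moves += 1
    frontier = frontier.flat_map { |cell| board.cells_that_can_reach(cell) }
                       .uniq
                       .reject { |cell| distances.key?(cell) }
    frontier.each { |cell| distances[cell] = moves }
  end
  distances
end

# Hypothetical use as the A* estimate for the active robot:
# map = estimate_map(board, goal)
# estimate = ->(state) { map[state.robots[state.active_color]] || 0 }
```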
00:20:01.490 I'm also considering backward searching techniques, where I analyze the possible prior positions that lead to the goal, and I'll examine whether combining estimates and heuristics improves efficiency. Another angle is choosing smarter initial moves based on recent history. Refining my heuristics could lead to even faster performance.
00:20:27.800 I'm aware that the slowest remaining part of the solver is working out where robots stop when other robots are in the way. The pre-computed stopping points account for the walls but not for robot positions, so the solver still has to adjust for robots at solve time. One possibility is to re-compute stopping points for the current robot positions while solving.
00:21:01.560 Finally, I aim to use primitive types instead of complex objects where applicable. Creating lots of objects is costly, so minimizing object instantiation matters. I could also consider parallel processing, exploring multiple states simultaneously.
00:22:00.000 These diverse ideas pave the way for potential advancements, and I would love to hear any suggestions from you all for further optimizing this solver. I want to extend my gratitude to several individuals who contributed to this journey.
00:22:30.600 Trevor Guerra, one of the founders of Zeal and the designer of these slides, did beautiful work on the animations. I am very thankful for the effort he put into them, along with everyone else. Additionally, I'd like to thank Michael Fogelman for inspiring ideas regarding the game and solver logic. Lastly, I express special thanks to Lela Sherman Oz for introducing me to Ricochet Robots at a camp a few years ago.
00:23:11.040 The code for my solver is available on GitHub. I will continue working on it, and I encourage everyone interested to fork it and experiment. I now have a few minutes left for questions if anyone has any inquiries. Thank you very much!