Summarized using AI

Caching strategies on https://dev.to

Ridhwana Khan • October 05, 2023 • Boulder, CO

In her presentation "Caching Strategies on https://dev.to", delivered at Rocky Mountain Ruby 2023, Ridhwana Khan discusses the importance of caching in improving web performance, particularly at Dev, where user experience is prioritized. The talk covers various caching strategies and their implementation, which are critical for optimizing responsiveness and efficiency in read-heavy applications like Dev. Key points include:

  • Introduction to Caching: The presenter shares her recent experiences at Dev, highlighting her role as a lead engineer and the significance of caching in enhancing application performance.
  • Reasons to Implement Caching: Caching enhances performance by speeding up content retrieval, decreasing response times, and improving user experience, particularly in areas with slow internet connectivity.
  • What to Cache and What Not to Cache: Static content is generally a good candidate for caching, while frequently changing dynamic data and user-specific data should be avoided, since keeping them fresh undermines the benefits of caching.
  • Response Caching Types: Ridhwana explains different caching methods including edge caching, fragment caching, and browser caching, providing an overview of how each contributes to optimized performance.
    • Edge Caching: This involves caching content closer to users using services like Fastly, which helps reduce server load and improves access times.
    • Fragment Caching: Allows caching specific parts of a view that are computationally expensive, enhancing retrieval speed.
    • Browser Caching: Ensures user resources are stored in their browser to load web pages faster without needing to query the origin server again.
  • Cache Management: The importance of purging stale data and setting cache lifetimes is emphasized, as well as strategies to prevent cache inefficiencies.
  • Conclusion and Networking: The session concludes with a call for interaction, encouraging feedback and connections, as Ridhwana looks for new opportunities after her tenure at Dev.

Overall, the talk provides valuable insights into effective caching strategies that can significantly improve web application performance, alongside practical recommendations for implementation and management.


Rocky Mountain Ruby 2023 - Caching strategies on https://dev.to by Ridhwana Khan

We've always put a lot of effort into performance at Dev (https://dev.to/). We want our users to be able to see their content almost instantaneously when interacting with our site. In order to do so, we've placed emphasis on caching. We've had to ask ourselves questions like: what are the right things to cache? Which layer in the stack would be best to cache it? And how will this affect the overall performance? During this presentation, I'd like to show you some of the caching strategies we have in place and discuss how they've sped up the interactions within our site.


00:00:14.719 Okay, cool. Thank you so much for being here today. My name is Ridhwana Khan.
00:00:21.000 I am a lead engineer, and I am here to talk to you about caching on Dev.
00:00:26.480 So, awkward story: I worked at Dev until last week, Tuesday, as a lead engineer for almost four years. However, on Tuesday, the Dev team was forced to lay off pretty much the entire engineering team, and so I guess my cache got expired—maybe purged, actually, not quite expired.
00:00:39.320 But on a good note, on a light-hearted note, I feel like I learned a lot at Dev, and I'm glad that I got the opportunity to work with incredible folks during that experience. I was able to dive deeper into caching, and now I'm here today to talk to you about it.
00:00:57.800 I like to keep my introductions pretty light and fun to share a little about me so you get to know me before I start discussing a lot of technical things. So, firstly, that very awkward photo of me is me speaking at conferences. I really love speaking at conferences, writing articles, and now that I'm unemployed, I'll get to do a little bit more of that.
00:01:19.400 It's mostly because I enjoy sharing my knowledge and meeting people from different backgrounds all over the globe. I feel like everyone has a unique perspective, which gives me a broader understanding of various topics.
00:01:32.000 I am also passionate about building diverse and inclusive communities wherever I go. This is a photo from a Women in Tech community that I built in South Africa. I've been involved in RailsBridge, and I've also participated in a community focused on the intersection of UX and development, which is really cool. I think that's one of the things that attracted me to Dev in the first place—they have a very inclusive and welcoming community.
00:01:57.200 So, the most important thing to know about me is that I'm a cat mom. I have this little fellow whom I have named Zeus. I give him all my love, all the scratchers in the world, a very warm bed, and lots of treats. In return, he gives me lots and lots of attitude and covers me in cat hair. I apologize in advance if I have cat hair on me. The few cuddles don't seem like a fair trade to me.
00:02:30.040 Finally, I am from beautiful, sunny South Africa. Has anyone here been to South Africa before? A couple of people, more than I expected! So, I want to do a quick tour of South Africa to show you where I'm from. This is an aerial view of Cape Town, the city where I live. We have a big mountain, just like here, called Table Mountain, a prominent landmark known for its flat, table-like structure.
00:02:52.800 This is Cape Point, where the Indian Ocean meets the Atlantic Ocean, and it looks really beautiful. We have wonderful winding roads where the mountains meet the sea. We're also well known for our safaris, featuring what we call the 'Big Five': the lion, the leopard, the black rhino, the African bush elephant, and the African buffalo. Fun fact: we have these animals depicted on some of our currency. Most importantly, I love that South Africa is a very diverse country with different religions, races, and cultures, filled with friendly people.
00:03:40.120 That was me convincing you to come visit, and when you do, please hit me up. So, have any of you heard about Dev or used it before? A lot of people—great! For those who have not used Dev, it is an online platform that fosters a community of software developers where you can share knowledge and support one another. The Dev environment creates a place where you can network, learn, and collaborate.
00:04:02.760 Currently, we have over one million developers signed up on the platform. Another fantastic aspect of Dev is that it is open source. Our repository can be found at this URL. It's a bit confusing because it's called 'forum,' but 'forum' is just the application that powers Dev. In this repo, you can explore our code, report bugs, create feature requests, and even fix bugs.
00:04:23.040 And now it's Hacktoberfest! If you're interested in contributing to open source, it's a great time to do so. Our code is written in Ruby on Rails, and we have some JavaScript for dynamic components. What can you expect from this talk? It covers a lot of ground, but I'm going to walk through everything. The talk will be a bit high-level, introducing different topics on caching, with examples.
00:05:04.120 We will discuss why Dev is a good use case for caching, the pros and cons of caching, what to cache and what not to cache, the causes and effects of an inefficient cache, and then go through response caching, which will be broken down into edge caching, fragment caching, and browser caching. Additionally, we will touch on a few miscellaneous caching techniques before concluding.
00:05:51.280 The lessons I share today are based on my valuable experiences at Dev. While some examples are presented in the context of Dev, they describe general strategies that can be applied to your applications. Another thing to note is that I will use 'we' frequently, making it seem like I'm still part of Dev. I apologize; I can't get rid of that habit just yet.
00:06:34.600 At Dev, we are often complimented on the speed of the application; it's really snappy. If I play this video, you'll see that when I refresh the page, it loads almost instantaneously—no loading spinner whatsoever. Let me do it again. Okay, if we go to an article page, there's still no loading spinner; it's incredibly fast. So, why is Dev so quick?
00:07:07.639 One of the major reasons Dev is so fast is due to caching. One significant reason Dev is the perfect use case for caching is that it is pretty read-heavy. There are tens of thousands of users all around the world consuming content from Dev at any point in time. Once created, that content rarely changes, making it mostly static. So, what benefits do we get from caching? Why should you consider it?
00:07:55.760 Caching is advantageous because it helps improve the efficiency of our application while also increasing performance. Implementing a cache benefits users in numerous ways: you'll be able to retrieve content faster, improve application performance, and accelerate data retrieval, which decreases query duration. Caching also increases input-output operations per second, improving read throughput while reducing response time for read-intensive applications.
00:08:43.399 In turn, this can also lower storage costs when a provider charges per request, by reducing the latency and bandwidth required to access data. Additionally, caching helps save device energy and reduces overall energy consumption. However, my favorite aspect of caching is that it enhances user experience; faster loading times lead to happier customers.
00:09:00.840 You're also increasing accessibility to your website, particularly in regions where web infrastructure is unreliable or internet speeds are slow. We rely on caching to create a better experience. These are just some of the reasons we'll discuss more as we go through different types of caching.
00:09:27.040 Now that we've talked about what kind of data you should cache—mostly static content—let's talk about what data you should not cache. You do not want to cache pages with dynamic data that changes frequently, as this would require constant purging and outweigh the benefits of caching. You also want to avoid caching user-specific data, such as account information, as this would create numerous cache permutations and make caching ineffective.
00:10:05.480 Sometimes, premature caching can lead to a very inefficient cache. A general rule of thumb I apply is to first understand the problem at hand. I let the code run in production for a while, monitor it, and then optimize to determine whether I need to implement caching.
00:10:55.299 If you cache prematurely before fully understanding the problem, you could end up with a cache that causes higher resource usage: cached data lives in RAM, which is more expensive, and an oversized cache adds overhead on the server and database. This can lead to a slower application altogether. You could also end up serving unreliable data if you don't invest time in investigating when to purge the cache.
00:11:42.760 Finally, an unpleasant developer experience can arise when caching is implemented without proper foresight. If you've cached too early, developers might have to work around your cache rather than with it, leading to frustration.
00:12:14.280 Now that we understand the benefits of caching, what data to cache, what not to cache, and what an inefficient cache looks like, let’s dive into some caching strategies.
00:12:24.198 Let’s start with response caching, which is one of the most common types of caching. It accounts for the majority of use cases on Dev; as the name suggests, it involves caching a response to a request.
00:12:46.640 By doing this, we reduce the number of requests and the load on the web server. In a typical Rails application, this is what the flow of data looks like. To reduce the load on our web server, we need to understand what it does first, which has been covered in some previous talks, so I'll briefly touch on it.
00:13:23.360 The browser will send an HTTP request to the web server, which will then receive that request. It goes into the router and figures out where to route that request to the correct action method within a controller. The controller will run the code, and sometimes it will need to fetch data from the database. The web server generates the view and forms a response that is sent back to the browser.
00:13:54.440 As you can see, the web server processes a significant amount of work, which puts enormous pressure on it. To ease this load, we can implement caching in some places. Here are some response caching strategies we use: edge caching, fragment caching, and browser caching.
00:14:35.080 Edge caching allows us to avoid hitting the origin server, while fragment caching is implemented in the view layer via Rails to reduce database queries. Browser caching restricts requests so they never leave the browser. Each strategy is useful for its own specific use cases, and sometimes you may use a combination of these strategies to achieve optimal results.
00:15:28.560 Each of these strategies generally follows the same caching pattern: there’s a requester, a caching intermediary, and a fallback. When a requester makes a request, it goes to the caching intermediary. If the intermediary has the data, it returns it; if not, it will need to go to the fallback server.
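This requester / caching intermediary / fallback pattern can be sketched in plain Ruby. This is an illustrative sketch with names of my own choosing, not Dev's code:

```ruby
# An illustrative sketch of the pattern: a requester asks the caching
# intermediary; a hit is served from the store, a miss falls back to the
# origin and warms the cache for subsequent requests.
class CachingIntermediary
  def initialize(fallback)
    @fallback = fallback # e.g. a lambda that queries the origin server
    @store = {}
  end

  def fetch(key)
    if @store.key?(key)
      [:hit, @store[key]]
    else
      value = @fallback.call(key)
      @store[key] = value # retain the data, warming the cache
      [:miss, value]
    end
  end
end

origin_calls = 0
cache = CachingIntermediary.new(->(key) { origin_calls += 1; "content for #{key}" })
cache.fetch("/articles/1") # => [:miss, "content for /articles/1"]
cache.fetch("/articles/1") # => [:hit, "content for /articles/1"]
origin_calls               # => 1
```

The origin is only contacted once per key; every later request for the same key is answered by the intermediary.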
00:16:23.120 When the fallback server responds to the requester, the caching intermediary retains the data for subsequent requests. Let’s start by examining edge caching—if there was a favorite type of caching, I would say it's edge caching.
00:16:43.500 Edge caching operates between the browser and the web server to intercept requests, thus reducing the need to travel back to the origin server. There are technologies such as CloudFront, Fastly, and others that you can implement for edge caching. At Dev, we use Fastly for our edge caching.
00:17:55.840 Here's how Fastly works: it caches content from our origin server at points of presence all around the world. These points aim to bring content closer to the end user, allowing for quicker data retrieval. For example, a user in South Africa doesn't have to query the origin server in the U.S.; instead, they can get the data from a nearby point of presence.
00:18:37.840 Using edge caching provides several benefits: it reduces server load and stress on the origin server, which in turn alleviates the load on the database and network. The user also benefits from improved content delivery and response times. Caching at its best enables a more predictable performance, allowing data to continue displaying during sporadic network issues.
00:19:35.840 Companies that implement edge caching benefit as well due to reduced CPU usage, which lowers costs associated with hitting a web server and database.
00:20:03.120 Okay, so we have discussed the patterns each caching strategy follows. Edge caching also abides by that pattern, but let's discuss some terms that are associated with edge caching.
00:20:29.320 When a user navigates to a page, the request first hits the edge cache. If this is the first request for that page, the edge cache is likely cold. A cold cache is one that is empty, holding no data.
00:21:05.600 When you hit a cold cache, your request is marked as a cache miss. The request goes all the way to the origin server, which retrieves and processes the data, returning it to the user. As I mentioned before, the edge cache will retain this data, thereby warming the cache. The next time the user requests that resource, they will hit a warm cache.
00:21:38.440 A warm cache is one where data is stored and ready to be served. If a cache is warm, that request will be labeled as a cache hit. All subsequent users will keep hitting a warm cache until it expires or is purged, at which point the process repeats.
00:22:23.760 At Dev, we typically use edge caching for mostly static pages. However, in real-world applications, it's challenging to find perfectly static pages; there will always be some dynamic elements. When we have more static elements than dynamic ones, we can proceed with edge caching.
00:23:01.760 At Dev, we retrieve the main content from the edge cache and make a secondary request to the server to populate the more dynamic data.
00:23:46.160 For instance, when visiting an article page, we have reactions that are frequently changing. When we refresh the page, you're able to see that the reaction count doesn't appear immediately. By examining the network tab, we can observe that on the first request, we retrieve data from the edge cache, and any subsequent requests grab the latest reactions from an endpoint.
00:24:17.760 How do you know if something is edge cached? You can observe edge caching in the browser by reloading a page. In this video, we load an article page; on the first visit, there is a cache miss. The status indicates it was served from a node that had to go to the origin server to retrieve the data. Upon reloading the page, the request results in a cache hit, and we retrieve the data far more quickly.
00:25:15.360 Within our Fastly configuration, we have something called shielding, which doesn’t come by default; it has to be enabled during setup. With shielding, one of our Fastly points of presence (POPs) is used as an origin shield, which helps reduce load on our origin server.
00:25:51.440 Without shielding, each POP acts independently, meaning that when a cold cache exists, requests go straight to the origin server. For instance, if a user in Lyon queries the Fastly POP for the first time, it contacts the origin server, which then responds and retains data.
00:26:18.080 If another user in Tokyo requests the same data soon after, that user will have to go through the same process, which burdens the origin server unnecessarily. Shielding mitigates this by allowing the POP to pull data from the origin server and cache it for use by subsequent requests.
00:27:06.000 The first user retrieves data from the origin server, while any following request uses the data cached at the POP, resulting in a cache hit. This not only increases the chances of users getting fast response times but also significantly lowers the stress on the origin server.
00:27:39.840 When caching objects, we need to consider the lifespan of cached data. One approach is to set a longer cache lifetime and purge the cache under certain conditions. When we expire caches, we apply cache control headers. Here’s a snippet of code from Dev that demonstrates this.
00:29:46.640 A notable Cache-Control directive is max-age, which defines how long the content will be readily available from the cache. In our setup, the max-age is set to one day, though we can override that in certain situations.
00:29:59.040 In various contexts, we also use the stale-while-revalidate and stale-if-error directives, which control when slightly outdated data may be served while fresh data is fetched or when the origin errors. Problems can arise when purging is required; instead of waiting for a cache to expire, we can update content immediately.
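The actual snippet from Dev's codebase isn't reproduced here, but as a hedged stand-in, here is how such a Cache-Control header value can be built. The helper name and the default values are illustrative:

```ruby
# A hedged sketch of building the Cache-Control header the talk describes:
# max-age of one day, plus stale-while-revalidate and stale-if-error so
# slightly stale content can be served while fresh data is fetched or when
# the origin errors. In a Rails controller this value would be assigned to
# response.headers["Cache-Control"].
ONE_DAY = 24 * 60 * 60 # 86_400 seconds

def cache_control_header(max_age: ONE_DAY, swr: 30, sie: ONE_DAY)
  "public, max-age=#{max_age}, " \
    "stale-while-revalidate=#{swr}, stale-if-error=#{sie}"
end

cache_control_header
# => "public, max-age=86400, stale-while-revalidate=30, stale-if-error=86400"
```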
00:30:45.280 An example: when I publish an article on Dev, if I spot typos, I would immediately want to purge the cache to provide users with a fresh version rather than waiting for the cache expiration. When loading a page, we also include comments for SEO purposes, and several actions trigger cache purging, such as editing an article.
00:31:34.360 For Fastly's edge caching, issuing a purge request to the Fastly API is fairly straightforward. We must also ensure that we're not caching user-specific data, which can lead to various inefficiencies; to that end, we've implemented cache safety checks.
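As an illustration of single-URL purging, here is a hedged sketch, not Dev's actual code: Fastly can invalidate one cached object when sent an HTTP `PURGE` request for its URL, authenticated with the `Fastly-Key` header. Ruby's net/http has no built-in `PURGE` class, so we define one; the helper name is mine:

```ruby
require "net/http"
require "uri"

# net/http models each HTTP verb as a class; defining one for PURGE follows
# the same pattern as the built-in Net::HTTP::Get.
class Purge < Net::HTTPRequest
  METHOD = "PURGE"
  REQUEST_HAS_BODY = false
  RESPONSE_HAS_BODY = true
end

# Hedged sketch: send a PURGE request for a single URL so the edge cache
# drops its copy and the next request fetches fresh content from the origin.
def purge_from_edge_cache(url, api_token: nil)
  uri = URI.parse(url)
  request = Purge.new(uri)
  request["Fastly-Key"] = api_token if api_token # Fastly's API token header
  Net::HTTP.start(uri.host, uri.port, use_ssl: uri.scheme == "https") do |http|
    http.request(request)
  end
end

# e.g. after fixing a typo in a published article:
# purge_from_edge_cache("https://dev.to/some-article", api_token: ENV["FASTLY_API_KEY"])
```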
00:32:43.680 Another caching type I want to discuss is fragment caching. While edge caching focuses on caching the entire page, fragment caching allows us to cache specific parts of the view. The latter is particularly useful when parts of a view are computationally intensive.
00:33:15.760 Fragment caching verifies whether a part of the view exists in memory, and should it not, we will compute it and store it in memory for further use. This process allows for easier retrieval and better performance, particularly in use cases like comment sections where sections may need to be cached separately.
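That check-then-compute-then-store flow can be simulated in plain Ruby. In a real Rails view the same idea is expressed with the `cache` helper (`<% cache comment do %> ... <% end %>`); the names below are illustrative, not Dev's code:

```ruby
# A hedged sketch of fragment caching: compute an expensive view fragment
# once, keyed by the record and its updated_at timestamp, and reuse it
# until the record changes.
STORE = {}

def cache_fragment(key)
  STORE[key] ||= yield
end

Comment = Struct.new(:id, :body, :updated_at)

def render_comment(comment)
  # The key includes updated_at, so editing the comment naturally busts
  # the stale cache entry.
  cache_fragment([:comment, comment.id, comment.updated_at]) do
    "<div class='comment'>#{comment.body}</div>" # the "expensive" render
  end
end

c = Comment.new(1, "Great post!", Time.at(0))
render_comment(c) # computed and stored
render_comment(c) # served from STORE without re-rendering
```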
00:34:16.880 Observing fragment caching during development is straightforward—you might see cache misses in initial requests that later turn into cache hits once elements are sufficiently established in the cache. However, too much reliance on fragment caching can create overhead; relying on optimized views is sometimes far more efficient.
00:35:19.460 Moreover, we have something referred to as Russian doll caching, which uses nested cache fragments. For example, if updates have been made to a navigation link, only the associated link needs to be changed, while others can continue presenting cached versions.
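In Rails views, Russian doll caching is written as nested `cache` helper calls. This is a hedged ERB sketch with assumed model names, not Dev's actual templates; it also assumes `Comment` declares `belongs_to :article, touch: true`, so editing a comment refreshes the outer article fragment as well:

```erb
<%# Outer fragment keyed on the article; inner fragments keyed per comment. %>
<% cache article do %>
  <h1><%= article.title %></h1>
  <% article.comments.each do |comment| %>
    <% cache comment do %>
      <p><%= comment.body %></p>
    <% end %>
  <% end %>
<% end %>
```

When one comment changes, only its inner fragment and the outer shell are re-rendered; the sibling comments' cached fragments are reused as-is.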
00:36:42.840 The final caching type I’d like to touch upon is browser caching. This allows resources to be cached in the browser and mitigates page loading times by eliminating server queries for cached assets like JavaScript, CSS, and images.
00:37:18.760 In practice, this involves checking the browser’s cache first and, if assets aren’t cached, querying the origin server. Fingerprinting ensures that file changes are recognized via unique identifiers, prompting users to access updated resources wherever necessary.
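The fingerprinting idea can be sketched in a few lines. The Rails asset pipeline does this automatically; this hedged stand-in just shows the mechanism:

```ruby
require "digest"

# A hedged sketch of asset fingerprinting: embed a digest of the file's
# contents in its name. The browser can then cache the asset for a very
# long time, because any change to the contents changes the name, which
# forces a fetch of the updated file instead of a stale cached copy.
def fingerprinted_name(filename, contents)
  base, ext = filename.split(".", 2)
  "#{base}-#{Digest::MD5.hexdigest(contents)}.#{ext}"
end

fingerprinted_name("application.css", "body { color: red; }")
# identical contents always produce the same name; any edit produces a new one
```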
00:38:29.560 Interestingly, we also apply miscellaneous caching techniques that can assist performance, such as precomputing query results and storing them directly to avoid heavy computations each time those results are needed.
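In Rails this kind of precomputation is typically done with low-level caching, e.g. `Rails.cache.fetch("key", expires_in: 1.hour) { expensive_query }`. Here's a hedged plain-Ruby sketch of the same fetch-or-recompute-with-expiry idea, with illustrative names:

```ruby
# A hedged sketch of caching a precomputed result with a time-to-live:
# store the value with a timestamp and recompute only after it expires.
CACHE = {}

def fetch_with_expiry(key, ttl:, now: Time.now)
  entry = CACHE[key]
  return entry[:value] if entry && now - entry[:written_at] < ttl
  value = yield # the expensive query or computation
  CACHE[key] = { value: value, written_at: now }
  value
end

# e.g. an expensive aggregate we don't want to run on every page view:
fetch_with_expiry(:total_reactions, ttl: 3600) { 1_000 } # computed and stored
fetch_with_expiry(:total_reactions, ttl: 3600) { 2_000 } # => 1_000, served from cache
```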
00:39:16.000 In conclusion, when determining whether to implement caching for a feature, I like to consider how often I see data changing. If it's relatively static, caching is a good option. However, special care should be taken not to cache user-specific data. Before diving into caching techniques, I advocate examining the code to ascertain if optimizations can be crafted without caching.
00:40:38.640 If caching is determined to be suitable after these assessments, questions arise regarding which layer to apply caching and selecting the right combination of techniques based on our specific application needs. Many factors involve how long the cache remains valid and conditions to purge it.
00:41:56.240 There are no absolute answers to these concerns; however, careful analysis leads to better performance decision-making. The techniques I shared are merely the tip of the iceberg compared to the breadth of caching strategies that can be implemented. Proper implementation conveys our value for users, our commitment to performance, and their optimal experiences.
00:43:12.640 Thank you all for spending this time with me today. I hope you found this talk to be informative. I always welcome feedback, so if you have any thoughts or questions, please reach out! Finally, I will begin searching for new opportunities in a couple of months, and if you think I could be a good fit for your globally distributed team, please don’t hesitate to connect.