
Summarized using AI

Edge Caching Dynamic Rails Apps

Michael May • June 21, 2014 • Earth

The video titled 'Edge Caching Dynamic Rails Apps' by Michael May at GoRuCo 2014 explores the effectiveness of Content Delivery Networks (CDNs) in optimizing the performance of dynamic Ruby on Rails applications. In light of persistent performance issues even after implementing traditional caching methods, the talk emphasizes the potential of edge caching as a solution.

Key points discussed include:
- Basics of Caching: Caching is crucial for improving access speed to frequently requested data and reducing the load on original storage locations. The concepts of cache hits and misses are central to understanding caching efficiency.
- Understanding CDNs: A CDN is a globally distributed network of cache servers. It serves user requests through edge caches located closer to the user, minimizing latency that might occur when retrieving data from distant origin servers.
- Static vs Dynamic Caching: While CDNs are traditionally known for caching static content, the talk highlights the challenges of caching dynamic content, which changes frequently. The push and pull CDN models are compared, along with their respective advantages and drawbacks.
- Routing and Control: Requests are routed to geographically nearby edges, and cached content is controlled via HTTP headers. Headers such as Cache-Control and Surrogate-Control play a crucial role in determining how long responses may be cached.
- Innovative Features: The talk introduces features like TCP keep-alives and instant purging, which improve cache performance and enable real-time updates, a significant advantage when caching dynamic data.
- Rails Plugin: To ease integration of dynamic caching into Rails applications, the fastly-rails gem is suggested; it simplifies creating unique cache keys and issuing purges when data is updated.
- Incremental Approach: Concluding recommendations encourage an iterative approach to implementing edge caching across endpoints, ensuring manageable complexity while maximizing performance gains.

In conclusion, for Rails applications to benefit from edge caching, a fundamental understanding of caching mechanisms and careful implementation of dynamic caching strategies are vital. Leveraging CDNs can significantly enhance user experience by reducing latency and ensuring content is always up-to-date.

Edge Caching Dynamic Rails Apps
Michael May • June 21, 2014 • Earth

Your Rails app is slow. Even after memory caching, optimizing queries, and adding servers, the problem persists, killing your user experience. You've heard of services called "Content Delivery Networks" (CDNs) that could help, but they only seem to work with static content. Worry not, for there is a solution: dynamic content caching at the edge. In this talk, we explain how CDNs can be used to accelerate dynamic Rails applications.
We will cover:

What is Caching?
What are CDNs?
What is Dynamic Caching?
Instant Purging
Surrogate-Control headers
Key Based Purging
A Rails Plugin for dynamic caching integration

You'll leave with:

A deep understanding of how caching and content delivery networks actually work
An understanding of the recent innovations in CDN technology that enable edge caching of dynamic content
An understanding of how Rails plugins can be used to easily add dynamic edge caching functionality to your app
Insight into how to hook things into Rails with plugins

Help us caption & translate this video!

http://amara.org/v/FGYu/

GORUCO 2014

00:00:14.920 Hello, and welcome to the Edge Caching for Dynamic Rails Apps talk. Today, we are going to discuss Content Delivery Networks, or CDNs.
00:00:20.640 Before diving in, let me ask: who here knows what Content Delivery Networks are or has used them? Okay, that's great!
00:00:27.720 My name is Michael May, and my love affair with Content Delivery Networks started a couple of years ago when I co-founded a company with my friend Richard Schniemann in Austin, Texas, called CDN Sumo. CDN Sumo is a Heroku add-on with the goal of making it quick and easy to gain performance benefits using a content delivery network, without needing to understand the intricacies of how they operate.
00:00:43.600 At the end of last year, CDN Sumo was acquired by a company called Fastly. Fastly is a content delivery network built on the open-source Varnish Cache. Today, we will discuss caching and examine ideas that make caching effective. We'll talk about Content Delivery Networks and how they function, as well as some innovative features that enable dynamic edge caching.
00:01:06.799 Finally, we will explore how to integrate dynamic edge caching into your Ruby on Rails apps. The core idea behind caching is that I have a piece of data stored somewhere, whether it’s on a hard drive or in a database, and I access that piece of data frequently. I want to make that access quicker, which is accomplished by moving the data closer to where it is accessed.
00:01:19.360 Additionally, caching provides the side benefit of reducing load on the original storage location, allowing for increased efficiency. Let's define some terminology I'll be using: a "cache hit" occurs when requested data is found in the cache. The inverse, a "cache miss," is when the data is not available in the cache.
00:01:37.920 We also refer to the "hit ratio," which is the percentage of cache accesses that result in a cache hit, so the higher the hit ratio, the better the performance. Additionally, "purge" refers to the action of removing data from the cache.
00:01:47.040 Your origin is simply your HTTP application server, and the "edge" refers to an HTTP caching server. Caches are typically much smaller in storage than the original storage location, which necessitates a strategy to determine what should reside in the cache.
00:01:57.760 One common strategy is the "Least Recently Used" (LRU) algorithm. When defining an interface for this, we will need a method to set items in the cache and a method to get items out of the cache. In addition to storing or retrieving the data, each of these methods updates its access time by moving its reference to the head of the cache.
00:02:41.440 If the cache is full, the set method will call a prune method to remove the least recently used item from the tail of the cache, which is the object that has been in the cache the longest without any access. LRU helps to determine what data should persist in the cache.
00:02:55.040 If we decide to implement this caching method, we would typically use a hash table to store the actual data, allowing for constant time get and set operations, paired with a doubly linked list to keep track of data access.
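
As an illustration (not the speaker's code), here is a minimal LRU cache sketch in Ruby. It relies on the insertion-order guarantee of Ruby's Hash instead of an explicit doubly linked list, which keeps the sketch short while preserving the get, set, and prune behavior described above.

    # Minimal LRU cache sketch. Ruby's Hash preserves insertion order,
    # so deleting and re-inserting a key moves it to the "most recently
    # used" end of the hash.
    class LRUCache
      def initialize(capacity)
        @capacity = capacity
        @store = {}
      end

      # Cache hit: return the value and mark it most recently used.
      # Cache miss: return nil.
      def get(key)
        return nil unless @store.key?(key)
        value = @store.delete(key)
        @store[key] = value
      end

      # Insert or update a value, then prune if the cache is over capacity.
      def set(key, value)
        @store.delete(key)
        @store[key] = value
        prune if @store.size > @capacity
        value
      end

      private

      # Evict the entry that has gone longest without access
      # (the first key in insertion order).
      def prune
        @store.delete(@store.keys.first)
      end
    end

    cache = LRUCache.new(2)
    cache.set(:a, 1)
    cache.set(:b, 2)
    cache.get(:a)    # :a is now most recently used
    cache.set(:c, 3) # evicts :b, the least recently used entry
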
00:03:02.720 At a foundational level, there are various layers of caching, starting with CPU and hardware caches, followed by main memory as a cache for your hard disk, then software and application caches, and finally, at the top are Content Delivery Networks.
00:03:35.440 A Content Delivery Network is a globally distributed network of cache servers. When users request data, those requests are routed to edge caches, which return the data immediately if it's available. If not, the cache will fetch the data from your origin server, store it in the cache, and then pass it back to the user.
00:03:49.440 These edge caches are strategically located around the world, called Points of Presence (POPs). For example, if a user in Sydney, Australia is trying to access data from a server located in Ashburn, Virginia, there is a significant distance involved, which creates latency.
00:04:08.880 Content Delivery Networks aim to offload as much data as possible from the application server and serve it from locations closer to the user. Traditionally, CDNs have been known for caching static content, like images, JavaScript files, and stylesheets—things that don't change frequently.
00:04:26.720 The benefit of static content is that it is easy to edge cache using the asset pipeline. You define an asset host configuration, set it to your CDN URL, and compile your assets so that they automatically link to the CDN, providing significant performance improvements.
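
For reference, the asset host configuration mentioned here looks roughly like this in a Rails app; the CDN hostname is a placeholder.

    # config/environments/production.rb
    Rails.application.configure do
      # Serve precompiled assets from the CDN rather than the app servers;
      # asset helpers (image_tag, javascript_include_tag, ...) will then
      # emit URLs pointing at this host. "cdn.example.com" is a placeholder.
      config.action_controller.asset_host = "https://cdn.example.com"
    end
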
00:05:00.320 However, caching dynamic data is a more complex challenge due to the nature of continually changing data. Distributing caching across CDNs amplifies that complexity. Therefore, to understand how data and requests reach the edge, we need to discuss two types of CDNs: push CDNs and pull CDNs.
00:05:20.360 Push CDNs require manual syncing of content to the edges whenever assets change, such as adding new images or updating JavaScript. Although effective, this method is error-prone due to human oversight, especially under high server load.
00:05:48.760 In contrast, pull CDNs pull content from your origin server, allowing for seamless updates whenever the content changes, which is beneficial for developers. However, there is a slight latency cost associated with the first request.
00:06:01.360 Pull CDNs work as reverse proxies between the user and the origin server. When a user makes a request, it goes to the edge [cache] which ultimately fetches the data from the origin if it’s not available. Subsequent requests are faster as the data is now stored in the cache.
00:06:24.800 Next, let's discuss how requests are routed to the edge. Typically, a client's request initiates a DNS lookup that resolves to a specific geographical region and forwards to the CDN edge. This indicates the importance of geographical distribution in effective content delivery.
00:06:41.520 Controlling the content that gets delivered through CDNs is accomplished via HTTP headers. One of the primary headers employed is the cache control HTTP header, which defines how long a response can be cached and under what conditions.
00:07:05.600 Directives such as max-age dictate how long a response should be cached, and they apply to both shared caches (like CDNs) and private caches (like user browsers). Additionally, there's the private directive, which indicates that a response must not be cached by shared caches.
00:07:34.080 Another directive you may encounter is s-maxage, which defines freshness for shared caches only; browsers will not honor it. This lets you establish different caching rules for your CDN and for your users' browsers.
00:07:44.640 Furthermore, we have the Surrogate-Control header, which works similarly to Cache-Control but applies specifically to reverse proxies. It indicates how long responses should be retained within the reverse proxy caches.
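
As a rough illustration (not code from the talk), a Rails controller could combine these headers so the CDN caches a response for a day while browsers do not cache it at all:

    class ProductsController < ApplicationController
      def index
        @products = Product.all

        # Browsers (private caches) should not store this response.
        response.headers["Cache-Control"] = "private, no-store"
        # The CDN (a reverse proxy) may cache it for 24 hours.
        response.headers["Surrogate-Control"] = "max-age=86400"
      end
    end
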
00:08:09.760 Now that we've established the foundational knowledge of caching and headers, let's address some less well-known CDN features. For example, let's discuss how TCP, or Transmission Control Protocol, and HTTP keep-alives impact cache performance. The TCP handshake, which includes connection setup and data flow initiation, can introduce latency.
00:09:29.120 HTTP keep-alive functionality keeps an open TCP connection for extended periods to avoid repeated handshakes on multiple requests. This translates to quicker data retrieval when cache misses occur because the connection is already established.
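
As a small client-side illustration of the same idea (my example, not the speaker's), Ruby's Net::HTTP reuses one TCP connection for every request made inside a start block:

    require "net/http"

    # One TCP (and TLS) handshake; both requests reuse the same
    # connection via HTTP keep-alive. "example.com" is a placeholder.
    Net::HTTP.start("example.com", 443, use_ssl: true) do |http|
      http.get("/products")
      http.get("/products/1")
    end
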
00:09:58.680 Moving on, let's discuss instant purging, which becomes crucial when updating cached data. When a user modifies content on the origin server, a purge request is sent to the nearest edge cache to remove that outdated data.
00:10:20.960 This purge request replicates through the edge caches to ensure all instances of that data are updated simultaneously. The instant purging feature is vital for supporting dynamic edge caching, facilitating real-time updates while minimizing stale or old content being served.
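
To make the purge step concrete, a Varnish-style cache that is configured to accept it can be purged with an HTTP PURGE request for the cached URL. This is my sketch rather than code from the talk; the URL is a placeholder.

    require "net/http"

    # Net::HTTP has no built-in PURGE, so define the method ourselves.
    class Purge < Net::HTTPRequest
      METHOD = "PURGE"
      REQUEST_HAS_BODY = false
      RESPONSE_HAS_BODY = true
    end

    uri = URI("https://www.example.com/products/42")
    Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
      response = http.request(Purge.new(uri.request_uri))
      puts response.code # 200 on a successful purge
    end
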
00:10:41.920 Dynamic caching lets you cache frequently requested data temporarily while purging it as updates occur. Examples of dynamic data that can benefit from caching include API requests, comment threads, news articles, product inventories, and search results.
00:11:13.920 To effectively implement dynamic caching, three steps are involved: Firstly, you must create unique cache keys; secondly, you need to bind data to those keys; and thirdly, you’ll want to purge this data whenever changes occur.
00:11:39.760 For instance, in Ruby on Rails, you can use fragment caching to cache partial views effectively. When implementing caching at the edge, similar principles apply: by keying cached responses with response headers such as Surrogate-Key, a Rails app can tell the CDN which objects each response contains and manage its edge cache accordingly.
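
For comparison, application-level fragment caching keys a cached fragment on the record, so updating the record expires it. The sketch below is mine, not from the talk; build_summary stands in for an expensive computation.

    # Fragment caching in a view would look like:
    #   <% cache @product do %> ... <% end %>
    # The low-level equivalent with Rails.cache:
    summary = Rails.cache.fetch([product.cache_key, "summary"]) do
      product.build_summary # placeholder for an expensive computation
    end
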
00:12:08.800 To bind data to your cache keys, you can use the fastly-rails gem, which simplifies the process. You define a resource key (a unique key for a single product) and a table key (which maps to all products). Every time data is altered, purge requests are issued to keep the cache consistent.
00:12:38.560 For the create, update, and delete actions within your Rails application, you must ensure that you also initiate purge requests that refresh your cache with the latest data, keeping your user experience as seamless and responsive as possible.
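
Putting the three steps together, an integration along these lines is what the fastly-rails gem streamlines. The purge_by_key helper and the permitted parameters below are illustrative assumptions, not the gem's exact API; Fastly does expose key-based ("surrogate key") purging over its HTTP API.

    class ProductsController < ApplicationController
      def show
        @product = Product.find(params[:id])

        # Let the CDN cache this response for a day; keep browsers from
        # caching it so purges take effect immediately for end users.
        response.headers["Cache-Control"] = "private, no-store"
        response.headers["Surrogate-Control"] = "max-age=86400"
        # Tag the response so it can be purged by key later.
        response.headers["Surrogate-Key"] = "product/#{@product.id}"
      end

      def update
        @product = Product.find(params[:id])
        if @product.update(product_params)
          # Instantly purge every cached response tagged with this key.
          # purge_by_key is a hypothetical helper wrapping the CDN's
          # key-based purge endpoint.
          purge_by_key("product/#{@product.id}")
          redirect_to @product
        else
          render :edit
        end
      end

      private

      def product_params
        params.require(:product).permit(:name, :price) # illustrative fields
      end
    end
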
00:12:56.560 In conclusion, if you want your Rails app to leverage edge caching, take an iterative approach to incorporate caching incrementally across each endpoint. By doing this, you maintain focus and manage complexity, ensuring that your application runs efficiently.
00:13:21.440 To wrap up, I recommend that if you are not already offloading static assets to a CDN, you should start. Utilizing static content caching can lead to significant performance gains. Moreover, if you opt for edge caching, understand the benefits of instant purging and HTTP keep-alives to facilitate dynamic content delivery.
00:13:48.720 Thank you for your attention, and I am grateful to the organizers of GoRuCo for the opportunity to speak.