Talks

Summarized using AI

Cache=Cash!

Stefan Wintermeyer • April 29, 2013 • Portland, OR

The video titled "Cache=Cash!" presented by Stefan Wintermeyer at RailsConf 2013 explores the critical role of caching in web applications, particularly those built with Ruby on Rails. Wintermeyer discusses how effective caching strategies can significantly improve page load times and reduce server costs. The talk is structured into three key sections: why caching matters, how to implement it efficiently, and how to optimize the last mile for further performance gains.

Key Points Discussed:

  • Importance of Caching:

    • Caching is vital for cost-saving; it allows companies to deliver web pages faster without necessarily scaling hardware.
    • A snappy web application enhances user experience, influencing user behavior over time.
  • User Behavior and Performance:

    • Examples from Google and Bing show that even minor delays (e.g., 100 milliseconds) can disrupt user engagement significantly.
    • Aiming for a load time of less than 1000 milliseconds is essential to keep users satisfied.
  • Use of Low-Cost Hardware:

    • Wintermeyer shares his experiment using a Raspberry Pi to demonstrate how effective coding can optimize performance even on inexpensive hardware.
  • Fragment and HTTP Caching:

    • He elaborates on strategies like fragment caching (caching parts of the webpage) and HTTP caching (using ETags and Last-Modified headers) to avoid unnecessary rendering of HTML.
    • Efficient database structure is crucial for effective caching.
  • Calculating Success:

    • An automated script demonstrates performance before and after applying the various caching strategies, revealing significant runtime improvements.
  • Closing Remarks:

    • Wintermeyer recommends implementing fragment and HTTP caching for existing applications and exploring Ember.js for new applications to enhance performance further.

Main Takeaways:

  • Integrating caching from the start of development is vital; optimizing after deployment can lead to missed opportunities.
  • Companies can save costs and improve user experience significantly by adopting smart caching strategies such as fragment caching and HTTP caching.
  • The potential for serious performance gains exists, showing that sophisticated caching can yield faster, more responsive web applications even on low-cost hardware.

Cache=Cash!
Stefan Wintermeyer • April 29, 2013 • Portland, OR

Snappiness is an important key for any successful webpage. Most companies try to achieve responsive webshops by scaling their hardware big time. But Rails in combination with Nginx, Memcached and Redis is the key to delivering webpages very fast with a minimal amount of hardware. This talk will start with the basics of DHH's Russian doll idea but will then raise the bar quite a bit. How can we combine fragment caching, page caching and HTTP caching to deliver personalized webshop pages for logged-in users? How much brain can be delegated to Redis or the web browser? Hard drive space is cheap, so use it! You'll get to know how to plan your data structure and where to use Memcached vs. Redis. Include the cache at the beginning of your development, not at the end. To make things a bit more interesting, everything is replayed on a Raspberry Pi to show how much difference intelligent caching can make on any hardware. Save big time and get more clients with a faster web application!

Help us caption & translate this video!

http://amara.org/v/FGaf/

RailsConf 2013

00:00:16.400 Oh, hello. My name is Stefan Wintermeyer. That's my Twitter handle, and on the last slide, you should find my email address. Otherwise, Google will help you out.
00:00:19.039 Today, we are discussing caching for Rails. There are three parts to this talk: the first part is why we care about caching; the second part is how we do it; and the third part is how we can optimize the process.
00:00:31.359 We will only discuss HTML, specifically dynamic HTML, today. I won't cover assets; the asset pipeline already does a good job by itself. Instead, I will focus on how we can generate simple HTML pages quickly. So, the first question is: why do I care about caching, or why should everyone care?
00:00:54.480 The answer is straightforward: to save money. If you can deliver a webpage in half the time, you can probably cut the number of servers you need in half. The money you would otherwise spend renting new servers on EC2 can instead pay for the time to optimize your code and implement caching. The second main reason is snappiness: you want to create a snappy, fast web application; that is absolutely key.
00:01:31.360 Ilya Grigorik from Google was kind enough to provide me with several of his slides, which illustrate the importance of snappiness. Google conducted an interesting experiment where they artificially slowed down their server for a specific user group. They discovered that starting with just 100 milliseconds of delay, which is almost nothing, users began to use the service less. Even more intriguing, once the delay was removed, it took those users up to six weeks to return to their old behavior.
00:02:10.000 Users develop specific habits around how they use an application, and those habits change with the speed of the application. Bing went even further: they slowed their service down by up to two seconds and saw a dramatic loss of users. This highlights how important a fast web service is.
00:02:35.680 I will upload the slides later and publish the URL on my Twitter account. If you're interested, you can examine the data later. This graph shows what is considered instant and what is fast; we want to maintain our response time below 1000 milliseconds.
00:03:09.600 This is the amount of time a user has to wait from the moment they press enter until they see the rendered webpage. However, many users are utilizing mobile devices today. In the US, for example, 25 percent of users exclusively use mobile devices to access the internet.
00:03:40.160 Countries like Egypt show even more extreme numbers. This is a crucial consideration when discussing fast web applications. When we break down that budget, we realize that after network overhead is accounted for, we have only about 400 milliseconds left to generate our web application. We then need to subtract another 100 to 150 milliseconds for the browser to render the page. This means we have to get our content rendered into an HTML page within approximately 300 milliseconds.
00:04:28.480 So, how do we achieve this? For this experiment, I used a Raspberry Pi, which costs about $35. I want to give this Raspberry Pi away; I just need to figure out how to pick a lucky Twitter follower from the attendees of this talk and contact them via direct message.
00:05:22.720 I am utilizing the Raspberry Pi to illustrate how much performance can be gained through good coding and thoughtful architecture. It's easy to buy quick, fast hardware, as many would do, but that is not the focus here.
00:05:37.760 We are using the following software stack: Rails 3.2 (this can also be done with Rails 4.0, but there is no need to wait; you can already do it with 3.2), Ruby 1.9.3, along with Nginx as the web server, Unicorn, MySQL, and Debian as the operating system on the Raspberry Pi.
00:06:25.760 The example application is a webshop. This webshop consists of several models which should be quite clear. We have users, products, and product categories. Additionally, there are discount groups allowing each user in the appropriate discount group to receive differing prices. Users also have a cart to store their purchases and a rating system for product reviews.
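To make the structure concrete, here is a rough sketch of how such models could be wired up in ActiveRecord; all class and association names are assumptions for illustration, not the demo application's actual code.

    # Hypothetical data model for the demo webshop (names are assumed).
    class DiscountGroup < ActiveRecord::Base
      has_many :users
    end

    class User < ActiveRecord::Base
      belongs_to :discount_group
      has_one    :cart
      has_many   :ratings
    end

    class Category < ActiveRecord::Base
      has_many :products
    end

    class Product < ActiveRecord::Base
      belongs_to :category
      has_many   :ratings
    end

    class Cart < ActiveRecord::Base
      belongs_to :user
    end

    class Rating < ActiveRecord::Base
      belongs_to :user
      belongs_to :product
    end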
00:06:53.919 The user interface is based on Twitter Bootstrap, and it displays the names of products, ratings, and prices. This is the view for an anonymous user. However, we also want to test the application with logged-in users in mind.
00:07:20.440 For demonstration, I have a demo account for a user named Bob (bob@example.com). When Bob logs in, his personalized page is displayed. Bob has a cart where he can store the items he wants to purchase, and he gets a detailed view with prices that reflect the discounts of his user profile.
00:08:39.680 The next step is to measure the performance of our site. To accomplish this, I wrote a Watir script; Watir is a browser automation tool for Ruby. The script loops through the products in our webshop and simulates user interaction so we can gauge load times and performance.
00:09:03.360 Bob's walkthrough involves calling up to 43 webpages. I recorded this on my laptop so you can see how slow the interactions appear. As I start the Watir script, it opens a browser and automatically interacts with the website.
00:09:37.760 A second browser is started for another user, generating fresh, uncached content so we can see how the application responds without any caching effects.
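As an illustration, a minimal Watir script for such a walkthrough might look like the following; the URL, selectors, and credentials are assumptions for the sketch, not taken from the talk.

    # Hypothetical Watir walkthrough; URL, selectors, and credentials are assumed.
    require 'watir-webdriver'

    browser = Watir::Browser.new :firefox
    start = Time.now

    # Log in as the demo user.
    browser.goto 'http://webshop.example.com/login'
    browser.text_field(name: 'email').set 'bob@example.com'
    browser.text_field(name: 'password').set 'secret'
    browser.button(type: 'submit').click

    # Visit every product page to simulate a shopping session.
    product_urls = browser.links(class: 'product').map(&:href)
    product_urls.each { |url| browser.goto url }

    puts "Total runtime: #{Time.now - start} seconds"
    browser.close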
00:10:28.480 How fast can we get this workflow running on a Raspberry Pi? Running the script on my laptop, while the server operates on the Pi, results in a time of 116 seconds for the entire process.
00:10:51.840 Analyzing the performance metrics reveals the bottleneck in our workflow: the nearly three seconds required to render the index page. The SQL queries, however, are not taking nearly as long, which tells us the optimization needs to happen in view rendering rather than in database calls.
00:12:12.240 Therefore, our first optimization step involves implementing fragment caching, which is a relatively simple measure. This means caching components of a page rather than generating them on every request. I will illustrate where we can place caching lines in the codebase.
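A minimal sketch of what such a fragment cache looks like in an ERB view; the cache key and partial names are assumptions, not the talk's actual code.

    <%# Hypothetical product index view: wrap the expensive part in a cache block. %>
    <% cache 'product-index' do %>
      <table class="products">
        <%= render @products %>
      </table>
    <% end %>

The first request renders the fragment and stores it (for example in Memcached); every following request reads the finished HTML straight from the cache.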
00:12:59.919 With just five lines of code, the performance boost is remarkable; adjustments this small can be enough to cancel a significant number of EC2 instances.
00:13:47.760 But fragment caching is just the beginning. It is crucial to cache every row and component effectively. By caching the individual rows of a table as well, we only have to re-render the small pieces that actually change when new information arrives.
00:14:48.959 For example, if we have a table of 100 products and only one of them changes, re-rendering just that one row's fragment rather than the entire table view leads to significant efficiency gains. This approach minimizes rendering time while keeping the original page structure intact.
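This is the Russian doll pattern: an outer cache for the whole table and an inner cache per row. A minimal sketch, again with assumed keys and partial names:

    <%# Hypothetical nested (Russian doll) caching: the outer key changes whenever
        any product changes, the inner keys only when that particular row changes. %>
    <% cache ['product-table', @products.maximum(:updated_at)] do %>
      <table class="products">
        <% @products.each do |product| %>
          <% cache product do %>
            <%= render 'product_row', product: product %>
          <% end %>
        <% end %>
      </table>
    <% end %>

When one product is updated, only the outer fragment and that product's row are rebuilt; the 99 untouched row fragments are reused from the cache.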
00:15:56.560 The next phase involves HTTP caching. Web browsers and servers already have caching mechanisms built in to avoid redundant page fetches, using validation headers such as Last-Modified and ETag.
00:16:43.360 For instance, when the browser requests a page it has seen before, it asks whether its cached copy is still valid. If it is, the server can answer with a short 304 Not Modified response instead of re-rendering the view, which significantly improves performance. ETags serve the same purpose by letting the server confirm whether the content has changed.
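In Rails this can be done with the built-in fresh_when helper. A minimal, hypothetical controller sketch (controller and model names are assumed):

    # Hypothetical controller using Rails' HTTP caching helpers.
    class ProductsController < ApplicationController
      def show
        @product = Product.find(params[:id])
        # Sets the ETag and Last-Modified response headers and answers with
        # 304 Not Modified (skipping the render) when the client's copy is current.
        fresh_when etag: @product, last_modified: @product.updated_at, public: true
      end
    end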
00:17:26.240 The overall goal is to minimize the time taken for users to retrieve information to below the 0.8-second mark by leveraging caching methods extensively.
00:18:11.679 These strategies are especially important for e-commerce platforms where peak traffic can quickly overwhelm unoptimized systems. One recommendation is to utilize off-peak hours to pre-warm caches and ensure speedy response times during peak loads.
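One simple way to pre-warm caches in off-peak hours is a scheduled task that requests the most important pages so their fragments are already cached before traffic arrives. A sketch under assumed names (the site:warm_cache task and the URL are not from the talk):

    # Hypothetical cache-warming task; run it from cron during off-peak hours.
    require 'net/http'

    namespace :site do
      desc 'Request popular product pages so their caches are warm before peak load'
      task warm_cache: :environment do
        base = URI('http://webshop.example.com')
        Product.order('updated_at DESC').limit(100).each do |product|
          uri = base.dup
          uri.path = "/products/#{product.id}"
          Net::HTTP.get_response(uri)
        end
      end
    end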
00:18:57.679 Delivering pages directly from Nginx, without consulting Rails at all, improves performance drastically; serving a static file is faster than any dynamic rendering. Serving pre-compressed gzip files is the fastest option of all.
00:20:30.000 Caching complete pages rendered by Rails is also achievable; the cached files just have to be managed so that the content stays fresh without overburdening the server.
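In Rails 3 this kind of full-page caching for anonymous visitors can be done with caches_page, which writes the rendered HTML into public/ where Nginx can serve it without touching Rails. A hypothetical sketch:

    # Hypothetical Rails 3 page caching for anonymous visitors
    # (requires config.action_controller.perform_caching = true).
    class ProductsController < ApplicationController
      caches_page :index

      def index
        @products = Product.order(:name)
      end

      def create
        @product = Product.create!(params[:product])
        # Delete the stale public/products.html so Nginx stops serving it.
        expire_page action: :index
        redirect_to @product
      end
    end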
00:21:39.280 Have a caching strategy in place for logged-in users as well; it can be achieved through a series of optimizations involving both Nginx and Rails, and it is not something you typically accomplish overnight.
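One possible approach for logged-in users, assumed here rather than taken verbatim from the talk, is to keep the shared fragments and put only the personalization, such as the user's discount group, into the cache key:

    <%# Hypothetical: cache each product row per discount group, so all logged-in
        users with the same discounts share one fragment instead of one per user.
        current_user is an assumed authentication helper. %>
    <% cache [product, current_user.discount_group] do %>
      <%= render 'product_row', product: product,
                 discount_group: current_user.discount_group %>
    <% end %>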
00:23:36.799 However, having clear goals and understanding your unique user base will help you determine how best to configure your system for snappiness and efficiency, especially as your business grows.
00:24:51.679 If you have an existing Rails application, begin with the more straightforward caching methods and gradually implement the advanced techniques as you scale, always keeping actual user behavior in mind.
00:26:10.560 In conclusion, leveraging effective caching strategies can lead not only to a more efficient application but can ultimately drive better user engagement and satisfaction.
00:26:35.679 Thank you for your time.