Paweł Pokrywka
How I brought LCP down to under 350 ms for Google-referred users

Summarized using AI


Paweł Pokrywka • April 12, 2024 • Wrocław, Poland

In the presentation titled 'How I brought LCP down to under 350 ms for Google-referred users', Paweł Pokrywka discusses his successful strategies for decreasing the Largest Contentful Paint (LCP) metric, vital for web performance and optimization.

Key points of the presentation include:

  • Introduction to LCP: LCP measures the time it takes for the largest element on a webpage to load, reflecting page performance. Google encourages lower LCP for improved search result rankings and higher conversion rates.
  • Personal context: Paweł, founder and CTO of Planu V, shares how their application, which relies heavily on Google traffic, adopted Next.js as part of a frontend migration.
  • Demonstration of performance: Through a live demo, Paweł shows an LCP of 140 milliseconds by employing gzip compression and prefetching techniques. The demo illustrates how his site ranks second for specific search phrases.
  • Technical exploration: The session transitions to reverse engineering the performance by analyzing network requests while utilizing developer tools, highlighting the significance of prefetching elements in enhancing load times.
  • Critique of AMP: Paweł discusses issues with AMP (Accelerated Mobile Pages), advocating for a system allowing developers to retain control of their designs while ensuring fast loading through Signed Exchanges (SXG). These enable caching with enhanced privacy measures.
  • Implementation of SXG: He describes the simple activation process for Cloudflare users and emphasizes the importance of crafting cache-friendly responses due to stricter SXG requirements, including avoiding personalization and limiting cookie use.
  • Optimizing Sub-resources: Emphasis is placed on prefetching sub-resources to reduce loading times, detailing the use of Link HTTP headers for advanced resource management.
  • Monitoring and adjusting strategies: Paweł reminds participants of the need for continual adjustments to caching strategies to maintain performance despite potential Google updates or changes in website configurations.

In conclusion, Paweł advocates for exploring these SXG strategies further through his blog, emphasizing the importance of optimizing both user experience and web performance metrics like LCP.

How I brought LCP down to under 350 ms for Google-referred users
Paweł Pokrywka • April 12, 2024 • Wrocław, Poland

wroclove.rb 2024

00:00:09.440 On the slide, you can see that the number is 350 milliseconds. The number is actually a bit lower, but I didn't want to put the real number here.
00:00:16.960 I was concerned that it might seem unrealistic. First, I will give you some introductory information and show you how it works through a demonstration.
00:00:31.119 Then we will discuss how it works internally, followed by basic implementation instructions, which are the most important part of this talk.
00:00:42.280 Later, I will show you what you need to know about fully implementing it, along with the main benefits. I don't have a PhD in debugging, but I will share ways to debug and test it. This presentation will also cover measuring the impact.
00:01:15.520 So first, LCP, or Largest Contentful Paint, is one of the Core Web Vitals that indicates how performant your page is. It measures how long the largest visible element on the page takes to render. If you look at the conference website, the LCP element is clearly marked.
00:01:40.399 Now that you know what LCP is, let’s consider why it's crucial to optimize it. There are two primary benefits. First, Google encourages webmasters to lower their LCP by providing them with higher positions in search results.
00:02:11.920 If your LCP is below 2 seconds, you will rank higher in SERPs. Reducing it further may not have much SEO impact, but there is also the conversion rate impact: multiple studies indicate that the lower the LCP, the higher the conversion rate. Ideally, you want LCP as close to zero as possible.
00:02:37.519 To give you some context, I am the founder, co-owner, and CTO of Planu V, a wedding vendors directory that helps you find photographers, music bands, etc. Our application has been running for 16 years, and some time ago we introduced Next.js as the frontend part of the application.
00:03:05.599 We are still migrating our frontend, and a very important aspect of our website is that most of the traffic comes from Google. Now, let’s move on to the demo part.
00:03:36.040 So, this is just a quick video. Is it working with this microphone? Okay, so we see that the first step is to clear browsing data so that we can ensure the cache is empty.
00:04:29.000 Next, I will type a search phrase in Google. You can see my website is in the second position. I then throttle the traffic to slow 3G to simulate a slow connection, and after clicking, you can see that the page shows immediately.
00:05:50.360 If you don't remember how prefetching works, it involves a single HTML tag where you specify the URL. When the browser sees this tag, it prefetches the link, so when the user actually visits, it is fetched from the cache and not the network.
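As a minimal sketch of the kind of tag being described here (the helper name and URL are made up for illustration, not the speaker's actual code), a Rails view could emit the prefetch hint like this:

```ruby
# app/helpers/prefetch_helper.rb -- hypothetical helper, for illustration only
module PrefetchHelper
  # Emits <link rel="prefetch" href="...">. When the browser parses this tag
  # it fetches the URL in the background, so a later navigation to that page
  # is served from the local cache instead of the network.
  def prefetch_link_tag(url)
    tag.link(rel: "prefetch", href: url)
  end
end

# In a view:
#   <%= prefetch_link_tag "https://example.com/music-bands-for-weddings" %>
# renders:
#   <link rel="prefetch" href="https://example.com/music-bands-for-weddings">
```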
00:06:29.040 Now we will have a reverse engineering session, where we will explore how this works through a live example.
00:06:40.720 Let's start with Google search. I will open developer tools. We will see the network tab and monitor all network requests made by this page. Now I will enter the search phrase "music bands for weddings" and execute this query.
00:09:01.400 As you can see, the requests are being prefetched correctly. Now, let's inspect the HTML of the page to see how the magic works. You can see right here that there is a "prefetch" link.
00:09:45.720 Now you may wonder: why hasn't Google adopted such a simple approach to prefetching in its search results? Do you have any ideas?
00:10:10.200 Let me give you some hints. First, there’s the issue of privacy: when the request to my website is made from Google search, I can see the user's IP address and search query, which is not ideal.
00:10:52.080 Also, Google has no control over what gets prefetched. They could unintentionally saturate the user's bandwidth by prefetching large data, which is undesirable. This concern led to the introduction of Accelerated Mobile Pages (AMP) in 2015.
00:11:47.520 AMP keeps a simplified version of a page on Google's servers and prefetches it from the search results. This solution offers much better privacy because there is no IP address being passed, and everything is served from Google's infrastructure.
00:12:57.839 The framework is lightweight and loads pages super fast. However, it comes with downsides: it is mobile-only, and the URLs appear with Google as the base, which raises concerns if Google were to alter the pages.
00:13:50.280 Given these considerations, imagine ditching the restrictive AMP framework and instead letting developers use the full power of HTML, with the pages cryptographically signed. We could keep the prefetching from Google search, guaranteeing the integrity of the pages while keeping them attractive to webmasters.
00:14:41.160 Let's tighten security and privacy and allow for flexibility in design, ensuring users still get a fast-loading experience. This leads us to Signed Exchanges (SXG), a standard first introduced in 2018.
00:15:06.480 SXG allows responses to carry cryptographic signatures, enabling Google to prefetch the signed page while the original URL stays visible in the browser.
00:15:51.080 Google starts by requesting the web page, and the website ideally returns a signed exchange for the requested URL, which allows caching on Google's end.
00:16:21.720 This component is crucial; if Google caches the signed exchange correctly, it can deliver that directly to users without needing to make a request to the website.
00:17:00.000 Google decides which pages to prefetch based on their ranking in the search results, usually preferring the top three to five listings, since prefetching everything would be a waste of bandwidth.
00:17:40.440 To enable SXG, if you are a customer of Cloudflare, it’s as simple as clicking a button. Cloudflare acts as a reverse proxy that creates these SXG packages.
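To check whether a page is actually coming back as a signed exchange after enabling this, one approach is to request it roughly the way an SXG-capable crawler would and inspect the Content-Type. A rough Ruby sketch; the URL is hypothetical and the exact request headers Google sends are assumptions that may differ in practice:

```ruby
require "net/http"
require "uri"

# Hypothetical URL; replace with a page you expect to be SXG-enabled.
uri = URI("https://example.com/music-bands-for-weddings")

req = Net::HTTP::Get.new(uri)
# Headers commonly associated with SXG-capable crawlers; values are assumptions.
req["Accept"] = "application/signed-exchange;v=b3"
req["AMP-Cache-Transform"] = 'google;v="1..100"'

res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }

puts res["Content-Type"]
# "application/signed-exchange;v=b3" suggests a signed exchange was returned;
# a plain "text/html" means the origin served a regular response.
```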
00:18:15.600 Beyond that, one must think differently about content. This leads to the conceptual separation between content creation and content distribution.
00:18:34.080 You must decouple the request from the response: because responses are cached, a single response can serve many requests. Thus, your responses must be SXG-friendly, which above all means cache-friendly.
00:19:47.040 For this, ensure your Cache-Control headers are set correctly, using an appropriate max-age together with the public directive so responses stay cacheable.
00:20:10.240 I opted for a max-age of 24 hours for my pages, which works well with the traffic we get while keeping them reasonably fresh.
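In Rails terms, a cache-friendly response along these lines could look like the sketch below; the controller and model names are hypothetical, and `expires_in` is the standard Rails way to emit the `Cache-Control: max-age=86400, public` header described here:

```ruby
class OffersController < ApplicationController # hypothetical controller
  def show
    @offer = Offer.find(params[:id])
    # Sends "Cache-Control: max-age=86400, public".
    # A public, 24-hour lifetime lets the CDN build a signed exchange once
    # and lets Google's cache keep serving it without hitting the origin.
    expires_in 24.hours, public: true
  end
end
```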
00:20:56.640 You will need to adjust this setting continually to maximize your caching effectiveness. To maintain SXG compliance, there are other response requirements, including avoiding server-side personalization.
00:21:41.760 A cached page is effectively a static asset and cannot be personalized server-side, which may be challenging for Rails developers who rely heavily on server-rendered, per-user data.
00:22:12.960 The use of cookies is also restricted, particularly server-side cookies. To work around this, I introduced a dedicated endpoint from which the client fetches the session cookie.
00:23:05.280 This allows my JavaScript to access the necessary cookies in a manner compliant with SXG. You will also want to consider user avatars or other session-based attributes which could be fetched client-side.
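A rough sketch of such an endpoint, assuming the app exposes auth helpers like `user_signed_in?` and `current_user` (all names here are illustrative, not the speaker's actual code):

```ruby
# config/routes.rb: get "/session_bootstrap", to: "sessions#bootstrap"  (hypothetical route)
class SessionsController < ApplicationController
  # Called from JavaScript after the cached, cookie-free page has rendered.
  # Only this response touches the session, sets the cookie, and returns the
  # per-user bits (sign-in state, avatar) to be rendered client-side.
  def bootstrap
    response.headers["Cache-Control"] = "no-store" # never cache per-user data
    session[:seen_at] ||= Time.current.iso8601     # writing to the session sets the cookie
    render json: {
      signed_in:  user_signed_in?,                 # assumed auth helpers (e.g. Devise)
      avatar_url: user_signed_in? ? current_user.avatar_url : nil
    }
  end
end
```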
00:23:42.440 Cross-Site Request Forgery (CSRF) protections must also be revisited, since per-request tokens make otherwise cacheable pages unique. Likewise, an overly strict HTTP Strict Transport Security (HSTS) configuration can prevent SXG from working correctly.
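One hedged way to handle the CSRF side of this, in the same spirit as the session endpoint sketched above, is to stop baking the per-request token into the cached HTML and hand it out from that JSON response instead:

```ruby
# Inside the hypothetical SessionsController#bootstrap from the sketch above:
render json: {
  signed_in:  user_signed_in?,
  csrf_token: form_authenticity_token # JavaScript sends it back as the X-CSRF-Token header
}
```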
00:24:29.600 Additionally, first impression optimization is vital; your website should optimize for users viewing it for the first time, which aids in rapid loading.
00:25:20.640 By minimizing the API calls for session-based data, you not only improve load times but also ensure that Google's crawler can spend its crawl budget on more pages, enhancing your overall visibility.
00:26:23.040 So far, we discussed how to optimize HTML responses and HTTP headers for the main document containing your web page. Given the considerable effort involved in configuring this optimally, the gains can seem minuscule.
00:27:34.200 Yet, once you turn your attention to prefetching sub-resources such as CSS, images, fonts, etc., the advantages start to shine through.
00:28:12.840 For that, Link HTTP headers are critical as you implement this fine-tuning. Early Hints, supported by servers like Cloudflare, can also significantly help.
00:28:58.440 Cloudflare can assist by automatically generating these Link entries for your sub-resources, hashing the assets as it forms the responses to ensure optimal delivery.
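As a sketch of the manual variant (asset paths are placeholders), the same hypothetical controller from the earlier example could announce its critical sub-resources via a Link header; with SXG enabled, Cloudflare reportedly rewrites such entries into the signed exchange with the per-asset hashes mentioned above:

```ruby
class OffersController < ApplicationController # hypothetical, continuing the earlier sketch
  def show
    @offer = Offer.find(params[:id])
    expires_in 24.hours, public: true
    # Announce the critical sub-resources so they can be prefetched (or sent
    # as Early Hints) alongside the page itself. Paths are placeholders.
    response.headers["Link"] = [
      "</assets/application.css>; rel=preload; as=style",
      "</assets/hero.webp>; rel=preload; as=image",
      "</assets/brand-font.woff2>; rel=preload; as=font; crossorigin"
    ].join(", ")
  end
end
```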
00:29:53.640 Remember that only around 20 sub-resources should be prefetched; a higher number can defeat the purpose if overload occurs.
00:30:37.680 If anything goes awry during prefetching—for instance, if one of the sub-resources fails to load—everything else will be discarded, resulting in a missed optimization opportunity.
00:31:13.680 It’s been an arduous journey through sparse documentation, but with persistence I learned that Google's cache entries may expire unexpectedly.
00:32:01.320 As a result, it’s crucial to continuously monitor, calibrate, and adjust your cache control strategies.
00:32:38.000 404s and missing assets frequently come up during this process. You must strike a balance between prefetching the complete resource set and keeping the rate of successful loads high.
00:33:34.760 To this end, I’ll be implementing more strategies to understand which of my resources prefetch successfully.
00:34:12.640 I also plan to create an intentionally failing resource as part of my test plans to see how these optimizations consistently perform.
00:35:00.640 In conclusion, while SXG may present difficulties and quirks, it is an ideal technology for advancing user experience and performance.
00:35:47.440 I invite you all to explore these strategies through my blog, where I will delve into further details in my upcoming posts.
00:36:51.680 Hey, I’m really impressed by how thoroughly you've researched and debugged SXG. My question is: do you need to keep measuring even without deployments, in case Google changes things over time?
00:37:30.920 Measurement is an ongoing process, requiring data collection over several days to evaluate the effectiveness. And if nothing changes on your website, does it still sometimes break because of external updates?
00:38:10.920 Currently, I don't have solid experience measuring that. I've been focusing on error rates, looking for patterns in response failures. However, if there are changes I’ll update accordingly.
00:38:54.520 If it appears broken due to configuration changes, we will need to react to that swiftly. Stay connected to my blog for updates on these observations!