Suggest modification to this talk
Title
Description
RailsConf 2019 - Cache is King by Molly Struve

Cloud 66 - Pain Free Rails Deployments
Cloud 66 for Rails acts like your in-house DevOps team to build, deploy and maintain your Rails applications on any cloud or server. Get $100 Cloud 66 free credits with the code RailsConf-19 (new users only, valid until 31st December 2019).
Link to the website: https://cloud66.com/rails?utm_source=-&utm_medium=-&utm_campaign=RailsConf19
Link to sign up: https://app.cloud66.com/users/sign_in?utm_source=-&utm_medium=-&utm_campaign=RailsConf19

Sometimes your fastest queries can cause the most problems. I will take you beyond slow query optimization and instead zero in on the performance impact of the sheer quantity of your datastore hits. Using real-world examples involving datastores such as Elasticsearch, MySQL, and Redis, I will demonstrate how many fast queries can wreak just as much havoc as a few big slow ones. With each example I will use the simple tools available in Ruby and Rails to reduce and eliminate the need for these fast, seemingly innocuous datastore hits.
Date
Summarized using AI?
Summary
In the RailsConf 2019 session titled "Cache is King," Molly Struve, a site reliability engineer at Kenna Security, discusses the crucial role of caching in optimizing the performance of Ruby on Rails applications. She emphasizes that performance issues can arise not only from slow queries but also from excessive database hits. Struve outlines several strategies to reduce datastore hits, highlighting the following key points:

- **Understanding Datastore Hits**: Struve explains that even fast queries can accumulate into significant performance bottlenecks if executed excessively. It is therefore vital to minimize the number of queries made to datastores such as MySQL, Redis, and Elasticsearch.
- **Real-World Example of Caching**: Using a playful analogy about remembering a person's name, Struve illustrates how local caching can drastically reduce the need to repeatedly query a datastore. By storing frequently accessed data locally, applications improve speed and use resources more efficiently (a short Ruby sketch of this idea appears after this summary).
- **Bulk Serialization**: When processing vulnerabilities, the team at Kenna optimized their serialization logic by implementing bulk serialization. Instead of making individual calls for each vulnerability, they grouped requests, reducing the number of database hits from 2,100 to just 7 for every set of 300 vulnerabilities and significantly lowering the load on MySQL (the second sketch below illustrates the contrast).
- **Reducing Redis Hits**: Struve discusses how the team optimized their interactions with Redis by using an in-memory hash to cache the client indices used when assigning data to Elasticsearch. This change led to a more than 65% increase in job processing speed by minimizing repeated requests.
- **Avoiding Useless Queries**: Struve emphasizes preventing unnecessary queries altogether. By skipping database requests for reports without data, the team reduced processing time from over 10 hours to 3 hours (see the final sketch below).
- **Removing Redundant Operations**: Struve concludes that sometimes the best approach is to eliminate an operation entirely. This was demonstrated when the team removed throttling requests to Redis, which resolved application errors and reduced load significantly.

In conclusion, the overarching message of Struve's talk is to be mindful of every datastore hit, regardless of its speed, because the cumulative effect can degrade performance. By employing caching strategies and optimizing code, developers can greatly enhance the performance and reliability of their applications, promoting better resource management and user experiences.
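To make the local-caching idea concrete, here is a minimal Ruby sketch. None of it is code from the talk: the `Client` model, its `index_name` attribute, and the `vulnerabilities` collection are hypothetical stand-ins; the point is simply that a plain hash memoizes the lookup so each client is fetched from the datastore at most once per batch.

```ruby
# Hypothetical sketch: memoize per-client lookups in a local hash so a batch
# job hits the datastore once per client instead of once per record.
class ClientIndexCache
  def initialize
    @indexes = {} # client_id => Elasticsearch index name, filled lazily
  end

  # Returns the index name for a client, querying the datastore only the
  # first time each client_id is seen.
  def index_for(client_id)
    @indexes[client_id] ||= Client.find(client_id).index_name
  end
end

# Usage: every vulnerability belonging to the same client reuses the cached value.
cache = ClientIndexCache.new
vulnerabilities.each do |vuln|
  index_name = cache.index_for(vuln.client_id)
  # ... route the serialized vulnerability to index_name ...
end
```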
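The bulk-serialization point can be illustrated the same way. The sketch below assumes hypothetical `Cve` and vulnerability models rather than Kenna's actual schema; the contrast is between serializing records one at a time, where each record issues its own association query, and loading the whole batch's related rows up front with a constant number of queries.

```ruby
# Hypothetical sketch of bulk serialization for a batch of vulnerabilities.

# Per-record version: every vulnerability triggers its own association query,
# so a batch of 300 records means hundreds of MySQL hits.
def serialize_each(vulnerabilities)
  vulnerabilities.map do |vuln|
    { id: vuln.id, cves: vuln.cves.pluck(:name) }
  end
end

# Bulk version: fetch all related rows for the whole batch up front, then
# serialize from the in-memory hash, keeping the query count constant.
def serialize_in_bulk(vulnerabilities)
  cves_by_vuln = Cve.where(vulnerability_id: vulnerabilities.map(&:id))
                    .group_by(&:vulnerability_id)

  vulnerabilities.map do |vuln|
    { id: vuln.id, cves: (cves_by_vuln[vuln.id] || []).map(&:name) }
  end
end
```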
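Finally, the "avoid useless queries" idea is essentially a guard clause. In this assumed sketch, `reports`, `vulnerability_count`, and `build_report` are illustrative names, not taken from the talk.

```ruby
# Hypothetical sketch: bail out before issuing any queries for empty reports.
reports.each do |report|
  next if report.vulnerability_count.zero? # no data, so no datastore hits

  build_report(report) # only reports with data reach the expensive queries
end
```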