Description
Benchmarking is an important component of writing applications, gems, Ruby implementations, and everything in between. There is never a perfect formula for how to measure the performance of your code, as the requirements vary from codebase to codebase. Elastic has an entire team dedicated to measuring the performance of Elasticsearch, and the clients team has worked with them to build a common benchmarking framework for itself. This talk will explore how the Elasticsearch Ruby Client is benchmarked and highlight key elements that are important for any benchmarking framework.
Summary
In her talk titled **"Benchmarking Your Code, Inside and Out"**, Emily Stolfo explores the essential aspect of benchmarking code, particularly in the context of the Elasticsearch Ruby Client. The presentation was delivered at **RubyKaigi 2019** and focuses on understanding the performance measurement requirements that vary across different codebases. Emily, a Ruby engineer at Elastic, also provides insights into the broader application of benchmarking principles beyond Ruby. ### Key Points: - **Purpose of Benchmarking:** - Benchmarking is driven by a human inclination towards speed and performance measurement, which extends to evaluating code efficiency. - It serves to detect regressions when changes to the code are made, ensuring no performance issues arise. - **Types of Benchmarking:** - Overview of **macro** and **micro** benchmarking, emphasizing the importance of macro benchmarking for entire client-server applications vs. micro benchmarking for individual components. - **Framework Development:** - The benchmarking framework developed at Elastic draws upon Emily's prior experience at MongoDB, alongside insights gathered from the performance team at Elasticsearch. This collaborative effort led to a robust framework that adheres to best practices and is suitable for diverse client languages. - **Elasticsearch Benchmarking:** - Introduction of **Rally**, an open-source macro benchmarking framework used by the Elasticsearch team to stress test various aspects of the system. Rally tracks scenarios that apply different datasets to measure performance under various conditions. - **Client Benchmarking Framework:** - The framework is designed to be language agnostic and follows an open-source model, encouraging contributions and usage from different programming communities. Results are recorded according to a predefined schema promoting consistency across benchmarks. - **Best Practices in Benchmarking:** - Important best practices include ensuring the benchmarking environment closely resembles production setups, proper warm-up procedures, and adhering to structured testing to reduce variability in results. - Suggestions from the performance team, such as avoiding dynamic workers to minimize network latency during tests, were highlighted. - **Future Goals:** - Emily hopes to extend the benchmarking framework to other clients and aims for comparative dashboards that track performance improvements over time. - The eventual goal is to publish results transparently to foster community engagement and collaboration around performance optimization. ### Conclusion: Emily's talk not only emphasizes the importance of benchmarking for performance assessment in software development but also provides practical insights and methods applicable across different programming environments. The rigorous approach adopted by Elastic serves as an excellent model for teams aiming to implement effective benchmarking frameworks in their projects.