Description
By Kir Shatrov

Performance regressions in edge Rails versions happen quite often, and are sometimes introduced even by experienced Core committers. The Rails team doesn't yet have any tools to get notified about this kind of regression. This is why I've built RailsPerf, a regression detection tool for Rails. It resembles a continuous integration service in a way: it runs benchmarks after every commit in the Rails repo to detect performance regressions. I will speak about building the right set of benchmarks and isolating build environments, and I will also analyze some performance graphs for major Rails versions.

Help us caption & translate this video! http://amara.org/v/G61K/
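To make the idea concrete, here is a minimal sketch of the kind of micro-benchmark such a per-commit service could run against each Rails revision. It uses the `benchmark-ips` gem and two arbitrary Active Support code paths chosen purely for illustration; it is not RailsPerf's actual benchmark suite.

```ruby
# Assumes the benchmark-ips and activesupport gems are installed.
require "benchmark/ips"
require "active_support/all"

# Measure iterations per second for a couple of hot Active Support code
# paths, so the same workload can be compared across two Rails revisions.
Benchmark.ips do |x|
  x.report("String#blank?") { "some string".blank? }
  x.report("Hash#deep_dup") { { a: { b: [1, 2, 3] } }.deep_dup }
  x.compare!
end
```

Running this against two checkouts of Rails and comparing the reported iterations per second is, in spirit, what a per-commit regression check does.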
Summary
In this talk by Kir Shatrov at RailsConf 2015, the focus is on performance regressions in Rails applications and the development of RailsPerf, a tool designed to detect these regressions. Performance regressions occur when software operates correctly but becomes slower or uses more resources than previous versions. Shatrov highlights several significant points related to this issue:

- **Definition and Impact**: A performance regression is characterized by increased page load times or memory consumption despite correct functionality. An example discussed is the upgrade to Rails 4.2, which resulted in slower performance for the Discourse application compared to Rails 4.1.
- **Performance Metrics**: The two primary metrics for evaluating performance regressions are timing (how long code takes to execute) and allocations (the number of objects created in memory). Reducing both can significantly enhance application efficiency.
- **Key Examples**: Numerous cases demonstrate how small code optimizations can lead to substantial performance improvements, such as preferring string interpolation over concatenation to avoid unnecessary object allocations (see the sketch after this list).
- **Benchmarking**: To track performance changes in Rails, Shatrov proposes a service concept that incorporates existing benchmarks, especially those used by the Discourse platform. The benchmarking framework aims to identify performance issues continuously during the development process.
- **Automation and Integration**: Shatrov emphasizes the importance of automating benchmark runs and results reporting, and of integrating with projects like RubyBench for better Rails performance-testing infrastructure. RailsPerf was conceived as a prototype and has evolved into a more sophisticated tool.
- **Future Directions**: The overall goal is to run benchmarks for every pull request so that contributors receive timely feedback on performance changes. This includes exploring benchmarks for other Ruby implementations, such as JRuby.

Concluding with actionable insights, Shatrov encourages developers not involved in Rails contributions to monitor their applications' performance metrics, build proper benchmarks, and stay informed through community resources. With a focus on community contribution and collaboration, RailsPerf represents a significant step towards more efficient Rails application development and performance management.
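The interpolation-versus-concatenation point and the timing/allocations metrics above can be illustrated with a small, self-contained sketch. The allocation-counting helper and example strings are illustrative assumptions, not code from the talk:

```ruby
require "benchmark/ips"

# Rough count of objects allocated while running a block (single-threaded).
def allocations
  before = GC.stat(:total_allocated_objects)
  yield
  GC.stat(:total_allocated_objects) - before
end

name = "world"

# Concatenation creates intermediate strings; interpolation builds one string.
concat_allocs = allocations { "hello " + name + "!" }
interp_allocs = allocations { "hello #{name}!" }
puts "concatenation allocates #{concat_allocs} objects, interpolation #{interp_allocs}"

# Timing comparison of the same two approaches.
Benchmark.ips do |x|
  x.report("concatenation") { "hello " + name + "!" }
  x.report("interpolation") { "hello #{name}!" }
  x.compare!
end
```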