Performance Testing
A New Kind of Analytics: Actionable Performance Analysis

Paola Moretto • April 21, 2015 • Atlanta, GA

Summarized using AI

In the presentation titled "A New Kind of Analytics: Actionable Performance Analysis" by Paola Moretto at RailsConf 2015, the focus is on improving application performance through actionable analytics. Moretto, a developer and entrepreneur, emphasizes the critical importance of speed and responsiveness in applications, citing extensive research linking low performance to negative impacts on SEO, conversion rates, and user satisfaction. Here are the key points discussed:

  • Understanding Performance: Speed is not just a feature but a necessity for modern applications. Poor performance can lead to high costs and inefficient use of resources.

  • The Role of Data: Gathering data is essential. Moretto quotes, "In God we trust; all others bring data," highlighting the need to rely on concrete metrics rather than assumptions.

  • Types of Data: The presentation distinguishes between monitoring data gathered in production and testing data gathered in a pre-production environment. Both play a vital role in identifying and addressing performance issues.

  • Monitoring and Testing: Effective performance management requires monitoring the system as the first line of defense against issues, but it should be complemented by performance testing. This dual approach allows both reactive and proactive management of performance problems.

  • Synthetic Traffic for Testing: By generating synthetic traffic in test environments, developers can create controlled scenarios to understand user behavior and identify issues before deployment. This method provides end-to-end metrics which are crucial for measuring actual user experiences compared to server-side metrics.

  • Importance of Metrics: Key performance indicators such as response times and error rates should be consistently monitored to identify and fix problems before users encounter them.

  • Continuous Testing: With the evolving nature of software, frequent testing—especially before changes in user traffic—is essential to prevent issues from arising post-deployment.

  • Advanced Analytics: The discussion further includes the roles of data mining and machine learning in locating performance problems effectively. Analyzing extensive data sets can reveal underlying issues since real performance problems often stem from complex interactions.

In conclusion, Moretto stresses that speed is the foremost feature in application development today. By implementing a blend of monitoring, testing, and analytics, developers can proactively manage performance, leading to a superior user experience. She invites further discussions and questions from the audience to share insights.


by Paola Moretto

Applications today are spidery and include thousands of possible optimization points. No matter how deep performance testing data are, developers are still at a loss when asked to derive meaningful, actionable data that pinpoint bottlenecks in the application. You know things are slow, but you are left with the challenge of figuring out where to optimize. This presentation describes a new kind of analytics, called performance analytics, that provides tangible ways to root-cause performance problems in today’s applications and clearly identify where and what to optimize.

RailsConf 2015

00:00:12.480 Hello everybody! I'm Paola Moretto, and I'm the co-founder of a company called Nuvola. You can find me on Twitter at @paolamoreto3. A little about me: I'm a developer turned entrepreneur and have been in the high-tech industry for a long time. I love solving hard technical problems. I originally come from Italy, but I've been in the U.S. for 20 years. When I'm not writing code, I'm usually outdoors hiking.
00:00:24.640 Today, I want to talk about performance. We've heard it loud and clear here at RailsConf: faster is better. We all know what performance is, but it's important to understand the real impact of low performance. When I talk about performance, I specifically mean the speed and responsiveness that your application delivers to users. There's a famous quote from Larry Page that says speed is product feature number one. Therefore, you need to focus not only on your functional requirements but also on the non-functional requirements. Speed is paramount for any web application today. There is a lot of research and data backing this up, showing the impact of low performance. For instance, low performance negatively affects your SEO ranking, conversion rates, brand perception, brand loyalty, and advocacy. Additionally, it impacts your costs and resources, because the usual response to low performance is to over-provision, which isn't the right answer. So, speed is crucial for web applications.
00:02:21.520 If you have a DevOps model, combining development and QA into a single team means that performance becomes even more critical. In the cloud, where you have a fully programmable and elastic infrastructure and are adopting continuous delivery, it’s essential that every build is not only functional but also meets speed requirements.
00:03:10.879 So, what do you do? How do we tackle the performance problem? The first step is to gather data. I came across a quote that I really love: 'In God we trust; all others bring data.' It's a poor model to deploy and then just hope for the best, letting your users become your QA department. This isn't the best approach. We need adequate data to understand our performance issues.
00:03:26.720 There are different types of data to consider. On the right-hand side of your environment, you have your deployments in production and live traffic, which fall under the umbrella of monitoring. Here, you gather various types of monitoring data and techniques. On the left-hand side, you have your testing environment, where you typically have a pre-production or staging environment. Sometimes, testing occurs directly on production. In this setup, you create synthetic traffic for performance testing.
00:05:30.400 Monitoring is the first step of performance management. It involves monitoring your stack, infrastructure, user behavior, and logging. You can also use streaming analytics to get high-frequency metrics. There are many monitoring solutions available, which complement each other depending on your application. However, once you have the data from monitoring, the challenge is to correlate it all and determine exactly what's happening. As the saying goes, you must first instrument and then ask questions. But remember, monitoring alone is not sufficient.
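As a concrete illustration of the instrumentation step described here, below is a minimal sketch of request-level timing inside a Rails app, hanging off the standard process_action.action_controller notification. The log format and initializer path are assumptions for illustration, not something prescribed in the talk.

```ruby
# config/initializers/performance_instrumentation.rb
# Subscribe to Rails' built-in per-request event and log the timings
# that monitoring tools typically collect. The log format is illustrative.
ActiveSupport::Notifications.subscribe("process_action.action_controller") do |name, start, finish, id, payload|
  elapsed_ms = (finish - start) * 1000.0
  Rails.logger.info(
    "perf controller=#{payload[:controller]} action=#{payload[:action]} " \
    "status=#{payload[:status]} db_ms=#{payload[:db_runtime].to_f.round(1)} " \
    "view_ms=#{payload[:view_runtime].to_f.round(1)} total_ms=#{elapsed_ms.round(1)}"
  )
end
```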
00:06:02.880 Monitoring alone is not sufficient, though. Firstly, live traffic is noisy, which makes troubleshooting difficult when an issue does arise. Secondly, monitoring is reactive; it identifies issues after they occur. It's like calling AAA after a car accident: it may help, but ideally you want to prevent the accident in the first place. Therefore, while monitoring is necessary as your first line of defense, it must be paired with performance testing, which addresses the proactive side of performance management.
00:09:19.039 Generating synthetic traffic gives you controlled conditions: you can shape user traffic and conduct performance testing in a pre-production environment without impacting real users. With synthetic testing, you develop scenarios based on user behavior, which provides valuable data for troubleshooting. This method makes it simpler to identify problematic scenarios, enabling developers to alter traffic and user variables easily.
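To make that concrete, here is a minimal sketch of a synthetic-traffic scenario in plain Ruby: a handful of concurrent "users" replaying the same journey against a staging host. The host, paths, and user count are illustrative assumptions; a real load tool would add ramp-up, think time, and data variation.

```ruby
#!/usr/bin/env ruby
# Minimal synthetic-traffic sketch: N concurrent "users" replay one journey.
# The target host, scenario paths, and user count are made up for illustration.
require "net/http"
require "uri"

HOST     = URI("https://staging.example.com")  # pre-production target (placeholder)
SCENARIO = ["/", "/search?q=shoes", "/cart"]   # one user journey (placeholder)
USERS    = 25

threads = USERS.times.map do
  Thread.new do
    Net::HTTP.start(HOST.host, HOST.port, use_ssl: HOST.scheme == "https") do |http|
      SCENARIO.each do |path|
        started  = Time.now
        response = http.get(path)
        puts format("%-20s %s %.0fms", path, response.code, (Time.now - started) * 1000)
      end
    end
  end
end
threads.each(&:join)
```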
00:12:28.320 Performance testing generates end-to-end user metrics, providing a clearer picture than server-side metrics alone. A significant disparity can exist between end-user experience and server metrics. So, to understand how performance impacts users, you need the complete view, including user metrics.
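A quick way to see that disparity, assuming a Rails app running the default Rack::Runtime middleware, is to compare the client-observed time with the server-reported X-Runtime header; the target URL below is a placeholder.

```ruby
# Minimal sketch: compare end-to-end (client-observed) latency with the
# server-side processing time reported in the X-Runtime header, which
# Rack::Runtime sets in seconds. The gap approximates network and queueing cost.
require "net/http"
require "uri"

uri = URI("https://staging.example.com/")   # placeholder target
started  = Time.now
response = Net::HTTP.get_response(uri)

end_to_end_ms = (Time.now - started) * 1000
server_ms     = response["X-Runtime"].to_f * 1000  # 0 if the header is absent

puts format("end-to-end: %.0fms  server: %.0fms  overhead: %.0fms",
            end_to_end_ms, server_ms, end_to_end_ms - server_ms)
```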
00:13:12.320 This data can help diagnose issues early, allowing developers to fix problems before deployment. To achieve this, realistic scenarios must be tested thoroughly. Testing should also encompass different device types. For global applications, performance should be tested from various geographic locations, ensuring comprehensive coverage.
00:14:56.000 When dealing with metrics, the focus should be on measuring response times, transaction completion times, and error rates. The goal is to resolve issues before they reach users and degrade their experience. Understanding when to perform these tests is crucial since software is in constant flux.
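As a sketch of how those indicators fall out of raw test results, the snippet below reduces a handful of made-up samples to response-time percentiles and an error rate. The nearest-rank percentile and the 5xx error definition are simplifying assumptions.

```ruby
# Reduce raw samples (latency in ms plus HTTP status) into the indicators
# named in the talk: response-time percentiles and an error rate.
# The sample data is invented for illustration.
samples = [
  { ms: 120, status: 200 }, { ms: 95,  status: 200 }, { ms: 480,  status: 200 },
  { ms: 210, status: 500 }, { ms: 130, status: 200 }, { ms: 1900, status: 200 }
]

# Nearest-rank percentile: simple, adequate for a sketch.
def percentile(values, pct)
  sorted = values.sort
  sorted[((pct / 100.0) * (sorted.size - 1)).round]
end

latencies  = samples.map { |s| s[:ms] }
error_rate = samples.count { |s| s[:status] >= 500 } / samples.size.to_f

puts "p50=#{percentile(latencies, 50)}ms p95=#{percentile(latencies, 95)}ms " \
     "errors=#{(error_rate * 100).round(1)}%"
```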
00:17:11.680 Applications are also increasingly complex, with many possible optimization points. Frequent testing, especially before implementing changes, prevents blind spots. You should always conduct tests before anticipated surges in traffic, to ensure your expectations align with what users will actually experience.
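One way to make that frequency automatic is a small gate that fails the build when a test run's p95 exceeds a budget; this is a sketch under assumed names, where perf_results.json, its shape, and the 500 ms budget are all invented for illustration.

```ruby
# Minimal CI gate sketch: abort the build on a performance regression.
# Assumes a prior test run wrote perf_results.json as [{"ms": 120}, ...].
require "json"

BUDGET_P95_MS = 500  # illustrative budget, not from the talk

latencies = JSON.parse(File.read("perf_results.json")).map { |r| r["ms"] }

def percentile(values, pct)
  sorted = values.sort
  sorted[((pct / 100.0) * (sorted.size - 1)).round]
end

p95 = percentile(latencies, 95)
if p95 > BUDGET_P95_MS
  abort "performance regression: p95 #{p95}ms exceeds budget #{BUDGET_P95_MS}ms"
else
  puts "performance OK: p95 #{p95}ms within #{BUDGET_P95_MS}ms budget"
end
```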
00:19:28.079 However, even with thorough testing, issues may still occur. A common requirement is to measure results across different types of traffic and usage patterns, identifying the first component that slows down. Spending on monitoring alone won't get you there; the path forward is predictive analytics that point toward solutions.
00:21:17.759 The next step is data mining and machine learning, which can help localize performance problems. For example, if performance issues arise during testing, analyzing network and external factors such as DNS times can reveal the underlying issues effectively. We want to draw insights from large data sets addressing specific metrics to accelerate troubleshooting.
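A toy version of that analysis, with invented component timings, is to correlate each component's time with total response time and flag the strongest contributor. Real systems need far more data and care than this Pearson sketch suggests.

```ruby
# Toy bottleneck localization: which component's timing moves with total time?
# Component names and numbers are invented for illustration.
def pearson(xs, ys)
  n = xs.size.to_f
  mx, my = xs.sum / n, ys.sum / n
  cov = xs.zip(ys).sum { |x, y| (x - mx) * (y - my) }
  sx  = Math.sqrt(xs.sum { |x| (x - mx)**2 })
  sy  = Math.sqrt(ys.sum { |y| (y - my)**2 })
  cov / (sx * sy)
end

requests = [
  { dns: 12, db: 40,  app: 80, total: 140 },
  { dns: 11, db: 220, app: 85, total: 325 },
  { dns: 90, db: 45,  app: 82, total: 230 },
  { dns: 13, db: 400, app: 90, total: 510 }
]

totals = requests.map { |r| r[:total] }
%i[dns db app].each do |component|
  r = pearson(requests.map { |req| req[component] }, totals)
  puts format("%-4s correlation with total: %+.2f", component, r)
end
```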
00:22:57.679 In conclusion, speed is the most critical feature of any application. Performance testing complemented by data monitoring and analytics provides a mathematical approach to identifying performance weaknesses early. Today’s talk demonstrates how combining various methodologies addresses performance concerns before they materialize.
00:23:55.839 Thank you for your time! I would be happy to answer any questions you might have. You can find me on Twitter at @paolamoreto3, and I look forward to your feedback.