RubyConf 2022

A Tale of Two Flamegraphs: Continuous Profiling in Ruby

This talk will dive deep into the internals of one of the fastest-growing profiling gems: Pyroscope. Pyroscope is unique because it is actually a Ruby gem wrapping another project written in Rust; Pyroscope extends the popular rbspy project as a foundation to not only collect profiling data, but also to tag and analyze that data. You can think of Pyroscope as what you would get if rbspy and speedscope had a baby. We'll start with the internals and end with an example of how two flamegraphs can be used to tell a story about your application's performance.


00:00:00.000 ready for takeoff
00:00:18.480 I guess we can go ahead and get started um what's up everybody
00:00:23.580 post lunch I will try to keep everybody awake I guess
00:00:29.039 um but yeah uh what's going on everyone I'm Ryan I am one of the maintainers of
00:00:34.079 a library called pyroscope and today I'm going to talk about continuous profiling
00:00:40.079 about flame graphs particularly in Ruby I don't know if anybody happened to have
00:00:46.559 been here at RubyConf last year, but I believe her name
00:00:52.320 was Jade Dickinson, and she gave a talk about profiling and flame graphs that was
00:00:57.660 really good, and that's kind of where I got the basis of this talk from. I thought she
00:01:04.559 did a really good job, but when you're using flame graphs, in some ways it kind of
00:01:10.439 tells part of the story and I think adding kind of continuous profiling as opposed to normal profiling which I will
00:01:17.820 explain what the difference is in a little bit it kind of tells a little bit more of a story and so I'll give some
00:01:23.220 examples of that at the end. Just real quick, here's what
00:01:28.619 we'll go through: just what are flame graphs in general, the different types of profilers, how to go from normal
00:01:34.860 profiling to continuous profiling and then how to manage the storage and then
00:01:40.619 some examples of you know how you can use them to help tell a story for your
00:01:45.659 application or whatever you want to tell a story about
00:01:51.720 so there is no better place to start when talking about flame graphs than an
00:01:57.360 illustration by Julia Evans, who is, as many of you know, pretty popular in the
00:02:02.820 Ruby community, and this is an illustration she made just
00:02:08.099 explaining in the abstract how flame graphs work. And so for those who aren't
00:02:14.700 familiar (I'll show a less abstract version in a second), basically how it works is that you sort
00:02:21.599 of start with a you know application and some sort of metric in this case we
00:02:27.540 don't know what the metric is, but at the base where you see main, that represents 100% of the time that
00:02:35.280 this application was running and then you can kind of think of these flame graphs as like a tree and so in this
00:02:41.760 case main calls both alligator and Panda which then you know work on something
00:02:48.660 for 60 and 40 percent of the time respectively and then alligator calls
00:02:54.720 bite and teeth Panda calls bamboo so on and so forth and basically the way that
00:03:00.239 you sort of use this is it kind of helps you understand whatever metric you're looking at to be able to see sort of you
00:03:07.440 know where you want to optimize resource utilization whether that be CPU or memory or some other type of metric or
00:03:15.599 you know maybe there's something that happened on a server and you want to understand that but whatever it is this
00:03:21.720 kind of gives you honestly like a nested pie chart or like a you know pie chart
00:03:26.879 on steroids, of what your application is doing and where you should look if something is wrong or if
00:03:33.540 you do want to improve performance or something along those lines. She also, I believe,
00:03:39.260 references Brendan Gregg here; he is also one of the
00:03:44.760 pioneers of using this sort of technology at Netflix and, I believe, other places.
00:03:53.220 and so yeah here's a less abstract version with kind of a code example on
00:03:58.260 the right and so this is how this very very basic example would get turned into
00:04:04.080 the flame graph on the left you know so you see while true then you see fast
00:04:09.659 function and slow function which each call work and work runs for two units of
00:04:15.180 time and eight units of time respectively these numbers are usually represented in
00:04:20.519 samples. And so this is what it would look like with a real piece of
00:04:26.520 code broken down into a flame graph.
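For reference, the code on that slide is roughly the following Ruby; the names and the amount of "work" are just stand-ins for the slide's example.

```ruby
# Both paths call `work`, but slow_function asks for about four times as
# much of it, so its bar ends up roughly four times wider in the flame graph.
def work(units)
  units.times { |i| Math.sqrt(i) }   # stand-in for real work
end

def fast_function
  work(200_000)    # ~2 units of time
end

def slow_function
  work(800_000)    # ~8 units of time
end

while true         # the slide loops forever so the profiler keeps sampling
  fast_function
  slow_function
end
```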
00:04:32.639 all right and so I mentioned that I'll talk a little bit about uh you know the different kinds of profilers and so
00:04:38.759 there's really two main types that um you know that are that are popular one
00:04:44.040 is what I would, for lack of a better term, call the standard profilers, and
00:04:50.280 basically they're sort of instrumented inside the classes or the methods to report when they run so you can kind of
00:04:57.240 think of those as sort of like break points that you would put at the beginning and the end of every function
00:05:02.940 that would then sort of break it down and you know you would say this function started at this time ended at this time
00:05:09.180 you know this function called this function and so you sort of have a breakdown that way and um and so yeah
00:05:15.840 that's a very accurate way to get profiling data and to understand when applications are running and how
00:05:22.380 they're running. So that's the beneficial side. On the downside,
00:05:28.500 depending on how many functions you have or how you're using it, it is slightly less
00:05:33.539 performant and more likely to break or hang
00:05:39.960 if something did go wrong and you didn't catch it properly.
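To make the instrumenting idea concrete, here is a toy sketch in Ruby using TracePoint. It is not how any particular gem does it, just the "breakpoint at the start and end of every method" idea, and it ignores exceptions for simplicity.

```ruby
# Record how long each Ruby method call took by hooking call/return events.
timings = Hash.new(0.0)
starts  = []

trace = TracePoint.new(:call, :return) do |tp|
  if tp.event == :call
    starts << Process.clock_gettime(Process::CLOCK_MONOTONIC)
  else
    elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC) - starts.pop
    timings["#{tp.defined_class}##{tp.method_id}"] += elapsed
  end
end

def slow_function
  10_000.times { |i| Math.sqrt(i) }   # stand-in for the code being profiled
end

trace.enable { slow_function }

timings.sort_by { |_, t| -t }.first(5).each { |m, t| puts format("%.6fs %s", t, m) }
```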
00:05:46.440 the alternative to standard profilers is sampling profilers and that's something that's become much more popular more
00:05:54.300 recently, as a lot of people are shifting particularly to a continuous profiling sort of model, and the
00:06:02.639 way that works is a little bit different, in that rather than having to instrument your code
00:06:07.979 with all these different breakpoints, or have it automated through some library, the way sampling
00:06:14.940 profilers get this data is that they, from outside of your application, sample the stack trace at a
00:06:23.340 certain frequency; a common one is a hundred times per second. So it's going to look at the
00:06:29.940 stack trace a bunch of times, send all of that information somewhere else, and
00:06:35.759 basically that's how it figures out what was being called and what was being worked on at any given time and so that
00:06:43.740 does you know is obviously slightly less accurate because it's going to be of
00:06:48.780 course sampling, but at the same time you can sample so frequently that, you
00:06:53.940 know, the benefits definitely outweigh the costs most of the time, and you can still get a pretty accurate
00:07:00.840 depiction of what was going on in your application for all intents and purposes,
00:07:06.660 but you do it without the performance hit, and it allows you to get a lot of profiling data that you can
00:07:13.380 then use however you'd like.
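As a toy illustration of the sampling idea (a real sampling profiler like rbspy reads the stack from outside the process; this just shows the concept in-process):

```ruby
# A background thread snapshots the main thread's call stack ~100 times per
# second and counts identical stacks; the counts are what a flame graph shows.
main    = Thread.current
samples = Hash.new(0)

sampler = Thread.new do
  loop do
    if (stack = main.backtrace)
      samples[stack.reverse.join(";")] += 1   # root-first, like a flame graph
    end
    sleep 0.01                                # ~100 samples per second
  end
end

500_000.times { |i| Math.sqrt(i).to_s }       # stand-in workload on the main thread

sampler.kill
samples.sort_by { |_, n| -n }.first(3).each do |stack, n|
  puts "#{n} samples ended in #{stack.split(';').last}"
end
```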
00:07:18.900 okay, so there's a number of different profilers, particularly in the Ruby
00:07:26.460 community; you may have seen or used some of these. rbspy, stackprof,
00:07:33.539 rack-mini-profiler, and app_profiler are all popular ones that a lot
00:07:39.240 of people use for various different use cases. There's a good article that
00:07:45.960 someone from Shopify wrote that breaks down all of these; I guess maybe at the end of this I'll put up
00:07:51.660 the link if you want to look at the notes at the bottom and check that out. But yeah, they're all kind of used for
00:07:57.419 different use cases, but most frequently
00:08:02.699 they're not used, or at least out of the box they're not
00:08:08.759 necessarily meant for continuous profiling they're meant more for um you know you want to SSH into a
00:08:15.000 server, figure out why it's acting weird, and you might run rbspy to
00:08:20.099 figure out what's going on or what your application is doing, something along those lines, or maybe during a
00:08:27.479 test Suite or something or a script so they're typically used in more of an ad hoc way as opposed to constantly
00:08:35.880 profiling your application just because you know if you are constantly profiling
00:08:41.219 your application you then have to make some kind of sense of all of that profiling data and uh you know so a lot
00:08:48.600 of people who are using these will use them in ways where they are
00:08:54.720 looking at a specific time period or a specific server or something along those lines.
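For example, with stackprof that ad-hoc style looks roughly like this, wrapped around a suspect block in a script or test (the output path is just an example; see stackprof's docs for the options):

```ruby
require "stackprof"

# Profile just this block and dump the result to a file for later inspection.
StackProf.run(mode: :wall, raw: true, out: "tmp/stackprof-suspect.dump") do
  100_000.times { |i| Math.sqrt(i).to_s }   # stand-in for the suspect code
end
# Then, from a shell: `stackprof tmp/stackprof-suspect.dump` to read the report.
```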
00:09:00.300 um but what uh pyroscope does and um you know what we'll talk a little bit more about as this talk goes on is
00:09:08.820 basically taking data from any of these profilers (in Pyroscope's case,
00:09:14.459 the client uses rbspy), and then sort of
00:09:21.540 like sending them to an external server that will then allow you to basically
00:09:27.540 without having to tell your application when to profile itself or having to
00:09:33.480 actively profile your application when you want to see something this profiles it all the time and then you can use uh
00:09:40.680 pyroscope to basically you know query it think like Prometheus or something like that where you can you know go in and
00:09:47.640 make sense of the data in retrospect, as opposed to having to sort of predict when something's going to go
00:09:53.640 wrong and have the profiles that way. And so how Pyroscope works is taking these profilers, basically using a
00:10:01.440 component called a session manager that will then you know determine how often
00:10:06.899 to collect the profiles turn the profiles into some sort of standardized format and then send those profiles to
00:10:14.519 the pyroscope server where a bunch of compression and efficiency
00:10:21.019 you know operations are done to make it so that you can both store the data efficiently and then query the data back
00:10:27.899 efficiently and so that's sort of again the difference between sort of just like you know static profiling or ad hoc
00:10:34.320 profiling and then continuous profiling is this concept of you know putting it somewhere where it can then be queried
00:10:40.440 and used sort of retroactively um you know yeah without having to do
00:10:46.140 anything extra and yeah this is just like a little bit
00:10:51.600 more in-the-weeds details of how we did it. It definitely gets
00:10:56.760 pretty complicated and has taken a lot of different iterations, but basically we wanted
00:11:03.959 there to be sort of like two options one where you can sort of profile your application
00:11:09.540 without having to do anything, but also sometimes there is that concept of like a breakpoint, or in this
00:11:16.380 case we call it a tag where you want to be able to sort of organize your profiles in some way that's meaningful
00:11:23.279 to you so in a ruby context that might be getting a profile per controller or
00:11:29.100 per action or you know something along those lines so that you can sort of see later a breakdown of why a particular
00:11:36.600 controller you know was acting in a certain way or action or whatever it might be and so in order to enable that
00:11:43.140 there's sort of two pieces to it: one is, in the Ruby
00:11:49.680 application, you add the gem and you can use this tag wrapper to
00:11:54.779 basically wrap any code that you are particularly interested in seeing, and
00:12:00.180 then we have the Ruby gem itself, obviously, which basically just wraps rbspy on top
00:12:08.040 of that and effectively just adds metadata to the profiles that rbspy is collecting, and then,
00:12:16.800 as mentioned before, storing it in a way where you can then query based off of
00:12:23.220 that metadata, or at a high-level view just seeing the whole profile itself, you
00:12:29.279 know, without breaking it down.
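In code, the Ruby side is roughly this shape (based on the gem's docs at the time; check the current Pyroscope docs for exact option names):

```ruby
require "pyroscope"

Pyroscope.configure do |config|
  config.application_name = "rideshare.ruby.app"     # how it shows up in the UI
  config.server_address   = "http://localhost:4040"  # your Pyroscope server
end

# Wrap any code you particularly care about; its samples get this extra label.
Pyroscope.tag_wrapper({ "controller" => "rides" }) do
  sleep 0.1   # stand-in for the application code you want broken out
end
```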
00:12:35.880 Okay, so this is kind of what Pyroscope looks like, a little bit different than the profiles that many might be
00:12:42.240 familiar with. The UI is a little bit
00:12:48.360 different than a lot of profilers that people are familiar with. Typically, when a lot of
00:12:55.139 people think of flame graphs they kind of think of these uh you know more fire colored ones
00:13:01.980 and this one's a little bit different (let me go back to
00:13:08.700 showing the slideshow) in that there's a little bit more
00:13:15.899 method to the coloring: it's colored by package in this case,
00:13:21.240 um also adding a table view to be able to understand sort of you know what's going on there and then you know in this
00:13:26.760 case this is profiling CPU and so this top graph might be something that you would otherwise get from
00:13:33.899 like Prometheus or something along those lines, but this sort of combines all of
00:13:39.000 this information into one view where you can then sort of zoom in, and I'll
00:13:44.339 go through a sort of live example in a little bit but you know sort of zoom in and understand kind of you know
00:13:51.660 different subsets of the application and then as I mentioned before there's sort of this tag bar here where you can then
00:13:57.660 see sort of you know if you wanted to see by controller or action or that kind of thing and
00:14:03.779 so that's one thing that we sort of built on top of; we took a lot of inspiration from
00:14:11.579 speedscope, if you've used speedscope they have a really great UI as well, and so we sort of just
00:14:18.540 built on top of a lot of what other tools had done in various different packages and then tried to
00:14:24.839 combine it all into one convenient package where you can use it all
00:14:30.060 together. So for those last two points, the high cardinality and the queryability:
00:14:36.839 anytime you talk about that, you run into this sort of issue with high-cardinality data, or data that
00:14:43.560 you want to query you know how do you store it efficiently and how do you query it efficiently and so I'll talk
00:14:52.079 through this briefly, on what Pyroscope does to solve that problem. As you can imagine, profiling
00:14:58.860 data is pretty large; profiles can be really big, there can be
00:15:03.959 really long stacks and if you're storing those at any sort of frequency it can
00:15:09.060 start to add up really quickly in a way where it sort of offsets whatever performance improvements that you make
00:15:15.899 if it's then too expensive to get the data and so you know the first problem that we solved was sort of the storage
00:15:22.380 requirements aspect of it and that basically you know as you can imagine stack traces have a lot of repeated data
00:15:29.699 and so by turning them into trees we've found ways to actually
00:15:34.740 implement a lot of the algorithm
00:15:40.500 basics that you learn as you're learning a lot of languages like Ruby or whatever, in order to
00:15:46.380 compress the data using trees and get rid of a lot of that duplication in the stack traces themselves. So in this
00:15:54.420 case, combining these so that you don't have to store the same
00:15:59.760 file name, for example, multiple times when it's in every stack.
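A toy illustration of that idea (not Pyroscope's actual storage format): folding stack samples into a tree so shared prefixes are stored once.

```ruby
Node = Struct.new(:value, :children)

def insert(root, frames, count)
  node = root
  frames.each { |frame| node = (node.children[frame] ||= Node.new(0, {})) }
  node.value += count
end

root = Node.new(0, {})
insert(root, %w[main render fetch_user],  30)
insert(root, %w[main render fetch_posts], 50)
insert(root, %w[main log],                20)
# "main" and "render" each exist once in the tree, no matter how many
# samples shared that prefix.
```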
00:16:05.339 And then going even further, the symbol names themselves can have a
00:16:12.420 lot of repetition and so um basically by serializing the symbol names and sort of like storing a
00:16:19.680 dictionary, you can turn net/http.request into just, in
00:16:25.440 this case, 0 0, which is obviously much more efficient to store than these long strings
00:16:32.160 for the symbol names and this you know compresses things even further giving room for that high cardinality data and
00:16:39.959 data that you want to be able to query later.
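Again as a toy illustration of the dictionary idea (the real encoding is more involved):

```ruby
# Long symbol names are stored once in a dictionary; each stack keeps only
# small integer ids.
dictionary = {}
encode = ->(name) { dictionary[name] ||= dictionary.size }

stacks = [
  %w[main net/http.request],
  %w[main net/http.request],
]

encoded = stacks.map { |stack| stack.map(&encode) }
# encoded    => [[0, 1], [0, 1]]
# dictionary => {"main"=>0, "net/http.request"=>1}
```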
00:16:45.000 okay so that's yeah on the storage side how we store it really efficiently and then the question becomes like you know
00:16:50.279 once you have all this data and it's stored very efficiently how do you get it back in a way that's uh you know both
00:16:57.060 fast and you know somewhat efficient and so the way we did that was using something called segment trees and
00:17:04.020 basically what we do is so you know pyroscope sort of uses rbspy to collect
00:17:09.059 data in 10-second chunks and then it sends it to the server, right? And so
00:17:15.660 if you want to look at you know a day's worth of data it would have to merge you know a lot of 10 second chunks that make
00:17:22.620 up that day, right? And so what we do is, instead of storing those chunks
00:17:29.340 in solely 10 second chunks we then basically aggregate them at different granularities so that you know basically
00:17:37.200 when you want to query them back it's more efficient than having to merge however many 10-
00:17:43.320 second chunks are in a 24-hour or a week-long or a year-long period, and this
00:17:49.320 makes it a much more efficient operation. Here's kind of a depiction of
00:17:54.960 that: so in this case, if you wanted to get 50 seconds worth of data, if we didn't do anything special you would
00:18:00.179 have to do four merge operations, which will add up over time if it was a 5,000 or 5
00:18:07.860 million second query range. Instead, with
00:18:13.140 this, you're able to get one 40-second block and add a 10-second block, as opposed to merging five
00:18:21.360 10-second blocks together, and it just makes things much more efficient, and again makes it
00:18:26.640 possible to be as efficient as possible at the core, so
00:18:32.880 that you can then break things down in a more high-cardinality way later.
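Here is a much-simplified sketch of that pre-aggregation idea (not the real segment tree; the block sizes are just the numbers from the slide):

```ruby
# Each 10s chunk is a {stack => samples} hash. Groups of four are pre-merged
# into 40s blocks, so a 50s query merges two profiles instead of five.
CHUNK_SECONDS = 10
BLOCK_SECONDS = 40

def merge(profiles)
  profiles.reduce({}) { |acc, p| acc.merge(p) { |_stack, a, b| a + b } }
end

chunks = Array.new(5) { |i| { "main;work" => 100 + i } }   # five 10s chunks
blocks = chunks.each_slice(BLOCK_SECONDS / CHUNK_SECONDS).map { |group| merge(group) }

# Query 50 seconds: one precomputed 40s block plus one raw 10s chunk.
fifty_seconds = merge([blocks.first, chunks.last])
```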
00:18:39.840 And so here's just some stats that I found from other companies; profiling is also
00:18:47.039 often associated pretty closely with latency, and so
00:18:53.400 can imagine um you know there's for every millisecond in latency particularly when
00:18:59.340 you're at that kind of scale it can definitely make it so that you know you're losing Revenue people are either
00:19:05.280 churning or leaving your website um in the case of Amazon they claim that
00:19:10.620 they lose a billion dollars in revenue, and this was a while back, so it's probably even more now
00:19:16.919 um Google same thing you know the longer it takes the less likely people are to visit a website I'm sure everyone here
00:19:22.980 who has used Uber and Lyft you know when one's taking long you switch to the other vice versa and so profiling sort
00:19:30.600 of helps break down that that time period of like why is this application you know or yeah why is this particular
00:19:37.620 request taking so long to load or something like that you can then use profiling to sort of break that down and
00:19:42.840 understand it and improve it; that's sort of the business case for profiling for many
00:19:49.919 so yeah uh now I will in the time left go through like a couple examples
00:19:55.320 The first one I'll go through is just sort of a real-world
00:20:00.780 example that just shows kind of what I was talking about with the ability to understand
00:20:06.720 um like tags and to break down profiles I'll show you with a ruby application
00:20:11.760 basically just think it's like a simple you know Rideshare application there's three routes one for car one for bike
00:20:18.900 one for scooter, and we tag those as vehicle; and then there's three regions, and we tag that as region.
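A sketch of what that demo app could look like (Sinatra here just for brevity; the route body and the hard-coded region are made up):

```ruby
require "sinatra"
require "pyroscope"

get "/bike" do
  # In the real demo the region would come from the request; hard-coded here.
  Pyroscope.tag_wrapper({ "vehicle" => "bike", "region" => "us-east" }) do
    "ordered a bike"   # stand-in for the real work
  end
end
# ...and the same for /car and /scooter, each with its own "vehicle" tag value.
```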
00:20:28.080 And this is what we end up with. Let's see, hopefully this will be big enough
00:20:35.100 that you all can see it. So I'll start
00:20:41.400 from the top piece. As I mentioned, the ability to tag
00:20:46.980 things (let me make it a little bit bigger) is one of
00:20:52.980 the really important enhancements that we made over rbspy,
00:20:58.679 and in this case now you could break things out by action,
00:21:04.020 and you could see scooter/index, car/index, bike/index, and a pie chart of sort of
00:21:11.340 the CPU utilization and you know in this case if you were trying to debug this
00:21:16.799 you might want to say, okay, why do cars take 60% of the total CPU for this
00:21:22.440 application whereas you know bikes and scooters take significantly less
00:21:27.840 um and then what pyroscope allows you to do is kind of then now you can select each one of these individually and uh
00:21:34.320 it's kind of hard to see because the stack is so big, but this profile changes based off of
00:21:41.360 whichever one you're looking at. And so in this case, let's say we were looking at region,
00:21:47.820 um and there's you know you see one region is taking a particularly long time what pyroscope allows you to do is
00:21:53.880 sort of you know jump in and see like this I'll compare
00:21:59.580 this region that's consuming 50% of the time with another one that's consuming much less,
00:22:07.080 and then what you end up with is this flame graph here which is very tall let
00:22:12.360 me collapse it so it'll be easier to see um and so you get this like diff flame graph that basically takes the two of
00:22:18.480 those flame graphs and shows you what is the difference, kind of like a code diff: what is the difference
00:22:24.780 between flame graph A and flame graph B. In this case you would be able
00:22:30.539 to then kind of see that the baseline, which is the first one, has 60% CPU
00:22:37.320 utilization and the second one has 14% for this particular function. So, I guess in this case it's
00:22:43.860 check driver availability this would be representative of you know maybe looking
00:22:49.020 for a driver is just taking much longer for one region than it is for the other you know I don't know maybe there's a
00:22:54.840 sporting event or something that's going on but uh you know if you were debugging this issue profiling would help you sort
00:23:01.080 of understand that and figure out you know what's special about one region versus another region
00:23:07.200 and as I said you know here's how you would then you know query those different tags separately
00:23:13.679 all right um another example um this is uh one that we got from or
00:23:21.059 one that we sort of added a view that we added that is pretty popular in speed scope is this view called sandwich view
00:23:26.840 so uh if I go to a another application
00:23:35.600 uh where is it okay yeah so if I go to another application um you know this one it basically uh if
00:23:44.400 if you're looking at the regular flame graph view you might not actually uh
00:23:50.159 illnesses you might not actually tell you know sort of see certain functions that are really common here
00:23:57.240 um you know so there's a lot of you know something like logging you can imagine gets spread out through a lot of
00:24:02.520 different code paths, and so another thing that's pretty nice from here is that if you select, so
00:24:10.260 yeah so in this case if we select logging and we're looking at the regular flame graph you know you can see little
00:24:16.020 these little pink pieces are areas where logging is getting called, and often we see,
00:24:23.340 well, in this case it's only taking about six percent of total CPU, but
00:24:28.740 uh sometimes that can be a lot depending on how much logging you're doing and so another way that you can sort of use
00:24:34.559 this is to understand, in sandwich view, that these are all of the functions that are calling logging and
00:24:41.039 then these are all the functions that logging then calls as children, and so
00:24:46.500 this kind of helps you then break down those functions that are spread out a lot across the code base.
00:24:53.100 Sometimes the best performance impact that you can make is looking at those functions and understanding sort of why
00:24:59.039 are they taking so long maybe we should be doing less logging maybe we're sending those logs off somewhere and not using them something along those lines
00:25:07.200 and the uh the last example that I will give is one that is somewhat near to our
00:25:17.640 hearts. So this is actually what gave me and one of the other maintainers,
00:25:23.280 who's sitting up here, Dimitri, the idea for Pyroscope. We
00:25:28.380 were working at a company that was using Ruby, and basically we were using, I
00:25:34.440 don't know if anybody's used the brotli package (I think that's how you pronounce it), and we were using that to do some
00:25:41.159 compression and we kind of were just going through a round of performance optimizations for our application and so
00:25:49.140 this is before we had Pyroscope to use, and we were just using rbspy in a more ad hoc way, but basically we
00:25:56.520 were able to use it to figure out... so by default, if you just
00:26:02.400 compress a response using brotli like this, it by default is set to the maximum
00:26:09.000 level of compression, which is 11, just the number 11. And basically, if you
00:26:17.220 don't know that and you're just looking at the application flame graph, it's not obvious.
00:26:23.279 I've got too many tags open here. So yeah, we kind of just A/B
00:26:28.559 tested it, basically, to see okay, what's the difference. And so we did two:
00:26:36.059 one at compression level 11 and one at level 2, and we were able to
00:26:41.760 see pretty quickly that the you know default level of compression consumes
00:26:46.799 significantly more CPU utilization that's this yellow piece than the other
00:26:52.140 one and when we did the uh the diff between the two
00:26:58.020 (let me do the diff again) we were basically able to... well, at first we
00:27:05.580 sort of looked at the initial flame graph itself, but as we dug deeper we were able to find
00:27:11.760 that the problem was that this big chunk at the bottom was basically this compress-response
00:27:18.299 function, and in our case I think it saved roughly 30 percent of CPU; we
00:27:24.299 were able to scale down a bunch of servers that we were needing to do all of this
00:27:30.960 work, and it was just because we had no idea that this brotli function was by default set to the maximum level of
00:27:37.380 compression. And the only way we were able to find that was, again, by just going into this view and being able
00:27:44.100 to see... we kind of scroll down to the bottom and we see, okay, why is this piece of the flame
00:27:51.179 graph taking 72 percent of the time, and again that was because of the
00:27:56.400 compression thing. And so often it's logging, compression, serialization and
00:28:03.240 deserialization, something along those lines. But basically, you kind of start with looking at a flame
00:28:09.299 graph like this you can make a change and then use continuous profiling to sort of compare the before and after
00:28:15.600 which you know tells a story hopefully of how you improved performance for your application
00:28:21.559 but is in the end pretty useful for understanding why performance issues are
00:28:27.360 happening and how you might be able to fix them um so let me see I believe that is
00:28:34.740 everything um we've got a couple more minutes happy to answer any questions if anybody has
00:28:42.059 any um but otherwise thanks for coming and listening to the talk and have a good rest of your conference
00:28:57.600 all right yeah that's a good question so the question was uh you know sometimes
00:29:02.640 yeah, with the example here, maybe
00:29:08.580 you did something, maybe something actually did change in your application,
00:29:14.940 and you want to be able to tell that apart. One thing that we
00:29:20.279 often do (we use it a lot internally, obviously, dogfooding it) is we add a tag for, like, the commit, and so you can see
00:29:27.120 basically what version your code was on between flame graphs, or
00:29:32.820 add that in here, so that can sometimes explain a change.
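One way to do that, roughly (the env var name and the exact config key are assumptions; check the gem's docs):

```ruby
Pyroscope.configure do |config|
  config.application_name = "my.ruby.app"
  config.server_address   = "http://localhost:4040"
  config.tags = { "commit" => ENV.fetch("GIT_COMMIT", "unknown") }
end
```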
00:29:38.100 yeah I mean typically you're going to have to use this with you know some other tool probably to like help it and
00:29:45.600 it does take some sort of knowledge of what's going on in your application you know maybe CPU increased because you
00:29:51.899 know it's like holiday season and a lot of people are like buying things or something like that um but oftentimes you kind of you know
00:29:58.919 in the real world it'll be like a spike in CPU or something where
00:30:04.020 you're like, that seems fishy, and then you dig in a little bit further. So
00:30:09.480 um it sort of depends case to case but uh you know basically yeah again using tags you can kind of help to weed out
00:30:16.740 some of your like hypotheses on why it might have changed but um otherwise this it kind of helps to
00:30:23.039 just be able to dig down and figure out you know does it look normal to you or is it something that's kind of special
00:30:29.700 yeah, so we use Sidekiq a lot; again, this is a demo so I can't show the actual code, but the way
00:30:36.899 we do it is we just um set those up as in our case two separate applications but
00:30:43.260 um so yeah so you know same way here you kind of have you know the pyroscope server itself and then the Rideshare
00:30:48.600 application but oh sorry I should repeat the question I don't know if this is still on but yeah the question was
00:30:54.779 like Sidekiq and other things that might be running, sort of not in the same
00:31:02.460 place or whatever how does that work and so yeah you can just do the exact same thing
00:31:07.500 um the uh yeah you would just set up a different application name and then you can break it down or if you really
00:31:12.899 wanted to, you could send it to the same application and just use a tag for the Sidekiq job. But yeah, this is
00:31:19.679 actually really convenient for debugging Sidekiq workers, which are notoriously hard
00:31:27.059 to deal with yeah so the question was for this example where we were looking at the uh
00:31:34.740 the compression thing, and we were able to see high versus low.
00:31:42.120 In this case, the way I set up this
00:31:47.700 example was more of like an A/B-test type of situation
00:31:54.120 um we sort of did it you could also do it for example like staging server versus a regular server so you might
00:31:59.940 like you know compare to compare yeah like using a load testing tool for
00:32:05.100 example that's like another way that you can test it um or you know if you test in production
00:32:10.380 you know you could just push the change and see um you know how the metrics change and sort of instead you would instead of
00:32:16.919 seeing you know these two lines running at the same time you could show one line and then compare before and after versus
00:32:23.940 comparing them at the same time you know at different levels so yeah that's another way you can do it but yeah