Get Your Facts First, Then You Can Distort Them as You Please
Summarized using AI


by Steve Sanderson

In the presentation titled "Get Your Facts First, Then You Can Distort Them as You Please," Steve Sanderson discusses the significance of continuous learning and deployment within the context of a startup called Food on the Table, which focuses on meal planning services for families. The talk emphasizes an innovative, metrics-driven approach to product development, setting it apart from traditional product management processes.

Key Points:
- Continuous Learning and Deployment: Sanderson advocates for using a continuous learning model where small, data-driven experiments with users inform product decisions rather than relying on broad assumptions about customer needs.
- Comparison of Traditional Approaches: He contrasts two dominant methods in tech development: one that is heavily planned and organized, and another that is chaotic and directionless. Sanderson proposes a more organic growth model based on iterative feedback.
- Initial Experimentation: Sanderson shares a story of their initial approach, where they engaged with real customers to simulate the meal planning process, testing their hypothesis before investing in technological solutions.
- Identifying Business Metrics: The significance of establishing essential metrics is highlighted, specifically targeting user retention and engagement to create a sustainable business model.
- Leveraging Technology for Feedback: Instead of limiting their interactions to in-person meetings, the team utilized digital tools to efficiently share and receive feedback, all before any formal coding began.
- The Role of Data in Decision Making: Emphasizing the importance of tracking KPIs, he describes how they collected user event data and sought to reconcile it without losing control over their metrics.
- Testing Hypotheses Quickly: The narrative includes anecdotes about identifying user challenges and testing assumptions using minimal efforts, leading to quick iterations rather than large, time-consuming projects.
- Insights on User Behavior: Sanderson discusses how focused testing, such as optimizing the user experience on their grocery list page, helped identify key factors influencing retention rates.
- Importance of Adaptation: The presentation concludes with insights into how a flexible, testing-oriented approach allows for quick adaptation to user needs, helping the startup manage unexpected growth when featured on popular platforms.

Conclusion and Takeaways:

Steve Sanderson encapsulates the essence of the entrepreneurial journey at Food on the Table, illustrating how adopting a mindset of continuous feedback and adjustment helps mitigate developmental risks. He encourages participants to embrace experimentation and agile methodologies in their tech endeavors, suggesting that such practices enhance relevance and value to customers.

00:00:09.559 I'm really glad to be here. This is my second chance to speak here at Lone Star. When I was here a couple of years ago, I had a pretty different job with a different company. Back then, I was with a company called Five Runs. I can see at least two people here who have heard of that company. It's no longer around, but I am, and I've moved on to something else that I'm honestly excited about in a whole different way. That's what I'm here to talk about today.
00:00:40.260 I'm working in a small startup called Food on the Table. As you can guess, it is not a technology company, and we do not deliver products to technology folks. What we do is deliver services to moms, dads, families, singles, and adults who need help with meal planning, saving money on their meal planning, and eating healthy and nutritious meals without wanting to spend hours on it. It typically takes a couple of hours a week for planning. They want to know what's on sale at their grocery store, so we pull in a lot of data, integrate it, and deliver a personalized set of recipes and options regularly. It's a consumer product, a consumer service in Austin, which makes me very happy.
00:01:16.740 However, we do things pretty differently—significantly differently than pretty much every other startup I've worked at before. I've worked at startups before, and I believe many of you have as well. You may have had the experience of doing great work and producing really good code: product management gives you direction and feedback on which features to work on based on market demands, you and your great team go off and build something cool, then deliver it and wait for customers to come through your doors. But almost inevitably, the first feedback you receive is that it doesn't quite work the way they need it to, or that while they appreciate the shiny rotating cube you built, what truly hurts them is a different problem.
00:02:34.260 At that point, you are scrambling for feedback that you can utilize. If you're lucky, you have some heart-to-hearts discussing what you learned. If not, you're left in blame territory—product management blames technology, technology blames marketing for not specifying what to build. It all degenerates into arguments about what went wrong. We do something different at Food on the Table, and I want to share that with you today. My talk is titled 'Get your facts first, then you can distort them as you please,' or, 'Why I love continuous learning and continuous deployment.' This encapsulates part of what we do.
00:03:05.459 Continuous learning and continuous deployment describe our metrics-driven approach to decision-making and to knowing whether what we build is useful. We implement this in incredibly small increments. If you're familiar with XP or agile practices, this will seem familiar, but the key difference is that it permeates the whole business process.
00:03:36.360 Often, we've experienced two alternatives. One is the well-organized, overly planned approach: product management decides what will be valuable and deliverable to customers, and then we build it. The other is chaos, where we just start cranking things out without structure. It feels like these two alternatives dominate. While XP and agile improve the development life cycle, I want to suggest something else: a way to grow organically based on feedback after making small decisions, driven by data rather than opinions.
00:04:59.640 What I hope to present to you is how to continually run live experiments with your users to see what works, gather more metrics than you'd know what to do with—counterintuitive at the initial stages—and continually deploy changes to adapt to those learnings. I will share some stories along the way regarding what we did and the results we achieved.
00:05:38.040 Now, I want you to take a close look at this slide. While I'm not actually going to talk about the content on this slide since it's outdated—this has all been rewritten this morning—I want to provide context about how we got started at Food on the Table.
00:06:16.800 We sought feedback from many individuals regarding our meal planning idea. If you ask a mom if meal planning is painful, particularly with kids who are picky eaters and activities like soccer, and say you can complete it in just 10 minutes instead of two hours (if they are lucky) and save money in the process, the response is overwhelmingly positive. No one typically says it's a bad idea; everyone loves it. It's similar to asking if people like puppies—of course, they do. However, focus groups provide limited insight.
00:07:33.660 Instead of sticking to traditional methods, we decided to turn things around. My background is as a developer, and my business partner has an MBA in marketing. My inclination is to create shiny objects, but we wanted to understand how families actually handle meal planning. We approached a mother, convinced her to be our first customer, and called her to learn where she shops and what she likes to eat. We then gathered a bunch of recipes from Google, cut and pasted them onto paper, and printed them out. We met her at Starbucks and simulated the entire meal planning process for her using two laptops.
00:08:59.880 By the end of the session, she had a nice meal plan that she could use for shopping. We met her again at the same Starbucks the following week, gathered feedback over the phone, and prepared another meal plan to see if she liked it. She did, and we requested that she pay for it. We charged her 10 dollars for our service, which was significant because our fundamental hypothesis was that this could be a valuable service.
00:10:14.220 If we couldn't convince someone to pay 10 dollars, we realized that we were in the wrong business. We were never going to provide the service as a customized personal valet with adults on the phone delivering meal plans. However, we knew that this was the maximum value service we could deliver, so we tested it to see if we could achieve results.
00:10:34.860 Time passed, and we transitioned through phases where we really had to identify our essential business metrics. This audience is an excellent crowd to discuss business metrics. The chart I shared is meaningless, just something I found. I had used another chart discussing gross sadness versus domestic product, which was too glib. Most people here may not want to jump for joy about business metrics, but knowing what is valuable and why is crucial.
00:11:49.200 Our basic business metrics are straightforward. We want users to return periodically and assess how many times they come back, how often they pay, and have a general sense of how to create a sustainable business. Whether it eventually becomes a company that gets sold for 100 million dollars or one that becomes a lifestyle business depends on various factors.
00:12:29.580 Understanding the end goal is essential for progress, and we use that to work backward. At the beginning, Manuel and I literally sat down with the newspaper and Google, cutting and pasting recipes onto paper. We started with one customer, then two, then three, and eventually began to face obstacles. The challenge grew costly as we had to drive to Starbucks every time we wanted to speak with a mom, realizing that this was not a scalable model.
00:13:46.680 To overcome this, we leveraged technology: phones and screen sharing. We transitioned from in-person meetings to digital collaborations. By the time we had three or four customers, we were using phone screen sharing, and later switched to using Google Docs to collaboratively cut and paste meal plans and recipes while they were looking at a URL we emailed them. Importantly, we hadn't yet written a line of code, yet we were generating revenue and iterating our way toward improving our business metrics.
00:15:37.320 This process reflects a clichéd notion about identifying obstacles: look at your current process relating to the metric you want to enhance, determine the primary obstacle, and focus on improving it. This pattern has been our guiding principle over the past year. Whenever we find ourselves confused, we stop and apply it: why are we working on this? Is this the most significant problem we are facing?
00:16:19.680 Just in case any clarification is needed: we are not involving dogs or tape measures in this conversation. Rather, the takeaway is that while we never want to do more work than necessary, we take a different mindset toward metrics. We instrument everything we do in various ways. Initially, we tracked major events through user actions—going to pages, creating items, and deleting things—and recorded them in an event log.
00:17:16.680 We utilized services like KISSmetrics and Google Analytics, but with data spread across multiple services, it was challenging to reconcile. Because each service transformed the data in its own way, aligning the discrepancies became a hassle. Ultimately, we decided that no event or metric data would leave our environment unless we kept a copy in our own local storage. That way, we could control and reconcile everything.
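The "keep a local copy first" rule can be sketched as follows. This is a minimal illustration, not Food on the Table's actual code: every event is appended to our own store before being fanned out to any third-party analytics service, so a downstream outage never loses the canonical record. The `EventLog` class and its method names are assumptions for illustration.

```ruby
class EventLog
  def initialize(forwarders = [])
    @events = []          # stand-in for a database table or append-only log file
    @forwarders = forwarders
  end

  # Record the event locally first, then forward it. If a third-party
  # forwarder fails, the canonical copy still exists and can be replayed.
  def track(user_id, name, properties = {})
    event = { user_id: user_id, name: name, at: Time.now, properties: properties }
    @events << event
    @forwarders.each do |f|
      begin
        f.call(event)
      rescue StandardError
        # a third-party hiccup must not lose our canonical record
      end
    end
    event
  end

  def count(name)
    @events.count { |e| e[:name] == name }
  end
end
```

For example, `EventLog.new([kissmetrics_forwarder]).track(42, "grocery_list_viewed")` would record locally even if the forwarder raises.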
00:18:33.480 This continuous improvement process has been beneficial because we can find issues in our metrics, analyze historical data, and develop hypotheses to address those. For instance, if we discover that our short-term retention is lacking, we can analyze which factors have correlated with users returning a second time and then design a minimal change aimed at addressing that.
00:19:38.520 When we suspected retention issues with our users, we sliced our data in a new way and determined that those who returned for a second visit were strongly correlated with those who added a recipe, looked through the recipe catalog, and printed out a grocery list. Our hypothesis was clear: we needed to ensure everyone had access to this functionality.
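The retention slicing described here amounts to comparing the second-visit rate of users who performed an action against those who did not. Here is a minimal sketch under assumed data shapes (each user is a hash with an `:actions` list and a `:returned` flag); the method names are illustrative, not from the talk.

```ruby
# Fraction of users who came back for a second visit.
def return_rate(users)
  return 0.0 if users.empty?
  users.count { |u| u[:returned] }.to_f / users.size
end

# Split users on whether they performed the action, and compare
# return rates in each group (correlation, not causation!).
def retention_lift(users, action)
  did, did_not = users.partition { |u| u[:actions].include?(action) }
  { with_action: return_rate(did), without_action: return_rate(did_not) }
end
```

A large gap between the two rates flags an action worth testing, though as the next section notes, it only shows correlation, so the causal hypothesis still needs an experiment.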
00:20:17.880 I proposed that the solution was to place everything on the homepage, but our team's discussions revealed the difference between correlation and causation. We examined user flows to identify where users got stuck, finding that the largest drop-off rate occurred on our grocery list page, prompting us to focus our efforts there.
00:21:02.520 As a team, we brainstormed potential solutions, including changing the overall user flow and enhancing the visual design. While I suggested radical revisions to the structure, other ideas floated included pulling some of the data from the grocery list page into the current page to motivate users to proceed. One invaluable lesson stemmed from involving a critical thinker in our brainstorming process who challenged our ideas. When he suggested simply implementing a pop-up reminder, it drove home the point: the objective isn't to create a seamless user experience from the outset but rather to test hypotheses with minimal work.
00:22:51.960 This meant checking our egos at the door; as a team, we would potentially have to implement solutions that could be perceived as ugly by others. However, the goal was to ascertain whether injecting a pop-up would lead to a notable increase in grocery list page views, and we executed this test quickly, taking mere minutes versus days.
00:23:42.840 Unexpectedly, this method resulted in an increase in the number of users transitioning to the grocery list page. However, it did not significantly uplift our short-term retention rates, indicating that our hypothesis of the pop-up being a solution was incorrect. Had we invested days building an elaborate redesign, we would have found out too late that the issue lay elsewhere. Instead, investing only hours to discover this was tremendously beneficial for the startup.
00:24:50.040 Key insights revolve around the crucial notion of isolating tests. When multiple experiments are conducted simultaneously, they may influence one another; hence, ensuring that any participants in a test have a similar experience is important for accurate result comparison.
00:25:35.520 One of our primary testing tools is Vanity, which manages split tests effectively. It's crucial that a user in a test is assigned a distinct path and consistently sees the same conditions on every return visit. Initially, we used Vanity out of the box, but modifications were necessary to support non-50/50 splits, where we might want only a subset of users participating.
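The two properties described here—sticky assignment and unequal weights—can be shown in a small sketch. This is not Vanity's internal code; it is one way to bucket users deterministically with weighted alternatives, so the same user always lands in the same variant, and a variant can be limited to, say, 10% of users. All names are illustrative.

```ruby
require 'digest'

# Deterministically assign a user to a variant, honoring weights.
# weighted_alternatives is e.g. { popup: 10, control: 90 }.
def assign_variant(experiment, user_id, weighted_alternatives)
  total = weighted_alternatives.values.sum
  # Hashing experiment + user makes the assignment sticky across visits.
  bucket = Digest::MD5.hexdigest("#{experiment}:#{user_id}").to_i(16) % total
  weighted_alternatives.each do |variant, weight|
    return variant if bucket < weight
    bucket -= weight
  end
end
```

For example, `assign_variant("grocery_popup", 42, popup: 10, control: 90)` puts roughly one user in ten into the pop-up variant and returns the same answer on every call for that user.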
00:26:43.440 Additionally, Vanity was built to track only one metric at a time. Given that we often need to measure multiple factors, we've constructed internal systems to track our metrics while still leveraging Vanity's capabilities. Thus, as we near conclusions from running a particular test, additional data slicing helps confirm findings and better understand outcomes.
00:27:58.260 Through a continuous testing approach, we’ve seen the kind of outcomes that ensure relevance to our users, allowing us to adapt quickly and reject unnecessary code that doesn't benefit the customer. This lesson is vital: we must test hypotheses with minimal inputs before scaling them.
00:29:10.680 As we move toward continuous deployment—a buzzword that many have heard of—understanding that the pipeline’s input of tests, logs, and hypotheses allows for frequent iterations becomes essential. We operate through a rapid deployment schedule, allowing developers to deliver high volumes of code changes seamlessly.
00:31:19.560 In summary, what we've learned is significant. During our year of focused experimentation, we experienced a sudden spike in visibility when we got featured on Lifehacker, resulting in a dramatic increase in registrations. Our system scaled effectively to manage this influx. While we aren't yet profitable, we're generating revenue, validating the relevance of our services, and avoiding the pitfalls of excess code that ultimately serves no purpose.
00:32:39.480 I based my experience on the premise that continual feedback and learning mitigate the risks of wasted development efforts. If any of this resonates with you, I encourage you to engage further—it has been a remarkable journey.
00:34:07.080 I'm happy to take any questions now, and feel free to come find me if you are interested in working with us or in what we're doing.
Explore all talks recorded at LoneStarRuby Conf 2010