Continuous Deployments and Data Sovereignty: A Case Study
Mike Calhoun • April 17, 2018 • Pittsburgh, PA

Summarized using AI

In this presentation titled "Continuous Deployments and Data Sovereignty: A Case Study," Mike Calhoun addresses the challenges of deploying a Rails application within the constraints of data sovereignty laws across different countries. The discussion begins with the landscape of data regulations, particularly focusing on how these laws affect the deployment of applications dealing with sensitive data, like health information. Calhoun identifies two critical frameworks: the Health Insurance Portability and Accountability Act (HIPAA) and data sovereignty laws, emphasizing the necessity of maintaining compliance while scaling globally.

Key points discussed in the video include:

  • Introduction to Data Sovereignty: Calhoun explains data sovereignty, which mandates that data collected is subject to the laws of the country where it is stored. This requirement complicates cloud computing solutions, where data often crosses international boundaries.

  • Continuous Deployment vs. Continuous Delivery: The speaker clarifies these concepts, noting that continuous deployment automates the entire release process (here, across multiple geographic regions), while continuous delivery keeps code perpetually ready to deploy but leaves the final trigger manual.

  • Case Study of a Healthcare Startup: Calhoun shares the journey of a healthcare startup that initially designed its backend to comply with HIPAA, only to later discover that it needed to accommodate various international data regulations. This highlighted the importance of preparing for global clients early in application development.

  • Deployment Challenges: The speaker discusses different deployment strategies, including the problematic approach of creating separate production branches for different regions and presents a more manageable solution of regional deployments. The latter allows for a single code base with separate translation files, reducing logistical complications.

  • Utilization of AWS for Regional Deployments: The case study demonstrates how using AWS allowed the startup to deploy their application effectively across different regions, ensuring compliance with local data laws while maintaining a streamlined code management process.

  • Lessons Learned: Calhoun emphasizes the steep learning curve associated with deploying across multiple regions. He stresses the need for robust legal guidance and the importance of being aware of international data laws, the potential costs associated with global data hosting, and the necessity of ensuring user data security.

Ultimately, the talk serves as a comprehensive guide for developers and technology leaders on how to navigate the complexities of continuous deployments while adhering to stringent data laws in a global landscape. The core conclusion is that understanding and planning for these regulations from the outset can significantly ease the deployment process and avoid costly missteps.


RailsConf 2018: Continuous Deployments and Data Sovereignty: A Case Study by Mike Calhoun

In any production Rails application's simplest form, there is one version of the app deployed to a single host or cloud provider of your choice. But what if there were laws and regulations in place that required your application to be replicated and maintained within the geographical boundaries of other countries? This is a requirement of countries that have data sovereignty laws, and a regular hurdle to overcome when dealing with sensitive data such as protected health information. This talk provides a case study of how we devised an automatic deployment strategy to deploy to multiple countries.

RailsConf 2018

00:00:10.490 All right, hi! This is more of a turnout than I was expecting for such a dry topic, so thank you guys for coming. I also want to say thank you very much to the conference organizers for the amazing keynote. That was really great, and I walked out of that feeling really inspired. So hopefully, I'm not a letdown right after that, but maybe you guys will find this interesting. We'll see.
00:00:20.910 Firstly, hello! My name is Mike Calhoun. I'm super happy to be here in Pittsburgh. I learned that this is the birthplace of the Big Mac, which I didn't know. Also, the Klondike bar is from Pittsburgh; that's cool! I spend a lot of time in Philadelphia, so I finally got a chance to decide between Wawa and Sheetz for myself, and I will not disclose my pick. I'm not picking any fights or dealing with tribalism here.
00:00:37.399 My wife actually has some family from Johnstown. That aside, I'm from Vancouver, Washington, which is probably known more for its more popular suburb, Portland, Oregon. I live there with my wife, and we have two cats and a new baby. The only other thing I called out specifically in my speaker bio is our corgi, aptly named Ruby.
00:00:57.120 You may know me from my role as the current Chief Technology Officer at Life.io, though I am sad to be stepping down from that role and starting at Stitch Fix next month. I'm super excited for that! I talk really fast, and if you can hear the tremble in my voice right now, I'm trying really hard to concentrate on that. Last night, I did a test run, and I was at like 35 minutes exactly. I previously talked about failure, and I went through those slides so quickly, I think it was maybe 10 minutes long. Everybody had extra time to grab coffee, so I'll try to do something similar this time.
00:01:29.220 I'm excited to talk about something other than failure. Let's not talk about how bad I am at my job! As a little preamble to that, I am going to reference a lot of companies and a lot of products. I'm not specifically endorsing any of them over their competitors; I think the products we use are great, and the products we didn't use are also great. A lot of them have people here—sponsors and whatnot—and I love everybody. Everybody is super awesome; this is a great industry we work in. So please don’t take that as an indictment—or a lack of endorsement either.
00:01:54.329 Now, I have a podium, so I'm going to say a few words on data. This is not exactly connected to the topic of my talk, but I think it’s really important that we keep talking about it. We generally work in information and data, and we have a certain amount of trust that our users expect us to do the right thing with that data. These are becoming huge topics and are being thrust into national conversations, especially in light of things happening with Facebook and Cambridge Analytica.
00:02:22.650 When other industries made rapid advances in their arenas, regulatory control and oversight emerged. Look at the Industrial Revolution: we were forced to establish fair labor practices, overseen by the government. Nuclear science brought atomic weapons and nuclear energy, and we established the Nuclear Regulatory Commission. The EPA emerged in response to abuses from industry, so maybe something akin to a Consumer Data Protection Agency is what we need.
00:02:54.170 I’m not the person to litigate that; I am not in politics. I am just someone who was given a microphone. But that said, we do have to consider that not all societal and political problems have technical solutions. Until then, it is up to us to be aware of the laws that attempt to govern our industry and broker trust with our users. That aside, I want to outline a few terms for this talk.
00:03:13.019 Specifically, these are topics that we came into contact with, and I think it would benefit us to establish a shared vocabulary. So first is the Health Insurance Portability and Accountability Act (HIPAA), and this is the main culprit for why we initially took some steps that ultimately got us into trouble or at least forced our hand in many ways.
00:03:32.730 HIPAA was enacted in 1996 in the United States and had two main purposes: to provide continuous health insurance coverage for workers and, more specifically to us, to reduce the administrative burdens and costs of healthcare by standardizing the electronic transmission of administrative and financial transactions. This was the first time the government took steps to protect electronic health data, which is really important because prior to this, there weren't many rules regarding the disclosure of breaches or the practices required of data handlers.
00:03:54.750 There are still parts of HIPAA that are ambiguous; they literally say a consumer of this data will make a best effort. But how do you define a best effort? I don't know; I didn't write it down on a piece of paper and leave it at a coffee shop. In 2010, they added breach notification rules extending to entities not traditionally covered by HIPAA. Now it's not just doctors' offices and hospitals; it's anybody capturing this data. If we encounter a breach, we are required to notify the Health and Human Services office.
00:04:14.760 In 2013, they added what's called HITECH—the Health Information Technology for Economic and Clinical Health expansion—continuing to expand the rules and requiring regulation to accommodate new and developing technologies. Then in 2016, we saw additions and provisions for cloud services, as that became the direction the industry was gradually starting to take. It's a little late to the game, but required nonetheless; I guess we can't expect rules and regulations to keep pace with technology.
00:04:49.860 Next up is data sovereignty, which is sometimes used interchangeably with data residency. They are similar but not the same. Data sovereignty is the idea that data is subject to the laws and governance structures of the nation where it’s collected. For example, I could be a German citizen living in the United States and seeing a doctor here. If my data is stored in the United States, it’s subject to United States law, not German law.
00:05:13.530 In such a case, I would be subject to HIPAA regulations. The common criticism here is that data sovereignty measures tend to impede or have the potential to disrupt processes in cloud computing. This was a big reason why they started making provisions to loosen those restrictions.
00:05:37.250 Data residency, on the other hand, basically requires that if you live in a country with data residency laws, your data must be stored and processed within the geographic boundaries of that country. Australia is a great example of this: if you're capturing any kind of health-related data there on AWS, you must use their Sydney data center.
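As a rough illustration (not from the talk), keeping Australian data in-country on AWS comes down to pinning every client to the Sydney region; the gem, bucket, and key names here are assumptions:

```ruby
# Minimal sketch of the data-residency idea, assuming the aws-sdk-s3 gem.
require "aws-sdk-s3"

record_json = '{"patient_id": 123}' # stand-in for real health data

# ap-southeast-2 is AWS's Sydney region, so the object never leaves Australia.
s3 = Aws::S3::Client.new(region: "ap-southeast-2")
s3.put_object(bucket: "health-records-au", key: "patients/123.json", body: record_json)
```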
00:05:53.610 Now, let’s talk about continuous deployment, and I may have cheated—I'll fully admit that. Continuous deployment versus continuous delivery: continuous delivery means your code is in a constant state of being ready to be deployed, but sometimes you can’t automatically trigger that. We had client-related concerns; they wanted to verify certain things in staging servers.
00:06:17.240 So for production, I'll use an example here that's more akin to continuous deployment, but it's like half a step short of continuous delivery. I like a quote I dug up from Twitter. Now, let's get into the case study aspect of this. First, let's set up the problem: we're going to be a healthcare startup, so this is exciting! Everybody's fists are in the air!
00:06:52.520 We’re going to capture sensitive user information and expect that our users trust us with this data. More specifically, we’re going to be a Software as a Service (SaaS) startup, putting this application out there for the world. We’ll probably use a cloud provider, and it’s going to function as a multi-tenancy single platform.
00:07:12.600 So, all users will log into it, which brings up the familiar myth of convenience: we have great tools. We just saw an amazing keynote about how we have great tools for reducing barriers to entry. I don't need to know DevOps; I can deploy this to Heroku easily, without necessarily knowing much SQL, thanks to Active Record.
00:07:35.820 Back in the day, we all had TortoiseSVN with its cartoon turtle, and there's a whole world of CI apps out there to do this. And this encompasses the majority of what we can reasonably expect to need. But then, sometimes you wind up in these situations where you have decided to be a SaaS company, collecting sensitive user information.
00:07:55.320 We’re going to assume all of our clients are in the United States. So, whoops! Then our first client is not in the United States; now we need to look at their laws and evaluate our infrastructure. One major conclusion you may find is that you’ve made a huge mistake; all along, you’ve made these assumptions that are suddenly thrown out the window.
00:08:18.530 So, you have to take a look at your international logistics. This is the first time we considered requirements beyond HIPAA. This is kind of weird because Canada, Australia, and other countries each have their own set of rules. The United States has less restrictive rules than some South American countries, where such rules are written into their constitutions.
00:08:43.290 The UK had a set of rules, gave them up to join the EU, and is now developing its own set of rules again. We took a close look at potential global entities because we knew this would be an issue. We work with groups like Nike, which has its headquarters in Beaverton, Oregon.
00:09:07.590 They have a fair amount of users in Oregon, but they also are global. They’ll have offices in Africa, Australia, Asia, and South America—each of which will have its own set of rules. So, we had to take stock of what would work, and that led us to AWS.
00:09:22.440 We realized that we weren’t replacing our Heroku setup; we just wanted to augment it and needed a solution that would accommodate these rules. We knew we had to find a place where this was possible. Now that we had AWS, the question became: how were we going to integrate this into our deployment toolchain?
00:09:43.450 This brings us to option one: what if we created a new branch for every region? You could have a production-USA branch, and so on; this seemed like the most obvious solution. We offer some basic white labeling for our clients, so it seemed it would be easier to accommodate those needs: handling region-specific requests would be easier if we had separate translations, for example. I could just swap out the English and put in whatever else I wanted.
00:10:14.260 There’s a low initial time cost involved in creating branches—after all, we’ve all created a new Git branch; it’s pretty easy. However, the disadvantage is that this becomes a complete logistical nightmare. Imagine your code gets approved on staging—everything looks good, and you're not just merging into production now, you’re merging into five different production branches.
00:10:32.630 Keeping all those branches squared away can be a nightmare. God forbid one of those production branches doesn’t get the same code as another one, or you have some translation issues. It’s just not sustainable in a timely and efficient manner. Then we looked at option two, which was what we called regional deployments.
00:11:02.410 With this option, we maintained one codebase, which meant that all of the translation files would have to sit in the same repository. It continues the notion of the single platform, multi-tenancy. So let’s do an example. I can’t show the app we used, but I created a little demo app, and I hope this comes through.
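Before the demo, here is a rough sketch of how a single codebase can carry per-region translations using Rails's built-in I18n; the locale files and the current_tenant helper are hypothetical, not the actual app:

```ruby
# config/locales/ holds one file per region, all in the same repository:
#   en-US.yml, en-AU.yml, fr-CA.yml, ...
# Each request then runs under the tenant's locale.
class ApplicationController < ActionController::Base
  around_action :switch_locale

  private

  # `current_tenant` stands in for whatever object knows the deployment's region.
  def switch_locale(&action)
    locale = current_tenant&.locale || I18n.default_locale
    I18n.with_locale(locale, &action)
  end
end
```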
00:11:31.229 At the top, there's a test suite—it has one spec and one feature spec. The spec just says it expects the goggles to equal nothing. This will come together, and it’s just going to render what’s below; it’s just a hello world page that shows an image. I have a small test suite—it passed, and you can see it runs locally.
00:12:00.719 This is all with the intention of helping move quickly. Here’s the dashboard summary, and there’s a few things to call out. At the top, I have my master branch, and it’s passed. For all intents and purposes, I’m using this as my production branch. Below that, there’s a section called servers, and we have our United States Heroku server where we’re deploying this.
00:12:31.560 This kind of mirrors what we had for our initial infrastructure. Then we add our new application in AWS, where you can see we're in Oregon. This is the region we're going to deploy to. For some reason, in this scenario, Oregon and the United States are two different jurisdictions that have their own laws, which sounds crazy.
00:12:57.860 But actually, Canada passes rules governing health data by province, so it’s not that crazy. I have a little demo environment, which we fondly refer to as the RailsConf 2018 app. You can see it’s tracking different regions for deployment.
00:13:20.440 On the next screen, I have three panels. Originally these were separate, and now they're a little sloppy, but they're all logically linked. The first, on the far left, is setting up deployment for the RailsConf 2018 app. It offers some out-of-the-box solutions, and the list scrolls down for a while. Elastic Beanstalk is the one we need, so we click that.
00:13:50.530 Then it takes us to the next screen. If we're doing continuous delivery, we can choose 'manual' to retain some control over that. For this purpose, we’ll go with 'automatic.' At the bottom right, it asks what branch you want to deploy.
00:14:13.320 We pick the master branch; you can use whatever branch suits your needs. After that, I won’t give you my AWS credentials, but let’s focus on the region. You get a list of regions associated with this account. I select Oregon.
00:14:36.030 Then it automatically fills in the name of the application and environment, so I choose my demo app. You can select an existing environment or create one if you want. It determines where your code will be deployed before it goes live to its server.
00:14:56.410 You provide your server with a name to make it meaningful for your navigation, and because you want to be a good citizen developer. Once that’s set, you’re ready to deploy. You click deploy, and now your application is deploying. Going back to your dashboard, you’ll see production in Oregon is currently deploying, and all your tests are still passing.
00:15:20.880 Eventually, your code shows up, and you can access it via the designated link. For the sake of this demo, we've expanded to four regions. Now I have Canada—Toronto, to be specific—Oregon, Sydney, and my Heroku app.
00:15:44.340 This is going to be uncomfortable, so let’s see what happens. I prepared a video to showcase all this in action. Is it playing? Cool! It doesn’t play on my screen, so I’ll navigate off of this. I made a change; I’m going to commit now.
00:16:06.990 The commit message just says 'deploy because…' and you’ll see that the application is now deploying. I’m playing this at double speed so it’ll jump suddenly, and I might get nervous as I navigate it.
00:16:29.460 So the master branch is building, and it only needs to pass that one test, so this shouldn’t take too long. This is a free account, and I didn’t pay extra for the demo, not thinking I’d have to narrate it.
00:16:54.750 There it goes—the test passes. That kicks off builds in all these regions at once: first Canada, and then Sydney. Those will take a couple of minutes, and it’s running the test suite for me. In the case of these AWS builds, it’s taking the GitHub repository, zipping that up, and then sending it off to the S3 bucket before unpacking it onto the server.
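Per region, that zip-to-S3-to-server pipeline amounts to something like the following sketch, assuming the aws-sdk-s3 and aws-sdk-elasticbeanstalk gems; the application, environment, and bucket names are made up:

```ruby
require "aws-sdk-s3"
require "aws-sdk-elasticbeanstalk"

region = "ap-southeast-2"          # Sydney, for example
bucket = "deploy-artifacts-sydney" # hypothetical per-region artifact bucket
label  = "v-#{Time.now.to_i}"
key    = "railsconf-demo/#{label}.zip"

# 1. Ship the zipped repository to S3 in the target region.
Aws::S3::Client.new(region: region)
  .put_object(bucket: bucket, key: key, body: File.open("build.zip"))

# 2. Register the bundle as a new Elastic Beanstalk application version...
eb = Aws::ElasticBeanstalk::Client.new(region: region)
eb.create_application_version(
  application_name: "railsconf-demo",
  version_label:    label,
  source_bundle:    { s3_bucket: bucket, s3_key: key }
)

# 3. ...and roll the environment onto it.
eb.update_environment(environment_name: "railsconf-demo-production",
                      version_label: label)
```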
00:17:16.830 Earlier, I said this would be done by the time I finished that sentence, but I'm speaking a little fast. There we go, all right—Sydney deploys first! There's a winner! Now Oregon starts building, Canada finishes, and Heroku starts building.
00:17:40.950 I have tabs open that you can’t see, so I’ll click into them. There—it’s deployed in Sydney! I apologize for rushing. I can come back and pause this at any point. There’s Heroku briefly; I clicked into the Heroku app.
00:18:03.190 You can see the Heroku United States app was deployed. Now we’re just waiting for Oregon. We’ve seen Canada; we’ve seen Sydney; and the United States default Heroku is in Northern Virginia. Oregon is going to be the last one to cross the finish line, and it’s done!
00:18:25.860 I’ll click over to it, and there it is! That’s the end of the video. So that was it; that was basically the exact infrastructure we built for ourselves. Every time we pushed the master branch, it would automatically trigger these deployments to occur globally, streamlining a stressful process.
00:18:48.410 From this case study, we had several findings. The pros were that this approach was very effective and scalable. The demo exemplified that, and it's even more effective without this nervous narration, when it all just happens while we sleep!
00:19:07.850 However, there was a steep learning curve to get there. Everybody is super awesome! I love these products. AWS Elastic Beanstalk—their setup was a bit more complex than Heroku, and getting everything to work in harmony was a little tricky as well. But once you got past that learning curve, it was easy to manage.
00:19:37.170 Managing all those server configurations could also get tricky; you need to have a more scalable solution for replicating your application harness.
00:19:59.470 We also experienced an initial loss of functionality regarding the social features we had built in. As we discussed next steps, it felt a bit odd after the keynote, but it seems like there could be a case for decomposing the monolithic application we were deploying.
00:20:18.620 The vector we were narrowing in on was taking our identifying information—both Personally Identifiable Information (PII) and Protected Health Information (PHI)—and building a data service that would house this information in the appropriate regions, only sending off user IDs to a social server.
00:20:43.710 As users requested friendships, we would capture those IDs and encrypt them—perhaps with AES-256—and theoretically, this would accommodate the rules, because you wouldn't actually be sending identifying information out, preventing any backtracking attack.
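A minimal sketch of that encryption step, using Ruby's OpenSSL bindings; the talk specifies only AES-256, so the GCM mode and token format here are my assumptions:

```ruby
require "openssl"
require "base64"

# The key lives with the regional data service and never ships to the social server.
KEY = OpenSSL::Cipher.new("aes-256-gcm").random_key

def encrypt_id(user_id)
  cipher = OpenSSL::Cipher.new("aes-256-gcm").encrypt
  cipher.key = KEY
  iv = cipher.random_iv
  ciphertext = cipher.update(user_id.to_s) + cipher.final
  # The social server only ever sees this opaque token: IV + auth tag + ciphertext.
  Base64.strict_encode64(iv + cipher.auth_tag + ciphertext)
end

puts encrypt_id(42) # an opaque token; no PII/PHI crosses the region boundary
```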
00:21:11.420 Realistically, attacks happen all the time, but you would be able to detect when someone is orchestrating a sophisticated attack. However, this introduces operational costs, as moving from supporting one server in Northern Virginia to deploying across the globe is not cheap.
00:21:30.860 The operational costs in those regions expand depending on the remoteness of a region, including costs of electricity. You need to build that into your calculations. If you are dealing with large organizations, they can afford to integrate those costs into their contracts. However, if you are operating on a smaller scale, it’s not advisable to go this route right from the start.
00:21:50.510 With all this in mind, I would recommend being very mindful of your audience before you build something. Admittedly, we never would have expected our first clients to be outside the United States. The next thing I knew, I was flying to Australia and the United Arab Emirates to understand their laws.
00:22:11.110 Had we considered a global infrastructure from the start, we might have been more proactive in preparing provisions to accommodate it early on, or we could have crafted a more robust plan of attack.
00:22:31.890 If you are storing sensitive data, be aware that it is subject to laws. These laws may not change at the same pace as your application, but they will evolve, and you need to stay informed about how they may affect your compliance or whether compliance will even be a requirement.
00:22:51.919 At the end of the day, just because something is there does not mean you need it. Looking back, we might have built this application using microservices from the start, but we chose to move quicker and establish a model that was easier for our team.
00:23:14.340 Now we realize that maybe down the road, we should revise this approach—thank you! My name is Mike Calhoun, and if you want to reach out, you can find me on Twitter, GitHub, or anywhere else on social media. My username is usually available as Michael One.
00:29:26.440 At some point, we realized that what worked in our default Heroku production would probably work across the globe, and this is mostly true. The biggest hurdle in this case was translations; Australian English differs from American English, and both differ from Spanish.
00:29:39.980 Even with a robust test suite, it is a tough process; when we have to deploy to five servers, it requires some coordination. To answer your question: yes, there have been instances where we wanted to deploy features only available in the United States or specific regions.
00:30:04.080 In those cases, we employed some creative database tricks. One effective tool we have used is a gem called Flipper, which allows us to enable and disable features for specific records according to our requirements.
00:30:32.120 In this case, we would have a parent organization that may have many companies and branches. If we only wanted certain features to be visible to particular companies, we could enable that for only them through Flipper.
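A small sketch of that Flipper pattern; the feature and model names are hypothetical, and any object responding to flipper_id can act as a Flipper actor:

```ruby
require "flipper"
require "flipper/adapters/memory"

flipper = Flipper.new(Flipper::Adapters::Memory.new)

# Stand-in for the Company model; Flipper identifies actors by #flipper_id.
Company = Struct.new(:id) do
  def flipper_id
    "Company;#{id}"
  end
end

us_company    = Company.new(1)
other_company = Company.new(2)

# Turn the feature on for one company only.
flipper.enable_actor(:us_only_feature, us_company)

flipper.enabled?(:us_only_feature, us_company)    # => true
flipper.enabled?(:us_only_feature, other_company) # => false
```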
00:30:52.410 To address the requirements and coordination needed for compliance, we generally have a small team of one or two engineers discussing directly with our legal counsel. When entering a new region, we try to find local legal counsel to ensure we are compliant with their laws.
00:31:04.680 On the client side, enterprise clients operating at a global scale often have their own legal counsel and security checks. You work closely with them and clarify any unreasonable demands while making sure you keep them informed throughout the process.
00:31:20.620 With this notion in mind, you work together as partners. If there's a data breach, everyone shares the blame, and whether or not it turns out to be anybody’s fault, everyone will be affected by it.
00:31:38.820 Regrettably, we have turned down clients when compliance could not be achieved, but we wish them well on their journey.