Software Development

Summarized using AI

DevOps for The Lazy

Aja Hammerly • April 21, 2015 • Atlanta, GA

The video "DevOps for The Lazy" by Aja Hammerly, presented at RailsConf 2015, explores how automation tools like Docker and Kubernetes can simplify DevOps processes, particularly for developers who prefer to minimize repetitive tasks. Aja, passionate about solving complex problems yet admitting to a "lazy" approach, emphasizes the importance of building systems that can self-maintain.

Key points discussed include:

- Understanding Containers: Aja defines containers as a means for multiple applications to share hardware with a minimal footprint, similar to packaging an application with all its necessary dependencies.

- Using Docker: Aja prefers Docker for its community support and ease of use, demonstrating how to set up a simple Rails application using Docker.

- Creating a Rails App: The presentation walks through building a basic to-do list app, focusing on modifying the Gemfile and Dockerfile, implementing PostgreSQL for production, and managing the Docker environment.

- Advantages of Containers: Key benefits include consistency across environments, fast startup times compared to traditional VMs, flexibility in deployment processes, and portability across platforms.

- Kubernetes for Orchestration: Transitioning to Kubernetes, Aja explains its role in managing clusters of containers, with components like pods for application deployment, services for load balancing, and replication controllers for scaling applications.

- Challenges: Aja cautions that containers aren't needed for every project, invoking the YAGNI principle, and flags considerations around security and resource management.

Aja concludes with a note on the advantages of leveraging cloud services offered by Google, highlighting a risk-free trial to encourage exploration of these DevOps tools. The session wraps with an open floor for questions, demonstrating the speaker's engagement with the audience and commitment to assisting individuals in their DevOps journey.

DevOps for The Lazy
Aja Hammerly • April 21, 2015 • Atlanta, GA

By Aja Hammerly
Like most programmers I am lazy. I don't want to do something by hand if I can automate it. I also think DevOps can be dreadfully dull. Luckily there are now tools that support lazy DevOps. I'll demonstrate how using Docker containers and Kubernetes allows you to be lazy and get back to building cool features (or watching cat videos). I'll go over some of the pros and cons to the "lazy" way and I'll show how these tools can be used by both simple and complex apps.


RailsConf 2015

00:00:12.000 Hello everyone, I'm Aja Hammerly. I'm a developer advocate at Google. You can find me tweeting at @thagomizerrb. All the code I will show in this talk is available on GitHub, where I go by the same name, thagomizer, in my examples repository.
00:00:17.760 You can find it in the RailsConf 15 folder. Before we proceed, I need to mention that all the code presented in this talk is copyrighted by Google and licensed under Apache v2. Now, first things first, I want to clarify that I'm not an expert at operations. In fact, I tend to fit the description of a jack-of-all-trades, master of none.
00:00:36.640 I've spent some time racking servers, and I have the bloody knuckles to prove it. I've built systems that look impressive from the outside, but my ops skills haven't reached true mastery, and every time I attempt anything too professional or fancy, I end up creating something that resembles a messy hack.
00:00:48.640 Why is that? Because I’m fundamentally lazy. I’m passionate about certain topics like solving problems and building algorithms, but I find processes related to deployment, operations, and maintenance to be tedious and not particularly enjoyable. I believe it's perfectly fine to feel that way, and by 'lazy,' I mean minimizing effort to achieve success and stability.
00:01:12.400 Today, I want to build systems that let me be lazy because they largely maintain themselves. One excellent way I've discovered to accomplish this is through the use of containers. So, what exactly is a container? According to multiple sources, a container enables multiple applications to securely share hardware while maintaining a small footprint—often smaller than with virtual machines.
00:01:43.119 In my view, containers can also be seen as a package for your execution environment. They wrap up your operating system, build dependencies, necessary libraries, and perhaps your application code all into a neat bundle. This concept reminds me of the days when I used to create small applications and share them on floppy disks, ensuring to include all necessary DLLs required to run the applications.
00:02:21.200 So how do you use containers? I utilize Docker for this purpose, though there are numerous container frameworks available. Wikipedia provides a comprehensive list and a nice chart of all of them. I particularly like Docker for its robust community and ecosystem that support it, especially since many common web frameworks, including Ruby on Rails—our focus today—have containers available on Docker Hub, essentially a repository for container images.
00:02:56.400 Now that we've established Docker sounds promising, let's dive into a brief demonstration using a basic Rails application. When I say basic, I mean very fundamental, built solely with the scaffold tool. This application represents a simple to-do list.
00:03:10.400 It contains standard Rails elements—there's a title, a very long title, some notes, a due date, and a completion integer where you can specify the percentage complete. I'm sure most of you could set up something similar quickly, but let’s go through all the necessary steps to create this application.
00:03:29.280 There are a lot of tutorials dealing with Docker and Rails, but most of them stop short of adding models. A Rails application without models feels quite limited, so we'll create a simple model with attributes like a title, notes, a due date, and a completion parameter represented as an integer.
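For reference, a scaffold along these lines would produce the app described; the exact model and column names here are my reconstruction from the demo, not taken from the talk's repository:

```sh
rails new todo
cd todo
rails generate scaffold task title:string notes:text due_date:date completed:integer
```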
00:03:49.440 For my production database, I’ll be using PostgreSQL, while SQLite will be employed for development and testing. Many of you are likely familiar with this setup—you’ve probably implemented it numerous times before.
00:04:20.079 However, when using PostgreSQL in production, it’s essential to modify the default Rails Gemfile accordingly. As you can see, the necessary adjustments are highlighted in red. You will need to create a new group for production and include the pg gem there, while relocating SQLite, which is set in the default group, into the development and test groups.
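A minimal sketch of the Gemfile change described above (the grouping is as stated in the talk; the rest of the Gemfile is unchanged):

```ruby
# Gemfile (relevant lines only): pg lives in a new production group,
# while sqlite3 moves from the default group into development and test.
group :production do
  gem 'pg'
end

group :development, :test do
  gem 'sqlite3'
end
```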
00:04:31.199 Additionally, since I’ve modified my database setup, I also need to update my database.yml file. This means setting the adapter to PostgreSQL for production and adjusting my username, password, and host based on environment variables that Docker sets for me. I promise I will revisit this in just a moment.
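Here is a sketch of what that production stanza might look like, assuming the link alias 'pg' used in the docker run command later in the talk; Docker's container links inject PG_PORT_* and PG_ENV_* variables under that alias:

```yaml
# config/database.yml (production only); variable names follow Docker's
# link conventions for an alias of 'pg' and are assumptions, not the
# talk's verbatim file.
production:
  adapter: postgresql
  encoding: unicode
  database: todo_production
  host: <%= ENV['PG_PORT_5432_TCP_ADDR'] %>
  port: <%= ENV['PG_PORT_5432_TCP_PORT'] %>
  username: <%= ENV['PG_ENV_POSTGRES_USER'] %>
  password: <%= ENV['PG_ENV_POSTGRES_PASSWORD'] %>
```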
00:05:01.039 Now, let's talk about the Dockerfile. The Dockerfile is simply a list of commands that establish your environment. I prefer to write a minimal Dockerfile, which for this application can actually be condensed into three lines.
00:05:17.039 The first line specifies the base image I want to use, as most Docker images build upon other images. For example, the Rails image is derived from a Ruby image and so on. Thus, when choosing the Rails image, I can benefit from it being nearly identical to my requirements without needing to delve too deeply into operations or package installations on an Ubuntu machine.
00:05:36.000 In this case, I'm also indicating that my Rails environment is set to production because the Rails image I’m employing lacks that specification. The CMD line is the entry point for my application, meaning it defines what occurs once my container is built and my application initiates, which utilizes a shell script.
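A three-line Dockerfile matching that description might look like this; the exact base-image tag is an assumption (the official rails image of the time used onbuild triggers to copy in the app and bundle its gems):

```dockerfile
FROM rails:onbuild          # base image; pulls in Ruby and bundles the app
ENV RAILS_ENV production    # the rails image doesn't set this for us
CMD ["sh", "init.sh"]       # entry point once the container starts
```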
00:05:53.840 Let's check what this shell script contains. My init.sh script is also composed of three straightforward lines. I'm exporting the secret key base for Rails, executing 'bundle exec rake db:create,' and starting the Rails server. Yes, I'm using the default server, which is a bit lazy, but there's nothing overly complicated happening here. I quickly pieced all this together during a hack night.
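A sketch of that init.sh, with a placeholder secret (the real script presumably uses a properly generated key):

```sh
#!/bin/sh
# init.sh: the three lines described above.
export SECRET_KEY_BASE=placeholder_secret   # placeholder; use a real generated key
bundle exec rake db:create
bundle exec rails server -b 0.0.0.0
```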
00:06:18.160 Now, with the code set up, the next step is to build our image using Docker. An 'image' is merely a template from which containers are created. The command `docker build -t thagomizer/todo .` tells Docker to build the image from the Dockerfile in the current directory and tag it as thagomizer/todo.
00:06:38.560 Building the image is a straightforward process, and it happens quickly. One reason for this speed is Docker caches all the intermediate steps involved in creating the image. So, if I change something at the end, I can rely on the cached versions of all previous layers, eliminating the need to repeat steps like fetching packages or compiling gems from scratch if they contain C extensions.
00:07:02.080 Once the image is built, the next task is to run it. I’ll walk you through the necessary commands. I’m not sure what level of Docker experience people in the audience have, but I’m assuming it’s similar to my experience three months ago.
00:07:26.560 To begin with, we need to establish the database component because the database must be operational before we can start running the web server. This command creates a new container named 'db' using the PostgreSQL image. The subsequent lines are setting some environment variables for the PostgreSQL container, determining the password and username for access.
00:07:57.039 All of these instructions are thoroughly documented on the GitHub page for the PostgreSQL image. It's delightful, especially for someone who prefers to minimize effort. The ‘-d’ option runs the container in detached mode, allowing it to operate in the background without requiring interaction from me.
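The command described would look roughly like this; POSTGRES_USER and POSTGRES_PASSWORD are the variables documented for the postgres image, and the credential values here are illustrative:

```sh
# Start the database container in the background (-d = detached).
docker run -d --name db \
  -e POSTGRES_USER=rails \
  -e POSTGRES_PASSWORD=mysecretpassword \
  postgres
```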
00:08:13.440 Next, we’ll initiate the web server. Here’s the command for running a container named 'web,' again utilizing the image I built earlier using 'thagomizer/todo.' We're also running this in detached mode. The following line maps ports; here, I'm mapping port 3000 of my container to port 3000 on the host machine, which could be localhost on Linux.
00:08:33.280 Additionally, this function is quite amazing—it allows my containers to communicate with one another. In this case, I'm linking the 'db' container I just created to the 'web' container, using the alias 'pg' for PostgreSQL. This linking creates a secure tunnel between the two containers and adds several helpful environment variables in the web container, making it easier for me to connect to the database.
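Putting those pieces together, the web container command would look roughly like this:

```sh
# Detached, port 3000 published to the host, and linked to the db
# container under the alias 'pg' (which injects the PG_* env vars the
# database.yml relies on).
docker run -d --name web \
  -p 3000:3000 \
  --link db:pg \
  thagomizer/todo
```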
00:09:01.200 Let's take a look—now both the database and the web server are starting up successfully. There are a multitude of methods for setting up containers, and compared to spinning up virtual machines, this offers a very swift way to test your applications.
00:09:18.000 Now, I’ll grab the IP address of the running application and input it into my browser. I’ll navigate to port 3000, where the tasks collection can be found. There it is! My app is live and running well. It’s worth noting that I accomplished all this setup in less than 20 lines of code beyond the standard Rails boilerplate.
00:09:39.760 These roughly translate to about five lines of code needed for the Gemfile, five lines in the database YAML configuration, three lines for the Dockerfile, and another three lines for the init script. I genuinely cannot think of a simpler way to set up a Rails app to run on an Ubuntu environment without exceeding 20 lines of configuration code.
00:10:19.760 So, why would anyone consider utilizing Docker or containers? One significant advantage is consistency. I find it incredibly frustrating to hear someone say, 'But it works on my machine!' I started my career in Quality Assurance and have heard that far too often. I care much more about whether an application works in production.
00:10:45.440 Using containers ensures that your staging environment aligns with your production environment. They are composed of the same images and operating systems with matching libraries. That means you don't have to worry about causing subtle discrepancies in your Gemfile or minor version mismatches that could break your app.
00:11:07.120 The next talk after mine will cover how Docker can also be utilized in a development environment to achieve exactly the same conditions concerning OS, libraries, and configurations. It’s now either working or it’s not, and everyone has the same experience.
00:11:39.200 Regarding speed, containers can start up very quickly. Recently, a video was released showing the Kubernetes team racing a Kubernetes cluster against making a latte. Although it resulted in a tie, the focus was on how rapidly containers could start compared to the lengthy process of booting up virtual machines that required time-consuming setups.
00:11:58.600 Additionally, when you’ve got caching set up, changes to the last layers can be applied rapidly without repeating time-consuming earlier steps. Flexibility is an additional advantage—when working with a microservices architecture or distributed computing, containers allow you to adjust how different processes are hosted on various hardware.
00:12:29.440 For example, in development, you might want all functionalities running in a single VM, whereas in testing, you might require a split setup, and in production, you might need multiple machines to handle the workloads seamlessly. Such configurations can be easily managed with different container orchestration tools.
00:12:56.000 Another vital benefit of containers is their portability; they can operate on various platforms, such as Linux, Mac, or Windows. You can run containers on your own hardware, on cloud providers, or in a hybrid environment, and their behavior stays consistent regardless of the underlying system.
00:13:40.160 Repeatability is crucial as well. Last year was particularly challenging for those managing DevOps processes; as many of you might know, there were numerous security bugs that required addressing, and while automated deployment could easily apply code updates multiple times a week, ensuring lower-level updates were done efficiently proved difficult.
00:14:03.200 Using containers, you can streamline your process such that the methodology for updating your code mirrors that used for updating your operating system or libraries. Having one unified approach reduces the chance of error.
00:14:18.760 Yet, there are some downsides to containers. The most prominent among them is the ‘you aren’t gonna need it’ (YAGNI) principle; not all applications require containers, especially a small proof-of-concept project or personal blog, where the overhead may outweigh the benefits. This technology is fantastic for those who can take advantage of it, but I would never claim it's universally applicable to every scenario.
00:14:46.960 Furthermore, while containers have been around for a while, Docker is relatively new with regards to its community adoption. Having worked at Google, I know that our internal operations have been utilizing containers for years, and though I'm enthusiastic about this technology, some organizations might be hesitant to adopt it due to its newer status.
00:15:25.760 I've presented you with some impressive examples of operating containers on my Mac; however, I presume none of you run your production sites from a MacBook Air. If you do, I’d love to chat afterward! Now, how can containers be managed in a cloud environment or data centers, especially when deploying multiple versions or requesting a specific number of web servers?
00:16:13.440 I will introduce Kubernetes, an open-source project from Google for managing clusters of containers. The premise is straightforward: you define your desired state in a format that Kubernetes comprehends, launch Kubernetes across a cluster of VMs or bare metal machines, and tell it to maintain that desired state.
00:16:38.640 Kubernetes utilizes a specific vocabulary and components, which I'll briefly define: the master node manages all operations, while minions are the worker VMs that actually run your containers. The core unit of deployment is a pod—think of it as a group of containers that function together as one component of an application rather than independently.
00:17:10.080 In an application setup, the relationship between the web app and the database can differ—whether they co-exist within a single pod or are placed in separate ones depends on your scaling needs. In this demonstration, I'll keep the web app and the database in distinct pods, so two web servers will be used alongside one database.
00:17:41.240 A service in Kubernetes is essentially an abstraction over a logical set of pods, serving as a load balancer. If you've got multiple pods working, services distribute the traffic effectively, allowing seamless communication between pods.
00:17:56.560 For instance, we create services to allow our database pods to communicate with our web pods in the Kubernetes cluster. There’s also a replication controller responsible for ensuring a specific number of pods run at any given time. You simply state your desired state, like two replicas, and the replication controller ensures that it’s achieved.
00:18:11.760 It's important to note that although some of these components are in beta, they're actively being used, so consider this a friendly warning. However, Kubernetes provides options: it's open-source and can run on your hardware, on cloud provider VMs, or in various setups.
00:18:29.440 Now let’s look at some code, starting with a generic Kubernetes configuration file. While not a specific example, the basic layout remains consistent across many configurations—it typically includes an ID, a kind (like pod, service, or replication controller), and an API version, enough to describe your desired state.
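In the pre-1.0 API that the talk's demos would have targeted, that generic shape looks roughly like this (the exact API version is an assumption based on the talk's date):

```yaml
id: my-component      # unique name for this object
kind: Pod             # or Service, or ReplicationController
apiVersion: v1beta1   # pre-1.0 Kubernetes API, spring 2015
```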
00:19:29.440 For our demo, I’ll create a database pod that contains the PostgreSQL image. I can specify environment variables similarly to how I did when using docker run, set ports, and define mappings accordingly.
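A sketch of that database pod in the era's v1beta1 format; field names follow contemporary examples and may differ slightly from the original demo:

```yaml
id: db
kind: Pod
apiVersion: v1beta1
desiredState:
  manifest:
    version: v1beta1
    id: db
    containers:
      - name: postgres
        image: postgres
        env:
          - name: POSTGRES_USER      # same variables as the docker run demo
            value: rails
          - name: POSTGRES_PASSWORD
            value: mysecretpassword
        ports:
          - containerPort: 5432
labels:
  name: db               # the label the database service selects on
```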
00:19:45.520 Next, the database pod needs a service so other pods can interact with it. The service exposes PostgreSQL at port 5432, allowing requests to reach it, and this definition is what makes it easy for our web app to find and talk to our database.
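A sketch of that service definition, again in assumed v1beta1 form:

```yaml
id: db
kind: Service
apiVersion: v1beta1
port: 5432            # port the service listens on
containerPort: 5432   # port on the selected pods
selector:
  name: db            # routes traffic to pods labeled name=db
```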
00:20:03.200 Now, let's configure the web tier with a replication controller; I want to run two replicas. The replica count is defined in the config, and I'll provide a template specifying how the controller should create these pods. In this instance, we'll have one web container using the image 'thagomizer/todo.'
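A sketch of the replication controller, assuming the same v1beta1 layout:

```yaml
id: web-controller
kind: ReplicationController
apiVersion: v1beta1
desiredState:
  replicas: 2              # desired number of web pods
  replicaSelector:
    name: web              # which pods this controller owns
  podTemplate:             # how to build each replica
    desiredState:
      manifest:
        version: v1beta1
        id: web
        containers:
          - name: web
            image: thagomizer/todo
            ports:
              - containerPort: 3000
    labels:
      name: web            # label stamped onto every replica
```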
00:20:36.160 For the replicated web tier, we must also create a service that exposes our pods; any pod labeled 'web' will be accessible through the service defined this way. This organization keeps all the configurations clear and enables smooth operation.
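A sketch of the web service; the external load balancer field is an assumption about how the demo exposed the app publicly:

```yaml
id: web
kind: Service
apiVersion: v1beta1
port: 80                          # public-facing port
containerPort: 3000               # Rails port inside the pods
selector:
  name: web                       # balances across all pods labeled name=web
createExternalLoadBalancer: true  # assumption: expose the service publicly
```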
00:21:07.760 To run all of this on Google's hosted Kubernetes infrastructure (which is in public alpha), you use the gcloud command-line tool. Simply issue commands to create your pods and services from the configuration files I mentioned earlier. Once executed, gcloud will confirm the pods' states, and you can see running instances with their respective images.
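The alpha-era gcloud subcommands have since been retired; the equivalent flow in today's kubectl terms would be roughly this (file names are illustrative):

```sh
kubectl create -f db-pod.yaml
kubectl create -f db-service.yaml
kubectl create -f web-controller.yaml
kubectl create -f web-service.yaml
kubectl get pods    # confirm the pods' states and images
```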
00:21:34.560 After setting everything up, you can create your services and ensure communication among the necessary components. Kubernetes allows scaling on the fly; for example, if you want five replicas, adjust your replication controller to reflect that.
00:22:10.240 Within seconds, Kubernetes will internally process this adjustment ensuring that the desired state is achieved. It automatically pulls down images and scales to meet the defined requirements without needing to provision new VMs.
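In modern kubectl spelling, that scale-up is a one-liner (the controller name matches the sketch above):

```sh
# Scale the web tier from two replicas to five; Kubernetes converges on
# the new desired state within seconds.
kubectl scale rc web-controller --replicas=5
```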
00:22:32.000 I hope I've been straightforward but, as a colleague of mine noted in a previous talk, there are several complex areas I've handwaved over. For instance, I didn't cover how to tie the database to a persistent disk—without that, any restart would lead to data loss.
00:23:06.320 Establishing a shared disk among multiple web frontends is another topic not touched on today. The concepts I'm discussing are foundational; they're introduction-level here, but they can be less straightforward in practice.
00:23:25.840 I must clarify that security should be carefully evaluated; this setup should not be deemed adequately secure for production environments without thorough auditing tailored to your application's specific requirements.
00:23:52.480 Replication, particularly in database clusters, wasn’t covered, but there are numerous tutorials available for that. Additionally, remember, Docker is designed for Linux machines. If you're on Mac or Windows, you'll need additional tools like Boot2Docker utilizing virtualization to create a functional Docker environment.
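Circa 2015, that setup looked roughly like this; Boot2Docker ran the Docker daemon inside a small Linux VM:

```sh
boot2docker init            # create the Linux VM
boot2docker up              # boot it
$(boot2docker shellinit)    # export DOCKER_HOST and friends into this shell
boot2docker ip              # the address to browse to instead of localhost
```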
00:24:10.080 There are also size concerns, as the standard Rails image tends to be quite heavy. If size matters, you might consider building your own images tailored to your specific needs, stripping out unnecessary components to save space; the key is balancing image size against the scope of your project.
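As one illustrative approach (the package list and tag are assumptions, not the talk's recipe), a slimmer image could start from a minimal Ruby base and install only what the app needs:

```dockerfile
FROM ruby:2.2-slim
# Just enough system packages for a Rails app with pg and the asset pipeline.
RUN apt-get update && \
    apt-get install -y build-essential libpq-dev nodejs && \
    rm -rf /var/lib/apt/lists/*
WORKDIR /usr/src/app
COPY Gemfile Gemfile.lock ./
RUN bundle install --without development test
COPY . .
ENV RAILS_ENV production
CMD ["sh", "init.sh"]
```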
00:24:37.760 In conclusion, this is a fundamentals talk; many of you may find I've barely scratched the surface. For deeper insights into this topic, I highly recommend Bryan Helmkamp's talk on shipping Ruby applications with Docker from RubyConf last year. It is an excellent resource that includes live demos and provides a broader understanding.
00:25:11.760 For those interested in cloud management, peruse the documents for Google Cloud's Container Engine, where you can find hands-on tutorials showcasing basic concepts through practical examples. For details on Kubernetes, the Kubernetes website is available as a comprehensive resource.
00:25:47.120 If you seek community help, join discussions in the Google Containers room on Freenode or ask questions tagged appropriately on Stack Overflow, as Google Cloud actively promotes engagement and support among users.
00:26:29.440 Now comes the sales pitch! I work on Google Cloud Platform, which offers a plethora of services, from storage to VMs, logging, monitoring, data analysis tools, and more. I gave a talk at MountainWest RubyConf showcasing various features, and our global data centers provide robust cloud services tailored to your needs.
00:27:00.000 It's intriguing to note that our cloud customers operate on the same hardware that powers YouTube and Google Search. All enhancements made to that infrastructure also benefit our cloud customers. Plus, there's a risk-free trial offering $200 in credits valid for 60 days; simply enter your credit card for identification purposes, and you'll face no charges unless you permit it.
00:27:38.159 If you encounter challenges during the trial process, feel free to reach out; I can assist you in navigating through it and ensure you can take full advantage of our offerings without unnecessary barriers. Lastly, I want to thank the conference organizers for their hard work in arranging this event.
00:28:06.799 I also appreciate my colleagues who reviewed these slides despite the awkward time-zone differences. I'll be up front afterward with plastic dinosaurs in hand for anyone who asks questions or comes up to chat—feel free to collect stickers as well!
00:28:49.440 And now, with about 10 minutes left, I’d like to open the floor to any questions you might have.