Building Helm Charts From the Ground Up: An Introduction to Kubernetes

Amy Chen • October 12, 2017

The video titled 'Building Helm Charts From the Ground Up' features Amy Chen, a software engineer at Rancher Labs, who presents at the Rocky Mountain Ruby 2017 event. In this session, Amy explains the foundational concepts of Kubernetes and the role of Helm in managing application deployments.

Key points discussed in the video include:

- Introduction to Containers: Amy describes containers as 'baby computers' that provide resource isolation and ease of transfer, emphasizing their importance in modern infrastructure.

- Difference Between Containers and Virtual Machines: She highlights the efficiency of containers compared to virtual machines and their role in abstracting infrastructure for application developers.

- Deployment Challenges: Amy addresses the complexities involved in deploying applications to the cloud, such as ensuring consistency across different environments.

- Kubernetes Overview: She discusses Kubernetes' role as a container management platform, outlining its main components, including Pods (the basic scheduling unit), Deployments (which manage Pods), Services (which facilitate communication among Pods), and Ingress controllers (which manage external traffic).

- Helm as a Package Manager: The talk culminates in the introduction of Helm, which simplifies the management of Kubernetes configurations with a focus on packaging application definitions into 'charts'. This allows for easier deployments and version control.

Throughout the presentation, Amy uses various analogies and visuals to clarify complex concepts, making the content more accessible. She concludes by underscoring the importance of managing configuration files effectively and how Helm can streamline this process for developers. The main takeaways from the talk emphasize the significance of understanding containers and Kubernetes in contemporary application deployment and the utility of Helm for managing these technologies efficiently.

Rocky Mountain Ruby 2017 - Building Helm Charts From the Ground Up: An Introduction to Kubernetes by Amy Chen


00:00:15 Hey everyone, my name is Amy. I wanted to get a general idea of what your backgrounds are. If you know what containers are, please raise your hands. Okay, so if that's a yes, I've heard of containers, but do I actually know what containers are? Can you raise your hand? Okay, cool, so that's quite a lot fewer. Then, do any of you work in DevOps? Can you raise your hands? Okay, cool. Perfect! I think that I have scaled my slides to be just the right level.
00:00:53 So, my name is Amy. I am a software engineer at Rancher Labs, and I actually started working there in March. It's my first job out of college, and we are an infrastructure container management company. I also run a YouTube channel called 'Amy Codes,' so if any of you want to check that out, I'll probably do another version of this talk on that channel as well. I'll also have these slides on Twitter.
00:01:16 I created this talk at like 5 a.m. this morning because I'm super great at procrastinating. Jeff told me last night that the theme was 'trust,' which I had forgotten about. I remembered it again when he told me, and this morning, I was like, 'How am I going to talk about trust? What the heck?! I don't know how I'm going to do that.' So then I thought, 'Forget that! I'm going to talk about lack of trust.'
00:01:50 I'm assuming a lot of you here are app developers, and since many of you come from the Turing School, I assume you build a lot of Rails apps. I don't trust app developers, because I work on infrastructure, where the idea is that everything is ephemeral.
00:02:00 The main question is: you wrote this awesome Rails app, right? Now you need to get it onto the cloud. The core concept of the cloud, again, is that everything is ephemeral. Things die, things are complicated, and lots of things need to talk to each other. Networks go down, and this all falls under the topic of distributed systems and cloud computing, which is pretty big.
00:02:39 You put a lot of trust into folks who run your infrastructure, but do you actually know what's going on? Should you really put that trust into those people focusing on your infrastructure? Let's talk about what's actually going on behind the scenes.
00:03:02 The main aspect of this relationship is that we want our services, such as web applications, to always be available. We want them to be accessible, and we want low latency; nobody wants to wait forever for a page to load. As application developers, it's not really your job to think about how to deploy these applications.
00:03:32 For instance, you shouldn't have to think about high unexpected traffic, application failures, scaling things, or making your resource utilization more efficient. That’s where containers come in. What is a container? I’m going to simplify it to the point where I’m going to call it a 'baby computer' inside another computer. The baby computer is the container, and the bigger computer is the server.
00:06:01 So why do we want a baby computer inside another computer? The idea is that it's easier to transfer around, providing resource isolation. It’s also super efficient in terms of using these resources. What the container does is encompass the application environment, and I will go more into detail on that in my next slide.
00:06:39 A common conceptual comparison is a virtual machine. However, it's important to note that containers and virtual machines are very different. The main difference is that containers are much more efficient, and you can research that on your own time.
00:06:59 For application developers, containers ultimately abstract away the infrastructure. You don't really want to think about your infrastructure or how your application relates to it; you just want to develop and write clean code. I'm here to help handle the rest.
00:07:47 Let's go ahead and walk through what's going on in a sample container. Here we have your app that you’ve developed on your local machine, along with different dependencies: various versions of Ruby, operating systems, and a bunch of things installed that make your app work locally.
00:08:04 When we try to deploy this to the cloud, you can't have these inconsistencies; everything needs to be consistent. You need to test with different web browsers, Ruby versions, and operating systems. That's where containers help: they replicate these environments easily and quickly.
00:08:59 So, ignore the writing on the right; let’s focus on the picture on the left. This is what’s conceptually happening: you have your application, and we have our container. We define different versions, such as an operating system (Alpine is a Linux operating system), and we might want to run Python or some setup scripts to ensure the application functions correctly.
00:09:39 When you deploy your applications locally and test these things, you do this manually. We can automate all of that with containers. So on the right side, this is essentially what we're doing with our containers. The top shows we have Alpine installed with Python, and the specified Python version could be 3.7, for example.
00:10:29 We copied your application to this container, then we run installations to set it up. When the container starts, we run a command to execute your application, such as Python. The core concept of what's happening with a container is that it becomes easily replicable with the use of configuration files.
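
To make this concrete, here is a minimal sketch of the kind of container definition described in this part of the talk. The base image, file names, and versions are illustrative assumptions, not the exact ones shown on the slide.

```dockerfile
# A minimal container definition of the kind described above (illustrative only).

# Start from an Alpine-based image that already has Python installed.
FROM python:3.10-alpine

# Copy the application into the container.
WORKDIR /app
COPY . /app

# Install the application's dependencies (assumes a requirements.txt exists).
RUN pip install -r requirements.txt

# Command run when the container starts.
CMD ["python", "app.py"]
```

Building an image from this file produces the same environment everywhere the image runs, which is the replicability the talk is describing.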
00:11:13 Containers then go through a bit of an existential crisis, because a lot needs to happen to make them available on your servers. For instance, where should the container live? That falls under scheduling. How do I communicate with other containers? That's networking. And what happens when my container gets sick? That's failure recovery.
00:12:04 The key principle here is that if something is sick, you kill it and replace it. That may sound harsh, but that's how this kind of infrastructure works. This is where Kubernetes comes in as a container management platform. Essentially, Kubernetes provides abstractions to organize these baby computers (containers).
00:12:46 There are a lot of terms in Kubernetes that sound complicated, but they're merely ways to organize containers. I'll walk you through Kubernetes' core vocabulary, which will help you understand future documentation better.
00:13:18 The term 'ephemeral' comes to mind again. Containers can be stopped, rescheduled to different servers, killed, or otherwise affected. The idea is that we can return to a state quickly without having to finagle too much.
00:13:39 In Kubernetes, some containers are tightly coupled and need to communicate with each other. Therefore, they must be scheduled on the same server. To organize these, we introduce the concept of Pods. Pods are the scheduling unit in Kubernetes and can consist of one or more containers.
00:14:09 Each Pod gets its own IP address, which is only reachable within the cluster. You will often find that you rarely have more than one container within a Pod, so you can think of it as a fancy container with an IP address.
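
As an illustration of the "fancy container with an IP address" idea, here is a minimal Pod manifest; the names, image, and port are hypothetical.

```yaml
# A minimal Pod: one container, with its own IP inside the cluster (illustrative names).
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  labels:
    app: my-app                       # label other objects can use to find this Pod
spec:
  containers:
    - name: my-app
      image: my-registry/my-app:1.0   # hypothetical application image
      ports:
        - containerPort: 3000         # port the app listens on inside the container
```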
00:14:56 Next, we have the term 'Deployment,' which manages a group of Pods. Deployments help your actual state align with your desired state. For instance, you can define a number of replicas you want running. The deployment controller ensures the right number of Pods is maintained. If one pod dies, the deployment spins up another, so you maintain the desired state.
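
Here is a sketch of the Deployment just described, using the current apps/v1 API; the names and replica count are illustrative.

```yaml
# A Deployment that keeps three replicas of the Pod template running.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                 # desired state: three Pods
  selector:
    matchLabels:
      app: my-app             # manage Pods carrying this label
  template:                   # Pod template the controller creates Pods from
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-registry/my-app:1.0
          ports:
            - containerPort: 3000
```

If one of the three Pods dies, the Deployment controller notices the gap between actual and desired state and starts a replacement.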
00:15:36 However, we also need additional organizational concepts. While deployments focus on actual versus desired states, the term 'Service' refers to a group of Pods or deployments. Services become useful as Pods can have unreliable IP addresses that change when they restart.
00:16:18 Services are dependable endpoints for communication among Pods. For example, if you have a front-end service and a back-end service, they can reliably communicate with each other without worry of changing IP addresses. I will further differentiate deployments from services: deployments focus on the actual versus desired state, while services facilitate communication among Pods.
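
A sketch of a Service for those Pods: traffic sent to the Service's stable name and port is forwarded to whichever Pods currently match the label selector (names and ports are illustrative).

```yaml
# A Service that gives the Pods above one stable, dependable endpoint.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app          # route to any Pod carrying this label
  ports:
    - port: 80           # port other Pods use to reach the Service
      targetPort: 3000   # port the container actually listens on
```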
00:17:15 Everything we've discussed so far happens inside a cluster, which raises another question: how does the outside world reach your cluster? This is where the Ingress controller comes in. An Ingress controller directs traffic from outside to inside your cluster based on the endpoints you specify.
00:18:13 An Ingress controller allows you to define endpoints that direct traffic to specific services. In the example diagram, I've illustrated an Ingress route directing traffic from an endpoint to a service.
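
A sketch of an Ingress rule that sends outside traffic to the Service above, written against the current networking.k8s.io/v1 API; the hostname is hypothetical.

```yaml
# An Ingress rule: requests for this host and path go to the my-app Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
    - host: my-app.example.com        # hypothetical external hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app          # the Service defined earlier
                port:
                  number: 80
```

An Ingress controller running in the cluster watches rules like this one and performs the actual routing.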
00:19:02 As we wrap up, everything we've discussed is defined in configuration files, which can appear complicated. Breaking them down, however, shows that they're fairly simple. For example, let's review the deployment configuration, where we have specified the number of replicas.
00:19:54 When it comes to services, traffic is routed based on specified labels. For ingress, external traffic goes to your app through the defined service port, allowing connection to your application. Hence, all this fancy vocabulary essentially helps organize the management of containers.
00:20:50 I want to emphasize that managing these YAML files can become cumbersome, especially with many similar deployments. This is where Helm comes in: a package manager for Kubernetes that tracks the versions and state you want to deploy, using packages of configuration files called 'charts.' Charts group together all the definition files needed for a deployment.
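
As a rough illustration of what a chart looks like on disk (the names are hypothetical and the contents abbreviated):

```text
mychart/
  Chart.yaml          # name and version of the chart
  values.yaml         # default configuration values, e.g. replicaCount: 3
  templates/
    deployment.yaml   # the Deployment manifest, templated
    service.yaml      # the Service manifest
    ingress.yaml      # the Ingress manifest
```

The manifests under templates/ can reference values from values.yaml, for example `{{ .Values.replicaCount }}`, so one chart can be installed with different settings.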
00:21:55 When you run the command to install the chart, it will bundle all these configurations, making the process seamless. Helm provides additional features like rolling back to previous versions of charts, pulling official charts, and many other tools to assist in managing deployments.
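
A few Helm commands that correspond to the features mentioned here, shown in current Helm 3 syntax (the talk predates Helm 3); the release, chart, and repository names are illustrative.

```sh
# Render a chart's templates and deploy the result as a named release.
helm install my-release ./mychart

# Roll out a new chart version, then roll back to an earlier revision if needed.
helm upgrade my-release ./mychart
helm rollback my-release 1

# Add a public chart repository and install a community-maintained chart from it.
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-db bitnami/postgresql
```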
00:22:34 Thank you for your time!