
Going Serverless

by Jeremy Green

In this video titled "Going Serverless," Jeremy Green discusses serverless architecture, focusing on the use of the Serverless Framework with AWS Lambda and API Gateway. The term 'serverless' refers to deploying applications without needing to manage servers directly, allowing developers to concentrate on writing code rather than dealing with infrastructure. Jeremy emphasizes the benefits of using AWS services, such as operational efficiency and scalability, and addresses concerns regarding vendor lock-in.

Key points discussed in the video include:

- Definition of Serverless: Although the term suggests a lack of servers, there are servers in the background; the emphasis is on abstraction away from server management.

- Comparison to Heroku: Similar to how Heroku simplifies deployment, the Serverless Framework provides a streamlined approach for deploying AWS Lambda functions.

- Advantages of AWS: Using AWS allows developers to leverage extensive data center resources, operational expertise, and cost efficiencies related to deploying applications.

- Introducing the Serverless Framework: This framework facilitates the management of AWS Lambda, API Gateway, and CloudFormation services through code, removing the complications of GUI management.

- How to Get Started: Jeremy guides viewers through setting up a new AWS account and using the Serverless Framework, starting with creating a project and deploying functions via the command line.

- Demo of Function Creation: A practical demonstration is provided, showing the creation of a function called 'left-pad' and how to set up routes and API endpoints using serverless.yml configuration files.

- Testing Functions: Jeremy emphasizes the importance of testing serverless functions and suggests using Mocha and Chai for unit testing.

- Ruby Integration in AWS Lambda: Jeremy discusses integrating Ruby within a serverless architecture, although it is not natively supported in AWS Lambda.

- Performance Considerations: The concept of 'cold starts' and their impact on performance are explained, along with a comparison of execution times between Node.js and mruby functions.

In conclusion, the video highlights that the Serverless Framework allows developers to efficiently integrate various AWS services and deploy applications without direct server management, paving the way for more agile development practices. By leveraging AWS resources through serverless architecture, developers can focus on enhancing application code rather than managing infrastructure details.

00:00:09.610 Hello, everyone. It's time to get started here. My name is Jeremy Green, and I'm talking about going serverless.
00:00:14.809 So, what do we mean by 'serverless'? That's kind of a weird term. The idea is that we want to deploy web applications without servers. I'm going to make a bold assumption here: I bet some of you are thinking a very particular thing right now. You might be thinking, 'Serverless? Really? Come on! If you're going to serve something, you need servers, right?' So, if it'll make you feel better, we can think of it like this: we'll call it 'serverless' with big air quotes.
00:00:41.869 The goal here is that, yes, there are servers involved, but we don't want to think about them. We want to think about our code as just code and not worry about the specifics of the infrastructure it's being deployed to. If you've deployed to something like Heroku, you already have a sense of how this works. On Heroku, you’re not really thinking about instances; your main consideration might be your dyno size. You're not focusing on the physical infrastructure that your code runs on. You're just coding your stuff, pushing it up, and letting Heroku handle the details, such as which machines it runs on.
00:01:09.560 That's the perspective we want to adopt when talking about serverless. In particular, during this talk, we will discuss the Serverless Framework. The Serverless Framework is a tool for building web, mobile, and Internet of Things applications exclusively on AWS Lambda and API Gateway, along with other related services.
00:01:43.340 Now, a little bit more mind-reading: I bet some of you noticed that 'exclusively' on the last slide, and you're probably wondering about vendor lock-in. The cons of vendor lock-in are pretty obvious, and I won't dig into them right now. But you might be asking, 'Why would you sign up for this kind of vendor lock-in?' The most compelling reason is this: Amazon is better at operations than you will ever be. If you disagree with me, I'd love to discuss it further; maybe you can even help Amazon improve their service.
00:02:11.450 This is especially true if you are an application developer and not an operations engineer. Another compelling reason to consider AWS is scale. They have data centers all over the globe. By deploying into their infrastructure, you can leverage their scale with minimal effort. Cost is another factor—especially when you compare the salaries of operations engineers. Deploying within Amazon's infrastructure can be quite cost-effective.
00:02:36.480 A little about me: my name is Jeremy Green. I'm a consultant, an author, and I run a couple of SaaS businesses. You can find me on Twitter at @jagthedrummer, or feel free to send me an email. I recently co-authored a book called the Independent Consulting Manual. One of my SaaS products is Remark, and I'm also into drumming, photography, and brewing, so if you share any of those interests, let's chat.
00:03:05.180 I want to give a shoutout to my client, ClickFunnels, who has supported me in this talk. Working with them is where I really got into all the serverless concepts. ClickFunnels has been very supportive, and I truly appreciate their backing. Now, I don’t want to dive too deep into what they do, but I've prepared a highly technical diagram of their infrastructure.
00:03:43.720 So, enough silliness! Let’s discuss the building blocks we’re going to be using. The first essential piece is AWS Lambda. AWS Lambda essentially allows function execution on demand. I think of it as Heroku for single functions. You give them one little function, tell them how you want it to be run, and then you can call that function either via their API or through a service called API Gateway—a routing service that helps set up endpoints.
00:04:01.630 When someone hits one of those endpoints, the request can be routed to Lambda. You might also use DynamoDB to store data, or you might use RDS or other services they provide. Additionally, CloudFormation acts as infrastructure as code. This way, you can describe the assets you need in your infrastructure, like a DynamoDB table or an RDS instance.
00:04:35.890 You can keep that description in your source code repository and push it to Amazon, requesting them to deploy these assets for you. This makes it straightforward to establish staging environments that can be duplicated into production. You can be assured that you're using the same configurations across all your stacks.
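For illustration only—resource and table names here are made up, not from the talk—a CloudFormation template describing a DynamoDB table as code might look roughly like this:

```yaml
# Hypothetical CloudFormation sketch: a DynamoDB table described as code so
# the same definition can be deployed to dev, staging, and production stacks.
Resources:
  ExampleTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: example-table-dev
      AttributeDefinitions:
        - AttributeName: id
          AttributeType: S
      KeySchema:
        - AttributeName: id
          KeyType: HASH
      ProvisionedThroughput:
        ReadCapacityUnits: 1
        WriteCapacityUnits: 1
```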
00:05:01.360 When you put all this together, you end up with a system that looks something like this: you're using multiple Amazon services to route requests, and they equip you with various tools to achieve this. You can access your Lambda functions through the AWS Console, where you can see a list of all your Lambda functions and browse their repository of demo code to explore various applications that might inspire you.
00:05:44.950 When you go to create a Lambda function, you'll need to set several configuration variables, such as memory allocation and timeout settings. You can also link your function to an API endpoint or set up event sources. However, if you spend too much time configuring everything in the GUI, you might start to feel like you're coding in the browser. If that happens, you start to worry about all the things that can go wrong when someone makes changes in a browser without committing any source code.
00:06:06.420 To avoid this scenario, we introduce the Serverless Framework. This framework allows you to manage Lambda, API Gateway, and CloudFormation through code instead of a GUI. Their documentation is available at documentation.serverless.com, and I should mention that this is a very young project that is evolving quickly, so be aware that everything I talk about may change rapidly.
00:06:31.110 When you start with Serverless, the first step is to create a new AWS account. You should not use the one you already have, especially not for your production environment. The reason is that the service documentation currently suggests that for getting started, you should create an admin superuser that can do anything in your infrastructure. This poses a security risk; you don't want that profile to be accessible to the wrong people, as they could shut down your production instances or delete S3 buckets.
00:07:02.490 So, take this seriously: start with a new account. After you figure out what you want to deploy in production, take the time to understand the permission model properly. To get started with Serverless itself, you install it as an NPM module. My apologies to Searls and Tenderlove for having to mention Node, but I'm fully on board with making Ruby great again.
00:07:36.060 Once you install it, you can run the command 'serverless project create.' The CLI gives you a convenient shorthand for 'serverless'; you can type 'sls' instead if you prefer. The first thing you'll see is some sweet ASCII art, so you know it's good. After that, it will guide you through the process of creating a new project. It will ask you to enter a name, select a stage—essentially an environment, similar to development, staging, and production—and decide which AWS profile you want to use.
00:08:20.260 You'll also choose the AWS region where you want to deploy your application. After it performs some initial setup, it will tell you that your project is ready and that some configurations have been deployed to CloudFormation. If you navigate to the directory created for your new project, you'll find a structure that resembles what you typically encounter in Rails, like config/application.rb and other boilerplate files necessary for running your project.
00:08:51.030 We're not going to delve into all that right now because we want to focus on building something more functional than just a simple 'Hello, World' application. So, instead, we will build something useful—a simple left-pad service. And guess what? There's an NPM module for that, so it must have some utility.
00:09:28.340 The first action will be to use 'serverless function create' and give it a name—let's go with 'left-pad.' It will then ask you to choose a runtime; in this case, we will select Node.js 4.3. It will also ask whether you want to create an API endpoint, an event, or just the function alone. In this scenario, we'll create the endpoint to simplify deployment.
00:09:59.730 When you look into the directory created for you, it will have three files. The 'event.json' file contains sample data that your function will use. 'handler.js' is where your function's code resides, while 'serverless.yml' holds the configuration for your Lambda function, the endpoint, and any other necessary resources. At this point, your application framework is nearly ready for deployment, so we can quickly ship it.
00:10:38.600 This is part of the workflow you'll follow with Serverless. You won't be able to run Lambda and API Gateway directly on your local development box. Instead, you'll deploy your code to AWS, where you can test it in the development stage. Once you're satisfied, you can promote it to a staging or production environment.
00:11:05.150 To deploy, you'll use 'serverless dash deploy'—'dash' is short for dashboard. This command will show you some additional ASCII art and a list of all the elements you can deploy. I've chosen to deploy the function and the endpoint, and once you execute the deploy command, it will start the process, eventually notifying you that it has deployed the function into the development stage.
00:11:37.500 At the bottom of this output, you'll see the URL where you can access your function through the API endpoint. If you visit that URL, you'll receive a JSON response indicating that your Lambda function executed successfully. The response will look something like this: {"message": "Go Serverless!"}. Now, let's take a look at the code that generates this output.
00:12:02.080 This is the default handler generated by Serverless for you. It is very simple. It exports a handler function that accepts three arguments: an event, a context, and a callback. By default, it simply calls the callback with the desired payload.
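As a rough sketch—the exact generated file varies with the framework version, so treat this as illustrative rather than verbatim—the default handler looks something like this:

```javascript
// handler.js — illustrative sketch of the default generated handler.
'use strict';

module.exports.handler = (event, context, callback) => {
  // event:    input data assembled by whoever invokes the function
  // context:  invocation details supplied by the Lambda runtime
  // callback: call with (error) on failure, or (null, data) on success
  callback(null, { message: 'Go Serverless!' });
};
```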
00:12:29.900 The event parameter contains a JSON object filled with data that you, as the developer or client calling the function, assemble. These are the inputs for the function that you wish to process to produce the outputs. The context parameter is an object supplied by the Lambda infrastructure that provides details about the invocation.
00:12:51.110 One of the most useful properties of context is the getRemainingTimeInMillis method. This lets you handle long-running tasks, since it tells you how much time you have left before your Lambda is forcibly terminated. For example, you can check the remaining time while working through a series of records, writing your results to the database before the time runs out.
00:13:15.650 The callback function is also provided by Lambda, and its structure is straightforward: you call it with an error (if one occurred) and the data (the result of your processing). If you only need to return an error, pass that error as the first argument. If you want to return actual data, pass null for the error and your data as the second argument.
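Putting the context and callback together, here is a hedged sketch of the remaining-time pattern just described; doSomethingSlowWith and saveProgress are hypothetical helpers standing in for your per-record processing and database write:

```javascript
// Sketch of checking remaining time so work stops cleanly before Lambda
// forcibly terminates the invocation.
'use strict';

module.exports.handler = (event, context, callback) => {
  const records = event.records || [];
  const results = [];

  for (const record of records) {
    // Leave a safety margin (here, five seconds) before the hard timeout.
    if (context.getRemainingTimeInMillis() < 5000) {
      break;
    }
    results.push(doSomethingSlowWith(record)); // hypothetical work per record
  }

  // Hypothetical helper: persist what was processed, then report back.
  saveProgress(results, (err) => {
    callback(err, { processedCount: results.length });
  });
};
```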
00:13:39.560 Now let's get this 'left-pad' function operational. To do this, navigate into the 'left-pad' directory and execute 'npm init' to set up a package.json, which lets you install a local node_modules directory containing any additional libraries you'd like bundled with your function.
00:14:18.430 Everything in your function directory will be sent to Amazon when you run 'serverless deploy.' We'll install 'left-pad' as a dependency and save it, which will add it to our package.json file. Once that's done, we can start updating the handler.
00:14:45.700 The first step is to require the 'left-pad' library outside of the handler, since anything outside the handler runs only when AWS loads your Lambda for the first time, not on every invocation. This matters because long-running setup code should not be part of the direct processing of your inputs.
00:15:12.320 Next, let's declare a couple of variables to extract data from the event passed in. We’ll create a padded string using the padding function we obtained from the NPM module. Then, we'll construct a payload that returns the padded string as a JSON object. Finally, we’ll call the callback function, passing null as the first argument, followed by the payload.
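Putting those steps together, a sketch of the updated handler might look like this (the paddedString payload key is my own illustrative choice, not necessarily what the talk used):

```javascript
// handler.js — sketch of the left-pad handler described above.
'use strict';

// Required outside the handler so it loads once, on cold start.
const leftPad = require('left-pad');

module.exports.handler = (event, context, callback) => {
  // Pull the inputs out of the event assembled by API Gateway.
  const string = event.string;
  const padding = event.padding;

  const paddedString = leftPad(string, padding);

  // Return the result as a JSON-serializable payload: null error, then data.
  callback(null, { paddedString: paddedString });
};
```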
00:15:51.930 We also need to make some adjustments to the serverless.yml file, which is automatically generated by Serverless. This is where you indicate how much memory your function requires and set the overall timeout, among other settings. In the handler line, you specify the name of the function that should be invoked.
00:16:16.890 By default, it will be 'handler.handler', as Serverless generates a file called handler.js, exporting a function named 'handler.' This file structure is created when you create your function and specify that you want to establish an API endpoint, thus automatically generating a path called 'left-pad.' This means that once you hit the corresponding URL, it will invoke this function through the Gateway using a standard HTTP method.
00:16:50.320 You can utilize methods such as GET, PUT, POST, and DELETE when creating your own endpoints. By default, the request template is blank; the request template is what tells API Gateway how to generate the event that will be sent to your function from the incoming HTTP request. Since a Lambda function doesn't inherently know about HTTP, you need to teach API Gateway how to convert requests into an event your function can process.
00:17:27.880 To that end, you can introduce a few lines to define the event properties: one for 'string' and another for 'padding.' The 'string' property should come from the input parameters of the HTTP request. Once you've made these updates, you can re-run your deploy command, and it will provide you with a URL.
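Pulling those configuration pieces together, here is a rough serverless.yml sketch; the syntax below follows a more recent Serverless Framework release (Lambda integration with a custom request template), so the exact layout in the 0.x-era files shown in the talk will differ:

```yaml
# Illustrative serverless.yml fragment (recent-framework syntax, not the exact
# file from the talk). The request template maps HTTP query parameters onto
# the 'string' and 'padding' properties of the Lambda event.
functions:
  left-pad:
    handler: handler.handler   # file handler.js, exported function "handler"
    memorySize: 128            # illustrative values
    timeout: 6
    events:
      - http:
          path: left-pad
          method: get
          integration: lambda
          request:
            template:
              application/json: |-
                {
                  "string": "$input.params('string')",
                  "padding": "$input.params('padding')"
                }
```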
00:18:05.530 If you call the URL with query parameters such as 'string' and 'padding,' it will return the string padded to the requested length of ten characters. Testing this function is straightforward, since it can be treated like any other vanilla function. You don't need to worry about Lambda or API Gateway; you can simply test the function itself.
00:18:39.390 For testing, you can use Mocha for driving the tests and Chai for assertions. Create a test file for your handler, requiring the handler file you’re testing. Then, set up a mock event to pass into the function during testing, while the context can remain an empty object since it’s not being utilized.
00:19:10.950 For the callback, this is where you'll assert the returned value from Lambda. You can create a function with the same signature as the lambda-provided callback. When this function is called, you can check that the error is null (as expected) and verify that the response contains the expected padded string, resulting from the test event.
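A sketch of such a test, assuming Mocha and Chai and the illustrative paddedString payload from the handler sketch above (file locations are also assumptions):

```javascript
// test/handler-test.js — treating the handler as a plain function.
const expect = require('chai').expect;
const handler = require('../handler');

describe('left-pad handler', () => {
  it('returns the padded string', (done) => {
    const event = { string: 'foo', padding: 10 }; // mock event
    const context = {};                           // unused by this handler

    // The callback has the same (error, data) signature Lambda provides.
    handler.handler(event, context, (error, response) => {
      expect(error).to.be.null;
      expect(response.paddedString).to.equal('       foo'); // padded to 10 chars
      done();
    });
  });
});
```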
00:19:39.720 Once you have the handler test file prepared, you can run Mocha against it to see whether it returns the expected output, confirming that your function works. Now, shifting focus to Ruby—it's a bummer that Ruby isn't supported directly in AWS Lambda right out of the box.
00:20:14.530 However, we can implement some strategies to get it functioning. What I'll demonstrate here is still at the proof-of-concept stage and not ready for production use. If you plan to use Ruby in Lambda, you’ll need to enhance it and improve its resilience to errors and crashes.
00:20:43.880 What I did here is run 'serverless function create' and named it 'ruby-hello-world.' I included two additional files in that directory—one is a Ruby script containing the Ruby code to run, and the other is the executable for the chosen Ruby runtime. I opted for mruby for this demo because I had seen a proof of concept that Nick Quaranto previously developed using it.
00:21:14.020 AWS does specify which OS and AMI image they run for Lambdas, so if you need to compile your Ruby code, you can set up an EC2 instance from that image to compile and package it, and then add the binaries to your project. You will want to ensure everything is statically linked, since the default Lambda runtime image provides a limited environment.
00:21:38.740 To execute Ruby in this backhanded manner, you essentially create a Node.js handler that spawns the ruby executable. You configure your Node handler to call the mruby executable from within your project directory, directing it to run your specified Ruby script. You can use 'JSON.stringify' to pass the input JSON into Ruby.
00:22:06.020 You'll want to attach listeners to handle any standard output or error output emitted by the Ruby script, pushing that data into an array. As soon as the child process completes, invoke the Lambda callback to signal completion.
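A minimal sketch of that wrapper; the file names mruby and handler.rb, and the output shape, are assumptions for illustration:

```javascript
// handler.js — Node wrapper that shells out to a bundled mruby binary.
'use strict';

const path = require('path');
const spawn = require('child_process').spawn;

module.exports.handler = (event, context, callback) => {
  const output = [];

  // Run the bundled mruby binary against the Ruby script, passing the
  // incoming event to it as a JSON string argument.
  const child = spawn(path.join(__dirname, 'mruby'), [
    path.join(__dirname, 'handler.rb'),
    JSON.stringify(event)
  ]);

  // Collect anything the Ruby script writes to stdout or stderr.
  child.stdout.on('data', (data) => output.push(data.toString()));
  child.stderr.on('data', (data) => output.push(data.toString()));

  // When the child process finishes, signal completion through the callback.
  child.on('close', (code) => {
    if (code !== 0) {
      return callback(new Error('Ruby exited with code ' + code));
    }
    callback(null, {
      message: 'Hello from the Node wrapper',
      rubyOutput: output
    });
  });
};
```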
00:22:44.640 For this simple demo, the Ruby handler script just prints a couple of lines of output. If you run it, the result is a JSON object encapsulating both the message emitted by the Node.js handler and the collected output from the Ruby script.
00:23:14.410 While this is not production-ready, it serves as a proof of concept that you can integrate Ruby with the Serverless Framework inside AWS Lambda. Now, let’s discuss performance. Generally speaking, the performance is reasonably good.
00:23:51.420 This chart illustrates the API Gateway timing; the baseline runs between 30 and 40 milliseconds for the time from when a request hits API Gateway, through Lambda, and back out as a response. However, there can be spikes up to 650 or 700 milliseconds, which is what Lambda refers to as the 'cold start penalty.'
00:24:14.630 The cold start penalty occurs when AWS doesn't already have a Lambda function deployed and ready to accept requests. During a cold start, AWS provisions a container and loads your code onto the disk in that container before calling your handler.
00:24:46.590 Both scenarios—pushing new code for the first time and receiving a burst of concurrent requests—can cause cold starts. By default, AWS allows you to run up to 100 concurrent Lambda executions, and if you need more capacity, you can apply to AWS for a limit increase.
00:25:17.180 For this demonstration, I set up RunScope to time a vanilla Node-based hello world function against an mruby version. In my tests, the mruby version averaged around 70 to 75 milliseconds, generally five to six milliseconds slower than the Node version.
00:25:50.650 Examining the Lambda execution timings reveals some notable information: the mruby function was consistently about four milliseconds slower than a baseline Node function. This timing is purely for the execution of the handler function itself, separate from any API Gateway or network effects. Furthermore, it’s important to mention that mruby is a slimmed-down version of Ruby with limited features; if you try to use all the functionality Ruby affords, you'll be dealing with a larger executable and potentially longer load times during cold starts.
00:26:27.270 In summary, during a cold start, time is required to load your code onto the Lambda container, which adds to the total response time. To wrap up: AWS provides a robust suite of building blocks, including Lambda, API Gateway, and their database services. The Serverless Framework layers structure on top so you can manage these components without coding in the browser or manipulating the APIs directly. Then your unique application logic provides the magic.
00:27:06.300 Thank you!