Client/Server Architecture: Past, Present, & Future

Summarized using AI


Mike Groseclose • May 04, 2016 • Kansas City, MO

The video features Mike Groseclose discussing the evolution of client/server architecture, focusing on the transition from monolithic applications to microservices and the importance of separating client and server responsibilities.

Key points discussed include:
- Monolithic Architecture: Groseclose describes monolithic applications, where the presentation, business logic, and database work as a single unit. He notes that while they can be efficient for smaller teams, they can become complex and lead to difficulty in scaling and managing code as the team grows.
- Scalability: He explains the concepts of vertical and horizontal scaling, highlighting the challenges of managing database performance when using a monolithic structure.
- Client/Server Flow: The typical process of client-server interaction is explained, emphasizing the tight coupling in traditional architectures. The introduction of native mobile apps has encouraged a shift in perspective, promoting more distinct client-server roles.
- Separation of Concerns: Groseclose advocates for separating the front-end and back-end, which allows each team to focus on their strengths—creating user experiences versus building reliable server-side applications.
- JavaScript and Frameworks: The evolution of JavaScript, including the role of transpilation and various frameworks like Angular, Ember, and React, is discussed. He expresses concerns about JavaScript fatigue due to rapid changes and modularity in the ecosystem.
- Transition to Microservices: As applications scale, microservices become a discussion point. Groseclose outlines the potential benefits and pitfalls of adopting a microservices architecture, including complexity in interfaces and the importance of DevOps.
- Future Directions: Concluding thoughts focus on the shifting landscape where JavaScript has become a dominant force in development, with a call to leverage frameworks effectively alongside traditional approaches like Rails.

Ultimately, Groseclose encourages developers to innovate and adapt within this evolving ecosystem, predicting that the best practices learned from past architectures will guide future developments in web applications.

Client/Server Architecture: Past, Present, & Future
Mike Groseclose • May 04, 2016 • Kansas City, MO

The client/server architecture that powers much of the web is evolving. Full-stack, monolithic apps are becoming a thing of the past as new requirements have forced us to think differently about how we build apps. New client/server architectures create a clear separation of concerns between the server and the client.

As developers, we have the ability to create the new abstractions that will power the web. Understanding the past, present, and future of client/server architecture helps us become more active participants in the future ecosystem for building web applications.


RailsConf 2016

00:00:10.559 Hello everyone, my name is Mike Groseclose. Today, I'm going to talk to you about client/server architectures. Before we get started, a little bit about me: you can find me on most social media as MicroFusion. I'm one of the web architects at weedmaps.com, and there are a couple of other Weedmaps folks in the audience, too. Our main application is a large, high-traffic Ruby on Rails site, and we're currently in the process of migrating it to a microservices architecture. This transition will allow us to scale with industry demand. By the way, we're hiring, so if you're interested, come talk to me afterward.
00:00:43.800 Okay, let's get started. The other day on Twitter, I saw Aaron Patterson mention that he was worried about the audience getting tired of puns, and this concerned me because I believe Aaron's puns are a huge part of the Rails culture. In response, I decided to run a pun fundraiser on Twitter, raising community awareness around the importance of puns while also supporting Girls Who Code, because I have a 5-year-old daughter who currently thinks everything I do is boring. Surprisingly, the fundraiser was successful, and I received a lot of puns; in fact, days later, people are still sending them to me. However, I am now completely punned out, so there won't be any more puns in this talk, except for the ones you see on the screen.
00:01:27.200 A few days ago, I found an app that changed my life. It's an app that allows you to swap faces with other people. I tested it out on Aaron Patterson and DHH. Since I'm one of the last speakers, I thought, why not document my trip here through the medium of face swapping? For example, meet Chris, one of my good friends and office mates, who's sitting right there. This is us on the airplane traveling to RailsConf. And here we are losing money in Vegas during our layover.
00:02:05.360 Now, back to client/server stuff. Since we're at RailsConf, I can't discuss client/server without mentioning the majestic monolith. When I refer to the monolith, I'm talking about an application where the presentation layer, business logic, and database integration are all tied together to address a single problem. There are many pros and cons to consider when developing within a monolithic architecture, and many of you might have different experiences. So, take these rules as things to consider rather than absolute truths.
00:02:40.480 Let's start with team structure when working within the monolith. We can't talk about team structure without mentioning Conway's Law, which states that organizations produce systems that mirror their own communication structure. Personally, I've seen the monolith work well for small, tightly integrated teams, typically those under ten developers, where we all work in the same code base and share the same standups and backlog. This leads to discussing the advantages of developing within a monolith.
00:03:34.960 First, we benefit from having a single code base to work in, which makes it fast to navigate the application and easier to refactor functionality from one area to another. If you've ever worked in a modular application with multiple packages and repositories, you know how painful refactoring code between different services can be, not only in terms of testing but also deployment. In the monolith, the cohesive code structure provides a complete view of the system; a smaller application is easier to reason about, which implies a simpler architecture. It also helps maintain consistency across the code base, making it easier to enforce coding standards, linting, and code complexity analysis. However, as your team grows and more developers contribute to the same code base, you'll inevitably encounter an increasing number of merge conflicts.
00:04:34.960 Moreover, as more people work on the code, complexity can increase, especially when developers prioritize meeting deadlines over adhering to specifications or performing necessary refactoring. This leads to code that may become harder to manage over time. As a side note, I once wondered what it would be like to swap faces with a statue, or with me sporting long hair.
00:05:24.799 Now, let's explore system scalability under a monolithic architecture. We can start by vertically scaling the system, which means increasing the capacity of our application server and database by adding more hardware. Eventually, though, you hit the limits of vertical scaling, which leads us to horizontally scale our application servers. This involves creating clones of our monolith, or as Martin Fowler has described it, cookie-cutting our application onto new machines.
00:06:06.679 Yet, at some point, we encounter limitations with our database. I've seen the largest RDS instance on AWS hit 100% CPU, and that is not a good situation. Unlike application servers, we cannot scale out the database in the same way due to the need for a single source of truth. However, techniques such as read replicas or sharding can help. With read replicas, you write to a primary database that replicates itself to a secondary database, allowing reads from the secondary database to lighten the load on the write server.
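As a rough illustration of that read/write split (not from the talk; a generic Node.js sketch using the node-postgres Pool, with placeholder connection strings and table names), reads can be routed to a replica while writes still go to the single primary:

```javascript
// Hypothetical read/write split using node-postgres (pg).
// Connection strings and the `users` table are placeholders.
const { Pool } = require('pg');

const primary = new Pool({ connectionString: process.env.PRIMARY_DB_URL });
const replica = new Pool({ connectionString: process.env.REPLICA_DB_URL });

// Writes always go to the primary, which replicates to the secondary.
function createUser(name) {
  return primary.query('INSERT INTO users (name) VALUES ($1) RETURNING id', [name]);
}

// Reads go to the replica, taking load off the write server.
function findUser(id) {
  return replica.query('SELECT * FROM users WHERE id = $1', [id]);
}
```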
00:06:52.400 Next, let's examine the client flow in this monolithic architecture. Viewing a page typically involves a flow like this: you point your browser to a URL, which becomes a request from the client to the server for that page. The server returns HTML, JavaScript, and CSS, creating the content of that URL. The browser then renders the page, and any new actions initiate a fresh URL request from the client, restarting this cycle. It's crucial to note the tight coupling between the client and the server in this flow, as I will revisit this point.
00:07:32.960 Mobile native apps have pushed web apps into a new direction. Since native apps are installed rather than served at page load time, they compel us to view the client and the server as two distinct entities. We need to consider that the client can take various forms: Android, iOS, desktop apps, etc. This new requirement enforces multiple presentation tiers and necessitates creating various methods of communicating with and manipulating the state of our monolith. Up until now, integrating with external resources has been an afterthought. As a result, we often end up merely bolting an API onto the side of our systems.
00:08:16.760 Now we have this API attached to our system that begins to grow, resulting in some unsightly appendages. As our product requirements expand, we find ourselves supporting both our APIs and our legacy views. Furthermore, to keep pace with current expectations, web apps have evolved to give rise to single-page applications (SPAs). These SPAs strive to replicate the experience of native apps by providing an interface that allows users to manipulate the page without navigating away from it. This admittedly feels quite messy, but fortunately, a solution exists: separating our client concerns from our backend server responsibilities.
00:09:03.520 By doing this, we clearly delineate responsibilities. One side can focus on a static asset server serving HTML, CSS, and JavaScript, while the other side functions as an API server serving raw JSON data to the client. The importance of this separation goes beyond the technical split: by separating our technical concerns and refining the interface between our client and server, we've established a division that aligns with more human boundaries, with data and business logic on one side and user experience on the other. This distinction is crucial because front-end and back-end development require different mindsets, and those who switch back and forth between the two experience significant context switching.
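A minimal sketch of the client half of that split, assuming a hypothetical JSON endpoint: the static server only delivers the HTML, CSS, and JavaScript, and all data arrives from the API server as raw JSON for the client to render.

```javascript
// Client-side sketch: this script was served as a static asset;
// the data comes from a separate API server. The endpoint is a placeholder.
const API_BASE = 'https://api.example.com';

async function loadListings() {
  const response = await fetch(`${API_BASE}/listings`);
  if (!response.ok) {
    throw new Error(`API request failed: ${response.status}`);
  }
  const listings = await response.json();

  // The client owns presentation: turn raw JSON into DOM.
  const list = document.getElementById('listings');
  list.innerHTML = listings.map((l) => `<li>${l.name}</li>`).join('');
}

loadListings();
```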
00:09:56.799 This new separation allows us to form two separate teams, each with distinct skill sets, focusing on what they do best. The front-end team can concentrate on creating exceptional user experiences, while the back-end team aims for reliability, scalability, and API best practices. At this point, you might be thinking about how much you miss the talks from the chef guys, who happen to be presenting at the same time. They are indeed noteworthy.
00:11:06.399 The client has effectively departed from the comfort of the monolith. What does this new world look like, now and into the future? Of course, I can't discuss the client without touching on JavaScript. I have a fondness for JavaScript, but I'm also not blind to its flaws. It has its strengths: first-class functions and asynchronous capabilities by nature. However, it also possesses numerous weaknesses that are well-known in the Ruby community.
00:13:27.679 How then do we address these weaknesses? Let's discuss transpiling. Most of us are familiar with CoffeeScript and its similarities with Ruby, including its clean syntax and its ability to simplify the language while helping us avoid many pitfalls of JavaScript. Fortunately, JavaScript itself has evolved with the latest standards: ES6 offers arrow functions, default assignments, template literals, destructuring assignment, and tail call optimization, allowing recursion without blowing the stack. ES7 adds async/await, and with Babel, we can transpile our code to ES5 so we can use the latest language features in the browser without waiting for full browser support to catch up.
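A small sketch showing several of those features together (the endpoint in the async example is a placeholder), all of which Babel can transpile down to ES5:

```javascript
// A few of the ES6/ES7 features mentioned above, in one place.

// Arrow function with a default parameter and a template literal.
const greet = (name = 'RailsConf') => `Hello, ${name}!`;

// Destructuring assignment.
const talk = { title: 'Client/Server Architecture', year: 2016 };
const { title, year } = talk;

// Arrow functions keep the surrounding `this` and read cleanly in callbacks.
const lengths = [title, greet(), `${year}`].map((s) => s.length);

// ES7-era async/await, usable today by transpiling with Babel.
async function fetchTalk(id) {
  const response = await fetch(`/api/talks/${id}`); // placeholder endpoint
  return response.json();
}

console.log(greet(), lengths);
```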
00:14:51.520 Beyond Babel, there are several other transpiled languages to consider, such as TypeScript, ClojureScript, and Dart. Each of these allows us to write more elegant code that functions in today's browsers. To summarize, the key takeaway here is that transpiling is advantageous, and I believe it will remain essential because browsers will never fully keep up with the demands for new and improved language features.
00:15:48.600 Let's shift gears and take a look at JavaScript frameworks, as we have numerous choices to consider. First up is Angular, an opinionated client-side monolith with strengths in dependency injection. Although it's usable without TypeScript, Angular 2 recommends it, and it moves away from $scope in favor of component-based structures. If you've previously dabbled in Angular 1, this shift may be particularly welcome.
00:16:42.640 Next, we have Ember, another opinionated framework packed with ES6 features via the Babel transpiler. Ember has a highly powerful router that’s so effective that React borrowed it for its ecosystem. React presents itself as an arrow without a bow, meaning it’s not very opinionated and emphasizes simplicity and declarative coding. Built on reusable components, React features a unidirectional data flow, but it’s focused solely on the view layer, necessitating pairing with tools like Redux for state management.
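A hedged, minimal sketch of that pairing: React renders the view from props, Redux holds the state, and actions flow one way back to the store. The wiring here is done by hand for brevity; in practice react-redux's connect() usually handles the subscription.

```javascript
// Minimal React + Redux sketch: store -> props -> render,
// with actions dispatched back to the store (unidirectional flow).
import React from 'react';
import ReactDOM from 'react-dom';
import { createStore } from 'redux';

// Reducer: the single place where state changes happen.
function counter(state = { count: 0 }, action) {
  switch (action.type) {
    case 'INCREMENT':
      return { count: state.count + 1 };
    default:
      return state;
  }
}

const store = createStore(counter);

// A stateless component: props in, elements out.
function Counter({ count, onIncrement }) {
  return <button onClick={onIncrement}>Clicked {count} times</button>;
}

// Re-render whenever the store changes.
function render() {
  const { count } = store.getState();
  ReactDOM.render(
    <Counter count={count} onIncrement={() => store.dispatch({ type: 'INCREMENT' })} />,
    document.getElementById('root')
  );
}

store.subscribe(render);
render();
```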
00:17:28.440 Beyond React, many other frameworks and libraries exist in the ecosystem. There's Lodash, an essential utility library; jQuery, still found in nearly everyone's toolkit; and a range of functional reactive libraries like Cycle.js, Bacon.js, and ReactiveX. Lastly, I want to genuinely promote Lore, which we just open-sourced this week: a convention-over-configuration framework for React/Redux applications. However, if you were to ask for advice, I'd suggest trying Ember.
00:18:31.840 Transitioning to deploying our client code: in this new client-side world, we want to stop worrying about servers; they should no longer be a concern for front-end developers. One way to keep your front end operational is to deploy it to an AWS S3 bucket, and you can complement this with tools like Gulp to streamline the deployment process. One drawback, though, is that some hacks are required to avoid using hashbang URLs. Alternatively, you could leverage GitHub Pages by pushing your repository to the gh-pages branch, resulting in automatic hosting. This method is easier than setting up S3 but presents its own challenges, such as issues with hashbang URLs and a lack of SSL support for custom domains.
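One possible shape for that S3 deployment step, sketched as a gulp task using the gulp-awspublish plugin; the bucket name, region, and paths are placeholders, and the exact plugin options may differ in your setup.

```javascript
// Hypothetical gulp task pushing a built client to an S3 bucket.
const gulp = require('gulp');
const awspublish = require('gulp-awspublish');

gulp.task('deploy', () => {
  const publisher = awspublish.create({
    region: 'us-east-1',
    params: { Bucket: 'my-static-client' }, // placeholder bucket
  });

  // Everything in dist/ (HTML, CSS, JS) becomes a static asset on S3.
  return gulp
    .src('./dist/**/*')
    .pipe(publisher.publish())
    .pipe(awspublish.reporter()); // logs what was uploaded or skipped
});
```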
00:19:18.000 I personally recommend opting for something like surge.sh, as it enables you to push your client-side code via the command line while addressing the SSL and hashbang issues. Now, let’s discuss the convergence of native and web application development, a subject that can provoke some debate, but I feel it's necessary to address.
00:20:18.640 Tools like Cordova, Ionic 2, and React Native enable developers to create applications for native devices using traditional web tools such as HTML, CSS, and JavaScript. This raises the question: why do people continue writing native apps? Historically, many have been hesitant to adopt these tools due to performance concerns, as JavaScript doesn't emulate native performance well. However, this perception is changing. For instance, we used React Native to build PocketComp, and it functioned remarkably well.
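For a sense of what that looks like, here is a minimal React Native component (names are illustrative): the same JavaScript and component model as on the web, but rendering to native views rather than the DOM.

```javascript
// A tiny React Native screen rendered as native views on iOS and Android.
import React from 'react';
import { Text, View, StyleSheet } from 'react-native';

const styles = StyleSheet.create({
  container: { flex: 1, alignItems: 'center', justifyContent: 'center' },
  title: { fontSize: 20 },
});

// Props in, native view hierarchy out: the web component model, on devices.
export default function HelloScreen() {
  return (
    <View style={styles.container}>
      <Text style={styles.title}>Hello from the web stack</Text>
    </View>
  );
}
```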
00:21:09.480 So why should we consolidate native app development with web technologies? Firstly, it eliminates the specialization often required in native app development—no one wants to code in Objective-C. This integration also fosters consistency across platforms. Additionally, such convergence allows for the sharing of components and code across the codebase, and as I mentioned, performance issues are dwindling.
00:21:54.480 Before we move away from the client side, I want to quickly mention something you often hear within the JavaScript community: the concept of JavaScript fatigue. I believe a significant part of this fatigue comes from the excessive modularity present within the ecosystem, combined with rapid changes in the framework landscape. While modularity is beneficial, I also see the accompanying churn in interfaces leading to frustration in the community.
00:22:15.760 So how do we mitigate these frustrations? A sensible approach is to be aware that working with cutting-edge tools comes with the likelihood of changes in interfaces. This is especially true right now within the React community. Additionally, it’s critical to lock your dependency versions; I've encountered numerous instances where a minor patch update wreaked havoc on backward compatibility.
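Concretely, locking versions means pinning exact releases in package.json rather than caret or tilde ranges that float to new minor or patch releases; pairing this with `npm shrinkwrap` or a lockfile keeps installs reproducible. The package names and versions below are purely illustrative.

```json
{
  "dependencies": {
    "some-ui-framework": "15.0.2",
    "some-state-library": "3.5.2"
  }
}
```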
00:22:45.760 To summarize, the client side revolves around creating human experiences, and we must allow technology to facilitate rather than hinder the process of crafting those experiences. As we navigate the growing pains of this rapidly evolving ecosystem, choices become plentiful; however, a lack of strong opinions can leave things feeling fragmented. It's certainly time for more face-swap pictures, including ones from Justin, Alex, and JCK.
00:23:06.000 Incidentally, the night before last, we went to Joe's Kansas City Barbecue before karaoke. It was incredible barbecue!
00:23:08.720 Now let’s turn our attention to the server side, beginning with monolithic APIs. Monolithic APIs hold many advantages akin to those we discussed regarding monoliths earlier, but the main difference is that we can eliminate certain downsides by disentangling the client from these systems.
00:23:15.440 We can rather easily create a monolithic API using Rails 5's API mode, which retains many beneficial aspects, such as convention over configuration, while offering a lighter-weight alternative to a full Rails application.
00:23:18.440 Now, when we discuss scaling a monolithic API, the process parallels scaling a conventional monolith, understanding that the pros and cons remain largely unchanged.
00:23:21.440 As your team expands, however, a conversation about microservices becomes essential. Microservices bring similar challenges as modularity within the JavaScript community, necessitating stability in your interfaces and clear definitions before you begin to utilize them.
00:24:07.440 I personally believe that scaling or utilizing microservices fundamentally serves to optimize your system while helping your organization scale, but we all know to be cautious about premature optimization.
00:24:57.440 So why should we be wary of adopting microservices too quickly? For starters, interfaces become much more complex and harder to modify once established, and since the monolith allows for shared services and tools, you'll find that added effort is required in microservices to facilitate code sharing.
00:25:44.440 Latency between systems can become a more significant issue than before; instead of calling methods from your own codebase, you might be fetching data across the wire. Moreover, companies often underestimate the role of DevOps in these scenarios. If you’re already feeling like a few systems aren’t receiving enough attention from DevOps, consider how challenging it could be when scaling up to many interconnected systems.
00:26:35.440 As for the DevOps side of things, there’s the complexity of monitoring system health and predicting potential failures, as well as the ability to recover once problems arise. Lastly, developers often have a tendency to build a system and then move on to the next project. In a microservices architecture, it’s crucial to maintain consistency across all your systems, abiding by the philosophy that you are only as strong as your weakest link.
00:27:18.640 All of this requires considerable effort, especially since there’s often a pattern of building and forgetting when it comes to microservices. Yet, inevitably, your applications will reach a point where microservices will be necessary for scaling effectively.
00:27:51.640 Let’s take a moment to look high-level at how one might structure microservices within an application. You can have a series of domain-driven models grouped by bounded contexts, creating smaller applications or services with well-defined interfaces between them. These microservices should communicate with each other and the outside world using lightweight protocols such as REST, ensuring that you remain connected to the web rather than functioning as isolated systems.
00:28:37.440 Each microservice should have its own database or databases to reinforce autonomy. Thus, you'll end up with fine-grained, atomic services, each with a well-normalized database. The issue then arises: we don't want the client to be burdened with excessive querying just to retrieve the data it needs. For example, to gather all circles belonging to a triangle of a certain color, one would first have to query the triangle service to obtain the triangle's ID, and then query the circle service for circles with that ID.
00:29:26.440 Instead, we might consider using a denormalized data store, such as ElasticSearch or Neo4j, to facilitate more straightforward data retrieval. Furthermore, we can introduce a layer of composition services on top, enabling us to shape data in a manner that's more advantageous for consumption. This approach echoes Conway’s law: each service can possess its own team and data storage solution, enhancing flexibility in technology choices.
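A hedged sketch of such a composition service in Node with Express (service URLs, routes, and field names are hypothetical): the client makes a single request, and the composition layer performs the two-hop query against the fine-grained services behind it.

```javascript
// Hypothetical composition service sitting in front of the triangle and
// circle services. All URLs, routes, and fields are placeholders.
const express = require('express');
const fetch = require('node-fetch');

const app = express();
const TRIANGLES_URL = 'http://triangles.internal';
const CIRCLES_URL = 'http://circles.internal';

app.get('/triangles/:color/circles', async (req, res) => {
  try {
    // Hop 1: ask the triangle service for the triangle of that color.
    const triangleRes = await fetch(`${TRIANGLES_URL}/triangles?color=${req.params.color}`);
    const [triangle] = await triangleRes.json();

    // Hop 2: ask the circle service for the circles belonging to it.
    const circlesRes = await fetch(`${CIRCLES_URL}/circles?triangle_id=${triangle.id}`);
    const circles = await circlesRes.json();

    // The client receives one composed response instead of making two calls.
    res.json({ triangle, circles });
  } catch (err) {
    res.status(502).json({ error: 'Upstream service request failed' });
  }
});

app.listen(3000);
```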
00:30:03.440 Before we close, I want to talk about the future. I initially had slides discussing serverless architecture and backend-as-a-service, the kind I've been building at ster.com.
00:30:19.799 However, after attending an incredible talk yesterday, I reconsidered my presentation. If you’re curious about the talk that inspired this change, it was presented by an individual whose insights were so impactful that I felt compelled to modify my own.
00:31:02.640 In summary, the talk essentially stated that Ruby is no longer seen as the most desirable language, and instead, JavaScript has taken that title through Node.js. We're witnessing a decline in Ruby discussions, job postings, and consequently the number of new projects in Ruby, leading to fewer people carrying the Ruby torch.
00:31:51.440 Though I don’t run a consulting agency, I do work extensively with Node.js. So, why did I rethink my talk to focus on this? It's because I love the Ruby community and believe we have a narrative worth promoting. I strongly feel we can reshape the conversation: we need to start splitting our client and server concerns apart. For instance, you can use that shiny new JavaScript framework alongside a Rails API, which we can rapidly set up for you. If you’re concerned about scaling, know that Rails can support microservices just like Node.js, and while it's true that other languages may be faster, in the bulk of scenarios, response time delays often originate from network latency between microservices or between the server and database.
00:32:55.680 As such, it's essential to utilize Rails beyond the monolith, initiating discussions about how we can leverage Rails for scaling architectures effectively. In conclusion, we stand on the brink of creating whole new possibilities, and every day I look ahead to the challenges awaiting us. I hope you share that excitement and, in the words of DHH, let's go out and make a significant impact in the universe.
00:33:34.680 Thank you!