ArrrrCamp 2015
Decouple All The Things: Asynchronous Messaging Keeps It Simple



Kerstin Puschke • October 01, 2015 • Ghent, Belgium

In the talk titled 'Decouple All The Things: Asynchronous Messaging Keeps It Simple', Kerstin Puschke explores the benefits of asynchronous messaging in enhancing system stability and maintainability within distributed architectures. The focus is on how synchronous communication leads to tight coupling between components, which can complicate information flow, particularly when client applications are temporarily down. Asynchronous messaging is presented as a superior alternative that ensures information is relayed reliably. Key points discussed include:

- Disadvantages of Synchronous Communication: Synchronous methods, such as REST API calls, can time out if client applications are down or busy, causing missed updates and tight dependencies between components.

- Introduction to Advanced Message Queuing Protocol (AMQP): Puschke introduces AMQP as a middleware solution that allows for flexible and loosely coupled communication between different application components, enabling a publish-subscribe pattern and task queues.

- Message Flow Explanation: Messages are produced by a master data application and routed through a message broker, allowing consumers to receive updates without needing knowledge of the producers. This architecture facilitates the addition of new applications without modifying existing ones.

- Handling Different Client Needs: Not all client applications require the same information; thus, different messages can be tailored for specific needs, helping clients subscribe only to relevant updates.

- Commands and Task Queues: Asynchronous messaging can also improve user experience by allowing back-end processes to be handled without blocking the user interface. For instance, image processing can happen in parallel as user uploads occur, enhancing responsiveness.

- Example Use Cases: Puschke shares her experiences with using asynchronous messaging for tracking events, migrating user data, and testing new implementations without user disruption, demonstrating the versatility and robustness of this approach.

- Conclusions: The talk emphasizes the need for decoupling applications in a distributed system through asynchronous messaging, which ultimately leads to reduced complexity and improved system performance and reliability.
Overall, the main takeaway is that incorporating asynchronous messaging into system designs fosters a more resilient, efficient, and maintainable architecture.


If a customer changes their address, it's often not enough to update their master data record. E.g. the component processing customer orders might have to learn about the update in order to ship to the correct address. In a distributed architecture, you have to spread information about a master data update to several client apps. You can do this via REST, but if a client app is temporarily down or too busy to accept requests, it's going to miss the information. Adding a new client app requires changes to the master app. Asynchronous messaging helps to avoid such a tight coupling of components. And it offers much more than a simple action trigger: parallelizing computing-heavy tasks, load testing, or migrating existing components to new services are some of the possibilities explored in this talk. You're going to learn how to get started with asynchronous messaging from within Ruby, and how it helps you to keep your codebase clean and your overall system stable as well as maintainable.



00:00:08.630 Communication is complicated. If you're inviting friends and family to a party, you might ask them to reply to the invitation so that you can learn who's going to attend and plan accordingly. Unfortunately, it requires quite a bit of effort to process all those replies, including one from an anonymous person telling you that they are bringing an unknown number of kids to the party, or one from your friend Ayda, who is a bit indecisive, accepting and sending her regrets at the same time. So, you might be better off not asking for a reply at all, simply sending out the invitation and dealing with the people who show up. Machine-to-machine communication isn't that much different. If your HTTP server tells you, 'Hey, I'm a teapot,' it adheres to the RFC, but the response is likely pretty useless unless you're trying to use your server for making coffee. Again, it might be better not to ask for a reply or response and simply proceed with other tasks.
00:00:48.360 For the rest of the talk, I'm going to show you how to improve the maintainability and stability of your overall system by choosing the right means of communication for each job, which often means adding asynchronous communication to the mix. I'll highlight some disadvantages of synchronous communication, introduce you to the Advanced Message Queuing Protocol, and discuss its two main use cases: the publish-subscribe pattern and the task queue. If time allows, I’ll provide some example code and a quick live demo, and I will finally wrap things up.
00:01:21.630 Let's look at synchronous communication. A very common pattern in machine-to-machine communication is what's called a master data update. Imagine a setup where you have a master data application responsible for managing information about your business partners, surrounded by several independent components handling their orders and support contracts. These client applications also need to work with data about your business partners. If a business partner goes out of business, this crucial information results in an update to their master data record. However, this is not sufficient because you also need to cancel their pending orders and put their support contracts on hold. Therefore, the master data application must effectively spread this update to ensure the orders and support contracts are informed of these changes.
00:02:00.780 As a web developer, when I think about machine-to-machine communication, REST APIs immediately come to mind. While it's possible to build a REST API to disseminate information about the master data update, there are significant downsides. For instance, if your orders application is temporarily down, the master data app's REST request notifying it about the business partner going out of business will time out. When the orders application comes back online, it will have missed that critical information. Adding a new application, like one that handles invoices, complicates matters further: the new app will never learn of the business partner's status unless you modify the master data application's code to make an additional REST request to the new invoices application. This tight coupling occurs because HTTP routing is one-to-one, while what you really want here is one-to-many routing; emulating it over HTTP means adding a redundant request for every new client.
00:03:45.840 Master data updates are not limited to the world of e-commerce. In my work for a social networking site like 'Let's Sing' (letssing.com), we face similar challenges. Our core application manages the primary user data and is connected to various other applications powering different sections of our platform, such as events and jobs. When a user requests information about a specific event, like a conference, the events app can provide immediate data like event location and description. However, to render a proper attendee list, it must access core user data, including names, job titles, and companies. If a user updates their core data, such as a job title, it requires the events app to know about these changes to invalidate its cache. Using REST presents challenges in this context, which is why we employ asynchronous messaging for these operations.
00:04:52.940 Specifically, we use an asynchronous messaging system based on the Advanced Message Queuing Protocol (AMQP). A quick search for AMQP will lead you to amqp.org, a resource that delves into creating a capable, multi-vendor communications ecosystem that enhances commerce and innovation, ultimately transforming business operations. For a more user-friendly description, Wikipedia defines it as an open standard application layer protocol for message-oriented middleware. This middleware functions as a message broker, essentially an intermediary connecting message producers and consumers. AMQP defines a wire-level protocol and data format, allowing clients that comply with this format to interoperate, regardless of their implementation.
00:06:06.200 This compatibility enables you to bring together components founded on completely different technology stacks. Client implementations for sending and receiving messages are available in virtually all programming languages, and there are numerous broker implementations to select from. We use RabbitMQ, a popular open-source option, but there are certainly other options, including cloud services offering AMQP infrastructure.
00:06:43.160 Let’s examine the message flow. There is a producer or publisher that sends a message, connects to the message broker, and drops this message into an exchange. The exchange is responsible for routing messages—copying them into one or more message queues—where they can then be consumed by the consumer application. Depending on your setup, this can be arranged as a push or pull system. In our example, the producer is the master data application that has received an update, while the consumer is the client app wanting to learn about these updates. The producer drops a message into the exchange without needing to know anything about the consumer receiving it. If the consumer has issues invalidating its cache, the producer cannot assist and mostly does not concern itself with that as it sends the message, moves on, and continues with other tasks.
00:08:06.540 In practice, you typically have multiple producers and consumers collaborating within this system. Each exchange can bind to various message queues, and a single message queue can be accessed by multiple consumer instances. It's good to note that the primary components of a message consist of its payload and routing key. The routing key is a string used for directing the message to the appropriate message queues, while the payload carries your application data. In our case, we put text in the payload and format it as JSON, given our work with web applications.
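As a broker-free illustration of this structure, here is what building and decoding such a message might look like in Ruby (the helper name, routing key, and payload keys are invented for the example):

```ruby
require "json"

# A message, as described above, pairs a routing key (a string the
# exchange routes on) with a payload carrying the application data,
# formatted here as JSON. All names are illustrative only.
def build_message(routing_key, data)
  { routing_key: routing_key, payload: JSON.generate(data) }
end

message = build_message("masterdata.profile.updated", { "user_id" => 42 })

# A consumer would decode the payload on arrival:
decoded = JSON.parse(message[:payload])
```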
00:09:33.060 Messages can also carry structured application-specific data, and they can pick up headers and annotations along the way; the payload itself, however, is immutable. Message properties let you, for example, assign unique identifiers to messages or run validity checks on incoming ones. If a producer connects to the message broker and finds the exchange it intends to use isn't available, it can simply declare one. Initially, though, no queue is bound to this exchange, so it has nowhere to route messages and drops all incoming ones. If the producer also declares a message queue and binds it to the exchange, it can begin sending messages, which will wait in the queue until a consumer connects to the broker and processes them. Alternatively, the consumer can declare the queue and bind it to the exchange, so that it is ready to accept messages from the get-go.
00:11:25.200 It’s important that your application code manages the setup and configuration of exchanges, queues, and their bindings. Your application has the responsibility to clean up or remove unused queues or exchanges. If a queue is still receiving messages while its consumer has disappeared, these messages will pile up to the extent that they can cause operational issues for the broker. If a consumer temporarily goes down, the messages will remain in the queue until the consumer comes back online, allowing it to consume all that information. This enables consumers to learn about master data updates—even if there's a temporary failure on their end.
00:12:46.500 Adding a new consumer is straightforward within this system. When a new consumer connects to the broker, it creates its own queue and binds it to the exchange. Consequently, the exchange will copy messages about the master data updates into both queues, ensuring that all consumers receive the same information without needing to modify the producer code. This is unlike HTTP, which utilizes one-to-one relationships, while AMQP presents the opportunity for one-to-many routing. Consumers can effectively join the system at their leisure, creating their queues to determine which messages they wish to receive, leading to a well-functioning publish-subscribe pattern.
00:14:02.930 Let's revisit the master data example. The master data application must inform various client apps about the updates it receives. Each of these clients may have different information needs. For instance, one client may only be concerned with master data records that are removed, such as when a user deletes their account, requiring that client app to be informed accordingly. However, if that same user updates their data, this client does not need to know. In contrast, if a client caches the master data records, it similarly needs to learn about updates to invalidate its cache. Some clients may even maintain a full mirror of the master data and need the updated data to refresh that mirror.
00:15:51.040 To accommodate these differing requirements, we employ various AMQP messages. For example, we send a message with a routing key ending in 'user.deleted' when a user deletes their account. The payload for this message is simply the user ID, as there’s not much to convey about the user who closed their account. All our applications subscribe to this message, so when it arrives, they clean up any data they have related to this user. We also have a different message with a routing key like 'profile.updated', sent when a user updates information, such as a job title. The consumers of this message can check their cached data and invalidate it as necessary.
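A minimal sketch of this consumer-side handling, assuming routing keys ending in 'user.deleted' and 'profile.updated' as described (the cache structure and handler names are invented for illustration):

```ruby
require "json"

# In-memory stand-in for a consumer's cache of user data.
CACHE = { 7 => { "job_title" => "Developer" } }

# Dispatch on the routing key: deletions trigger cleanup, profile
# updates invalidate the cached record.
def handle(routing_key, payload)
  user_id = JSON.parse(payload)["user_id"]
  case routing_key
  when /user\.deleted\z/
    CACHE.delete(user_id)   # payload is just the user id
    :cleaned_up
  when /profile\.updated\z/
    CACHE.delete(user_id)   # drop the stale cache entry
    :invalidated
  else
    :ignored                # not a message this consumer cares about
  end
end
```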
00:17:19.240 The payload of the updated message includes the user ID and the fields that have been changed. If a consumer application mirrors the master data records for users, however, it may want to verify that it has the most up-to-date data, which is why we do not include new updated data in the message payload. This design acknowledges the fact that we're dealing with a distributed system where the order of messages is not guaranteed. In situations where a user updates the same field multiple times in quick succession, the last message received may not correspond with the latest update, leading to an inconsistency in our mirror. Therefore, we adopt a commutative approach by only including the identities of the fields that have changed while leaving it to the consumer applications to fetch the most recent data.
00:19:11.990 To complement our messaging setup, we also provide a REST API that consistently delivers the latest data, ensuring that if a consumer learns an update has occurred but requires the new data, it can follow up with a REST request to fetch that information. This method has the added benefit of creating smaller payloads, which is more manageable for the broker, especially when dealing with a large volume of messages. The amount of information we include in messages is just enough for consumers to assess whether they need to make a follow-up REST call, typically including only the field names.
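A broker-free sketch of this design, with a lambda standing in for the follow-up REST call (all names and data are invented for illustration):

```ruby
require "json"

# A consumer mirroring master data records in memory.
MIRROR = { 3 => { "job_title" => "Junior Dev", "city" => "Ghent" } }

# The message names only the changed fields; the consumer fetches the
# current values itself, so stale or reordered messages still converge
# on the latest data.
def apply_update(payload, fetch_latest)
  data = JSON.parse(payload)
  record = MIRROR[data["user_id"]] or return
  data["changed_fields"].each do |field|
    record[field] = fetch_latest.call(data["user_id"], field)
  end
end

# Stand-in for the REST API that always serves the latest data:
fake_rest = ->(_user_id, field) { { "job_title" => "Senior Dev" }[field] }
apply_update(JSON.generate("user_id" => 3, "changed_fields" => ["job_title"]), fake_rest)
```

Because the handler re-fetches instead of trusting values carried in the message, applying the same message twice, or out of order, is harmless.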
00:20:57.210 In a distributed system, you also cannot rely on a guarantee of exactly-once message delivery. You may encounter duplicate messages, especially if a producer is recovering from a connection failure: the producer might not be able to determine whether a previously sent message was successfully received by the broker. To ensure at-least-once delivery, a well-behaved producer will resend the message, which can result in the consumer receiving duplicates. It's therefore advisable to make message handling idempotent to prevent any issues from this behavior.
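One common way to get idempotent handling, sketched here with invented names, is to remember the ids of messages already processed (AMQP message properties include a message id the producer can set) and skip repeats:

```ruby
require "set"

# Consumer that de-duplicates by message id before doing any work.
class DedupingConsumer
  def initialize
    @seen = Set.new
    @processed = []
  end

  attr_reader :processed

  def handle(message_id, payload)
    # Set#add? returns nil if the id was already present.
    return :duplicate unless @seen.add?(message_id)
    @processed << payload
    :processed
  end
end

consumer = DedupingConsumer.new
consumer.handle("msg-1", "partner 42 deleted")
consumer.handle("msg-1", "partner 42 deleted")  # resent after a connection failure
```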
00:22:52.430 Earlier, we focused on consumers and how they connect to the system. Now, let’s consider producers, who can also join the system by connecting to the broker and sending messages using a consistent routing key. In this scenario, messages from multiple producers could populate the same queue, allowing a single consumer to process them. This approach is useful when tracking a widespread event. For example, after integrating an expensive third-party service, we wanted to monitor our usage, which could have required us to implement tracking in every application, thus complicating our data analysis. Instead, we made our applications message producers. Whenever one of our apps interacts with the third-party service, it drops a message, which goes to a central queue consumed by a simple tracker application that increments a counter, providing a centralized tracking mechanism.
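The tracking setup can be sketched without a broker; Ruby's thread-safe Queue stands in for the central message queue, and the app and service names are invented:

```ruby
# Several producer applications each drop a message per third-party call:
tracking_queue = Queue.new
3.times { |i| tracking_queue << { app: "app-#{i}", service: "geocoder" } }

# The single tracker consumer drains the queue and increments a counter
# per service, giving one central usage count.
counter = Hash.new(0)
counter[tracking_queue.pop[:service]] += 1 until tracking_queue.empty?
```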
00:24:31.750 I’ve discussed multiple producers and consumers that can connect to the system. In the master data update example, there was one producer, the master data application sending messages to various consumers that needed the updates. In the tracking example, we have multiple producers generating events to monitor. Technically, you can send messages from multiple producers to multiple consumers in AMQP, which handles this very well. However, you should approach it carefully. Although publishers and consumers don't need a thorough understanding of each other, they must agree on message payload content, which often takes the form of JSON, just like a REST API response. Eventually, breaking changes to this payload will arise, and coordinating these changes across multiple producers and consumers can be a challenging experience.
00:26:08.890 Moreover, your business logic can often be better represented by having several different messages with distinct routing keys, some dealing with one-to-many routing, while others utilize one-to-one. The messages we've discussed so far primarily serve as notifications, indicating that something has occurred. In contrast, command messages invoke an action; they instruct the consumer to perform a task on the producer's behalf. For example, when a user uploads a new profile image, we must process that image, generating thumbnails for various sizes and uploading them to our servers—a task that requires considerable time and should not keep users waiting. Therefore, our user-facing front-end sends an AMQP message stating that an image is to be processed, returning control to the user immediately.
00:27:42.770 In this structure, the processor, which manages image processing, performs the heavy lifting without obstructing user activities, leading to better response times for the front-end. Users can still upload images, and the messages will remain in the queue until the image processing service comes back online if it experiences downtime. Furthermore, this setup increases reliability compared to a basic background job since you can leverage persistence features that message brokers offer.
00:29:50.840 For even greater decoupling, you might establish a dedicated consumer just for image processing, allowing the application to manage concurrent image uploads. In the event of a sudden surge in uploads, while messages will remain in the queue longer, it will only delay the processing of these images. The rest of the platform continues to function smoothly, unaffected. If image processing becomes a bottleneck, you can add additional consumer instances to process images in parallel efficiently. The consumer code remains straightforward, as it only needs to handle the processing of one message at a time.
00:30:52.320 When discussing message distribution, it's noteworthy that load balancing of messages does not occur across queues: each queue bound to an exchange receives its own copy of a message. Within a single queue, however, messages are distributed across the consumer instances, round-robin by default or according to other strategies. This approach proves fruitful beyond time-consuming tasks; it can also be beneficial for frequently executed tasks. For instance, if you need to migrate millions of users, instead of running a one-by-one migration script that might take a long time, you can create a quick script that simply sends an AMQP message for each user to be migrated.
00:31:50.310 This migration trigger script completes quickly, meaning you won’t spend much time managing it. The messages then route to a queue where they are consumed by one or more instances performing the actual migration, facilitating seamless transitions when upgrading implementations without user-facing downtime. For instance, we recently switched the format of our profile images from rectangular to square, necessitating image creation in the new format for all users. We built a migration trigger that sent AMQP messages for each user while the old system continued processing user uploads without interruption. Therefore, users could still upload images to the old system, and whenever they did, we ensured a message was sent to the migration queue.
00:34:05.520 This approach ensured that new images in the new format were created from each user's most recent upload, even while users still had access to the old system. The message handling needed to be idempotent so that users could appear in the migration queue multiple times without complication. Additionally, when replacing a significant component of your system with a new implementation, comprehensive testing is essential. In the past, when replacing our user profile front-end, we leveraged AMQP to test the system. Since the underlying data was years old, chronicling every edge case in a test plan was unfeasible.
00:36:05.360 Instead, we modified the old implementation, so every time a request was made, it sent an AMQP message regarding that action. Simultaneously, a simple consumer executed a shadow call by mirroring requests made to the new system. Users continued to interact with the old system seamlessly while we gathered live traffic data for the new application. We learned if the new implementation could handle the traffic it would encounter when going live and discovered several bugs in our new implementation that we could fix ahead of the actual launch.
00:37:43.200 Now, let me present some straightforward example code. I'm using the Bunny gem: I create a connection to the message broker and open a channel, a lightweight virtual connection multiplexed over the single TCP connection. I then declare a queue called 'awesome_task_queue'; declaring a queue is idempotent, so even if I'm unsure whether it already exists, RabbitMQ simply confirms it. After setting up a payload (a timestamp, for example), I publish the message via the default exchange. On the other side, a simple consumer connects to the broker, waits for messages, and acknowledges each one right after processing it, allowing for continuous consumption. Different consumer instances can process messages at their own pace, and the broker balances messages between them. The live demo shows these principles in action.
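The Bunny demo itself needs a running RabbitMQ broker, so here is a broker-free sketch of the same task-queue flow using Ruby's built-in thread-safe Queue: messages wait in the queue, and several worker instances each process one message at a time (names like 'task-0' are invented for the example).

```ruby
# Stand-in for 'awesome_task_queue'; Queue#close signals the end of input.
queue = Queue.new
10.times { |i| queue << "task-#{i}" }
queue.close

results = Queue.new
# Three worker instances, like multiple consumers bound to one queue;
# each pops and processes a single message at a time.
workers = 3.times.map do
  Thread.new do
    while (task = queue.pop)  # pop returns nil once the closed queue is empty
      results << task.upcase  # the "processing" step
    end
  end
end
workers.each(&:join)
```

With a real broker, acknowledging each message only after processing it gives the same one-at-a-time balancing, plus redelivery if a worker dies mid-task.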