Ruby Video
Suggest modification to this talk
Title
Effective Data Synchronization between Rails Microservices
Description
Data consistency in a microservice architecture can be a challenge, especially when your team grows to include data producers such as data engineers, analysts, and others. How do we enable external processes to load data into data stores owned by several applications while honoring necessary Rails callbacks? How do we ensure data consistency across the stack? Over the last five years, Doximity has built an elegant system that allows dozens of teams across our organization to independently load transformed data through our rich domain models while maintaining consistency. I'd like to show you how!
Date
Summary
In the video titled "Effective Data Synchronization between Rails Microservices," Austin Story, a tech lead at Doximity, shares insights from the company's journey in managing data synchronization in a growing microservice architecture. As organizations expand, maintaining data consistency becomes a complex challenge, especially with multiple teams involved, including application developers and data engineers. Austin outlines the evolution of Doximity's data synchronization strategies and presents a Kafka-based solution that has allowed their teams to work independently while respecting the business logic essential to their applications.

Key points include:

- **Background on Doximity**: A Rails-based platform that has grown significantly over the past 11 years. It serves over 70% of U.S. physicians, providing various services like telehealth and continuing medical education.
- **Need for Effective Data Syncing**: As the company grew, synchronizing data across multiple Rails microservices became increasingly difficult. Keeping data teams and application teams aligned while managing complex data needs was a central theme.
- **Initial Approaches**: Various methods were attempted to handle data synchronization, such as granting direct database access, which posed risks to application integrity and data logic adherence. An admin UI for RESTful interactions offered some improvement but was eventually deemed inadequate as the organization expanded.
- **Advent of Kafka**: The final architecture embraces Kafka, a distributed event streaming platform, which decouples data producers (data teams) from consumers (application teams). This allows each side to operate independently at its own pace.
- **Operational Framework**: Doximity developed a structured operation system consisting of messages whose attributes allow data to be processed and updated independently.
This system has facilitated over 7.7 billion data updates since its implementation. Overall, Austin emphasizes the importance of letting teams process and load data independently and safely to achieve seamless data synchronization that respects existing business logic. The Kafka implementation at Doximity exemplifies a scalable and effective approach to managing complex data ecosystems, underlining how careful architectural planning and the right tools can lead to successful microservice operations.
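To make the "operation message" idea concrete, here is a minimal Ruby sketch of a consumer that applies incoming updates through the owning application's model so that save-time callbacks still run. All names here (`OperationConsumer`, the message fields `model`/`id`/`attributes`, the `User` stand-in) are assumptions for illustration, not Doximity's actual schema, and the in-memory `User` class stands in for an ActiveRecord model so the sketch runs without Rails or Kafka.

```ruby
require "json"

# Minimal in-memory stand-in for an ActiveRecord model. The point of the
# sketch: updates flow through the model's save path, so its callbacks
# (here, a before_save-style email normalization) still fire.
class User
  STORE = {} # id => User, simulating a database table
  attr_accessor :id, :email

  def self.find_or_initialize_by(id:)
    STORE[id] || new.tap { |u| u.id = id }
  end

  def assign_attributes(attrs)
    attrs.each { |name, value| public_send("#{name}=", value) }
  end

  def save!
    # Stands in for a before_save callback enforcing business logic.
    self.email = email.to_s.strip.downcase
    STORE[id] = self
  end
end

class OperationConsumer
  # Registry of models this application owns and accepts operations for.
  MODELS = { "User" => User }.freeze

  # In production the payload would arrive from a Kafka topic; here we
  # accept raw JSON directly to keep the sketch self-contained.
  def process(raw_message)
    op = JSON.parse(raw_message)
    model = MODELS.fetch(op.fetch("model"))
    record = model.find_or_initialize_by(id: op.fetch("id"))
    record.assign_attributes(op.fetch("attributes"))
    record.save! # business logic runs here, not in the producer
    record
  end
end
```

Because producers only publish messages and never touch the database directly, the consuming application remains the single place where validations and callbacks execute, which is the property the talk's architecture is after.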