
Cached Rest Models

by Filipe Abreu

In this presentation titled 'Cached Rest Models' at Ruby Unconf 2018, Filipe Abreu, a developer from Zinc, discusses a gem he created during a hack week aimed at simplifying the integration of API calls, specifically for user profile data. The talk highlights the challenges faced when dealing with multiple APIs to aggregate user information across various applications.

Key Points:
- Background: Filipe introduces himself and his experience while working at Zinc, emphasizing the company's hack week initiative that encourages creativity and learning.
- Hack Week Project: He describes his project called Cached Rest Models, which aims to streamline the way user data is retrieved and cached without relying on traditional Active Record models.
- Problem Description: Filipe explains the issue of needing to call multiple APIs for a single user profile, and the burden of keeping local copies of that data in every application, particularly under new privacy laws.
- Gem Functionality: The gem maintains a local cache using Redis, allowing applications to share cache data, which leads to improved performance by avoiding repetitive API calls.
- Implementation: He showcases simple code examples and functionalities, demonstrating how attributes for user models can be created dynamically and can incorporate REST API calls with added caching capabilities.
- Advantages: Key advantages include reduced data retention requirements within applications and improved response times. The cache can be configured to expire at set intervals, ensuring data freshness without necessitating a complex data management system.
- Inspiration and Future Development: Filipe reflects on the inspiration behind the gem, stemming from an older Rails tool that has been deprecated. He encourages contributions to the ongoing development of the gem, emphasizing its utility for smaller applications.
- Conclusion: While the gem is not fully ready for production and Zinc utilizes a more integrated solution with Kafka, Filipe expresses hope that his work will help others encountering similar challenges. Additionally, he invites job seekers to consider opportunities at Zinc.

This presentation serves as a practical insight into creating efficient data handling mechanisms in Ruby applications, focusing on the use of caching and API integration effectively.

00:00:13.400 Well, thank you for having me. My name is Filipe Abreu, and I work for this company here, Zinc, in case you don't know us.
00:00:16.049 I've been working there for six months. By the way, I'm originally from Brazil, and before that, I was working at a company called Dino Research, which was also an amazing experience.
00:00:30.689 Zinc is a fantastic company, and one nice thing we do is hack weeks. Essentially, we get a whole week to work on any project we want, with the goal of learning something or improving something inside the company. I've been lucky: in my six months working there, I've already been able to take part in two hack weeks. In one of them, I did a project mostly on my own, and that's what I'm presenting to you today. I apologize that I don't have much prepared; I decided to give this talk on a whim this morning. Well, let's get into it and hope I don't embarrass myself or the company.
00:01:01.500 During this hack week, I created a user profile page for my music. To build this page, we had to call several different APIs. Zinc has hundreds of developers working on various applications, and some of these applications are distributed, so we have a bunch of API calls to make. The main API is for the user profile, which returns data like name, current position, company name, city, skills, and professional experience. This information all comes from a single API.
00:01:29.619 However, when it comes to skills, I have to call a different API for that. I don't know why it's set up this way, but it is. In several applications I've seen, we need the user data to show this information. We have a gem that helps with this: it aggregates all the API calls needed to build the user data, and it also keeps a copy of this data in our local database. Imagine, especially with the new privacy laws in Europe, having to manage copies of user data across dozens of applications, each with millions of database rows; it would be a mess.
00:02:03.890 Moreover, when we need to refresh user data, or when we need a new column of user data in our application, we have to refresh this data for over 30 million users, which takes a long time to process. So I decided to work on this project, which I call Cached Rest Models. It was previously prefixed with the company name, but I decided to simplify it so everyone could easily use it. The goal is straightforward: to have a model object that doesn't need to be an Active Record model; it can be any class.
00:02:31.259 This model can be composed with attributes, and these attributes will retrieve data from web services while allowing methods inside your model. Using this gem is quite simple: you just install it and then extend from the Cached Rest Model base. Then, you can create your own Cached Rest Model class. I've named mine CacheUser for my application.
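The slide isn't reproduced in this transcript, but the setup he describes could look roughly like the sketch below; the gem name and the `CachedRestModels::Base` class name are assumptions used only for illustration.

```ruby
# Gemfile: install the gem (the gem name here is assumed from the talk title)
gem "cached_rest_models"

# app/models/cache_user.rb
# A plain Ruby class, not an ActiveRecord model; "CachedRestModels::Base"
# is an assumed name for the gem's base class.
class CacheUser < CachedRestModels::Base
end
```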
00:02:54.110 You create attributes, and you can generate your own modules inside this CacheUser Attributes module. This allows you to add methods that will simply call the API. It may seem a bit rough, but I'm trying to improve that part. After setting up your attributes modules, you include them in your Cached Model, which enables you to work with composition. A single class can then have attributes sourced from various APIs.
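As a rough illustration of that composition, each attributes module could wrap one API and get mixed into the model; the module layout and the `fetch_cached` helper below are assumptions, not the gem's confirmed API.

```ruby
class CacheUser < CachedRestModels::Base
  module Attributes
    # One module per upstream API; each method is a plain Ruby method that
    # performs (and caches) the corresponding REST call.
    module Profile
      def display_name
        fetch_cached("/users/#{id}/profile")["name"] # assumed helper
      end
    end

    module Skills
      def skills
        fetch_cached("/users/#{id}/skills")
      end
    end
  end

  # Composition: one class whose attributes come from several different APIs.
  include Attributes::Profile
  include Attributes::Skills
end
```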
00:03:07.560 Let me show you some simple code. This code is using cached calls and a Redis store for caching. I’ve implemented some meta-programming to set the key name, alongside the base key used for the REST calls. You also need to set the host within your base model.
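The code itself isn't shown in the transcript, but the configuration he mentions (a Redis store, a key name, and the host on the base model) might be expressed with a small DSL along these lines; the method names are assumptions.

```ruby
require "redis"

class CacheUser < CachedRestModels::Base
  # Assumed configuration DSL: the cache backend, a prefix used to build
  # cache key names, and the host for the underlying REST calls.
  cache_store      Redis.new(url: ENV.fetch("REDIS_URL", "redis://localhost:6379"))
  cache_key_prefix "cache_user"
  host             "https://profile-api.example.com"
end
```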
00:03:29.600 Essentially, this is the Cached Rest Model Attributes class. What I'm doing here is calling an API and wrapping the call in a cache lookup. It's pretty straightforward, right? With this implementation, you can compose your data models out of REST API calls. The benefit of having these API calls cached in a single shared store is significant.
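In spirit, an attribute method boils down to wrapping the HTTP call in a cache fetch keyed by the endpoint path; this sketch reopens the module from the earlier example and uses plain Net::HTTP plus an assumed `cache` helper.

```ruby
require "json"
require "net/http"

module CacheUser::Attributes
  module Profile
    def profile
      path = "/users/#{id}/profile"
      # Assumed helper: return the cached value if present; otherwise run the
      # block, store the result under the endpoint path, and return it.
      cache.fetch(path) do
        JSON.parse(Net::HTTP.get(URI.join(self.class.host, path)))
      end
    end
  end
end
```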
00:03:48.480 Let’s say I have several different applications, and all of them are using the same API. I don't have to call those APIs multiple times. Instead, I can have a single cache that is shared among all those applications, which leads to a dramatic increase in speed, since reading from Redis avoids the usual cost of repeated API calls. If you're leveraging Redis or another caching backend, it significantly enhances performance because it operates in-memory.
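In practice that just means every application points its cached models at the same Redis instance; a sketch, where the configure block and environment variable are assumptions:

```ruby
# config/initializers/cached_rest_models.rb, identical in each application
# that should share the cache.
CachedRestModels.configure do |config|
  config.cache_store = Redis.new(url: ENV["SHARED_REDIS_URL"])
end
```

Because cache keys are derived from the endpoint, a response fetched by one application is already warm for every other application using the same Redis.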
00:04:06.430 Let me show you an example of how I create an instance using the attributes in that model. These attributes, which I termed 'Top Halves,' are being fetched from the user profile data. While the current implementation is a bit clunky, I acknowledge it is something I need to refactor.
00:04:28.000 The benefit of this approach is that I don't have to retain user data within my application. Instead, I can rely on the cache to handle it, expiring it when needed. For example, when I include these methods into my class, I gain access to various profiles. These profiles will retrieve data from the API. If I've previously called the data—let's say in my own application—it's merely fetched from the cache since the endpoint serves as the cache key.
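A hypothetical usage example of that behaviour, with the constructor and method names assumed:

```ruby
user = CacheUser.new(id: 42)

user.profile # first call anywhere: performs GET /users/42/profile and caches it
user.profile # later calls, from this app or any app sharing the Redis instance,
             # are served from the cache entry keyed by "/users/42/profile"
```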
00:04:58.480 You can configure the expiration of the cache to a suitable timeframe. For us, setting the cached user profile data to expire in a week was good enough. This part of the implementation is simple, but the idea is to keep it configurable. The back-end should also leverage the same cache.
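A sketch of what that configuration could look like, using an assumed `cache_expires_in` setting:

```ruby
class CacheUser < CachedRestModels::Base
  # Entries older than a week are dropped; the next access re-fetches
  # fresh data from the API.
  cache_expires_in 60 * 60 * 24 * 7 # one week, in seconds
end
```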
00:05:27.540 I released this gem today, transferring it from our internal repository to a public repository where anyone can view or contribute to it. The main goal, as I mentioned, is to reduce reliance on retaining data from external applications—this is no longer necessary. You can depend on the cache to handle that for you.
00:06:03.440 The advantage of using Redis or any other background cache tool is that it allows sharing cached data among multiple applications without the constraints of maintaining consistent data as you would require for a single database. While it’s important to be careful when dealing with data to ensure consistency, in the caching scenario, I can simply expire the data and then fetch updated information from my API.
00:06:38.690 This implementation was a quick hack aimed at solving a problem. It was not adopted within our company, as we already have a comprehensive caching system using Kafka that manages more integrated and complex data requirements. However, it is a useful solution for small applications—if it’s helpful for others, that’s fantastic, and contributions are always welcome.
00:07:05.440 This gem was slightly inspired by a gem that used to exist in Rails, Active Resource, which was removed from the framework a long time ago. It allowed you to create models tied to REST endpoints, with built-in methods for creation, updates, and validations. Since it was removed, I saw a gap in the market for a well-maintained gem that helps people work with microservices easily, instead of using raw HTTP calls and manually parsing JSON data.
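For reference, Active Resource still exists today as the standalone activeresource gem, and mapping a model onto a REST endpoint with it looks roughly like this:

```ruby
require "active_resource"

class Person < ActiveResource::Base
  self.site = "https://api.example.com"
end

person = Person.find(1)       # GET https://api.example.com/people/1.json
person.position = "Engineer"
person.save                   # PUT https://api.example.com/people/1.json
```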
00:07:45.600 My intent is to create an object-oriented system that lets developers use methods easily and support validations, ultimately improving the experience when integrating with APIs. This gem is still under development; it’s not fully ready for production yet, but I hope it gives you insight into what we are working on during Hack Week 16.
00:08:15.410 Also, I want to mention that we are hiring, so if you're interested in working with us, feel free to approach me or Toby, and I can help you get a job at Zinc.