Summarized using AI

Giving your Heroku App highly-available PostgreSQL

Jake Varghese • May 18, 2018 • Pittsburgh, PA • Talk

In this video, Jake Varghese from Compose explains how to migrate a PostgreSQL database to a highly available (HA) service on IBM Cloud. The presentation highlights the importance of maintaining database availability for applications and outlines the process to achieve it. Key points discussed include:

  • Introduction to High Availability: Varghese introduces the concept of high availability for PostgreSQL databases and explains that Compose uses an application called Governor in conjunction with a distribution coordinator called etcd to manage the leader-follower structure of database nodes.

  • Usage of HAProxy: An HAProxy load balancer connects to the database nodes and holds requests during failover events, so applications experience zero downtime.

  • Patroni Enhancement: Varghese describes a newer fork of Governor called Patroni, which adds ZooKeeper and Consul as distribution coordinator options, enhancing stability and performance for PostgreSQL deployments.

  • Demo Preparation: The session proceeds to a live demonstration that involves migrating data from a Heroku PostgreSQL instance to Compose's HA PostgreSQL service, using an AI application that places uploaded images into Star Trek captain posters for a lighthearted touch.

  • Data Migration Steps: The migration process is elaborated, showing steps to export existing data from Heroku and import it into the IBM Cloud PostgreSQL setup. Important commands such as pg_restore are highlighted, alongside necessary configuration adjustments, such as removing the Heroku PostgreSQL add-on to free up the DATABASE_URL config variable.

  • Data Integrity Checks: Varghese emphasizes the importance of checking connections and ensuring that the migration process has completed successfully without errors.

  • Final Steps and Conclusion: After completing the staging migration, the specific steps to migrate to the production database are demonstrated, reiterating the ease of switching from staging to production without downtime. He concludes by mentioning the capability to create additional resources, such as Elasticsearch, to enhance application functionality connected to the PostgreSQL database.

Overall, the key takeaway is the streamlined process of migrating and maintaining a highly available PostgreSQL database on IBM Cloud, ensuring that applications remain operational with no downtime during transitions.

As you’re building out your app, you know how important it is to have your database up at all times. You need your PostgreSQL database to be HA. Jake from Compose is going to show you how to migrate your data to a HA PG service in less than 20 minutes.

This is a sponsored talk by Compose, an IBM company

RailsConf 2018

00:00:11 My name is Jake, and I'm with Compose. Today, I'm going to show you how to migrate your app over to IBM Cloud's highly-available PostgreSQL.
00:00:16 Before we talk about how to do that, I want to discuss how we actually attain high availability. The way we do this is with an application we created called Governor.
00:00:31 Governor sits on top of etcd, which is our distribution coordinator. The PostgreSQL nodes are arranged in a leader-follower format, and Governor uses etcd to maintain that leader-follower structure.
00:00:48 We also use write-ahead log (WAL) streaming replication to keep the nodes in sync. In front of the two nodes, we have an HAProxy load balancer.
00:01:02 This load balancer is connected to both nodes and to Governor, so it knows which node is the leader and which is the follower. If the leader changes, the proxy holds incoming requests until the new leader is established.
00:01:17 This means your app won't have any idea that a failover just occurred. That's how we ensure zero downtime.
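
For reference, the kind of HAProxy configuration that implements this hold-and-route pattern looks roughly like the sketch below. It follows the setup documented for Patroni rather than Compose's actual internal config; the node addresses are placeholders, and it assumes a Governor/Patroni-style REST endpoint on port 8008 that returns HTTP 200 only on the current leader.

```
listen postgres
    bind *:5432
    option httpchk GET /master        # ask each node "are you the leader?"
    http-check expect status 200      # only the current leader answers 200
    default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
    server node1 10.0.0.11:5432 maxconn 100 check port 8008
    server node2 10.0.0.12:5432 maxconn 100 check port 8008
```

With this in place, the app only ever sees the single proxy address; which physical node answers is decided by the health checks.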
00:01:24 Your app connects to the HAProxy node. We released Governor as open source a while back, and it has been forked a couple of times.
00:01:37 One of the newest forks is called Patroni. Patroni builds on Governor by letting you use ZooKeeper or Consul as your distribution coordinator, in addition to etcd.
00:01:42 It is very stable and quite good. We actually use Patroni for all our PostgreSQL deployments.
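
As an illustration of how the pieces fit together, here is a minimal single-node Patroni configuration sketch; the scope, hostnames, and credentials are placeholders, not Compose's actual settings. Swapping the `etcd:` block for a `consul:` or `zookeeper:` block is how you choose the distribution coordinator.

```yaml
scope: captain-lee            # cluster name shared by all nodes
name: node1                   # unique name for this node

restapi:
  listen: 0.0.0.0:8008        # health endpoint the load balancer checks
  connect_address: 10.0.0.11:8008

etcd:
  host: 10.0.0.10:2379        # the distribution coordinator

postgresql:
  listen: 0.0.0.0:5432
  connect_address: 10.0.0.11:5432
  data_dir: /var/lib/postgresql/data
  authentication:
    replication:
      username: replicator    # role used for WAL streaming between nodes
      password: change-me
```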
00:01:49 So, that's how we attain high availability. Now, let me show you the demo that we created to migrate data.
00:01:54 This demo is an artificial intelligence application that is totally real and not made up at all. What this AI does is take an image and insert it into one of four Star Trek captain posters.
00:02:12 It takes your image and places it there. So, let's take a look at what this looks like. I'll go ahead and upload an image here.
00:02:21 Bam! That's Picard as me. Obviously, I'm more of a Sisko fan, but this one's not that bad. I think I'll keep this as my wallpaper.
00:02:35 Then, we have Kirk; I'm not that fond of that one, although I do like the hairline. It looks a lot nicer. Finally, we go to Janeway.
00:02:51 Clearly, the AI needs some tweaks. So, that's our app. Luckily, we don't have to worry about tweaking the AI for what we want to show you today.
00:03:06 For today, we are going to migrate data from the production Captain Lee database over to Compose Cloud's PostgreSQL.
00:03:17 The first thing we're going to do is pull a backup from Heroku and capture that down, so let's go ahead and capture that.
00:03:26 Now, we will download it. Hopefully, it doesn't take too long.
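
The two Heroku CLI commands in play here are the backup capture and download; `captain-lee-staging` is a placeholder for your app's name.

```bash
heroku pg:backups:capture --app captain-lee-staging    # snapshot the database
heroku pg:backups:download --app captain-lee-staging   # saves latest.dump locally
```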
00:03:35 So, the awesome images that I showed you before—those are totally real. They were made by an artificial intelligence for sure, not by one of our awesome designers.
00:03:39 Now that the download is ready, I'm going to take you over to our PostgreSQL configuration details page. Let me get there quickly.
00:03:57 Here's the details page for the PostgreSQL deployment we set up under IBM Cloud. Here's your connection string.
00:04:05 One thing I want to point out is that our deployments come with an entire server, so you can create as many databases as you want on it. We load all the PostgreSQL deployments with a default database called 'compose'.
00:04:16 This default database contains a few items like your admin interface and some other essentials. I have already created the 'Captain Lee Staging' and 'Captain Lee Production' databases.
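
Creating those databases takes a couple of one-liners against the default `compose` database; the host, port, and password below are placeholders from your deployment's connection string.

```bash
psql "postgres://admin:PASSWORD@example.composedb.com:15432/compose" \
  -c "CREATE DATABASE captain_lee_staging;" \
  -c "CREATE DATABASE captain_lee_production;"
```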
00:04:28 Now we can go ahead and use pg_restore to restore the backup.
00:04:33 Here we have the standard pg_restore command with a couple of options, including --no-owner. Here's the host from the connection string.
00:04:46 All of our deployments use non-standard ports for security reasons, so you'll need the -p option to specify the port. The username will be admin.
00:05:04 We’re going to import into 'Captain Lee Staging'. Let me copy and paste the password since it's going to ask me for that.
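
Put together, the restore command looks roughly like this; the host and port are placeholders from the connection details page, and pg_restore will prompt for the admin password.

```bash
pg_restore --no-owner --no-acl \
  -h example.composedb.com -p 15432 \
  -U admin -d captain_lee_staging \
  latest.dump
```

Adding --no-acl alongside --no-owner is a common step when moving off Heroku, since the dump references Heroku-managed roles that won't exist on the new server.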
00:05:16 Cool. Now that's importing. One thing we need to do on the staging side is drop the Heroku PostgreSQL add-on.
00:05:21 The reason for this is that the DATABASE_URL config variable is automatically attached to any add-on that's used, so you can't reconfigure it with the add-on still present.
00:05:35 We'll go ahead and remove that real quick. Now, let's double-check that.
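
Dropping the add-on and verifying the config looks like this; the app name is a placeholder.

```bash
heroku addons:destroy heroku-postgresql --app captain-lee-staging \
  --confirm captain-lee-staging
heroku config --app captain-lee-staging   # DATABASE_URL should no longer be listed
```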
00:05:48 Alright, the DATABASE_URL variable is gone. Now we can go ahead and set the DATABASE_URL.
00:05:53 I will use essentially the same connection string, but change the last part to 'Captain Lee Staging'.
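
Setting the variable is a single `heroku config:set`; everything in the URL except the database name is copied from the deployment's connection string, with placeholders below.

```bash
heroku config:set --app captain-lee-staging \
  DATABASE_URL="postgres://admin:PASSWORD@example.composedb.com:15432/captain_lee_staging"
```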
00:06:09 It's still processing the data, almost done.
00:06:14 Cool, that's in sync. Now we can set this… oh, I need to set it up first.
00:06:25 Let's double-check that. Alright, the DATABASE_URL is set. So, let's also go into the console and make sure the connection is up and running.
00:06:40 Cool, the import is done. Perfect! Now we have data, and everything is connected.
00:06:50 For our last check, we're going to go to 'Captain Lee Staging' and refresh to make sure we get no errors. Perfect! Staging test is done.
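
A quick way to run that connection check from the command line is a one-off dyno; the `Image` model here is hypothetical, so substitute any model your app defines.

```bash
heroku run --app captain-lee-staging \
  rails runner 'puts ActiveRecord::Base.connection.active?; puts Image.count'
```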
00:07:04 Now all we have to do is pull the stuff into production. That's going to be pretty easy since we already did staging.
00:07:17 All I have to do is change this part right here to 'Production' and let that import. Just a quick password.
00:07:37 Alright, we'll do the exact same thing with the 'Captain Lee Production' Heroku instance.
00:07:40 Now let's double-check that. Alright, the DATABASE_URL variable is gone again. Now we can set it up.
00:07:53 Again, we're going to use the same connection string, except we will change this to 'Production'.
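
The production pass is the same three steps with the names changed; as before, hosts, passwords, and app names are placeholders.

```bash
pg_restore --no-owner --no-acl -h example.composedb.com -p 15432 \
  -U admin -d captain_lee_production latest.dump
heroku addons:destroy heroku-postgresql --app captain-lee-production \
  --confirm captain-lee-production
heroku config:set --app captain-lee-production \
  DATABASE_URL="postgres://admin:PASSWORD@example.composedb.com:15432/captain_lee_production"
```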
00:08:06 Perfect! Now let's go ahead and run the console to see how far we got.
00:08:14 Okay, not too bad.
00:08:19 Now let's double-check with the console.
00:08:32 Perfect! Now we've copied production data over. Let's just double-check one more time in the app.
00:08:42 That's it! We just migrated all the data over to the new deployment without any downtime.
00:08:55 You do have to change the config variable, which can give you some downtime, but populating the new databases doesn't require switching over immediately.
00:09:13 You can wait until everything is done and then switch over. I don't know how quickly a big Heroku app will roll the new config variables out across all its servers.
00:09:33 It should be zero downtime, from what I recall of Heroku.
00:09:45 I think you can sync one of our deployments into another deployment, but I need to double-check on that.
00:10:00 The way we set up the configuration, I’m not sure how well that will work because one will be a leader and the other a follower.
00:10:15 Coming in from another leader could throw everything off, so I’m not too sure about that.
00:10:24 However, hit me up at the booth, and I can ask one of the tech guys when we get there.
00:10:38 So, this is done. You see up here where it says 'search'? There isn't much to search right now.
00:10:50 Let's say I want to add a search; if it were super simple, I would just scan across the PostgreSQL tables.
00:10:59 However, if it's anything more complex than that, I'm probably going to want my own indexer.
00:11:11 You can come into IBM Cloud and create a resource, let's say, Elasticsearch.
00:11:31 Once it's created, you would have an overview of your Elasticsearch deployment, including the connection strings you can use.
00:11:43 You would then use these strings to populate an `ES_URL` or `ELASTICSEARCH_URL` config variable for your app.
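
That wiring is one more `config:set`; the variable name depends on which Elasticsearch client your app uses, and the URL below is a placeholder.

```bash
heroku config:set --app captain-lee-production \
  ELASTICSEARCH_URL="https://admin:PASSWORD@example.composedb.com:19200/"
```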
00:11:55 That's basically all there is to migrating your app over. If anyone has any more questions, let me know.
00:12:08 Thanks for your attention, and have a great day!