From 588919965350beefc08d8e382de727eb21295b0a Mon Sep 17 00:00:00 2001
From: gennyble
Date: Wed, 13 Mar 2024 05:32:02 -0500
Subject: march 13th, 2024

this is what was published on the 10th, here.
https://amble.quest/notice/AfhzCKhLrynnNg5Qsi
---
 served/words/akkoma-postgres-migration.html | 156 ++++++++++++++++++++++++++++
 1 file changed, 156 insertions(+)
 create mode 100644 served/words/akkoma-postgres-migration.html

diff --git a/served/words/akkoma-postgres-migration.html b/served/words/akkoma-postgres-migration.html
new file mode 100644
index 0000000..e7c804b
--- /dev/null
+++ b/served/words/akkoma-postgres-migration.html
@@ -0,0 +1,156 @@
---
template=post
title=Akkoma Postgres Migration
style=/styles/post.css
style=writing.css
#Summary A retelling of how I migrated my Akkoma instance's Postgres database and the troubles I faced.
#Publish 2023-10-18
---

(i'm going to say Pleroma a lot here where Akkoma might be correct for newly installed software, but my instance is a few years old and this is more of a telling-of-events than a guide)
TL;DR: if you migrated your Akkoma's Postgres database and now you're getting timeouts

It might need a reindex. Use psql to connect to the database and run REINDEX DATABASE akkoma;. This might take a while.

+
+ +
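Concretely, that looks something like the one-liner below, run on the box that hosts the database. It's a quick sketch that assumes the database is named akkoma and that psql is run as the postgres user, like elsewhere in this post; adjust to taste.

    sudo -u postgres psql -d akkoma -c "REINDEX DATABASE akkoma;"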
Recently I went about trying to get the services running on my VPS to be happy in a gig of RAM. I did not achieve this, but I found a solution that worked nearly as well.

I wanted to try to scale my VPS, on the "Linode 4GB" plan, back down to a Nanode. It started its life as a Nanode, but Akkoma - well, Pleroma then - was greatly displeased with this and pegged my CPU at 100%. Since my CPU usage lately peaks at 30% and averages 18%, that no longer seems to be the case.

To re-nanode, I had to fit in 1G of memory. I managed to shave off the 110M I needed by asking systemd-journald to stop using 80M of memory (it seemed to ignore my 10M plea, but it dropped by 30M so whatever), telling Postgres to use at most 100M, and disabling things that I was not actively using anymore.

I didn't specifically want to learn the ins and outs of Postgres performance tuning, so I used pgtune to give me the right config lines for 100M. It worked well!

This was all for naught, though, because I couldn't get my disk usage under 25G, which was also a requirement of nanodeisation that I'd forgotten about. The database itself was 9.9G! You can Prune old remote posts, but I didn't really want to do that yet. It seems like the right way to go, but I had one more trick.
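Side note: Postgres will tell you how big a database is if you ask. A quick sketch, assuming your database is also named akkoma and that psql runs as the postgres user:

    sudo -u postgres psql -c "SELECT pg_size_pretty(pg_database_size('akkoma'));"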

Two of Them?

I have to keep a separate VPS around for another thing, and it gets half a percent of CPU usage, which is... not a lot. All it does is serve a single-page static site through Nginx. I could almost certainly put this on the same server as all my things, but I like having the separation.

This does mean that I pay for almost an entire Nanode to do very nearly nothing.

By putting Postgres on it I'd lose the different-machine aspect of the separation, but gain so much disk space and memory. The single-page static site is still on a separate public IP, which is good enough for me!

Postgres Migration

(more of a recount of events than a guide, but written guide-like? just pay mind to the commands and you'll be fine)

Install Postgres on the new server. It doesn't have to be the same major version since we're going to dump and restore the database, which is the recommended upgrade method anyway. Don't forget to run initdb and pass your data directory with the -D flag. Run it under the postgres user.

Now create the database and role that you'll use. In my experience these have to match the database you're migrating from. I followed the Akkoma database restore/move docs and ended up using psql, again under the postgres user, to run CREATE USER akkoma WITH ENCRYPTED PASSWORD '<database-password>'; and CREATE DATABASE akkoma OWNER akkoma;. (well, i replaced akkoma with pleroma and later used alter queries to change them, but that's because my database is old)

Once that was ready, I used my firewall of choice (ufw) to allow the servers to talk over their private IPs (yay, same datacenter). Then I ran this command: pg_dump -U akkoma -C akkoma | ssh dynamo "sudo psql -U akkoma -d akkoma" and waited. dynamo being the host of the new Postgres server and owner of a spot in my .ssh/config.
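If you want something more concrete for the fuzzier steps above, here's a rough sketch. The data directory path and the private IP are placeholders for whatever your distro and network use, and Postgres itself also needs listen_addresses and a pg_hba.conf entry for that address before remote connections will work, which I'm not showing here.

    # make a fresh data directory on the new server, as the postgres user
    sudo -u postgres initdb -D /var/lib/postgres/data

    # let the old server's private IP reach Postgres on its default port
    sudo ufw allow from 192.0.2.10 to any port 5432 proto tcp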
A Note: you can directly do pg_dump ... | psql ..., but the Postgres upgrade docs say you need to use the new psql version to upgrade, and the old server was missing that binary. Instead of seeing if psql 13 would work, or if I could get psql 15 working there, I piped it over ssh.

It completed quicker than I thought it would (the command only took 21 minutes!) and all seemed well.

All Was Not Well

First, to prevent Akkoma from receiving activities that might be lost if I have to revert, I disallowed everything on 80/443 except to my own IP so I could see if the web interface was working. Yeah, my website'd be down for a bit, but it was whatever. (i think i could've edited the nginx config to the same effect, but this was easier)

I edited my /etc/pleroma/config.exs to point to the new Postgres server and started Akkoma, but new-Postgres didn't see a connection? Oh, I edited the wrong config and it was still connecting to the local Postgres.

I deleted /etc/pleroma, so I'd stop getting confused by it, and edited the correct file: /opt/pleroma/config/prod.secret.exs (this is because I'm a From Source install).

Aaaand it didn't work. Turns out it was trying to connect to its own private IP because copy-paste can be hard sometimes. Glad I stopped old-Postgres.

Fixing that, I finally saw connections on the other machine. New problem: Akkoma times out the query after 15000ms (15 seconds) because it was taking too long. what? and nothing is loading? ahhh.

Per the Akkoma docs from earlier, I ran some commands to try and clean up the database. I'm a From Source install, so I can mix pleroma.database vacuum analyze, which did not help, so I tried it again with full instead of analyze. This also did not help.

I think what I was looking for was for Akkoma to throw a fit as evidence that something weird happened during the transfer, but nothing went wrong.

So I was out of ideas. I'm a Postgres novice and I was out of luck. What does someone like me do when out of luck? Paste the error into Google, of course! Maybe I should've done that from the start, right, but I don't normally get many results for Akkoma or Pleroma.

So to google I went! And pasted timed out because it queued and checked out the connection for longer than 15000ms

and then I read a comment from al2o3cr that said:
+

Usually that's an indication of database issues, from missing indexes to queries that need optimization.

+
"Missing indexes" there caught my eye. It made a lot of sense to me. It's taking so long because it's either digging through the 2.5 million activities in the database, or it's trying to reindex the thing (both?). A quick google later and I ran REINDEX DATABASE akkoma; from psql, which literally fixed all of my problems.

That's it! take care and don't forget to reindex after your migration.
\ No newline at end of file