Diffstat (limited to 'served/words/akkoma-postgres-migration.html')
-rw-r--r--  served/words/akkoma-postgres-migration.html  156
1 file changed, 156 insertions, 0 deletions
diff --git a/served/words/akkoma-postgres-migration.html b/served/words/akkoma-postgres-migration.html
new file mode 100644
index 0000000..e7c804b
--- /dev/null
+++ b/served/words/akkoma-postgres-migration.html
@@ -0,0 +1,156 @@
+---
+template=post
+title=Akkoma Postgres Migration
+style=/styles/post.css
+style=writing.css
+#Summary A retelling of how I migrated my Akkoma instance's Postgres database and the troubles I faced.
+#Publish 2023-10-18
+---
+
+<i>(i'm going to say Pleroma a lot here where Akkoma might
+	be correct for newly installed software, but my instance is
+	a few years old and this is more of a telling-of-events than
+	a guide)</i>
+
+<details class="tldr">
+	<summary>TL;DR: if you migrated your Akkoma's postgres and now you're getting timeouts</summary>
+	<p>
+		It might need a reindex. Use <code>psql</code> to connect
+		to the database and run <code>REINDEX DATABASE akkoma;</code>.
+		This might take a while.
+	</p>
+</details>
+
+<hr/>
+
+Recently I went about trying to get the services running on
+my VPS to be happy in a gig of RAM. I did not achieve this,
+but I found a solution that worked nearly as well.
+
+I wanted to try scaling my VPS, currently on the "Linode 4GB" plan, back down to a Nanode. It
+started its life as a Nanode, but Akkoma - well, Pleroma then -
+was greatly displeased with that and pegged my CPU at 100%. Since
+my CPU usage lately peaks at 30% and averages 18%, that no longer
+seemed like it would be a problem.
+
+To re-nanode, I had to fit everything into 1G of memory.
+I managed to shave off the 110M I needed
+by asking <code>systemd-journald</code> to stop using 80M of memory
+<i>(it seemed to ignore my 10M plea, but it dropped by 30M, so whatever)</i>,
+telling Postgres to use at most 100M, and disabling things that
+I was not actively using anymore.
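+
+For the curious, the journald squeeze looked roughly like this
+<i>(a sketch, not my exact values; both knobs are real, but which one
+	matters depends on whether your journal lives in /run or on disk)</i>:
+
+<pre><code># /etc/systemd/journald.conf
+[Journal]
+# RuntimeMaxUse caps the journal held in /run (tmpfs, so memory);
+# SystemMaxUse caps the persistent journal on disk
+RuntimeMaxUse=10M
+SystemMaxUse=50M
+</code></pre>
+
+followed by <code>sudo systemctl restart systemd-journald</code> to make it take effect.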
+
+I didn't specifically want to learn the ins-and-outs of Postgres
+performance tuning, so I used <a href="https://pgtune.leopard.in.ua/">pgtune</a>
+to give me the right config lines for 100M. It worked well!
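+
+The kind of output it gives looks something like this
+<i>(illustrative values for a ~100M budget, not the exact lines I used;
+	pgtune emits a few more settings than shown here)</i>:
+
+<pre><code># postgresql.conf, roughly what pgtune suggests for ~100MB of RAM
+max_connections = 20
+shared_buffers = 25MB           # about a quarter of the budget
+effective_cache_size = 75MB     # what the planner assumes the OS is caching
+maintenance_work_mem = 6MB
+work_mem = 1MB                  # per sort/hash, per connection
+</code></pre>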
+
+This was all for naught, though, because I couldn't get my
+disk to fit under 25G, which was also a requirement of nanodeisation that I'd
+forgotten about. The database itself was 9.9G! You can
+<a href="https://docs.akkoma.dev/stable/administration/CLI_tasks/database/#prune-old-remote-posts-from-the-database">Prune old remote posts</a>
+but I didn't really want to do that yet. It seems like the right
+way to go, but I had one more trick.
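+
+<i>(For reference, that prune is a single CLI task for a from-source
+	install like mine - roughly the below; see the linked docs for the
+	flags that keep threads, bookmarks, and so on.)</i>
+
+<pre><code># from the Akkoma/Pleroma source directory, as the service user
+MIX_ENV=prod mix pleroma.database prune_objects
+</code></pre>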
+
+<h2 id="two-of-them">Two of Them?</h2>
+
+I have to keep a separate VPS around for another thing, and it gets
+half a percent of CPU usage, which is... not a lot. All it does is serve
+a single-page static site through Nginx. I could almost
+certainly put this on the same server as all my things, but
+I like having the separation.
+
+This does mean that I pay for almost an entire Nanode to do
+very nearly nothing. 
+
+By putting Postgres on it I'd lose the different-machine aspect
+of the separation, but gain a lot of disk space and memory. The
+single-page static site is still on a separate public IP, which is
+good enough for me!
+
+<h3 id="setup-postgres">Postgres Migration</h3>
+
+<i>(more of a recounting of events than a guide, but written guide-like? just pay mind to the commands and you'll be fine)</i>
+
+Install Postgres on the new server. It doesn't have to be the
+same major version, since we're going to dump and restore the
+database, which is
+<a href="https://www.postgresql.org/docs/current/upgrading.html">the recommended upgrade method anyway</a>.
+Don't forget to run <code>initdb</code>, passing your data
+directory with the <code>-D</code> flag, and run it as the
+postgres user.
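+
+Something like the following <i>(the data directory is just an example;
+	on Debian-ish systems initdb may not be on the PATH and lives under
+	/usr/lib/postgresql/&lt;version&gt;/bin/ instead)</i>:
+
+<pre><code># as the postgres user, pointing -D at wherever the data should live
+sudo -u postgres initdb -D /var/lib/postgres/data
+</code></pre>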
+
+Now create the database and role that you'll use. In my experience
+these have to match the database you're migrating from. I followed
+the <a href="https://docs.akkoma.dev/stable/administration/backup/#restoremove">Akkoma database restore/move</a>
+docs and ended up using psql, again under the postgres user, to run
+<code>CREATE USER akkoma WITH ENCRYPTED PASSWORD '&lt;database-password&gt;';</code> and
+<code>CREATE DATABASE akkoma OWNER akkoma;</code>. <i>(well, i replaced akkoma with pleroma and later used alter queries to change them, but that's because my database is old)</i>
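+
+<i>(Those alter queries, if you're in the same boat, were roughly the
+	following - a sketch; nothing can be connected to the database while
+	it's being renamed.)</i>
+
+<pre><code># run as the postgres superuser, connected to some other database
+sudo -u postgres psql -c "ALTER DATABASE pleroma RENAME TO akkoma;"
+sudo -u postgres psql -c "ALTER ROLE pleroma RENAME TO akkoma;"
+</code></pre>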
+
+Once that was ready, I used my firewall of choice (ufw) to
+allow the two servers to talk over their private IPs <i>(yay, same datacenter)</i>. Then I ran
+this command: <code>pg_dump -U akkoma -C akkoma | ssh dynamo "sudo psql -U akkoma -d akkoma"</code>,
+and waited.
+<i>dynamo</i> being the host of the new postgres server and owner of a spot in my .ssh/config.
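+
+The firewall part was something along these lines <i>(addresses made up;
+	for Akkoma to connect over the network later, Postgres itself also needs
+	listen_addresses set in postgresql.conf and a matching host entry in
+	pg_hba.conf, if it doesn't already)</i>:
+
+<pre><code># on dynamo: let the old server's private IP reach Postgres
+sudo ufw allow from 192.168.142.10 to any port 5432 proto tcp
+</code></pre>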
+
+A Note:<br/>
+You can do <code>pg_dump ... | psql ...</code> directly, but the Postgres upgrade
+docs recommend using the newer version's binaries for the dump and restore, and the old server was
+missing those. Instead of seeing if psql 13 would work, or if I could get psql 15 installed there, I
+piped it over ssh.
+
+It completed quicker than I expected - the command only took 21 minutes - and all seemed well.
+
+<h3 id="all-was-not-well">All Was Not Well</h3>
+
+First, to prevent Akkoma from receiving activities that might
+be lost if I had to revert, I blocked everything on 80/443
+except my own IP, so I could check whether the web interface was working.
+Yeah, my website'd be down for a bit, but it was whatever. <i>(i think i could've
+	edited the nginx config to the same effect, but this was easier)</i>
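+
+With ufw that was roughly <i>(address made up; the allow rule has to be
+	added before the denies, since ufw matches rules in order)</i>:
+
+<pre><code>sudo ufw allow from 203.0.113.7 to any port 80,443 proto tcp
+sudo ufw deny 80/tcp
+sudo ufw deny 443/tcp
+</code></pre>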
+
+I edited my <code>/etc/pleroma/config.exs</code> to point
+to the new postgres server and started Akkoma, but new-Postgres didn't
+see a connection? Oh, I edited the wrong config and it was still
+connecting to the local Postgres.
+
+I deleted <code>/etc/pleroma</code>, so I'd stop getting confused by
+it, and edited the <i>correct</i> file: <code>/opt/pleroma/config/prod.secret.exs</code>
+<i>(this is because I'm a From Source install)</i>.
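+
+The relevant chunk of that file ends up looking something like this
+<i>(illustrative names and address; it's the standard Ecto repo block)</i>:
+
+<pre><code># in /opt/pleroma/config/prod.secret.exs
+config :pleroma, Pleroma.Repo,
+  username: "akkoma",
+  password: "&lt;database-password&gt;",
+  database: "akkoma",
+  hostname: "192.168.142.20"  # dynamo's private IP
+</code></pre>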
+
+Aaaand it didn't work. Turns out it was trying to connect to its own private IP,
+because copy-paste can be hard sometimes. Glad I stopped old-Postgres.
+
+Fixing that, I finally saw connections on the other machine. New problem: Akkoma
+was timing out queries after 15000ms (15 seconds) because they were taking too long. what?
+and nothing is loading? ahhh.
+
+Per the Akkoma docs from earlier, I ran some commands to try and clean up
+the database. I'm a
+From Source install, so I can run <code>mix pleroma.database vacuum analyze</code>,
+which did <i>not help</i>, so I tried it again with <code>full</code> instead
+of <code>analyze</code>. That did not help either.
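+
+<i>(If you'd rather go through psql, I believe the mix tasks just issue the
+	plain SQL equivalents, roughly:)</i>
+
+<pre><code>sudo -u postgres psql -d akkoma -c "VACUUM ANALYZE;"
+# VACUUM FULL rewrites tables and takes exclusive locks, so expect downtime
+sudo -u postgres psql -d akkoma -c "VACUUM FULL;"
+</code></pre>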
+
+I think I was looking for Akkoma to throw a fit, as evidence that
+something weird had happened during the transfer, but nothing seemed to have gone wrong.
+
+So I was out of ideas. I'm a Postgres novice, and I was out of luck. What
+does someone like me do when out of luck? Paste the error into Google, of course!
+Maybe I should've done that from the start, but I don't normally get
+many results for Akkoma or Pleroma.
+
+So to google I went! And pasted <q>timed out because it queued and checked out the connection for longer than 15000ms</q>
+
+and then I read
+<a href="https://elixirforum.com/t/timed-out-because-it-queued-and-checked-out-the-connection-for-longer-than-15000ms/34793/4">a comment from al2o3cr</a> that said:
+
+<blockquote>
+	<p>Usually that's an indication of database issues, from missing indexes to queries that need optimization.</p>
+</blockquote>
+
+"Missing indexes" there caught my eye. It made a lot of sense to me. It was
+taking so long because it was either digging through the 2.5 million activities
+in the database, or it was trying to reindex the thing <i>(both?)</i>. A quick
+google later and I ran <code>REINDEX DATABASE akkoma;</code> from psql, which literally
+fixed all of my problems.
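+
+For completeness, the whole fix from a shell on the database box
+<i>(same command as the TL;DR; it rebuilds every index, blocking writes to each
+	table as it goes, so it can take a while)</i>:
+
+<pre><code>sudo -u postgres psql -d akkoma -c "REINDEX DATABASE akkoma;"
+</code></pre>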
+
+That's it! Take care, and don't forget to reindex after your migration.
\ No newline at end of file