author | gennyble <gen@nyble.dev> | 2024-03-13 05:32:02 -0500
committer | gennyble <gen@nyble.dev> | 2024-03-13 05:33:32 -0500
commit | 588919965350beefc08d8e382de727eb21295b0a (patch)
tree | b523e54ff73907b40f754f81ac6e6117dce56e9c /served/words/atom.xml
parent | 0b94c2293df9df5c1ff5307d2f169c3c30c02bc6 (diff)
march 13th, 2024
this is what was published on the 10th, here. https://amble.quest/notice/AfhzCKhLrynnNg5Qsi
Diffstat (limited to 'served/words/atom.xml')
-rw-r--r-- | served/words/atom.xml | 196
1 file changed, 196 insertions, 0 deletions
diff --git a/served/words/atom.xml b/served/words/atom.xml
new file mode 100644
index 0000000..d8a38d5
--- /dev/null
+++ b/served/words/atom.xml
@@ -0,0 +1,196 @@
+<?xml version="1.0" encoding="utf-8"?>
+<feed xmlns="http://www.w3.org/2005/Atom">
+
+	<title>gennyble's writing</title>
+	<subtitle>Technical writing; project updates; weeknotes</subtitle>
+	<updated>2024-03-02T01:42:00-06:00</updated>
+
+	<link rel="self" href="https://nyble.dev/words/atom.xml" type="application/atom+xml" />
+	<id>https://nyble.dev/words/atom.xml</id>
+
+	<author>
+		<name>gennyble</name>
+		<email>gen@nyble.dev</email>
+	</author>
+
+	<entry>
+		<title>Akkoma Postgres Migration</title>
+		<link href="https://nyble.dev/words/akkoma-postgres-migration.html" rel="alternate" type="text/html" />
+		<id>https://nyble.dev/atom/writing-1/akkoma-postgres-migration</id>
+
+		<published>2023-10-18T23:16:00-05:00</published>
+		<updated>2023-10-18T23:16:00-05:00</updated>
+
+		<content type="html">
+<p>
+<i>(i'm going to say Pleroma a lot here where Akkoma might
+be correct for newly installed software, but my instance is
+a few years old and this is more of a telling-of-events than
+a guide)</i>
+</p>
+<details class="tldr">
+	<summary>TL;DR: if you migrated your Akkoma's postgres and now you're getting timeouts</summary>
+	<p>
+	It might need a reindex. Use <code>psql</code> to connect
+	to the database and run <code>REINDEX DATABASE akkoma;</code>.
+	This might take a while.
+	</p>
+</details>
+<hr/>
+<p>
+Recently I went about trying to get the services running on
+my VPS to be happy in a gig of RAM. I did not achieve this,
+but I found a solution that worked nearly as well.
+</p>
+<p>
+I wanted to scale my VPS, on the "Linode 4GB" plan, back down to a Nanode. It
+started its life as a Nanode, but Akkoma - well, Pleroma then -
+was greatly displeased with this and pegged my CPU at 100%. Since
+my CPU usage lately peaks at 30% and averages 18%, this no longer
+seems to be the case.
+</p>
+<p>
+To re-nanode, I had to fit in 1G of memory.
+I managed to shave off the 110M I needed
+by asking <code>systemd-journald</code> to stop using 80M of memory
+<i>(it seemed to ignore my 10M plea, but it dropped by 30M so whatever)</i>,
+telling Postgres to use at most 100M, and disabling things that
+I was not actively using anymore.
+</p>
+<p>
+I didn't specifically want to learn the ins-and-outs of Postgres
+performance tuning, so I used <a href="https://pgtune.leopard.in.ua/">pgtune</a>
+to give me the right config lines for 100M. It worked well!
+</p>
+<p>
+This was all for naught, though, because I couldn't get my
+disk usage under 25G, which was also a requirement of nanodeisation that I'd
+forgotten about. The database itself was 9.9G! You can
+<a href="https://docs.akkoma.dev/stable/administration/CLI_tasks/database/#prune-old-remote-posts-from-the-database">prune old remote posts</a>,
+but I didn't really want to do that yet. It seems like the right
+way to go, but I had one more trick.
+</p>
+<h2 id="two-of-them">Two of Them?</h2>
+<p>
+I have to keep a separate VPS around for another thing, and it gets
+half a percent of CPU usage, which is... not a lot. All it does is serve
+a single-page static site through Nginx. I could almost
+certainly put this on the same server as all my things, but
+I like having the separation.
+</p>
+<p>
+This does mean that I pay for almost an entire Nanode to do
+very nearly nothing.
+</p>
+<p>
+By putting Postgres on it I'd lose the different-machine aspect
+of the separation, but gain so much disk space and memory. The
+single-page static site is still on a separate public IP, which is
+good enough for me!
+</p>
+<h3 id="setup-postgres">Postgres Migration</h3>
+<i>(more of a recount of events than a guide, but written guide-like? just pay mind to the commands and you'll be fine)</i>
+<p>
+Install Postgres on the new server.
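+</p>
+<p>
+<i>(the initdb bit looks something like this; install Postgres however your + distro likes, and note the data directory path here is just an example)</i>
+</p>
+<pre><code># as the postgres user, create a fresh cluster in your data directory
+sudo -u postgres initdb -D /var/lib/postgres/data</code></pre>
+<p>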
+It doesn't have to be the
+same major version, since we're going to dump and restore the
+database, which is
+<a href="https://www.postgresql.org/docs/current/upgrading.html">the recommended upgrade method anyway</a>.
+Don't forget to run <code>initdb</code> and give your data
+directory with the <code>-D</code> flag. Run it under the
+postgres user.
+</p>
+<p>
+Now create the database and role that you'll use. In my experience
+these have to match the database you're migrating from. I followed
+the <a href="https://docs.akkoma.dev/stable/administration/backup/#restoremove">Akkoma database restore/move</a>
+docs and ended up using psql, again under the postgres user, to run
+<code>CREATE USER akkoma WITH ENCRYPTED PASSWORD '&lt;database-password&gt;';</code> and
+<code>CREATE DATABASE akkoma OWNER akkoma;</code>. <i>(well, i replaced akkoma with pleroma and later used alter queries to change them, but that's because my database is old)</i>
+</p>
+<p>
+After that was ready, I used my firewall of choice (ufw) to
+allow the servers to talk using their private IPs <i>(yay same datacenter)</i>. After that was done, I ran
+this command: <code>pg_dump -U akkoma -C akkoma | ssh dynamo "sudo psql -U akkoma -d akkoma"</code>
+and waited.
+<i>dynamo</i> being the host of the new postgres server and owner of a spot in my .ssh/config.
+</p>
+<p>
+A Note:<br/>
+you can directly do <code>pg_dump ... | psql ...</code>, but the Postgres upgrade
+docs say you need to use the new psql version to upgrade, and the old server was missing that
+binary. Instead of seeing if psql 13 would work or if I could get psql 15 working there, I
+piped it over ssh.
+</p>
+<p>
+It completed quicker than I thought it would (the command only took 21 minutes!) and all seemed well.
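+</p>
+<p>
+<i>(the whole transfer, squished together; the private IP here is made up + and "dynamo" is just what my .ssh/config calls the new server)</i>
+</p>
+<pre><code># on the new postgres box: let the other server's private IP in (example address)
+ufw allow from 192.168.0.2 to any port 5432
+
+# then, from the old server: dump the database and feed it straight to the new psql
+pg_dump -U akkoma -C akkoma | ssh dynamo "sudo psql -U akkoma -d akkoma"</code></pre>
+<p>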
+</p>
+<h3 id="all-was-not-well">All Was Not Well</h3>
+<p>
+First, to prevent Akkoma from receiving activities that might
+be lost if I had to revert, I disallowed everything on 80/443
+except to my own IP, so I could see if the web interface was working.
+Yeah, my website'd be down for a bit, but it was whatever. <i>(i think i could've
+edited the nginx config to the same effect, but this was easier)</i>
+</p>
+<p>
+I edited my <code>/etc/pleroma/config.exs</code> to point
+to the new postgres server and started Akkoma, but new-Postgres didn't
+see a connection? Oh, I edited the wrong config and it was still
+connecting to the local Postgres.
+</p>
+<p>
+I deleted <code>/etc/pleroma</code>, so I'd stop getting confused by
+it, and edited the <i>correct</i> file: <code>/opt/pleroma/config/prod.secret.exs</code>
+<i>(this is because I'm a From Source install)</i>.
+</p>
+<p>
+Aaaand it didn't work. Turns out it was trying to connect to its own private IP,
+because copy-paste can be hard sometimes. Glad I stopped old-Postgres.
+</p>
+<p>
+Fixing that, I finally saw connections on the other machine. New problem: Akkoma
+times out the query after 15000ms (15 seconds) because it was taking too long. what?
+and nothing is loading? ahhh.
+</p>
+<p>
+Per the Akkoma docs from earlier, I ran some commands to try and clean up
+the database. I'm a
+From Source install, so I can run <code>mix pleroma.database vacuum analyze</code>,
+which did <i>not help</i>, so I tried it again with <code>full</code> instead
+of <code>analyze</code>. This also did not help.
+</p>
+<p>
+I think what I was looking for was for Akkoma to throw a fit as evidence that
+something weird had happened during the transfer, but nothing went wrong.
+</p>
+<p>
+So I was out of ideas. I am a Postgres novice and I was out of luck. What
+does someone like me do when out of luck? Paste the error into Google, of course!
+Maybe I should've done that from the start, right, but I don't get
+many results for Akkoma or Pleroma normally.
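+</p>
+<p>
+<i>(for reference, the cleanup attempts looked like this; the MIX_ENV=prod + prefix is how the from-source docs run mix tasks, and you run them from the + Akkoma directory)</i>
+</p>
+<pre><code># reclaim space and refresh planner statistics; neither helped in my case
+MIX_ENV=prod mix pleroma.database vacuum analyze
+MIX_ENV=prod mix pleroma.database vacuum full</code></pre>
+<p>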
+</p>
+<p>
+So to google I went! And pasted <q>timed out because it queued and checked out the connection for longer than 15000ms</q>
+</p>
+<p>
+and then I read
+<a href="https://elixirforum.com/t/timed-out-because-it-queued-and-checked-out-the-connection-for-longer-than-15000ms/34793/4">a comment from al2o3cr</a> that said:
+</p>
+<blockquote>
+	<p>Usually that's an indication of database issues, from missing indexes to queries that need optimization.</p>
+</blockquote>
+<p>
+"Missing indexes" there caught my eye. It made a lot of sense to me. It's
+taking so long because it's either digging through the 2.5 million activities
+in the database, or it's trying to reindex the thing <i>(both?)</i>. A quick
+google later and I ran <code>REINDEX DATABASE akkoma;</code> from psql, which literally
+fixed all of my problems.
+</p>
+<p>
+That's it! Take care, and don't forget to reindex after your migration.
+</p>
+		</content>
+	</entry>
+
+</feed>
\ No newline at end of file