Diffstat (limited to 'served/words')
-rw-r--r--  served/words/akkoma-postgres-migration.html  | 156
-rw-r--r--  served/words/atom.xml                         | 196
-rw-r--r--  served/words/words.html                       |  16
-rw-r--r--  served/words/writing.css                      |  54
4 files changed, 422 insertions, 0 deletions
diff --git a/served/words/akkoma-postgres-migration.html b/served/words/akkoma-postgres-migration.html
new file mode 100644
index 0000000..e7c804b
--- /dev/null
+++ b/served/words/akkoma-postgres-migration.html
@@ -0,0 +1,156 @@
+---
+template=post
+title=Akkoma Postgres Migration
+style=/styles/post.css
+style=writing.css
+#Summary A retelling of how I migrated my Akkoma instance's Postgres database and the troubles I faced.
+#Publish 2023-10-18
+---
+
+\<i>(i'm going to say Pleroma a lot here where Akkoma might
+	be correct for newly installed software, but my instance is
+	a few years old and this is more of a telling-of-events than
+	a guide)</i>
+
+<details class="tldr">
+	<summary>TL;DR: if you migrated your Akkoma's Postgres and now you're getting timeouts</summary>
+	<p>
+		It might need a reindex. Use <code>psql</code> to connect
+		to the database and run <code>REINDEX DATABASE akkoma;</code>.
+		This might take a while.
+	</p>
+</details>
+
+<hr/>
+
+Recently I went about trying to get the services running on
+my VPS to be happy in a gig of RAM. I did not achieve this,
+but I found a solution that worked nearly as well.
+
+I wanted to try to scale my VPS, on the "Linode 4GB" plan, back down to a Nanode. It
+started its life as a Nanode, but Akkoma - well, Pleroma then -
+was greatly displeased with this and pegged my CPU at 100%. Since
+my CPU usage lately peaks at 30% and averages 18%, this no longer
+seems to be the case.
+
+To re-nanode, I had to fit in 1G of memory.
+I managed to shave the 110M I needed
+by asking <code>systemd-journald</code> to stop using 80M of memory
+<i>(it seemed to ignore my 10M plea, but it dropped by 30M so whatever)</i>,
+telling Postgres to use at most 100M, and disabling things that
+I was not actively using anymore.
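+
+Roughly what that journald plea looks like, as a drop-in <i>(the file name and
+values here are a sketch of the idea, not the exact unit I ended up with)</i>:
+
+<pre><code># cap the on-disk and the in-memory (/run, tmpfs) journals, then restart journald
+sudo mkdir -p /etc/systemd/journald.conf.d
+sudo tee /etc/systemd/journald.conf.d/size.conf &lt;&lt;'EOF'
+[Journal]
+SystemMaxUse=10M
+RuntimeMaxUse=10M
+EOF
+sudo systemctl restart systemd-journald
+</code></pre>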
+
+I didn't specifically want to learn the ins-and-outs of Postgres
+performance tuning, so I used <a href="https://pgtune.leopard.in.ua/">pgtune</a>
+to give me the right config lines for 100M. It worked well!
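+
+The output is just a handful of <code>postgresql.conf</code> lines, something in
+this shape <i>(values here are illustrative of what pgtune hands back for a tiny
+memory budget, not my exact numbers)</i>:
+
+<pre><code># pgtune-style postgresql.conf additions for roughly a 100M budget
+max_connections = 20
+shared_buffers = 25MB
+effective_cache_size = 75MB
+maintenance_work_mem = 6400kB
+work_mem = 1310kB
+wal_buffers = 768kB
+</code></pre>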
+
+This was all for naught, though, because I couldn't get my
+disk to fit under 25G, which was also a requirement of nanodeisation that I'd
+forgotten about. The database itself was 9.9G! You can
+<a href="https://docs.akkoma.dev/stable/administration/CLI_tasks/database/#prune-old-remote-posts-from-the-database">Prune old remote posts</a>
+but I didn't really want to do that yet. It seems like the right
+way to go, but I had one more trick.
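+
+<i>(for reference, the prune is a mix task; on a From Source install it's roughly the
+command below - check the linked docs for the flags that control what gets kept)</i>
+
+<pre><code># bare form of the prune; the user and path assume the usual From Source layout
+cd /opt/pleroma
+sudo -Hu pleroma MIX_ENV=prod mix pleroma.database prune_objects
+</code></pre>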
+
+<h2 id="two-of-them">Two of Them?</h2>
+
+I have to keep a separate VPS around for another thing, and it gets
+half a percent of CPU usage, which is... not a lot. All it does is serve
+a single-page static site through Nginx. I could almost
+certainly put this on the same server as all my things, but
+I like having the separation.
+
+This does mean that I pay for almost an entire Nanode to do
+very nearly nothing. 
+
+By putting Postgres on it I'd lose the different-machine aspect
+of the separation, but gain so much disk space and memory. The
+single-page static site is still on a separate public IP, which is
+good enough for me!
+
+<h3 id="setup-postgres">Postgres Migration</h3>
+
+<i>(more of a recount of events than a guide, but written guide-like? just pay mind to the commands and you'll be fine)</i>
+
+Install Postgres on the new server. It doesn't have to be the
+same major version since we're going to dump and restore the
+database which is
+<a href="https://www.postgresql.org/docs/current/upgrading.html">the recommended upgrade method anyway</a>.
+Don't forget to run <code>initdb</code> and give your data
+directory with the <code>-D</code> flag. Run it under the
+postgres user.
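+
+In practice that's something like this <i>(the binary path, version, and data
+directory are assumptions - use whatever your packages actually ship)</i>:
+
+<pre><code># make the data directory and initialise the cluster as the postgres user
+sudo install -d -o postgres -g postgres /var/lib/postgresql/15/main
+sudo -u postgres /usr/lib/postgresql/15/bin/initdb -D /var/lib/postgresql/15/main
+</code></pre>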
+
+Now create the database and role that you'll use. In my experience
+these have to match the database you're migrating from. I followed
+the <a href="https://docs.akkoma.dev/stable/administration/backup/#restoremove">Akkoma database restore/move</a>
+docs and ended up using psql, again under the postgres user, to run
+<code>CREATE USER akkoma WITH ENCRYPTED PASSWORD '&lt;database-password&gt;';</code> and
+<code>CREATE DATABASE akkoma OWNER akkoma;</code>. <i>(well, i replaced akkoma with pleroma and later used alter queries to change them, but that's because my database is old)</i>
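+
+Those alter queries, roughly, for anyone else dragging a pleroma-named database
+forward <i>(run from psql as the postgres superuser)</i>:
+
+<pre><code>-- connect to the postgres database (not the one being renamed) for these
+ALTER USER pleroma RENAME TO akkoma;
+ALTER DATABASE pleroma RENAME TO akkoma;
+</code></pre>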
+
+After that was ready, I used my firewall of choice (ufw) to
+allow the servers to talk using their private IPs <i>(yay same datacenter)</i>. After that was done, I ran
+this command <code>pg_dump -U akkoma -C akkoma | ssh dynamo "sudo psql -U akkoma -d akkoma"</code>
+and waited.
+<i>dynamo</i> being the host of the new postgres server and owner of a spot in my .ssh/config.
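+
+The firewall bit boils down to letting the other machine's private address in on the
+Postgres port, something like the below <i>(the IP is a placeholder; new-Postgres also
+needs <code>listen_addresses</code> and a <code>pg_hba.conf</code> host line before it
+will accept a connection from the other machine)</i>:
+
+<pre><code># on the new postgres server: let the old server's private IP reach 5432
+sudo ufw allow from 192.0.2.10 to any port 5432 proto tcp
+</code></pre>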
+
+A Note:<br/>
+you can directly do <code>pg_dump ... | psql ...</code> but the Postgres upgrade
+docs say you need to use the new psql version to upgrade, and the old server was missing that
+binary. Instead of seeing if psql 13 would work or if I could get psql 15 working there, I
+piped it over ssh.
+
+It completed quicker than I thought (the command only took 21 minutes!) and all seemed well.
+
+<h3 id="all-was-not-well">All Was Not Well</h3>
+
+First, to prevent Akkoma from receiving activities that may
+be lost if I have to revert, I disallowed everything on 80/443
+except to my own IP so I could see if the web interface was working.
+Yeah my website'd be down for a bit but it was whatever. <i>(i think i could've
+	edited the nginx config to the same effect, but this was easier)</i>
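+
+<i>(with ufw that's roughly the pair below - the IP is a placeholder for your own
+address, and the allow has to sit above the deny since ufw matches rules in order)</i>
+
+<pre><code># let only my own address in on 80/443, deny everyone else
+sudo ufw insert 1 allow from 198.51.100.7 to any port 80,443 proto tcp
+sudo ufw deny 80,443/tcp
+</code></pre>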
+
+I edited my <code>/etc/pleroma/config.exs</code> to point
+to the new postgres server and started Akkoma, but new-Postgres didn't
+see a connection? Oh, I edited the wrong config and it was still
+connecting to the local Postgres.
+
+I deleted <code>/etc/pleroma</code>, so I'd stop getting confused by
+it, and edited the <i>correct</i> file: <code>/opt/pleroma/config/prod.secret.exs</code>
+<i>(this is because I'm a From Source install)</i>.
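+
+The chunk that matters is the Repo config; pointed at the new machine it looks
+roughly like this <i>(the hostname is a placeholder for new-Postgres's private IP)</i>:
+
+<pre><code>config :pleroma, Pleroma.Repo,
+  username: "akkoma",
+  password: "&lt;database-password&gt;",
+  database: "akkoma",
+  hostname: "192.0.2.10"
+</code></pre>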
+
+Aaaand it didn't work. Turns out it was trying to connect to its own private IP
+because copy-paste can be hard sometimes. Glad I stopped old-Postgres.
+
+Fixing that, I finally saw connections on the other machine. New problem: Akkoma
+times out queries after 15000ms (15 seconds) because they were taking too long. what?
+and nothing is loading? ahhh.
+
+Per the Akkoma docs from earlier, I ran some commands to try and clean up
+the database. I'm a
+From Source install, so I can <code>mix pleroma.database vacuum analyze</code>,
+which did <i>not help</i>, so I tried it again with <code>full</code> instead
+of <code>analyze</code>. This also did not help.
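+
+<i>(spelled out, those attempts were roughly the below - the user and path assume the
+usual From Source layout)</i>
+
+<pre><code>cd /opt/pleroma
+sudo -Hu pleroma MIX_ENV=prod mix pleroma.database vacuum analyze
+sudo -Hu pleroma MIX_ENV=prod mix pleroma.database vacuum full
+</code></pre>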
+
+I think what I was looking for was for Akkoma to throw a fit as evidence that
+something weird happened during the transfer, but nothing went wrong.
+
+So I was out of ideas. I am a Postgres novice, and I was out of luck. What
+does someone like me do when out of luck? Paste the error into Google, of course!
+Maybe I should've done that from the start, but I don't normally get
+many results for Akkoma or Pleroma.
+
+So to google I went! And pasted <q>timed out because it queued and checked out the connection for longer than 15000ms</q>
+
+and then I read
+<a href="https://elixirforum.com/t/timed-out-because-it-queued-and-checked-out-the-connection-for-longer-than-15000ms/34793/4">a comment from al2o3cr</a> that said:
+
+<blockquote>
+	<p>Usually that's an indication of database issues, from missing indexes to queries that need optimization.</p>
+</blockquote>
+
+"Missing indexes" there caught my eye. It made a lot of sense to me. It's
+taking so long because it's either digging through the 2.5 million activities
+in the database, or it's trying to reindex the thing <i>(both?)</i>. A quick
+google later and I ran <code>REINDEX DATABASE akkoma;</code> from psql, which literally
+fixed all of my problems.
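+
+In full, run on the Postgres host <i>(expect it to take a while on a big database)</i>:
+
+<pre><code>sudo -u postgres psql -d akkoma -c 'REINDEX DATABASE akkoma;'
+</code></pre>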
+
+That's it! Take care, and don't forget to reindex after your migration.
\ No newline at end of file
diff --git a/served/words/atom.xml b/served/words/atom.xml
new file mode 100644
index 0000000..d8a38d5
--- /dev/null
+++ b/served/words/atom.xml
@@ -0,0 +1,196 @@
+<?xml version="1.0" encoding="utf-8"?>
+<feed xmlns="http://www.w3.org/2005/Atom">
+
+	<title>gennyble's writing</title>
+	<subtitle>Technical writing; project updates; weeknotes</subtitle>
+	<updated>2024-03-02T01:42:00-06:00</updated>
+
+	<link rel="self" href="https://nyble.dev/words/atom.xml" type="application/atom+xml" />
+	<id>https://nyble.dev/words/atom.xml</id>
+
+	<author>
+			<name>gennyble</name>
+			<email>gen@nyble.dev</email>
+	</author>
+
+	
+	<entry>
+		<title>Akkoma Postgres Migration</title>
+		<link href="https://nyble.dev/words/akkoma-postgres-migration.html" rel="alternate" type="text/html" />
+		<id>https://nyble.dev/atom/writing-1/akkoma-postgres-migration</id>
+
+		<published>2023-10-18T23:16:00-05:00</published>
+		<updated>2023-10-18T23:16:00-05:00</updated>
+
+		<content type="html">
+&lt;p&gt;
+&lt;i&gt;(i&apos;m going to say Pleroma a lot here where Akkoma might
+	be correct for newly installed software, but my instance is
+	a few years old and this is more of a telling-of-events than
+	a guide)&lt;/i&gt;
+&lt;/p&gt;
+&lt;details class=&quot;tldr&quot;&gt;
+	&lt;summary&gt;TL;DR: if you migrated your Akkoma&apos;s Postgres and now you&apos;re getting timeouts&lt;/summary&gt;
+	&lt;p&gt;
+		It might need a reindex. Use &lt;code&gt;psql&lt;/code&gt; to connect
+		to the database and run &lt;code&gt;REINDEX DATABASE akkoma;&lt;/code&gt;.
+		This might take a while.
+	&lt;/p&gt;
+&lt;/details&gt;
+&lt;hr/&gt;
+&lt;p&gt;
+Recently I went about trying to get the services running on
+my VPS to be happy in a gig of RAM. I did not achieve this,
+but I found a solution that worked nearly as well.
+&lt;/p&gt;
+&lt;p&gt;
+I wanted to try to scale my VPS, on the &quot;Linode 4GB&quot; plan, back down to a Nanode. It
+started its life as a Nanode, but Akkoma - well, Pleroma then -
+was greatly displeased with this and pegged my CPU at 100%. Since
+my CPU usage lately peaks at 30% and averages 18%, this no longer
+seems to be the case.
+&lt;/p&gt;
+&lt;p&gt;
+To re-nanode, I had to fit in 1G of memory.
+I managed to shave the 110M I needed
+by asking &lt;code&gt;systemd-journald&lt;/code&gt; to stop using 80M of memory
+&lt;i&gt;(it seemed to ignore my 10M plea, but it dropped by 30M so whatever)&lt;/i&gt;,
+telling Postgres to use at most 100M, and disabling things that
+I was not actively using anymore.
+&lt;/p&gt;
+&lt;p&gt;
+I didn&apos;t specifically want to learn the ins-and-outs of Postgres
+performance tuning, so I used &lt;a href=&quot;https://pgtune.leopard.in.ua/&quot;&gt;pgtune&lt;/a&gt;
+to give me the right config lines for 100M. It worked well!
+&lt;/p&gt;
+&lt;p&gt;
+This was all for naught, though, because I couldn&apos;t get my
+disk to fit under 25G, which was also a requirement of nanodeisation that I&apos;d
+forgotten about. The database itself was 9.9G! You can
+&lt;a href=&quot;https://docs.akkoma.dev/stable/administration/CLI_tasks/database/#prune-old-remote-posts-from-the-database&quot;&gt;Prune old remote posts&lt;/a&gt;
+but I didn&apos;t really want to do that yet. It seems like the right
+way to go, but I had one more trick.
+&lt;/p&gt;
+&lt;h2 id=&quot;two-of-them&quot;&gt;Two of Them?&lt;/h2&gt;
+&lt;p&gt;
+I have to keep a separate VPS around for another thing, and it gets
+half a percent of CPU usage, which is... not a lot. All it does is serve
+a single-page static site through Nginx. I could almost
+certainly put this on the same server as all my things, but
+I like having the separation.
+&lt;/p&gt;
+&lt;p&gt;
+This does mean that I pay for almost an entire Nanode to do
+very nearly nothing. 
+&lt;/p&gt;
+&lt;p&gt;
+By putting Postgres on it I&apos;d lose the different-machine aspect
+of the separation, but gain so much disk space and memory. The
+single-page static site is still on a separate public IP, which is
+good enough for me!
+&lt;/p&gt;
+&lt;h3 id=&quot;setup-postgres&quot;&gt;Postgres Migration&lt;/h3&gt;
+&lt;i&gt;(more of a recount of events than a guide, but written guide-like? just pay mind to the commands and you&apos;ll be fine)&lt;/i&gt;
+&lt;p&gt;
+Install Postgres on the new server. It doesn&apos;t have to be the
+same major version since we&apos;re going to dump and restore the
+database which is
+&lt;a href=&quot;https://www.postgresql.org/docs/current/upgrading.html&quot;&gt;the recommended upgrade method anyway&lt;/a&gt;.
+Don&apos;t forget to run &lt;code&gt;initdb&lt;/code&gt; and give your data
+directory with the &lt;code&gt;-D&lt;/code&gt; flag. Run it under the
+postgres user.
+&lt;/p&gt;
+&lt;p&gt;
+Now create the database and role that you&apos;ll use. In my experience
+these have to match the database you&apos;re migrating from. I followed
+the &lt;a href=&quot;https://docs.akkoma.dev/stable/administration/backup/#restoremove&quot;&gt;Akkoma database restore/move&lt;/a&gt;
+docs and ended up using psql, again under the postgres user, to run
+&lt;code&gt;CREATE USER akkoma WITH ENCRYPTED PASSWORD &apos;&amp;lt;database-password&amp;gt;&apos;;&lt;/code&gt; and
+&lt;code&gt;CREATE DATABASE akkoma OWNER akkoma;&lt;/code&gt;. &lt;i&gt;(well, i replaced akkoma with pleroma and later used alter queries to change them, but that&apos;s because my database is old)&lt;/i&gt;
+&lt;/p&gt;
+&lt;p&gt;
+After that was ready, I used my firewall of choice (ufw) to
+allow the servers to talk using their private IPs &lt;i&gt;(yay same datacenter)&lt;/i&gt;. After that was done, I ran
+this command &lt;code&gt;pg_dump -U akkoma -C akkoma | ssh dynamo &quot;sudo psql -U akkoma -d akkoma&quot;&lt;/code&gt;
+and waited.
+&lt;i&gt;dynamo&lt;/i&gt; being the host of the new postgres server and owner of a spot in my .ssh/config.
+&lt;/p&gt;
+&lt;p&gt;
+A Note:&lt;br/&gt;
+you can directly do &lt;code&gt;pg_dump ... | psql ...&lt;/code&gt; but the Postgres upgrade
+docs say you need to use the new psql version to upgrade, and the old server was missing that
+binary. Instead of seeing if psql 13 would work or if I could get psql 15 working there, I
+piped it over ssh.
+&lt;/p&gt;
+&lt;p&gt;
+It completed quicker than I thought (the command only took 21 minutes!) and all seemed well.
+&lt;/p&gt;
+&lt;h3 id=&quot;all-was-not-well&quot;&gt;All Was Not Well&lt;/h3&gt;
+&lt;p&gt;
+First, to prevent Akkoma from receiving activities that may
+be lost if I have to revert, I disallowed everything on 80/443
+except to my own IP so I could see if the web interface was working.
+Yeah my website&apos;d be down for a bit but it was whatever. &lt;i&gt;(i think i could&apos;ve
+	edited the nginx config to the same effect, but this was easier)&lt;/i&gt;
+&lt;/p&gt;
+&lt;p&gt;
+I edited my &lt;code&gt;/etc/pleroma/config.exs&lt;/code&gt; to point
+to the new postgres server and started Akkoma, but new-Postgres didn&apos;t
+see a connection? Oh, I edited the wrong config and it was still
+connecting to the local Postgres.
+&lt;/p&gt;
+&lt;p&gt;
+I deleted &lt;code&gt;/etc/pleroma&lt;/code&gt;, so I&apos;d stop getting confused by
+it, and edited the &lt;i&gt;correct&lt;/i&gt; file: &lt;code&gt;/opt/pleroma/config/prod.secret.exs&lt;/code&gt;
+&lt;i&gt;(this is because I&apos;m a From Source install)&lt;/i&gt;.
+&lt;/p&gt;
+&lt;p&gt;
+Aaaand it didn&apos;t work. Turns out it was trying to connect to its own private IP
+because copy-paste can be hard sometimes. Glad I stopped old-Postgres.
+&lt;/p&gt;
+&lt;p&gt;
+Fixing that, I finally saw connections on the other machine. New problem: Akkoma
+times out queries after 15000ms (15 seconds) because they were taking too long. what?
+and nothing is loading? ahhh.
+&lt;/p&gt;
+&lt;p&gt;
+Per the Akkoma docs from earlier, I ran some commands to try and clean up
+the database. I&apos;m a
+From Source install, so I can &lt;code&gt;mix pleroma.database vacuum analyze&lt;/code&gt;,
+which did &lt;i&gt;not help&lt;/i&gt;, so I tried it again with &lt;code&gt;full&lt;/code&gt; instead
+of &lt;code&gt;analyze&lt;/code&gt;. This also did not help.
+&lt;/p&gt;
+&lt;p&gt;
+I think what I was looking for was for Akkoma to throw a fit as evidence that
+something weird happened during the transfer, but nothing went wrong.
+&lt;/p&gt;
+&lt;p&gt;
+So I was out of ideas. I am a Postgres novice, and I was out of luck. What
+does someone like me do when out of luck? Paste the error into Google, of course!
+Maybe I should&apos;ve done that from the start, but I don&apos;t normally get
+many results for Akkoma or Pleroma.
+&lt;/p&gt;
+&lt;p&gt;
+So to google I went! And pasted &lt;q&gt;timed out because it queued and checked out the connection for longer than 15000ms&lt;/q&gt;
+&lt;/p&gt;
+&lt;p&gt;
+and then I read
+&lt;a href=&quot;https://elixirforum.com/t/timed-out-because-it-queued-and-checked-out-the-connection-for-longer-than-15000ms/34793/4&quot;&gt;a comment from al2o3cr&lt;/a&gt; that said:
+&lt;/p&gt;
+&lt;blockquote&gt;
+	&lt;p&gt;Usually that&apos;s an indication of database issues, from missing indexes to queries that need optimization.&lt;/p&gt;
+&lt;/blockquote&gt;
+&lt;p&gt;
+&quot;Missing indexes&quot; there caught my eye. It made a lot of sense to me. It&apos;s
+taking so long because it&apos;s either digging through the 2.5 million activities
+in the database, or it&apos;s trying to reindex the thing &lt;i&gt;(both?)&lt;/i&gt;. A quick
+google later and I ran &lt;code&gt;REINDEX DATABASE akkoma;&lt;/code&gt; from psql, which literally
+fixed all of my problems.
+&lt;/p&gt;
+&lt;p&gt;
+That&apos;s it! Take care, and don&apos;t forget to reindex after your migration.
+&lt;/p&gt;
+		</content>
+	</entry>
+	
+</feed>
\ No newline at end of file
diff --git a/served/words/words.html b/served/words/words.html
new file mode 100644
index 0000000..c33515d
--- /dev/null
+++ b/served/words/words.html
@@ -0,0 +1,16 @@
+---
+template=post
+title=Writing
+style=/styles/post.css
+---
+
+If I write about tech things, you'll be able to find those things here.
+
+The writing here is included in the website's <a href="/atom.xml">Atom feed</a>.
+
+<section class="written" style="clear: both;">
+	<a href="akkoma-postgres-migration.html">Akkoma Postgres Migration</a>
+	<p>
+		A retelling of how I migrated my Akkoma instance's Postgres database and the troubles I faced.
+	</p>
+</section>
diff --git a/served/words/writing.css b/served/words/writing.css
new file mode 100644
index 0000000..6056768
--- /dev/null
+++ b/served/words/writing.css
@@ -0,0 +1,54 @@
+h2, h3, h4, h5 {
+	margin-top: 32px;
+}
+
+blockquote {
+	border-left: 4px solid var(--text);
+	padding: 8px 8px 8px 12px;
+	margin: 0 auto;
+	background-color: var(--background-dim);
+	color: var(--text);
+	font-style: italic;
+}
+
+blockquote p {
+	margin: 0;
+}
+
+code {
+	background-color: var(--background-dim);
+}
+
+details.tldr {
+	width: 100%;
+	margin: 1rem auto;
+	padding: 8px;
+
+	color: var(--text-dim);
+
+	border-radius: 8px;
+	border: 1px dashed var(--text-dim);
+}
+
+details[open].tldr summary {
+	padding-bottom: 8px;
+	margin-bottom: 8px;
+	border-bottom: 1px solid var(--text-dim);
+}
+
+details.tldr p {
+	margin: 0;
+}
+
+q {
+	background-color: var(--background-dim);
+	font-style: italic;
+	padding: 4px;
+}
+
+q::before, q::after {
+	content: none;
+}