<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
<author>
	<name>sirjofri</name>
	<email>sirjofri@sirjofri.de</email>
</author>
<link rel="self" href="https://sirjofri.de/changeblog.xml"/>
<rights>© Copyright 2026 sirjofri</rights>
<id>https://sirjofri.de/</id>
<title>changeblog</title>
<updated>2026-01-10T14:55:52+01:00</updated>
<entry>
	<title>Building and Running Purgatorio on Windows 11</title>
	<id>https://sirjofri.de/changeblog/1768049814/</id>
	<link href="https://sirjofri.de/changeblog/1768049814/"/>
	<updated>2026-01-10T13:56:54+01:00</updated>
	<content type="html"><![CDATA[<!-- end defs -->
		<article>
		<header>
		<h2>Building and Running Purgatorio on Windows 11</h2>
		<b>Sat, 10 Jan 26 13:56:54 CET</b>
		</header>
		<!-- end defs -->
				<p>I wanted to try building purgatorio (inferno) on a Windows system, and with some effort I got it to compile using Visual Studio.
				Better yet, I was able to set up Visual Studio in a way that lets me use IntelliSense and the debugger.
				</p>
				<p>Here are a few notes:
				</p>
		<ul>
		  <li>make sure that git <b>doesn't change line endings to CRLF!</b> This is important!
		  <ul>
		    <li>to “fix” files later, set <code>git config core.autocrlf false</code>, then modify the file and use <code>git restore</code> to restore it with the original line endings</li>
		  </ul>
		  </li>
		  <li>it's easiest to open the folder in VS, then open the developer command line. This sets up <code>%PATH%</code> etc.
		  <ul>
		    <li>Tools → Command Line → Developer Command Prompt
		  </ul>
		  </li>
		  <li><b>Do not</b> try to run <code>makemk.sh</code> etc. Just use the NT binaries that are shipped with the repo (<code>Nt/386/bin</code>).
		  <li>adjust <code>mkconfig</code>
		  <li>set <code>%PATH%</code> to include <code>Nt/386/bin</code>: <code>set PATH=%PATH%;...</code>
		  <li>in the mkfiles, replace <code>-</code> with <code>/</code> in the cl/link/lib arguments (mostly optional)
		  <ul>
		    <li>the tools sometimes warn that the dash options will be deprecated. I think we should change the mkfiles for Nt in the future; <code>/</code> <i>is</i> the Windows way of passing options.
		  </ul>
		  </li>
		  <li><code>libinterp</code>: first manually run <code>mk *.h</code> (for the individual headers)
		  <li><code>utils/mkfile</code>: Windows can't build mk properly:
		  <ul>
		    <li>multiple arguments are bundled into a single argument, which cl can't understand</li>
		    <li>mk is already built and unlikely to change, so you can just comment out mk in the mkfile (<code>NOTPLAN9</code>)</li>
		    <li>I assume the issue is this line: <code>CFLAGS=$CFLAGS -I../include -DROOT'="'$ROOT'"'</code>, where <code>$CFLAGS</code> is passed as a single option instead of a list of options.</li>
		  </ul>
		</ul>
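				<p>Putting the notes above together, a typical session in the Developer Command Prompt could look roughly like this (a sketch under the assumptions above, not a verbatim transcript; paths depend on your checkout):
				</p>
		<code><pre>
		rem inside the repo root, in the VS Developer Command Prompt
		git config core.autocrlf false
		set PATH=%PATH%;%CD%\Nt\386\bin
		rem adjust mkconfig first, then:
		mk install
		</pre></code>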
			<section><!-- 3 -->
				<h4>Setting up Visual Studio</h4>
				<p>Store these files in the root folder of the project:
				</p>
			<section><!-- 4 -->
				<h5><code>launch.vs.json</code>: Needed for launching. Pick <code>emu</code> in the selection.</h5>
		<code><pre>
		{
		  "version": "0.2.1",
		  "configurations": [
		    {
		      "name": "emu",
		      "project": "emu\\Nt\\iemu.exe",
		      "args": [ "-r", "${workspaceRoot}", "/dis/wm/wm.dis" ]
		    }
		  ]
		}
		</pre></code>
			</section><!-- 4 -->
			<section><!-- 4 -->
				<h5><code>CppProperties.json</code>: Needed for IntelliSense.</h5>
		<code><pre>
		{
		  "configurations": [
		    {
		      "inheritEnvironments": [
		        "msvc_x86"
		      ],
		      "name": "x86-Debug",
		      "includePath": [
		        "${env.INCLUDE}",
		        "${workspaceRoot}\\**"
		      ],
		      "defines": [
		        "WIN32",
		        "_DEBUG",
		        "UNICODE",
		        "_UNICODE"
		      ],
		      "intelliSenseMode": "windows-msvc-x86"
		    }
		  ]
		}
		</pre></code>
			</section><!-- 4 -->
			<section><!-- 4 -->
				<h5><code>tasks.vs.json</code>: If you want right-click → build for mkfiles.</h5>
		<code><pre>
		{
		  "version": "0.2.1",
		  "tasks": [
		    {
		      "taskLabel": "runmk",
		      "appliesTo": "mkfile",
		      "contextType": "build",
		      "type": "launch",
		      "command": "mk",
		      "args": [ "install" ],
		      "envVars": {
		        "PATH": "${workspaceRoot}\\Nt\\386\\bin;${env.PATH}"
		      }
		    }
		  ]
		}
		</pre></code>
		<!-- END everything -->
			</section><!-- 4 -->
			</section><!-- 3 -->
		</article>
<!-- END everything -->
]]></content>
</entry>

<entry>
	<title>Encrypted File Store on Plan 9 using cryptsetup</title>
	<id>https://sirjofri.de/changeblog/1755538728/</id>
	<link href="https://sirjofri.de/changeblog/1755538728/"/>
	<updated>2025-08-18T19:38:48+02:00</updated>
	<content type="html"><![CDATA[<!-- end defs -->
		<article>
		<header>
		<h2>Encrypted File Store on Plan 9 using cryptsetup</h2>
		<b>Mon, 18 Aug 25 19:38:48 CEST</b>
		</header>
		<!-- end defs -->
				<p>Sometimes you just need a little writable filesystem that is encrypted and stored in a single file.
				Turns out there are multiple ways to do that, besides the obvious ones.
				</p>
				<p>This post describes a simple way to do that using cryptsetup and gefs.
				It is worth noting that I won't go into details about configuring gefs to do exactly what you want.
				Also, gefs is still considered experimental and should be used with care.
				Especially if you store sensitive data in that file, you should have a proper backup.
				</p>
			<section><!-- 3 -->
				<h4>Cryptsetup</h4>
				<p>Using gefs on a file is trivial, so we start with the more complicated part: cryptsetup.
				Cryptsetup uses fs(3) to expose the unencrypted file as a simple disk filesystem.
				The stored file itself is encrypted.
				First, we have to create a file we can use.
				We use <code>dd</code> for that:
				</p>
		<code>dd -if /dev/zero -bs 1024 -count 524288 &gt; mydisk</code>
				<p>This generates a file <code>mydisk</code> with a size of 512 MiB: 524288 blocks of 1024 bytes each (512 * 1024 = 524288 blocks).
				You can use <code>hoc</code> to calculate the right block count for the size you want.
				</p>
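				<p>For example, to get the <code>-count</code> for a 1 GiB file (count = size in bytes divided by the 1024-byte block size):
				</p>
		<code><pre>
		% hoc
		1024*1024
		1048576
		</pre></code>
				<p>So <code>dd -if /dev/zero -bs 1024 -count 1048576 &gt; mydisk</code> would give you 1 GiB.
				</p>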
				<p>Note that gefs has a minimum file size requirement.
				</p>
				<p>We want to encrypt this file with cryptsetup.
				To do that, we first initialize the file, then make it available in <code>/dev/fs</code>:
				</p>
		<code><pre>
		# set up file for encryption. Set password.
		disk/cryptsetup -f mydisk

		# make file available as /dev/fs/mydisk
		disk/cryptsetup -i mydisk
		</pre></code>
				<p>After doing that, the decrypted disk file will be available as <code>/dev/fs/mydisk</code>.
				</p>
			</section><!-- 3 -->
			<section><!-- 3 -->
				<h4>gefs</h4>
				<p>With our virtual disk available in <code>/dev/fs/mydisk</code>, let's use it:
				</p>
		<code><pre>
		# ream the disk, with $user as the owner
		gefs -f /dev/fs/mydisk -r $user

		# srv the disk as /srv/mydisk and /srv/mydisk.cmd
		gefs -f /dev/fs/mydisk -n mydisk
		</pre></code>
				<p>With that set up, we can mount the disk and use it:
				</p>
		<code><pre>
		mount -c /srv/mydisk /n/mydisk
		# do something
		</pre></code>
			</section><!-- 3 -->
			<section><!-- 3 -->
				<h4>Shutting down the filesystem</h4>
				<p>To shut down the disk and remove it from <code>/dev/fs</code>, we first stop the only process that accesses <code>/dev/fs/mydisk</code> by shutting down gefs, and then remove it from <code>/dev/fs</code>.
				</p>
		<code><pre>
		unmount /srv/mydisk

		# stop gefs
		echo halt &gt; /srv/mydisk.cmd

		# remove from /dev/fs
		echo del mydisk &gt; /dev/fs/ctl
		</pre></code>
				<p>If gefs is still running when you remove the disk from <code>/dev/fs</code>, fs(3) will wait until the file is no longer in use and then remove it.
				</p>
				<p>Regarding actual use: I haven't used this setup in practice yet.
				It might turn out to be slow, though I doubt it.
				Gefs could eat your data, so have a good backup solution.
		<!-- END everything -->
				</p>
			</section><!-- 3 -->
		</article>
<!-- END everything -->
]]></content>
</entry>

<entry>
	<title>ChromaGun 2: Dye Hard</title>
	<id>https://sirjofri.de/changeblog/1740304625/</id>
	<link href="https://sirjofri.de/changeblog/1740304625/"/>
	<updated>2025-02-23T10:57:05+01:00</updated>
	<content type="html"><![CDATA[<!-- end defs -->
		<article>
		<header>
		<h2>ChromaGun 2: Dye Hard</h2>
		<b>Sun, 23 Feb 25 10:57:05 CET</b>
		</header>
		<!-- end defs -->
				<p><b>Disclaimers</b>:
				Right now, there's only the demo freely available, so the review is about that only.
				Furthermore, since ChromaGun is about colors, it <i>has</i> a specific colorblind mode, which I didn't test, and I'm also not the right person to test this.
				</p>
				<p>ChromaGun 2 (Demo!) is a puzzle platformer which surprised me with a good story and challenging puzzles, especially for a demo.
				Often enough, the story and the complexity of the puzzles in a demo are reduced to a bare minimum, not showing too much to make people buy the game.
				In this case however, the demo is perfect for exploring what the game is about.
				It even has enough replay value due to achievements and collectibles.
				</p>
				<p>The demo teases a great and deep <b>story</b> that doesn't need to hide behind other great puzzle platformers, most notably <i>Portal</i>.
				While the story shares some similarities with Portal (“Testing”), it is clearly not a clone.
				In the story, the player is guided through different dimensions with very distinct styles.
				Each dimension has its own sections with puzzles to solve, and a different narrator.
				Judging from the demo, it seems like the player doesn't have much choice about the story; there are no branches and no different endings.
				It would be easy to spoil more parts of the story, but it's even easier to say:
				Just play it—it's worth it!
				</p>
				<p><b>Gameplay</b>-wise, it looks very similar to the first part <i>ChromaGun</i>, though I can't really tell as I haven't played it.
				The player has the ability to color specific plates and objects in the level using the Chroma Gun.
				These colored plates will behave like magnets for various gameplay objects with the same color.
				The challenge is to color the correct plates in the correct order to manipulate the level so we can finish this section and continue.
				</p>
				<p>The colors themselves are oriented around three primary colors (red, yellow, blue) and their secondary colors (orange, green, magenta).
				Those secondary colors can be achieved by mixing the primary colors accordingly.
				</p>
				<p>It's worth noting that at some point the plates and objects are “overcolored” and just turn black, deactivating any magnetic behavior.
				This, and the fact that some objects can't change their color, is used as an additional puzzle challenge.
				</p>
				<p>Throughout the demo there are challenges of varying degrees.
				While some challenges can be solved very easily by just looking around and coloring two objects, others require careful thinking and also thinking outside the box.
				One challenge even made me pause playing and continue at a later point.
				Yes, the demo is not just made for a single testing session—you can enjoy it multiple times!
				</p>
				<p>The <b>graphics</b> of the game are kept simple; there's no over-realism or hyper-realism.
				The image is clear to read, and there are no strange lighting artifacts or blurring.
				Various effects and animated objects fill the otherwise almost sterile levels with life.
				And did I notice some very well known Niagara effect somewhere?
				</p>
				<p>Other than that, it's hard to judge the game's graphics in total based on the demo.
				Each dimension has its own art and graphical style: one goes in a more organic direction, the other looks like a comic.
				Each style however looks very well thought out and matches the dimension.
				</p>
				<p>As a game developer, it would be quite interesting to take a look behind the scenes and see how they achieved those different styles.
				And of course, I noticed a few things that I'd have done differently.
				</p>
				<p><b>In total</b>, the ChromaGun 2: Dye Hard demo is a wonderful demo that showcases what a demo in 2025 can be.
				It gives a good taste of the story, and has a lot of replay value for a demo by making use of achievements and collectibles.
				The gameplay covers a nice range of easy and complex puzzles; I sure hope they can find the right balance in the final game.
				Pixel Maniacs, good job!
				</p>
		<ul>
		<li><a href="https://store.steampowered.com/app/2982340/ChromaGun_2_Dye_Hard/">ChromaGun 2: Dye Hard on Steam</a></li>
		</ul>
				<p>Comment:
		<a href="https://pleroma.envs.net/notice/ArPM0sC5Xp2tGkgQgS">Fediverse Post</a>
		<!-- END everything -->
				</p>
		</article>
<!-- END everything -->
]]></content>
</entry>

<entry>
	<title>Mail Server DKIM</title>
	<id>https://sirjofri.de/changeblog/1740150466/</id>
	<link href="https://sirjofri.de/changeblog/1740150466/"/>
	<updated>2025-02-21T16:07:46+01:00</updated>
	<content type="html"><![CDATA[<!-- end defs -->
		<article>
		<header>
		<h2>Mail Server DKIM</h2>
		<b>Fri, 21 Feb 25 16:07:46 CET</b>
		</header>
		<!-- end defs -->
				<p>Some mail providers want it, others demand it: DKIM.
				</p>
				<p>Upas is quite an old mail system, but it <i>has</i> dkim support.
				However, documentation for upas in general is rare, so I'll try to note down how to sign your outgoing mail in a 9front mail system.
				This post is not only for you, but also for me in five years.
				</p>
			<section><!-- 3 -->
				<h4>Theory: DKIM on Plan 9</h4>
				<p>Upas is distributed with an additional tool, <code>upas/dkim</code>, which we will use here.
				The tool expects the private key in factotum.
				How you get the key into the factotum is up to you as it depends on various factors.
				I'll just show you which key to generate and how to use it.
				</p>
				<p>DKIM uses your domain and a specific <i>selector</i> as an identifier.
				While it is pretty clear what the domain is, the selector is just a name for a specific key.
				It is possible to have multiple DKIM keys, and this is sometimes needed when rotating your keys.
				</p>
				<p>Everything else is just calling <code>dkim</code> in your <code>remotemail</code>.
				</p>
			</section><!-- 3 -->
			<section><!-- 3 -->
				<h4>Implementation</h4>
				<p>To generate keys, run the following commands:
				</p>
		<code><pre>
		auth/rsagen -b 2048 -t 'service=dkim role=sign hash=sha256 domain=example.com'
		  > dkimprivatekey
		auth/rsa2asn1 -f spki dkimprivatekey | auth/pemencode DKIM >dkimpubkey
		</pre></code>
				<p>This will generate the private key you should feed into the factotum, as well as a public key file in PEM format.
				</p>
				<p>We don't need the PEM format specifically, but it's an easy way to create a Base64-encoded version of the public key, which is what we need.
				Just ignore the PEM header and footer and copy only the key itself into the DNS entry.
				</p>
				<p>The DNS entry must be a TXT entry named <code>SELECTOR._domainkey.example.com</code> with the content: <code>v=DKIM1; k=rsa; p=YOURPUBLICKEY</code>.
				</p>
				<p>This DNS entry will be used by the receiving servers to verify your mail.
				Keep note of the <i>SELECTOR</i> as it is the name of this specific key, and you'll use it to tell the receiving server which key you used for signing.
				</p>
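				<p>You can check the published record from Plan 9 with <code>ndb/dnsquery</code>; the session below is only a sketch of what a correct answer looks like, not verbatim output:
				</p>
		<code><pre>
		% ndb/dnsquery
		> SELECTOR._domainkey.example.com txt
		SELECTOR._domainkey.example.com txt	v=DKIM1; k=rsa; p=YOURPUBLICKEY
		</pre></code>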
				<p>To sign your mails, open your <code>/mail/lib/remotemail</code> file and edit the call to <code>smtp</code> with something similar to this:
				</p>
		<code><pre>
		/bin/upas/smtp -f -C -s -h $fd $addr $sender $*
		   | /bin/upas/dkim -s SELECTOR -d example.com
		   | /bin/upas/smtp -C -s -h $fd $addr $sender $*
		</pre></code>
				<p>As you can see, your mail is processed by two calls to <code>smtp</code>, with a call to <code>dkim</code> in between.
				The first call doesn't <i>send</i> the mail, it only processes it (the <code>-f</code> flag) to add additional headers.
				</p>
				<p>The call to <code>dkim</code> then processes the headers and adds the DKIM signature header to your mail.
				</p>
				<p>Last, the second call to <code>smtp</code> finally sends the processed mail to the receiving server.
				</p>
				<p>Comment:
		<a href="https://pleroma.envs.net/notice/ArLe4cGFkHavYUj4lM">Fediverse Post</a>
		<!-- END everything -->
				</p>
			</section><!-- 3 -->
		</article>
<!-- END everything -->
]]></content>
</entry>

<entry>
	<title>HTTPS on 9front</title>
	<id>https://sirjofri.de/changeblog/1629103016/</id>
	<link href="https://sirjofri.de/changeblog/1629103016/"/>
	<updated>2021-08-16T10:36:56+02:00</updated>
	<content type="html"><![CDATA[<!-- end defs -->
		<article>
		<header>
		<h2>HTTPS on 9front</h2>
		<b>Mon, 16 Aug 21 10:36:56 CEST</b>
		</header>
		<!-- end defs -->
				<p>I was able to switch my website hosting to 9front completely!
				This is thanks to ori's aclient (an ACME client that works with Let's Encrypt).
				This change makes my website deployment much easier since I'm fully writing on 9front and also deploying to 9front.
				The only thing missing is that the source repository is still on github, but that's fine since we also have git9.
				</p>
				<p>I'm writing this short blog post on my smartphone via drawterm for android. I'm sure we'll get proper documentation for how aclient works (by the way, the man page is very well written), so I'll just add a short note about how to use the certificate with tcp80 and tlssrv.
				</p>
				<p><code>/rc/bin/service/tcp443</code>:
		<code><pre>
		#!/bin/rc
		/bin/tlssrv -c/sys/lib/tls/acmed/mydomain.tls.crt /bin/tcp80
		</pre></code>
		<!-- END everything -->
				</p>
		</article>
<!-- END everything -->
]]></content>
</entry>

<entry>
	<title>Automatically save sent files in “Sent”</title>
	<id>https://sirjofri.de/changeblog/1608301892/</id>
	<link href="https://sirjofri.de/changeblog/1608301892/"/>
	<updated>2020-12-18T15:31:32+01:00</updated>
	<content type="html"><![CDATA[<!-- end defs -->
		<article>
		<header>
		<h2>Automatically save sent files in “Sent”</h2>
		<b>Fri, 18 Dec 20 15:31:32 CET</b>
		</header>
		<!-- end defs -->
				<p>Since many mail clients do this, it might be helpful for other people to have it here.
				As for me, I like to have all my outgoing mails automatically saved in a <i>Sent</i> directory, so that's what I want to set up.
				</p>
				<p>There are multiple ways to do this.
				I want to present a very simple way without sending your mail to yourself or something like that.
				</p>
				<p>Outgoing mail is processed via <code>upas/send</code>, or via <code>/mail/box/$user/pipefrom</code> if that exists.
				We use this feature to build our own little filter script for outgoing mails.
				</p>
				<p>The script itself is very simple.
				We just need a temporary place to store the mail message, then save it in the <code>Sent</code> directory and forward it to the normal send routines:
				</p>
		<code><pre>
		#!/bin/rc
		rfork en

		# see /sys/src/cmd/upas/filterkit/pipefrom.sample
		bind -c /mail/tmp /tmp
		TMP=/mail/tmp/mine.$pid

		cat &gt; $TMP

		/bin/upas/mbappend /mail/box/&lt;yourusername&gt;/Sent &lt; $TMP
		/bin/upas/send $* &lt; $TMP
		</pre></code>
				<p>Of course you need to create the directory <code>/mail/box/&lt;yourusername&gt;/Sent</code>, exchange <code>&lt;yourusername&gt;</code>
				with your <code>$user</code>, and make the script executable.
				</p>
				<p>The <code>Sent</code> directory needs read and write access for your own user, which should be fine with the defaults.
				</p>
				<p>If you want unauthenticated users to send mails to that directory you need to make this directory world-writable.
				</p>
				<p>With these adjustments you can send mails with acme/marshal and they are automatically saved in your <code>Sent</code> mailbox.
		<!-- END everything -->
				</p>
		</article>
<!-- END everything -->
]]></content>
</entry>

<entry>
	<title>9front on Lenovo Thinkpad Twist</title>
	<id>https://sirjofri.de/changeblog/1608028434/</id>
	<link href="https://sirjofri.de/changeblog/1608028434/"/>
	<updated>2020-12-15T11:33:54+01:00</updated>
	<content type="html"><![CDATA[<!-- end defs -->
		<article>
		<header>
		<h2>9front on Lenovo Thinkpad Twist</h2>
		<b>Tue, 15 Dec 20 11:33:54 CET</b>
		</header>
		<!-- end defs -->
				<p>A few weeks ago I removed archlinux from my remaining machine.
				I noticed how the new Lenovo keyboards aren't good and the trackpoint is crap.
				That's why I still prefer the Thinkpad T61, even without a battery.
				</p>
				<p>Anyway, I'll try to describe the installation process.
				The installation itself went according to the FQA; I'll just add some notes.
				</p>
			<section><!-- 3 -->
				<h4>Process</h4>
				<p>First I had to disable UEFI completely and switch to legacy BIOS.
				I know 9front can handle UEFI somehow, but I never got it working on any machine.
				To make 9front work with legacy BIOS I had to change the SSD layout from GPT to MBR.
				This was straightforward: just remove all partitions and use the command line to create DOS partitions.
				Then the SSD was detected as MBR/non-GPT and I could proceed with the default installation.
				</p>
				<p>After installation I needed to get Wi-Fi working.
				Thanks to the 9front developers I was able to use the BSD firmware files as documented in the FQA.
				In my case I just grabbed the <code>iwn-2030</code> firmware, placed it in <code>/lib/firmware</code> and built the kernel from scratch.
				</p>
				<p>Enabling ACPI in <code>plan9.ini</code> and starting <code>aux/acpi</code> went almost without errors; only <code>bad opcode</code> warnings showed up.
				Still, everything works as expected, so I didn't investigate further.
				</p>
			</section><!-- 3 -->
			<section><!-- 3 -->
				<h4>Issues and Troubleshooting</h4>
				<p>I had exactly <b>two</b> issues.
				The first is the <code>bad opcode</code> as described earlier.
				</p>
				<p>The second was a big surprise:
				backlight controls work out of the box!
				I know older machines handle this directly in hardware without involving the operating system, but this is a modern machine.
				Still, it worked, with only one tradeoff:
				</p>
				<p>It always prints <i>lapicerrors</i> on the console.
				I didn't find a good way to disable them, so I just added a hidden window in my riostart that runs <code>cat /dev/kprint</code> so the errors don't fill the screen.
				</p>
			</section><!-- 3 -->
			<section><!-- 3 -->
				<h4>Bonus: conntosrv</h4>
				<p>As a bonus I have a small script that saves me lots of installation time.
				I have a server with my <code>$home</code> directory, including some configuration in my <code>lib/profile</code>.
				On my terminals (laptops) I just work in my <code>$home</code> as if it were right there on my machine.
				</p>
				<p>To make this happen I placed the little script in my terminal's <code>/cfg/$sysname/conntosrv</code>
				and call it from <code>/cfg/$sysname/termrc</code>.
				</p>
				<p>The script contains:
				</p>
		<code><pre>
		#!/bin/rc

		echo -n 'connect to server: '
		server=`{read}

		if(~ $#server 0){
			echo not connecting to services >[1=2]
			exit
		}

		if(! test -e /net/dns)
			ndb/dns -r

		auth/factotum
		for(i in $server){
			rimport -Cc $i /n/$i
		}
		bind -c '/n/'^$server(1)^'/usr/'^$user /usr/$user
		</pre></code>
				<p>As you can see, the script connects to all servers you enter at the prompt.
				It takes the first one as your <code>$home</code>; all others are just imported to <code>/n</code>.
		<!-- END everything -->
				</p>
			</section><!-- 3 -->
		</article>
<!-- END everything -->
]]></content>
</entry>

<entry>
	<title>Restrict RCPU User Access to Groups</title>
	<id>https://sirjofri.de/changeblog/1596011563/</id>
	<link href="https://sirjofri.de/changeblog/1596011563/"/>
	<updated>2020-07-29T10:32:43+02:00</updated>
	<content type="html"><![CDATA[<!-- end defs -->
		<article>
		<header>
		<h2>Restrict RCPU User Access to Groups</h2>
		<b>Wed, 29 Jul 20 10:32:43 CEST</b>
		</header>
		<!-- end defs -->
				<p>This is how to restrict user access to groups.
				You can use this to enable
		<code>rcpu</code>
				access for all users of a specific group.
				All other groups will not be allowed.
				</p>
		To allow access only to <code>sys</code>
		group members, adjust your
		<code>/rc/bin/service/tcp17019</code>:
		<code><pre>
		#!/bin/rc
		userfile=/adm/users
		fn useringroup{
			grep $1 $userfile | {
				found=0
				while(~ $found 0 && line=`:{read}){
					if(~ $line(2) $2){
						found=1
					}
				}
				if(~ $found 1)
					status=''
				if not
					status='not found'
			}
		}
		if(~ $#* 3){
			netdir=$3
			remote=$2!`{cat $3/remote}
		}
		fn server {
			~ $#remote 0 || echo -n $netdir $remote &gt;/proc/$pid/args
			rm -f /env/'fn#server'
			. &lt;{n=`{read} && ! ~ $#n 0 && read -c $n} &gt;[2=1]
		}
		exec tlssrv -a /bin/rc -c 'useringroup $user sys && server'
		</pre></code>
		This checks if the user is in group <code>sys</code>
		and only then calls the <code>server</code> function.
		Otherwise the connection is terminated.
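		<p>For reference, <code>useringroup</code> relies on the <i>users</i>(6) format of <code>/adm/users</code>, where each line is <i>id:name:leader:members</i>: the <code>grep</code> picks candidate lines mentioning the user, and the check on field 2 confirms that the line really is the group's own entry. A made-up excerpt (ids and names are hypothetical):
		</p>
		<code><pre>
		10000:sys::glenda,sirjofri
		10001:glenda:glenda:
		</pre></code>
		<p>With this file, both glenda and sirjofri would pass <code>useringroup $user sys</code>.
		</p>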
				<p>This is especially useful if you want a CPU server to expose filesystems <i>and</i> have cpu access for administrators only.
		<!-- END everything -->
				</p>
		</article>
<!-- END everything -->
]]></content>
</entry>

<entry>
	<title>lib/profile quick hack</title>
	<id>https://sirjofri.de/changeblog/1594885496/</id>
	<link href="https://sirjofri.de/changeblog/1594885496/"/>
	<updated>2020-07-16T09:44:56+02:00</updated>
	<content type="html"><![CDATA[<!-- end defs -->
		<article>
		<header>
		<h2>lib/profile quick hack</h2>
		<b>Thu, 16 Jul 20 09:44:56 CEST</b>
		</header>
		<!-- end defs -->
				<p>A small change that can change your life.
				</p>
				<p>There are reasons not to run <i>rio</i> from your lib/profile. For me the main reason is: you can no longer use
		<code>rcpu -c commands</code>
				in your shell. Rio opens, and there you are, stuck in front of a gray background.
				</p>
				<p>My solution:
		<code><pre>
		case cpu
		# … lots of stuff …
		   rcpucmd=`{cat /mnt/term/env/cmd >[2]/dev/null}
		   if(~ $#rcpucmd 0)
		      rio
		# … lots of stuff …
		</pre></code>
				</p>
		Now I can <code>rcpu</code> and have my rio, or
		<code>rcpu -c command</code>
		and run the command without leaving my shell.
		<!-- END everything -->
		</article>
<!-- END everything -->
]]></content>
</entry>

<entry>
	<title>Mail Server Configuration</title>
	<id>https://sirjofri.de/changeblog/1594881674/</id>
	<link href="https://sirjofri.de/changeblog/1594881674/"/>
	<updated>2020-07-16T08:41:14+02:00</updated>
	<content type="html"><![CDATA[<!-- end defs -->
		<article>
		<header>
		<h2>Mail Server Configuration</h2>
		<b>Thu, 16 Jul 20 08:41:14 CEST</b>
		</header>
		<!-- end defs -->
				<p>Recently I installed my mail server on 9front. Most of the time I followed the guide in the FQA, but there are still things to explain. In this document I'll go through that section of the FQA and annotate things.
				</p>
				<p>Right at the beginning the FQA mentions how the executing user needs write permissions for the mailboxes. This is
				<i>very important</i>!
				If upas can't write the mailboxes the mail server will <i>not</i> accept incoming mail!
				</p>
				<p>In my setup I can skip all the DNS stuff, because my DNS is hosted somewhere else. Make sure to add proper MX records as well as (at least) an SPF record.
				</p>
			<section><!-- 3 -->
				<h4>/mail/lib/smtpd.conf</h4>
				<p>To make things short, here are the necessary lines in my setup. The server handles authenticated relaying to other providers as well as incoming mail for local accounts.
				</p>
		<code><pre>
		defaultdomain    sirjofri.de
		norelay          on
		verifysenderdom  on
		saveblockedmsg   off
		ourdomains       sirjofri.de
		</pre></code>
				<p>Note that the server doesn't relay for unauthenticated/untrusted requests; it will still relay if you authenticate.
				</p>
				<p>At this point it might be a good idea to check your user password.
				Use
		<code>auth/changeuser</code>
				to add <i>Inferno/POP secrets</i> to your user accounts. Use these passwords to authenticate to the smtp server.
				</p>
			</section><!-- 3 -->
			<section><!-- 3 -->
				<h4>/mail/lib/rewrite</h4>
				<p>The program that handles sending mail uses this file to rewrite mail addresses. This file is responsible for filtering out local mail as well as sending other mails to the mailer.
				</p>
				<p>In my setup I added three aliases:
		<code><pre>
		pOsTmAsTeR    alias postmaster
		aBuSe         alias abuse
		wEbMaStEr     alias webmaster
		</pre></code>
				</p>
				<p>Use regular expressions to define your domain:
		<code><pre>
		\\l!(.*)                alias \\1
		\\l\\.sirjofri\.de!(.*)   alias \\1
		sirjofri.de!(.*)       alias \\1
		</pre></code>
				</p>
				<p>For translating mails I added one more rule for mail address <i>tags</i>. These tags are in the form <i>user+tag@example.com</i>. Official specifications say that everything after the “+” must be ignored, but it can be used to automatically sort incoming mail into folders. I do this, by the way, so I'll describe how.
				</p>
				<p>We need rules for those plus signs:
		<code><pre>
		\\"(.+)\\+(.*)\\"  translate "echo `{/bin/upas/aliasmail '\\1'}^'+\\2'"

		# The other translate rules are default
		</pre></code>
				</p>
				<p>For delivering local mails, I added extra rules:
		<code><pre>
		local!(.+)\\+(.+)  |  "/bin/test -d /mail/box/\\1/\\2 \\&\\& /bin/upas/mbappend /mail/box/\\1/\\2 || /bin/upas/mbappend /mail/box/\\1/mbox"
		local!"(.+)\\+(.+)"  |  "/bin/test -d /mail/box/\\1/\\2 \\&\\& /bin/upas/mbappend /mail/box/\\1/\\2 || /bin/upas/mbappend /mail/box/\\1/mbox"

		# leave the other rules untouched
		</pre></code>
				</p>
				<p>With these settings, mails to user+<i>tag</i> will be checked. If a mailbox folder for <i>tag</i> exists, mail is sent to this folder. Otherwise it is sent to the user's default inbox.
				<b>Note:</b>
				I tested it, but this <i>does not work</i> with aliased mail. If my aliasmail changes <i>userA</i> to <i>userB</i>, mails to <i>userA+tag</i> will be rejected! If you know how I can make this work, feel free to send me a mail.
				</p>
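				<p>To actually use a tag, the matching folder has to exist. A hypothetical example (assuming plain directory mailboxes work in your setup):
				</p>
		<code><pre>
		# create a folder for the “lists” tag; mail to user+lists@example.com
		# should then be delivered there
		mkdir /mail/box/$user/lists
		</pre></code>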
			</section><!-- 3 -->
			<section><!-- 3 -->
				<h4>/mail/lib/names.local</h4>
				<p>This file is pretty easy. Just add your alias mail addresses:
		<code><pre>
		postmaster  sirjofri
		webmaster   sirjofri
		abuse       sirjofri
		</pre></code>
				</p>
			</section><!-- 3 -->
			<section><!-- 3 -->
				<h4>/mail/lib/remotemail</h4>
		<code><pre>
		#!/bin/rc
		shift
		sender=$1
		shift
		addr=$1
		shift
		fd=`{/bin/upas/aliasmail -f $sender}
		switch($fd){
		case *.*
		    ;
		case *
		    fd=sirjofri.de
		}
		exec /bin/upas/smtp -h $fd $addr $sender $*
		</pre></code>
			</section><!-- 3 -->
			<section><!-- 3 -->
				<h4>SMTP over TLS</h4>
				<p>I don't use port 587; I use port 25. Mail servers relay mail to this port by default, so it makes sense.
				</p>
		<code>/rc/bin/service/tcp25</code>
		<code><pre>
		#!/bin/rc
		user=`{cat /dev/user}
		exec /bin/upas/smtpd -f -E -r -c /sys/lib/tls/cert -n $3
		</pre></code>
				<p>Don't forget to create your TLS certificate!
				</p>
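				<p>From memory, creating the certificate looks roughly like this (a sketch only; <code>C=DE CN=example.com</code> is a placeholder subject, and the FQA has the authoritative commands):
		<code><pre>
		# generate a TLS key and a self-signed certificate (sketch)
		auth/rsagen -t 'service=tls role=client owner=*' >/sys/lib/tls/key
		auth/rsa2x509 'C=DE CN=example.com' /sys/lib/tls/key |
			auth/pemencode CERTIFICATE >/sys/lib/tls/cert
		</pre></code>
				</p>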
			</section><!-- 3 -->
			<section><!-- 3 -->
				<h4>IMAP4 over TLS</h4>
				<p>I did this exactly as described in the FQA; see there.
				</p>
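				<p>For reference, the service file follows the same pattern as tcp25. A minimal sketch of <code>/rc/bin/service/tcp993</code>, assuming the FQA's tlssrv approach (check FQA 7.7 for the exact flags):
		<code><pre>
		#!/bin/rc
		# terminate TLS with tlssrv, run the plaintext imap4d behind it (sketch)
		exec tlssrv -c /sys/lib/tls/cert /bin/ip/imap4d -p
		</pre></code>
				</p>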
			</section><!-- 3 -->
			<section><!-- 3 -->
				<h4>No.</h4>
				<p>At this point I stopped. I have not configured ratfs and have no spam handling right now. That doesn't really matter to me: nobody knows me, and I don't use this mail address to register anywhere.
				</p>
				<p>Links:
		<ul>
		<li><a href="https://fqa.9front.org/fqa7.html#7.7">FQA 7.7</a></li>
		</ul>
		<!-- END everything -->
				</p>
			</section><!-- 3 -->
			</section><!-- 2 -->
			</section><!-- 1 -->
		</article>
<!-- END everything -->
	</section><!-- 1 -->
]]></content>
</entry>

<entry>
	<title>Guided Replica</title>
	<id>https://sirjofri.de/changeblog/1593621046/</id>
	<link href="https://sirjofri.de/changeblog/1593621046/"/>
	<updated>2020-07-01T18:30:46+02:00</updated>
	<content type="html"><![CDATA[<!-- end defs -->
		<article>
		<header>
		<h2>Guided Replica</h2>
		<b>Wed, 1 Jul 20 18:30:46 CEST</b>
		</header>
		<!-- end defs -->
				<p>Today I installed
				<i>replica</i>(1)
				on my VPS. I noticed that I can write some helper scripts around it
				and here they are.
				</p>
				<p>You can download them from
		<code>https://sirjofri.de/files/guidedreplica</code>.
				</p>
				<p>You can install it like this:
				</p>
		<code><pre>
		# bind your client $home to /n/rclient
		# bind your server $home to /n/rserver
		hget https://sirjofri.de/files/guidedreplica/guidedreplica.rc | rc
		# follow the prompts
		</pre></code>
				<p>This will also install two helper scripts to
		<code>$home/bin/rc/replica/</code>.
				Reproto copies one proto over the other; you can choose which one you want to keep.
				Reupdate is helpful if there are update-update errors; it should resolve them automatically (untested, but it should work).
				</p>
				<p><b>Update:</b>
				<i>replica</i>(1)
				has issues. It often does a bad job tracking changes, leaving removed files in place and vice versa. I never encountered data loss, only inconsistencies between the copies.
				</p>
				<p>Many people use
				<i>mkfs</i>(8),
				which does not overwrite changed files. At some point I will build some scripts around it and use that instead of
				<i>replica</i>(1).
				</p>
				<p>(Files:
		<code>https://sirjofri.de/files/guidedreplica/README</code>,
		<code>https://sirjofri.de/files/guidedreplica/guidedreplica.rc</code>)
		<!-- END everything -->
				</p>
		</article>
<!-- END everything -->
]]></content>
</entry>

<entry>
	<title>9front on Netcup VPS</title>
	<id>https://sirjofri.de/changeblog/1593448779/</id>
	<link href="https://sirjofri.de/changeblog/1593448779/"/>
	<updated>2020-06-29T18:39:39+02:00</updated>
	<content type="html"><![CDATA[<!-- end defs -->
		<article>
		<header>
		<h2>9front on Netcup VPS</h2>
		<b>Mon, 29 Jun 20 18:39:39 CEST</b>
		</header>
		<!-- end defs -->
				<p>Today I installed 9front on a Netcup VPS. Here are some notes if you want to do it yourself.
				</p>
				<p>I used the smallest VPS option. Currently that's “VPS 200 G8”. It costs about 2.69 Euro, but you might be able to find some way to make it cheaper.
				</p>
				<p>After ordering it might take some time until the server is up and ready.
				By default Debian was installed with a GPT; we can ignore that.
				</p>
				<p>Before we can install our custom ISO, we first have to upload it somewhere.
				This is done via FTP (you get the access data from the SCP, Netcup's server control panel); I used the default Windows file explorer (<code>ftp://user@address</code>, then enter the password).
				Copy the 9front ISO to <code>/cdrom</code>.
				This will take some time.
				</p>
				<p>Meanwhile you can delete the virtual disk and create a new one. You need your SCP password for this.
				This step is necessary to remove the GPT. Of course you could manually reformat the disk, but deleting the disk will save time.
				</p>
				<p>In the settings you can virtually insert the iso as a DVD and verify the boot order (DVD first).
				Start up the machine and switch to the web VNC display.
				</p>
				<p>At this point you can proceed with the default 9front installation as described in the FQA.
				Don't forget to install the MBR and activate the partition.
				Otherwise there are no additional special steps besides manually configuring the
		<code>/lib/ndb/local</code>
				after installation.
				In my case I made an auth server.
				</p>
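				<p>As an illustration, an auth server setup in <code>/lib/ndb/local</code> looks roughly like this (a sketch with placeholder names and addresses, not my actual config):
		<code><pre>
		ipnet=mynet ip=203.0.113.0 ipmask=255.255.255.0
			ipgw=203.0.113.1
			dns=203.0.113.2
			auth=myvps
		ip=203.0.113.10 sys=myvps dom=myvps.example.com
			auth=myvps
		</pre></code>
				</p>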
				<p>Currently it seems to work fine. I installed the machine today, so there might be some issues I didn't find yet.
		<!-- END everything -->
				</p>
		</article>
<!-- END everything -->
]]></content>
</entry>

<entry>
	<title>changeblog feed — social media²</title>
	<id>https://sirjofri.de/changeblog/1592917245/</id>
	<link href="https://sirjofri.de/changeblog/1592917245/"/>
	<updated>2020-06-23T15:00:45+02:00</updated>
	<content type="html"><![CDATA[<!-- end defs -->
		<article>
		<header>
		<h2>changeblog feed — social media²</h2>
		<b>Tue, 23 Jun 20 15:00:45 CEST</b>
		</header>
		<!-- end defs -->
				<p>RSS is still a thing.
				</p>
				<p>Yes, there are more modern alternatives, like Atom or fancy JSON feeds. What I want to say is: feeds are still a thing.
				</p>
				<p>That's why you are now able to read my changeblog as an Atom feed.
				</p>
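				<p>For the curious, a minimal Atom feed looks roughly like this (an illustrative skeleton, not my actual template):
		<code><pre>
		&lt;?xml version="1.0" encoding="utf-8"?&gt;
		&lt;feed xmlns="http://www.w3.org/2005/Atom"&gt;
			&lt;title&gt;changeblog&lt;/title&gt;
			&lt;id&gt;https://example.com/&lt;/id&gt;
			&lt;updated&gt;2020-06-23T15:00:45+02:00&lt;/updated&gt;
			&lt;author&gt;&lt;name&gt;me&lt;/name&gt;&lt;/author&gt;
			&lt;entry&gt;
				&lt;title&gt;Hello&lt;/title&gt;
				&lt;id&gt;https://example.com/hello/&lt;/id&gt;
				&lt;updated&gt;2020-06-23T15:00:45+02:00&lt;/updated&gt;
				&lt;content type="text"&gt;feeds are still a thing&lt;/content&gt;
			&lt;/entry&gt;
		&lt;/feed&gt;
		</pre></code>
				</p>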
				<p>Now I just need to find enough time to write my posts.
		<!-- END everything -->
				</p>
		</article>
<!-- END everything -->
]]></content>
</entry>

<entry>
	<title>I use 9front</title>
	<id>https://sirjofri.de/changeblog/1590105600/</id>
	<link href="https://sirjofri.de/changeblog/1590105600/"/>
	<updated>2020-05-22T02:00:00+02:00</updated>
	<content type="html"><![CDATA[<!-- end defs -->
		<article>
		<header>
		<h2>I use 9front</h2>
		<b>Fri, 22 May 20 02:00:00 CEST</b>
		</header>
		<!-- end defs -->
				<p>Today I want to share with you that I use the plan9 distribution “9front” as my main system.
				</p>
				<p>Of course there are things that are almost impossible to do there, for example all gamedev related stuff. This is an issue, because I am a game developer. I still have my Windows machine with the relevant tools, so I can still fiddle around with those complex things.
				</p>
				<p>For gaming I also use my Windows machine or a game console. Yes, there are a few games on plan9 systems.
				</p>
				<p>Also, most online services use javascript and heavy styling of webpages, so I use a modern computer with a modern browser for those. Mothra is fine for basic research, but in 2020 it's almost impossible to actually do things on the web with it.
				</p>
				<p>Anyways, let me tell you that I don't really miss anything on plan9. I can write documents, check my email, chat with people, and step by step it becomes more usable. The community is helpful and provides more and more applications. The system runs stably, and the user interface is consistent and good to look at. Colors don't jump in your eye and want to kill you, and there's catclock(1), our friendly companion.
		<!-- END everything -->
				</p>
		</article>
<!-- END everything -->
]]></content>
</entry>

<entry>
	<title>Revived</title>
	<id>https://sirjofri.de/changeblog/1578614400/</id>
	<link href="https://sirjofri.de/changeblog/1578614400/"/>
	<updated>2020-01-10T01:00:00+01:00</updated>
	<content type="html"><![CDATA[<!-- end defs -->
		<article>
		<header>
		<h2>Revived</h2>
		<b>Fri, 10 Jan 20 01:00:00 CET</b>
		</header>
		<!-- end defs -->
				<p>I updated my website to Uberspace 7, but not only that: I reworked the whole webpage to make it more nine-friendly.
				</p>
				<p>My whole webpage management system is completely 9-based. I use oridb's git9 implementation and plan9 tools: mk, sed, cat, …
				</p>
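				<p>As an illustration of such an mk-based build (a hypothetical rule, not my actual mkfile; <code>md2html.sed</code> is a placeholder), a page rule might look like:
		<code><pre>
		%.html: %.md header.html footer.html
			cat header.html > $target
			sed -f md2html.sed $stem.md >> $target
			cat footer.html >> $target
		</pre></code>
				</p>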
				<p>I also decided to change the main language of the website to English.
		<!-- END everything -->
				</p>
		</article>
<!-- END everything -->
]]></content>
</entry>

<entry>
	<title>Hacknet Game Review (German)</title>
	<id>https://sirjofri.de/changeblog/1528455600/</id>
	<link href="https://sirjofri.de/changeblog/1528455600/"/>
	<updated>2018-06-08T13:00:00+02:00</updated>
	<content type="html"><![CDATA[<!-- end defs -->
		<article>
		<header>
		<h2>Hacknet Game Review (German)</h2>
		<b>Fri, 8 Jun 18 13:00:00 CEST</b>
		</header>
		<!-- end defs -->
				<p><i>Hacknet</i> is a story-driven hacking simulator.
				Although the “hacking” is quite far removed from reality, some of its processes come very close to “real” hacking.
				The player has to learn some things (or know them in advance), pick up the use of new tools, and combine those tools cleverly to reach the goal.
				Sometimes even that is not enough; there is always a hacker who is better than you.
				This is intended by the game, though, and serves the progress of the story.
				</p>
				<p>In general the “hacking” is quite easy; it is supposed to be fun, after all. The
				difficulty changes from section to section and serves the play experience.
				Some passages hold a certain potential for frustration, which brings the game even closer to reality and does not diminish the fun.
				The constant entering of commands and analyzing of systems does not get in the way of experiencing the story at all; quite the opposite:
				the activity lets you slip into the role of the protagonist all the more.
				The protagonist, by the way, is a nameless hacker equipped with no personality of his own.
				For some games that may be a drawback, since you cannot slip into the role of a hero, but in <i>Hacknet</i> it invites players to put themselves into the game:
				to hack yourself, instead of letting a hero do the hacking.
				</p>
				<p>The story of <i>Hacknet</i> also has a lot to offer:
				progress, rivalry, and the mysterious dealings of a big tech corporation give the player a challenge and the desire to change the game world with their newly acquired skills.
				Several side quests help you settle into the game world and learn how the hacker groups operate.
				</p>
				<p>While playing, I noticed that the game definitely scratches at the fourth wall.
				Sometimes the line between game and reality blurred.
				The length of the game helps here:
				despite its depth, it is not very long.
				And the price seems fair.
				All in all a very nice game that lures the player into a technical adventure and surprises with its potential for immersion.
				</p>
				<p>(Originally posted on Steam.)
		<!-- END everything -->
				</p>
		</article>
<!-- END everything -->
]]></content>
</entry>

</feed>
