path: root/journey_home_application_deployment.html
author    Aki <please@ignore.pl> 2021-07-25 19:17:40 +0200
committer Aki <please@ignore.pl> 2021-07-25 19:17:40 +0200
commit    ad76e9b885c9b9692074cf5b8b880cb79f8a48e0 (patch)
tree      603ebe1a1dbcd9251c84c1c954b7b4dc5b986cc3 /journey_home_application_deployment.html
Initialized website as git repository
Diffstat (limited to 'journey_home_application_deployment.html')
-rw-r--r-- journey_home_application_deployment.html | 95
1 file changed, 95 insertions(+), 0 deletions(-)
diff --git a/journey_home_application_deployment.html b/journey_home_application_deployment.html
new file mode 100644
index 0000000..d4b2e81
--- /dev/null
+++ b/journey_home_application_deployment.html
@@ -0,0 +1,95 @@
+<!doctype html>
+<html lang="en">
+<meta charset="utf-8">
+<meta name="viewport" content="width=device-width, initial-scale=1">
+<meta name="author" content="aki">
+<meta name="tags" content="linux, unix, file hierarchy, system administration">
+<link rel="icon" type="image/png" href="cylo.png">
+<link rel="stylesheet" type="text/css" href="style.css">
+
+<title>Journey /home - Application Deployment</title>
+
+<nav><p><a href="https://ignore.pl">ignore.pl</a></p></nav>
+
+<article>
+<h1>Journey /home - Application Deployment</h1>
+<p class="subtitle">Published on 2020-05-29 01:27:00+02:00</p>
+<img src="journey_home_application_deployment-1.png" alt="mountains and stuff">
+<p>The file hierarchy in Linux is a mess. This time, however, I won't discuss why that is. I mention it only so that
+we don't feel bad about what we're going to do here. It's a mess, but it's our little mess, and we can shape it to our
+needs however we like. Especially if we keep it consistent.
+<p>I've been using various ways to get applications up and running on my server. I have let systemd handle init and
+service management for me for around three years now. As for files, I have used different ways of structuring the
+public content that should be available via the HTTP or FTP server. It usually was an sftp jail somewhat like
+<code>/{var/http,srv}/<u>domain.com</u></code>.
+<p>Lately, I wanted to do something fresh, so I thought: "Let's move everything to <code>/home</code>!" I couldn't find
+any convincing reason against it, and there were a few nice points to having it implemented. So, what does it look
+like?
+<p>As usual, for each service or domain I create a new account. I have a skeleton for the home directory ready that
+sets it up to look similar to this:</p>
+<ul>
+ <li>.config/
+ <ul>
+ <li>bash/bashrc
+ <li>nginx/
+ <li>systemd/user/
+ </ul>
+ <li>.cache/
+ <li>.ssh/config <!-- curse you .ssh, you should be in .config -->
+ <li>.local/
+ <ul>
+ <li>bin/
+ <li>share/
+ <li>lib/
+ </ul>
+</ul>
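+<p>To give an idea, the skeleton above could be materialized with a few lines of shell. This is only a sketch: the
+function name is made up, and the real mechanism could just as well be a custom <code>/etc/skel</code>.</p>

```shell
#!/bin/sh
# Sketch: create the directory skeleton from the listing above in a given
# home directory. make_skeleton is a hypothetical name, not the real script.
make_skeleton() {
	home=$1
	mkdir -p \
		"$home/.config/bash" \
		"$home/.config/nginx" \
		"$home/.config/systemd/user" \
		"$home/.cache" \
		"$home/.ssh" \
		"$home/.local/bin" \
		"$home/.local/share" \
		"$home/.local/lib"
	touch "$home/.config/bash/bashrc" "$home/.ssh/config"
	chmod 700 "$home/.ssh"
}
```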
+<p>It tries to look like it follows the <a href="https://specifications.freedesktop.org/basedir-spec/latest/">XDG Base
+Directory Specification</a>. Don't be fooled, though. It's close, but the purposes are quite different (also
+<code>.ssh</code>, <em>grrr</em>). This little structure lets me assume that all the directories I need are already
+in place, so my deployment script doesn't have to care about them.
+<p>Speaking of deployment: obviously, I automated it. Any binaries that are meant to be run go to
+<code>.local/bin/</code>, configuration files go to <code>.config/<u>application</u>/</code>, and cache and temporary
+files land in <code>.cache/<u>application</u>/</code>. Everything is quite straightforward. The difference is in where
+the actual data goes. That's really up to you and how you configure the service. In the case of HTTP I like to have a
+subdirectory called <code>public/</code> which serves as the root. For gitolite, I have the usual
+<code>repositories</code> subdirectory. For fossil, I have <code>fossils</code>, and so on. You get the idea.
+<p>Most of the time, I want to run some kind of application as a service. I use systemd's
+<a href="https://www.freedesktop.org/software/systemd/man/user@.service.html">user services</a>. I place unit files
+in <code>.config/systemd/user/</code>. That's not a personal preference: systemd expects them to be there. Once they
+are in place, I enable and start them. To make them work properly as services I enable lingering, so that they are
+not bound to the presence of user sessions and act like we expect them to:</p>
+<pre>
+# loginctl enable-linger <u>username</u>
+</pre>
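+<p>For illustration, a unit file for a hypothetical <code>myapp</code> service, placed in
+<code>.config/systemd/user/myapp.service</code>, could look like this (the name and binary are made up, not taken
+from any real deployment here):</p>

```ini
[Unit]
Description=myapp (hypothetical example service)

[Service]
# Binaries are deployed to ~/.local/bin; %h expands to the user's home.
ExecStart=%h/.local/bin/myapp
Restart=on-failure

[Install]
WantedBy=default.target
```

+<p>Such a unit would then be enabled with <code>systemctl --user enable --now myapp</code>.</p>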
+<p>My script handles deployment of the binary and the associated unit file if needed. It's very convenient. Of course,
+one could automate deployment to any file hierarchy, so what else do I get from this setup?
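+<p>The core of such a deployment step could be sketched like this; the function and argument names are assumptions
+for illustration, not the actual script:</p>

```shell
#!/bin/sh
# Sketch of a deployment step: install the binary and, if one is given, the
# unit file into the target home. The skeleton guarantees the directories
# already exist. deploy() is an illustrative name only.
deploy() {
	binary=$1; home=$2; unit=$3
	install -m 755 "$binary" "$home/.local/bin/"
	if [ -n "$unit" ]; then
		install -m 644 "$unit" "$home/.config/systemd/user/"
		# On the server this would be followed by something like:
		#   systemctl --user daemon-reload
		#   systemctl --user restart "$(basename "$unit")"
	fi
}
```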
+<p>First off, similarly to containers, the changes done by a deployment don't propagate to the system. The application
+and the data associated with it are all bound to this single directory. Not only do you avoid a mess in the system,
+but in case you want to get rid of the application it's way easier. There is no need to keep track of your manual
+edits or of the files you added here and there. Delete the user, delete the directory, and it's all gone.
+<p>The deployment doesn't need elevated privileges. Once you have created the user and enabled lingering for it, there
+is no need to use root anymore. One obstacle could be propagating the configuration files to nginx. I've solved it
+with a script that needs elevated privileges and can be used with sudo. To make it work I added the following to the
+global nginx config:</p>
+<pre>
+http {
+ include /home/*/.config/nginx/*.conf;
+}
+</pre>
+<p>This begs for issues, so the script first runs <code>nginx -t</code>. If the configuration files are bad, it
+overwrites them with backed-up copies that are known to work. If there is no backup, it renames the files so that they
+won't match the include pattern. If the configuration files are all OK, it reloads nginx and copies them as a backup
+to be used if the next deployment is unsuccessful. The users can run the script with <code>sudo nreload</code>.
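+<p>The logic of that script can be sketched roughly as follows. This is a reconstruction from the description above,
+not the script itself; the <code>NGINX</code> and <code>CONF_GLOB</code> variables are assumptions that merely keep
+the sketch self-contained:</p>

```shell
#!/bin/sh
# Rough sketch of an nreload-style helper: validate the combined nginx
# configuration, back up good per-user configs, restore or disable bad ones.
# NGINX and CONF_GLOB are overridable only to make the sketch illustrative.
nreload() {
	if ${NGINX:-nginx} -t >/dev/null 2>&1; then
		# Config is valid: remember every included file as known-good.
		for dir in ${CONF_GLOB:-/home/*/.config/nginx}; do
			for conf in "$dir"/*.conf; do
				[ -e "$conf" ] && cp "$conf" "$conf.bak"
			done
		done
		${NGINX:-nginx} -s reload
	else
		# Config is broken: restore backups; rename files that have
		# none so they stop matching the *.conf include pattern.
		for dir in ${CONF_GLOB:-/home/*/.config/nginx}; do
			for conf in "$dir"/*.conf; do
				[ -e "$conf" ] || continue
				if [ -e "$conf.bak" ]; then
					cp "$conf.bak" "$conf"
				else
					mv "$conf" "$conf.disabled"
				fi
			done
		done
		return 1
	fi
}
```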
+<p>It's somewhat subjective, but for me it was easier to automate the processes of creating new users, deploying
+applications, and retiring those applications from the server. The file structure is trimmed compared to the usual
+mess of files all over the place. Don't get me wrong: it's not that <code>/etc</code> + <code>/srv</code> is highly
+complicated. It's just that I usually end up needing two or three different approaches to the file hierarchy, and it
+becomes messy very quickly. This way I get a very pleasant experience when I need to quickly deploy something for a
+test and delete it soon after. I guess a container manager like Docker would do, but it feels like overkill for
+something that is dealt with using four 30-line shell scripts.
+<p>All in all, the takeaways seem to be: always automate your repeated activities; no matter where you put your stuff,
+try to keep it reasonably structured; and systemd has user services, which can be used in various ways. I feel like I
+could do the same in <code>/srv</code> instead of <code>/home</code>. Does it really matter? This way I didn't need to
+modify <code>adduser</code>...
+</article>
+<script src="https://stats.ignore.pl/track.js"></script>