
Journey /home - Application Deployment


Published on 2020-05-29 01:27:00+02:00

[Image: mountains and stuff]

The file hierarchy in Linux is a mess. This time, however, I won't discuss why that is. I mention it only so that
we don't feel bad about what we're going to do here. It's a mess, and it's our little mess that we can shape to our
needs, however we like. Especially if we keep it consistent.

I've been using various ways to get applications up and running on my server. I've let systemd handle init and
service management for me for around three years now. As for files, I used different ways of structuring the public
content that should be available via the HTTP or FTP server. It was usually an SFTP jail somewhere like
/{var/http,srv}/domain.com.

Lately, I wanted to do something fresh, so I thought: "Let's move everything to /home!" I couldn't find
any convincing reason against it, and there were a few nice points in favour of it. Now then, what does it look
like?

As usual, for each service or domain I create a new account. I have a skeleton for the home directory ready that sets
it up to look similar to this:

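A skeleton along these lines could be created as follows. This is only a sketch: the directory set is inferred from the rest of the post (.local/bin, .config, .cache, .ssh), and public/ is an assumed data directory, not necessarily part of the author's actual skeleton.

```shell
# Sketch of the home skeleton; directory names inferred from the post,
# "public" is an assumption.
skel=$(mktemp -d)   # stand-in for /etc/skel or a fresh account's $HOME
mkdir -p "$skel/.local/bin" "$skel/.config" "$skel/.cache" "$skel/.ssh" "$skel/public"
chmod 700 "$skel/.ssh"
ls -A "$skel"
```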

It tries to look like it follows the XDG Base
Directory Specification. Don't be fooled, though: it's close, but the purposes are quite different (also
.ssh, grrr). This little structure lets me assume that all the needed directories are already
in place, so my deployment script doesn't need to care about creating them.

Speaking of deployment: obviously, I automated it. Any binaries that are meant to be run go to
.local/bin/, configuration files go to .config/application/, and cache and temporary files
land in .cache/application/. Everything feels quite straightforward. The difference is in where
the actual data goes. That's really up to you and how you configure the service. In the case of HTTP, I like to have a
subdirectory called public/ which serves as the document root. For gitolite, I have the usual
repositories subdirectory. For fossil, I have fossils, and so on. You get the idea.
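A deployment step following that layout could be sketched like this. The application name, source paths, and file modes are placeholders of mine, not the author's actual script.

```shell
# Sketch of a deployment step for the layout above; "myapp" and all
# paths are placeholders.
home=$(mktemp -d)   # stand-in for the service account's $HOME
src=$(mktemp -d)    # stand-in for the build output directory
app=myapp
printf '#!/bin/sh\necho hello\n' > "$src/$app"
printf 'port=8080\n' > "$src/$app.conf"

# install -D creates the parent directories as needed (GNU coreutils)
install -Dm755 "$src/$app" "$home/.local/bin/$app"
install -Dm644 "$src/$app.conf" "$home/.config/$app/$app.conf"
mkdir -p "$home/.cache/$app"
```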

Most of the time, I want to run some kind of application as a service. I use systemd's
user services. I place unit files
in .config/systemd/user/. That's not my personal preference; systemd expects them to be there. Once they
are in place, I enable and start them. To make them work properly as services, I enable lingering, so that they
are not bound to the presence of user sessions and act like we expect them to:


# loginctl enable-linger username

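The post doesn't show a unit file, so here is a minimal hypothetical example of what such a user unit could look like; myapp and its path are placeholders, and %h is systemd's specifier for the user's home directory:

```ini
# ~/.config/systemd/user/myapp.service (hypothetical example)
[Unit]
Description=My deployed application

[Service]
ExecStart=%h/.local/bin/myapp
Restart=on-failure

[Install]
WantedBy=default.target
```

After placing the file, systemctl --user daemon-reload followed by systemctl --user enable --now myapp.service would bring it up.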

My script handles deployment of the binary and the associated unit file if needed. It's very convenient. Of course,
one could automate deployment to any file hierarchy, so what else do I get from this setup?

First off, similarly to containers, the changes made by a deployment don't propagate to the system. The application
and the data associated with it are all bound to this single directory. Not only do you avoid making a mess in the
system, but getting rid of the application becomes way easier. No need to keep track of your manual edits and the files
you added here and there: delete the user, delete the directory, and it's gone.

The deployment doesn't need elevated privileges. Once you have created the user and enabled lingering for it, there
is no need for root anymore. One obstacle could be propagating the configuration files to nginx. I've solved it
with a script that needs elevated privileges and can be used with sudo. To make it work, I added the following to the
global nginx config:


http {
	include /home/*/.config/nginx/*.conf;
}


This begs for issues, so the script first runs nginx -t. If the configuration files are bad, it
overwrites them with backed-up copies that are known to work. If there is no backup, it renames the file so that it no
longer matches the include pattern. If the configuration files are all OK, it reloads nginx and copies them as a backup
to be used if the next deployment is unsuccessful. The users can run the script with: sudo nreload.
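That fallback logic can be sketched roughly like this. This is NOT the author's actual nreload script: the check and reload commands are passed in as parameters so the logic can be demonstrated without nginx; in real use they would be something like nginx -t and a reload of nginx.

```shell
# Sketch of the reload-with-fallback logic; check/reload are stand-ins
# for "nginx -t" and reloading nginx.
reload_with_fallback() {
    conf=$1; check=$2; reload=$3
    if $check; then
        $reload
        cp "$conf" "${conf}.bak"    # known-good config becomes the backup
    elif [ -f "${conf}.bak" ]; then
        cp "${conf}.bak" "$conf"    # restore the last known-good config
    else
        mv "$conf" "${conf}.off"    # no backup: rename it out of the
                                    # include pattern's reach
    fi
}

conf="$(mktemp -d)/site.conf"
echo 'server {}' > "$conf"
reload_with_fallback "$conf" false true   # simulate a failing config check
```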

It's somewhat subjective, but for me it was easier to automate the processes of creating new users, deploying
applications, and removing those applications from the server. The file structure is trimmed compared to the usual
mess with files all over the place. Don't get me wrong: it's not that /etc + /srv is highly
complicated. It's just that I usually end up needing two or three different approaches to the file hierarchy, and it
becomes messy very quickly. This way gives me a very pleasant experience when I need to quickly deploy something for a
test and delete it soon after. I guess a container manager like Docker would do, but it feels like overkill for
something that is dealt with using four 30-line shell scripts.

All in all, the takeaways seem to be: always automate your repeated activities; no matter where you put your stuff,
try to keep it reasonably structured; and systemd has user services, which can be used in various ways. I feel like I
could do the same in /srv instead of /home. Does it really matter? This way I didn't need to
modify adduser...
