<!doctype html>
<html lang="en">
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="author" content="aki">
<meta name="tags" content="linux, unix, file hierarchy, system administration">
<meta name="published-on" content="2020-05-29T01:27:00+02:00">
<link rel="icon" type="image/png" href="favicon.png">
<link rel="stylesheet" type="text/css" href="style.css">

<title>Journey /home - Application Deployment</title>

<header>
<nav><a href="https://ignore.pl">ignore.pl</a></nav>
<time>29 May 2020</time>
<h1>Journey /home - Application Deployment</h1>
</header>

<article>
<img src="journey_home_application_deployment-1.png" alt="mountains and stuff">
<p>File hierarchy in Linux is a mess. However, this time I won't discuss why that is. I mention it only so that we
don't feel bad about what we're going to do here. It's a mess, and it's our little mess that we can shape to our
needs. However we like. Especially if we keep it consistent.
<p>I've been using various ways to put applications up and running on my server. I've let systemd handle init and
service management for me for around three years now. As for files, I've used different ways of structuring my public
content that should be available via the HTTP or FTP server. It usually was an sftp jail somewhat like
<code>/{var/http,srv}/<u>domain.com</u></code>.
<p>Lately, I wanted to do something fresh, so I thought: "Let's move everything to <code>/home</code>!" I couldn't find
any convincing reason against it, and there were a few nice points in favour of it. Now then, what does it look
like?
<p>As usual, for each service or domain I create a new account. I have a skeleton for the home directory ready that
sets it up to look similar to this:</p>
<ul>
	<li>.config/
	<ul>
		<li>bash/bashrc
		<li>nginx/
		<li>systemd/user/
	</ul>
	<li>.cache/
	<li>.ssh/config <!-- curse you .ssh, you should be in .config -->
	<li>.local/
	<ul>
		<li>bin/
		<li>share/
		<li>lib/
	</ul>
</ul>
<p>It tries to look like it follows <a href="https://specifications.freedesktop.org/basedir-spec/latest/">XDG Base
Directory Specification</a>. Don't be fooled, though. It's close but the purposes are quite different (also
<code>.ssh</code>, <em>grrr</em>). This little structure allows me to assume that I have all needed directories already
in place, and my deployment script doesn't need to care about it.
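<p>For illustration, creating such an account could boil down to pointing <code>useradd</code> at a prepared skeleton.
The skeleton path below is only an assumption for the sake of the example, not necessarily what my scripts use:</p>
<pre>
# mkdir -p /etc/skel-service/{.config/{bash,nginx,systemd/user},.cache,.ssh,.local/{bin,share,lib}}
# useradd --create-home --skel /etc/skel-service <u>username</u>    # /etc/skel-service is a made-up path
</pre>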
<p>Speaking of deployment. Obviously, I automated it. Any binaries that are meant to be run go to
<code>.local/bin/</code>, configuration files go to <code>.config/<u>application</u>/</code>, and cache and temporary
files land in <code>.cache/<u>application</u>/</code>. Everything feels quite straightforward. The difference is in
where the actual data goes. It's really up to you and how you configure the service. In case of HTTP I like to have a
subdirectory called <code>public/</code> which serves as the document root. For gitolite, I have the usual
<code>repositories</code> subdirectory. For fossil, I have <code>fossils</code>, and so on and on. You get the idea.
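<p>My actual script isn't reproduced here, but a deployment like that might boil down to something along these lines;
the host and file names are placeholders:</p>
<pre>
$ ssh <u>username</u>@<u>host</u> mkdir -p .config/<u>application</u> .cache/<u>application</u> public
$ scp <u>application</u> <u>username</u>@<u>host</u>:.local/bin/
$ scp <u>application</u>.conf <u>username</u>@<u>host</u>:.config/<u>application</u>/
</pre>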
<p>Most of the time, I want to run some kind of application as a service. I use systemd's
<a href="https://www.freedesktop.org/software/systemd/man/user@.service.html">user services</a>. I place unit files
in <code>.config/systemd/user/</code>. That's not my personal preference; systemd expects them to be there. Once they
are in place I enable and start them. To make them work properly as a service I enable lingering, so that the services
are not bound to the presence of user sessions and act like we expect them to:</p>
<pre>
# loginctl enable-linger <u>username</u>
</pre>
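<p>As an example of what could land in <code>.config/systemd/user/</code>, a minimal unit might look like the one
below; the names are made up and the options obviously depend on the application:</p>
<pre>
[Unit]
Description=<u>application</u>

[Service]
ExecStart=%h/.local/bin/<u>application</u>
Restart=on-failure

[Install]
WantedBy=default.target
</pre>
<p>With lingering enabled, enabling and starting it is the usual:</p>
<pre>
$ systemctl --user enable --now <u>application</u>.service
</pre>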
<p>My script handles deployment of the binary and associated unit file if needed. It's very convenient. Of course,
one could automate deployment to any file hierarchy, so what else do I get from this setup?
<p>First off, similarly to containers, the changes done by deployment don't propagate to the system. The application
and the data associated with it are all bound to this single directory. It's not only that you avoid making a mess in
the system; it's also way easier when you want to get rid of the application. No need to keep track of your manual
edits and the files you added here and there. Delete the user, delete the directory, and it's clean.
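<p>Concretely, assuming nothing outside the home directory was touched, the clean-up is just:</p>
<pre>
# loginctl disable-linger <u>username</u>
# userdel --remove <u>username</u>
</pre>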
<p>The deployment doesn't need elevated privileges. Once you have created the user and enabled lingering for it, there
is no need for root anymore. One obstacle could be propagating the configuration files to nginx. I've solved it with a
script that needs elevated privileges and can be used with sudo. To make it work I added the following to the global
nginx config:</p>
<pre>
http {
	include /home/*/.config/nginx/*.conf;
}
</pre>
<p>This is asking for trouble, so the script first runs <code>nginx -t</code>. If the configuration files are bad, it
overwrites them with backed-up copies that are known to work. If there is no backup, it renames the file so that it
won't match the include pattern. If the configuration files are all OK, the script reloads nginx and copies them as a
back-up to be used if the next deployment is unsuccessful. The users can run the script with: <code>sudo nreload</code>.
<p>It's kinda subjective, but for me it was easier to automate the processes of creating new users, deploying the
applications, and removing those applications from the server. The file structure is trimmed compared to the usual
mess with files all over the place. Don't get me wrong. It's not that <code>/etc</code> + <code>/srv</code> is highly
complicated. It's just that I usually end up needing two or three different approaches to the file hierarchy, and it
becomes messy very soon. This way gives me a very pleasant experience when I need to quickly deploy something for a
test and delete it soon after. I guess a container manager like Docker would do too, but it feels like overkill for
something that is dealt with using four 30-line shell scripts.
<p>All in all, the takeaways seem to be: always automate your repeated activities; no matter where you put your stuff,
try to keep it reasonably structured; and systemd has user services that can be used in various ways. I feel like I
could do the same in <code>/srv</code> instead of <code>/home</code>. Does it really matter? This way I didn't need to
modify <code>adduser</code>...
</article>
<script src="https://stats.ignore.pl/track.js"></script>