From ad76e9b885c9b9692074cf5b8b880cb79f8a48e0 Mon Sep 17 00:00:00 2001 From: Aki Date: Sun, 25 Jul 2021 19:17:40 +0200 Subject: Initialized website as git repository --- LICENSE.html | 63 ++++++ archiving_with_posix_utilities-1.png | Bin 0 -> 1792 bytes archiving_with_posix_utilities-2.png | Bin 0 -> 2317 bytes archiving_with_posix_utilities-3.png | Bin 0 -> 1470 bytes archiving_with_posix_utilities.html | 238 ++++++++++++++++++++ cylo.png | Bin 0 -> 9139 bytes dear_imgui_and_love-1.png | Bin 0 -> 1404 bytes dear_imgui_and_love.html | 221 +++++++++++++++++++ difference_between_mnt_and_media-1.png | Bin 0 -> 3265 bytes difference_between_mnt_and_media.html | 77 +++++++ environments_in_lua_5_2_and_beyond-1.png | Bin 0 -> 536 bytes environments_in_lua_5_2_and_beyond-2.png | Bin 0 -> 1710 bytes environments_in_lua_5_2_and_beyond.html | 155 +++++++++++++ faq-1.png | Bin 0 -> 1474 bytes faq.html | 29 +++ flashing_lolin_nodemcu_v3-1.png | Bin 0 -> 2757 bytes flashing_lolin_nodemcu_v3.html | 46 ++++ graveyard_of_the_drawings-1.png | Bin 0 -> 3348 bytes graveyard_of_the_drawings-2.png | Bin 0 -> 2114 bytes graveyard_of_the_drawings-3.png | Bin 0 -> 2665 bytes graveyard_of_the_drawings-4.png | Bin 0 -> 1198 bytes graveyard_of_the_drawings-5.png | Bin 0 -> 1535 bytes graveyard_of_the_drawings-6.png | Bin 0 -> 2923 bytes graveyard_of_the_drawings-7.png | Bin 0 -> 1456 bytes graveyard_of_the_drawings-8.png | Bin 0 -> 2316 bytes graveyard_of_the_drawings-9.png | Bin 0 -> 3118 bytes graveyard_of_the_drawings.html | 29 +++ half_of_my_css_are_links-1.png | Bin 0 -> 1937 bytes half_of_my_css_are_links.html | 87 ++++++++ how_to_write_a_minimal_html5_document-1.png | Bin 0 -> 1174 bytes how_to_write_a_minimal_html5_document.html | 142 ++++++++++++ hunt_for_lex_and_yacc_the_dinosaur-1.png | Bin 0 -> 2901 bytes hunt_for_lex_and_yacc_the_dinosaur-2.png | Bin 0 -> 1545 bytes hunt_for_lex_and_yacc_the_dinosaur.html | 71 ++++++ index.html | 98 +++++++++ integrating_browser_into_your_environment-1.png | Bin 0 -> 1707 bytes integrating_browser_into_your_environment.html | 81 +++++++ journey_home_application_deployment-1.png | Bin 0 -> 2248 bytes journey_home_application_deployment.html | 95 ++++++++ markdown_is_bad_for_you-1.png | Bin 0 -> 3711 bytes markdown_is_bad_for_you-2.png | Bin 0 -> 1910 bytes markdown_is_bad_for_you.html | 80 +++++++ of_privacy_and_traffic_tracking-1.png | Bin 0 -> 1307 bytes of_privacy_and_traffic_tracking.html | 42 ++++ organizing_your_lua_project-1.png | Bin 0 -> 3346 bytes organizing_your_lua_project-2.png | Bin 0 -> 2192 bytes organizing_your_lua_project.html | 239 +++++++++++++++++++++ plop.html | 44 ++++ plop.png | Bin 0 -> 7754 bytes plumbing_your_own_browser-1.png | Bin 0 -> 1809 bytes plumbing_your_own_browser.html | 99 +++++++++ ...id_templating_with_shell_cat_and_envsubst-1.png | Bin 0 -> 1159 bytes stupid_templating_with_shell_cat_and_envsubst.html | 72 +++++++ style.css | 68 ++++++ ...t_introduction_to_building_with_makefiles-1.png | Bin 0 -> 982 bytes ...t_introduction_to_building_with_makefiles-2.png | Bin 0 -> 1158 bytes ...t_introduction_to_building_with_makefiles-3.png | Bin 0 -> 1941 bytes ...st_introduction_to_building_with_makefiles.html | 239 +++++++++++++++++++++ web_browsers_are_no_more-1.png | Bin 0 -> 1172 bytes web_browsers_are_no_more-2.png | Bin 0 -> 1892 bytes web_browsers_are_no_more.html | 103 +++++++++ 61 files changed, 2418 insertions(+) create mode 100644 LICENSE.html create mode 100644 archiving_with_posix_utilities-1.png create mode 100644 
archiving_with_posix_utilities-2.png create mode 100644 archiving_with_posix_utilities-3.png create mode 100644 archiving_with_posix_utilities.html create mode 100644 cylo.png create mode 100644 dear_imgui_and_love-1.png create mode 100644 dear_imgui_and_love.html create mode 100644 difference_between_mnt_and_media-1.png create mode 100644 difference_between_mnt_and_media.html create mode 100644 environments_in_lua_5_2_and_beyond-1.png create mode 100644 environments_in_lua_5_2_and_beyond-2.png create mode 100644 environments_in_lua_5_2_and_beyond.html create mode 100644 faq-1.png create mode 100644 faq.html create mode 100644 flashing_lolin_nodemcu_v3-1.png create mode 100644 flashing_lolin_nodemcu_v3.html create mode 100644 graveyard_of_the_drawings-1.png create mode 100644 graveyard_of_the_drawings-2.png create mode 100644 graveyard_of_the_drawings-3.png create mode 100644 graveyard_of_the_drawings-4.png create mode 100644 graveyard_of_the_drawings-5.png create mode 100644 graveyard_of_the_drawings-6.png create mode 100644 graveyard_of_the_drawings-7.png create mode 100644 graveyard_of_the_drawings-8.png create mode 100644 graveyard_of_the_drawings-9.png create mode 100644 graveyard_of_the_drawings.html create mode 100644 half_of_my_css_are_links-1.png create mode 100644 half_of_my_css_are_links.html create mode 100644 how_to_write_a_minimal_html5_document-1.png create mode 100644 how_to_write_a_minimal_html5_document.html create mode 100644 hunt_for_lex_and_yacc_the_dinosaur-1.png create mode 100644 hunt_for_lex_and_yacc_the_dinosaur-2.png create mode 100644 hunt_for_lex_and_yacc_the_dinosaur.html create mode 100644 index.html create mode 100644 integrating_browser_into_your_environment-1.png create mode 100644 integrating_browser_into_your_environment.html create mode 100644 journey_home_application_deployment-1.png create mode 100644 journey_home_application_deployment.html create mode 100644 markdown_is_bad_for_you-1.png create mode 100644 markdown_is_bad_for_you-2.png create mode 100644 markdown_is_bad_for_you.html create mode 100644 of_privacy_and_traffic_tracking-1.png create mode 100644 of_privacy_and_traffic_tracking.html create mode 100644 organizing_your_lua_project-1.png create mode 100644 organizing_your_lua_project-2.png create mode 100644 organizing_your_lua_project.html create mode 100644 plop.html create mode 100644 plop.png create mode 100644 plumbing_your_own_browser-1.png create mode 100644 plumbing_your_own_browser.html create mode 100644 stupid_templating_with_shell_cat_and_envsubst-1.png create mode 100644 stupid_templating_with_shell_cat_and_envsubst.html create mode 100644 style.css create mode 100644 the_gentlest_introduction_to_building_with_makefiles-1.png create mode 100644 the_gentlest_introduction_to_building_with_makefiles-2.png create mode 100644 the_gentlest_introduction_to_building_with_makefiles-3.png create mode 100644 the_gentlest_introduction_to_building_with_makefiles.html create mode 100644 web_browsers_are_no_more-1.png create mode 100644 web_browsers_are_no_more-2.png create mode 100644 web_browsers_are_no_more.html diff --git a/LICENSE.html b/LICENSE.html new file mode 100644 index 0000000..2e364ed --- /dev/null +++ b/LICENSE.html @@ -0,0 +1,63 @@ + + + + + + + + + +LICENSE + + + +
+
+

IGNORE THIS LICENSE

+

Version 2, +

Copyright <Year> <Holder> +

+

It is PROHIBITED to any individual or legal entity (the "Entity") obtaining a copy of this work, be it piece of +software, music, literature or any other kind of art and expression, or its source code, associated documentation, or +any of its resources (the "Work") to use, view, run, copy, publish, distribute, disclose, sell, sublicense, merge, read, +listen to, or perform any other action, to an extent possible within the Law, on the Work or its copy, full or partial, +without an express written agreement with the copyright holder (the "Holder"). If the Holder or anyone on its behalf +takes any actions, legal or illegal, against the Entity which did not comply to this license, actions MUST be +immediately discarded. +

+ +
+ +
+

This section is not part of the license. +

This website's content is not licensed under this license unless explicitly specified. This is just a homepage of this license. +

About

+

This license is weird and it's very much meant to be like this. The reason for that is the usual: boredom. At that +time I was focused on researching the licensing of my work (although I rarely publish it) and the trends regarding this +matter. I usually worked with MIT or +GPL-2.0. Being myself, I decided to take an +entirely different approach to the licensing and push it to the extreme, while preserving my own ideas that were most +likely under heavy influence of some parts of anarchism. That's how the first version was written. Some time later I +revisited it to be more general and not just software specific. +

The license is aimed at publishers who don't care and publish for people who don't care. As long as you care or are +forced to care, this license will potentially cause and not cause any problems at the same time, unless the author wants +to cooperate with you. It's a stupid gimmick, but I would love to see how it would behave in the real world. +

As of now (), it's the first time it's published and so far there are no works recorded that +use this license. In other words, it's not tested in court or even reviewed by actual lawyers (I'm but a mere code +monkey). +

Please, feel free to contact me with any feedback regarding this license or if you use it anywhere. + +

Historical

+
+

IGNORE THIS LICENSE
Version 1, +

Copyright <Year> <Holder> +

It is PROHIBITED to any person obtaining a copy of this software, its source code, associated documentation or any of +its resources (the "Software") to use, view, run, copy, publish, modify, sublicense, merge, distribute, disclose or sell +the Software or its copy, full or partial, without express written agreement with the copyright holder. However, the +copyright holder declares that they will not take any legal or illegal actions against those who did not comply to this +license. If any legal actions are taken by any party regarding licensing of the Software, they must be immediately +discarded. There is no warranty of any kind, implicit or explicit, since you are not allowed to use the Software anyway +under this license. +

+
+ diff --git a/archiving_with_posix_utilities-1.png b/archiving_with_posix_utilities-1.png new file mode 100644 index 0000000..68b100a Binary files /dev/null and b/archiving_with_posix_utilities-1.png differ diff --git a/archiving_with_posix_utilities-2.png b/archiving_with_posix_utilities-2.png new file mode 100644 index 0000000..2f31089 Binary files /dev/null and b/archiving_with_posix_utilities-2.png differ diff --git a/archiving_with_posix_utilities-3.png b/archiving_with_posix_utilities-3.png new file mode 100644 index 0000000..4a43e34 Binary files /dev/null and b/archiving_with_posix_utilities-3.png differ diff --git a/archiving_with_posix_utilities.html b/archiving_with_posix_utilities.html new file mode 100644 index 0000000..17ce7bc --- /dev/null +++ b/archiving_with_posix_utilities.html @@ -0,0 +1,238 @@ + + + + + + + + + +Archiving With POSIX Utilities + + + +
+

Archiving With POSIX Utilities

+

Published on 2020-07-22 22:30:00+02:00 +

The usual answer is tar. As you may see, I intentionally linked to the +GNU Tar. If you are a *BSD user then you use some other implementation. Both of them follow and extend POSIX's standard +for the tar utility. Or so you would think.

Right now there is no POSIX tar utility. It has been marked as legacy +already in 1997 and disappeared from the +standard soon after. Its place was taken by a behemoth called +pax. The name gets even funnier when +you consider the rationale and the size of this thing. But pax didn't come from just tar. There was one more influencer +in here called cpio. You may know this one +if you ever tinkered with RPM packages or initramfs.

In other words, we have three utilities on today's table: tar, cpio and pax. According to +Debian's popularity contest the frequency of each being installed is in +the exact same order, with tar being at 8th place overall, cpio at 52nd, and pax at 6089th. I can't just talk about the +least popular one, so I'll briefly explain how to use each of them in your usual Linux distribution while keeping in +mind what POSIX had to tell us back in the day.

tar

+

As I've already mentioned, tarballs are the most popular. Not only that, they are commonly described as the easiest +to use, although the interface is something that you can find jokes about. All operations on tarballs are handled via +a single tar utility.

+box +

Let's go through three basic operations: create an archive, list out the contents, and extract it. Tar expects its +first argument to match this regular expression: [rxtuc][vwfblmo]*. The first part is the function, +and the second is a modifier. I'll focus only on those necessary to accomplish the aforementioned tasks.

To create an archive you:

+
+$ tar cf ../archive.tar a_file a_directory
+
+

This will create an archive that will be located in the parent directory of the current working directory, and will contain +a_file and, recursively, a_directory. Let's map every part of the command for clarity:

+
+
tar
Call tar +
c
Create an archive +
f
Use first argument after cf as the path to the archive +
../archive.tar
Path to the archive (without f it would be treated as another file to + include in the archive) +
a_file a_directory
Files to include in the archives +
+

Now that you have an archive, you can see its contents:

+
+$ tar tf ../archive.tar
+a_file
+a_directory/
+a_directory/another_file
+
+

As you have probably guessed, the t function is used to write out the names of the files that are in the archive. +f works exactly the same way: the first argument after tf is meant to point to the archive file.

To extract everything from the archive you:

+
+$ tar xf ../archive.tar
+
+

Or add more arguments to extract selected files:

+
+$ tar xf ../archive.tar a_file
+
+

This one will extract only a_file from the archive. +

That's pretty much it about tar. There are two more functions: r, which adds a new file to an existing archive, +and u, which updates a file in the archive if it already exists and adds it if it doesn't. Note +that the usual compression options are not available in POSIX; they are an extension. +
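A quick sketch of both, continuing with the archive from the earlier examples (the file names are placeholders):

+
+$ tar rf ../archive.tar new_file
+$ tar uf ../archive.tar a_file
+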

cpio

+

Heading off from the usual routes we encounter cpio. It's a more frequent sight than pax, but it still is quite niche +compared to tar's omnipresence. Frankly, I like this one the most because of the way it handles input of file lists. +Sadly, this also makes it slightly bothersome to use. +

Now, now, cpio operates in three modes: copy-out, copy-in and pass-through. Our goals are +still the same: to create an archive, list the files inside, and extract it somewhere else, and for that we'll only need the +first two modes.

To create an archive, use the copy-out mode, as in: copy to the standard output:

+
+$ find a_file a_directory | cpio -o >../archive.cpio
+
+

You probably noticed right away that cpio doesn't accept files as arguments. In copy-out mode it expects a list of +files on standard input, and it will return the formatted archive through standard output. See a somewhat step-by-step +explanation:

+
+
find a_file a_directory |
List files, directories and their content from arguments and pipe the + output to the next command +
cpio
Call cpio (duh!) +
-o
Use copy-out mode +
>../archive.cpio
Redirect standard output of cpio to a file +
+

You now have an archive file called archive.cpio in the parent directory. To see its contents, type:

+
+$ cpio -it <../archive.cpio
+a_file
+a_directory
+a_directory/another_file
+1 block
+
+

Nice! What's left is extraction. You do it with copy-in mode like this:

+
+$ cpio -i <../archive.cpio
+1 block
+
+

Huh? What's that? Listing files and extracting both use copy-in mode? That's right. Just as "copy-out" means "copy to +standard output", "copy-in" can be understood as "copy from standard input". The t option prevents cpio from writing or +creating any files; the archive is still read from standard input and then translated into a list of +files on standard output. Some extended implementations let you use t directly as the sole option and imply +copy-in mode.

You can also use patterns when extracting to select files:

+
+$ cpio -i a_file <../archive.cpio
+1 block
+
+

You can copy nested files if you use d option:

+
+$ cpio -id a_directory/another_file <../archive.cpio
+1 block
+
+

This option tells cpio that it's allowed to create directories whenever it is necessary.

+pass-through +

Bonus! Pass-through mode can be used to copy files listed on standard input to a specified directory. It doesn't create +an archive at all.

+
+$ ls ../destination
+$ ls
+a_directory  a_file
+$ find a_file a_directory | cpio -p ../destination
+0 blocks
+$ ls ../destination
+a_directory  a_file
+
+ +

pax

+

Finally, at the destination! This one lives up to the name of this post as it's still part of POSIX. The fun part is +that you probably don't even have it installed, but don't worry, I didn't have it until like two days ago. It truly +feels like a compromise forced on you and your siblings by your parents. Jokes aside, I actually started to like it, +bulky but kind of cute. +

Anyway, let's see what this coffee machine can do for us; same goals as previously. This will be confusing, because +this utility is a compromise, and so it supports both usage styles: tar-like and cpio-like. +

To create an archive you can use either:

+ +
+$ pax -wf ../archive.pax a_directory a_file
+$ find a_file a_directory | pax -wd >../archive.pax
+$ find a_file a_directory | pax -wdf ../archive.pax
+
+ +

They are equivalent. You can mix the styles as much as you want; as long as it doesn't become a mess it's quite handy. +As for which option does what:

+ +
+
-w
Indicates that pax will act in write mode (tar's c and cpio's -o) +
f ../archive.pax
The argument after f is the path to the archive; note that it behaves + slightly differently compared to tar: it always takes the next argument instead of the first path that appears after the flags. It + means you can't put any options between -f and the path.
a_directory a_file +
find a_file a_directory |
Both of these accomplish the same goal of letting pax know + which files should be in the archive. They are mutually exclusive! If there is at least one argument pointing to a file, + then standard input is not supposed to be read.
d
This one is used to prevent recursively adding files that are in a directory, so that the + behaviour is the same as in cpio: +
+$ find a_file a_directory | pax -wvf ../archive.pax
+a_directory
+a_directory/another_file
+a_directory/another_file
+a_file
+pax: ustar vol 1, 4 files, 0 bytes read, 10240 bytes written.
+$ find a_directory a_file | pax -wvdf ../archive.pax
+a_directory
+a_directory/another_file
+a_file
+pax: ustar vol 1, 3 files, 0 bytes read, 10240 bytes written.
+
+
+ +

The v option is used to increase the verbosity of the "error" output. You can find similar functionality in +most command line utilities, including tar and cpio.

To list the files that are in the archive you can also use both styles:

+
+$ pax <../archive.pax
+a_directory
+a_directory/another_file
+a_file
+$ pax -f ../archive.pax
+a_directory
+a_directory/another_file
+a_file
+
+

Yes, that's the default behaviour of pax and you don't need to specify any argument (in the case of the cpio-like style). +Sweet, isn't it?

To extract the archive use one of:

+
+$ pax -r <../archive.pax
+$ pax -rf ../archive.pax
+
+

For selecting files to extract use the usual patterns:

+
+$ pax -r a_file -f ../archive.pax
+$ pax -r a_directory/another_file <../archive.pax
+
+

That's all for the most basic use cases. There's more; for instance, pax supports a mode similar to the pass-through mode +we already know from cpio. But there is something more important to mention about pax. It's supposed to easily +support various different formats.

POSIX says that pax should support the pax, cpio and ustar formats. I installed GNU pax and it seems to support: ar, +bcpio, cpio, sv4cpio, sv4crc, tar and ustar. The default format for my installation is ustar, as you have probably +noticed in the verbose output in one of the examples above. The pax format is an extension of ustar; that's most likely the reason +it's usually omitted.

You can select the format with the -x option; for supported formats please refer to your manual. Also note that +explicitly specifying the format should only be needed when writing an archive. When reading, pax can identify the archive's +format by itself:

+
+$ find a_file a_directory | cpio -o >../archive.cpio
+$ pax -vf ../archive.cpio
+-rw-rw-r--  1 ignore   ignore    0 Jul 22 22:30 a_file
+drwxrwxr-x  2 ignore   ignore    0 Jul 22 22:30 a_directory
+-rw-rw-r--  1 ignore   ignore    0 Jul 22 22:30 a_directory/another_file
+pax: bcpio vol 1, 3 files, 512 bytes read, 0 bytes written.
+
+ +
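Writing in a specific format is just one extra option; a quick sketch, assuming your implementation supports the cpio format:

+
+$ pax -w -x cpio -f ../archive.cpio a_file a_directory
+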

Final thoughts

+

Now then, it's time to finally wrap it all up. There is nothing left to say but to remind you to always check your manual; +all of these utilities have various implementations that comply with POSIX to varying degrees. Don't be naive and +don't get tricked by them. I find pax the most reliable of them, as its "novelty" and an interface that was quite +"modern" from the start resulted in decently compliant implementations. Moreover, it includes nice things one may know +from both cpio and tar. Find a moment to check it out!

Let's pretend that ar doesn't exist. +Thank you.

+boo! +
+ diff --git a/cylo.png b/cylo.png new file mode 100644 index 0000000..0436075 Binary files /dev/null and b/cylo.png differ diff --git a/dear_imgui_and_love-1.png b/dear_imgui_and_love-1.png new file mode 100644 index 0000000..35f46ac Binary files /dev/null and b/dear_imgui_and_love-1.png differ diff --git a/dear_imgui_and_love.html b/dear_imgui_and_love.html new file mode 100644 index 0000000..bb7e928 --- /dev/null +++ b/dear_imgui_and_love.html @@ -0,0 +1,221 @@ + + + + + + + + + + +Dear ImGui and LÖVE, + + + +
+

Dear ImGui and LÖVE,

+

Published on 2020-08-14 21:47:00+02:00 +

Believe it or not, this one was requested. It's surprising considering that just recently I claimed that according to +stats nobody reads this website. It may or may not be a coincidence that I received this request over the phone.

Anyway, Dear ImGui is a C++ library for graphical user interfaces. As +for LÖVE: it's sometimes called Love2D and it's a Lua framework generally meant for +game development. Today we'll learn how to create an immediate-mode graphical user interface for your love2d game with +Dear ImGui.

+whale whale whale, what a cutie +

First, we need to have both of them on our machine (duh!). LÖVE is insanely easy to get and install on pretty much +every platform. Just visit their website and check the download section. If you use a +platform that's not supported straight away then you can probably handle it by yourself anyway. Dear ImGui, or let's +call it ImGui from now on, is a little bit trickier. You need either a Lua binding for it, or a direct binding for LÖVE. +Both are available; let's settle for the latter one: +love-imgui.

Now then, I usually just build it myself. On Windows that's bothersome, but luckily the maintainer provides binary +packages. The update frequency is rather low, so if you run into issues, you can try to use builds from more active +forks like e.g. apicici/love-imgui. At last, if you feel +adventurous, a manual build will let you down with its simplicity, assuming that you have all the tools already in place:

+$ cd love-imgui
+$ mkdir build
+$ cd build
+$ cmake ..
+$ make
+
+

Once you've obtained the dynamic library file, place it in the C require path. In LÖVE it defaults to ??, which +means that it will look for the name passed to require, suffixed with the platform-specific dynamic library +extension, in either the game's source directory, the save directory or any of the paths mounted with the filesystem module. For you it +means that you can just put your imgui.dll into the directory with main.lua and it should +work just fine.
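If you want to double-check where LÖVE will actually look, here is a small sketch (assuming LÖVE 11+, where love.filesystem.getCRequirePath is available):

+
+function love.load()
+  -- Prints the C require path, "??" by default.
+  print(love.filesystem.getCRequirePath())
+end
+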

Run it and you will see that nothing happens. That's expected. To make it work you need to "pass" selected LÖVE +callbacks to it. Luckily for us, the README file provides us with an example that we can use as a bootstrap. I copied +the interesting parts here for your convenience:

+require "imgui"
+
+function love.update(dt)
+  imgui.NewFrame()
+end
+
+function love.quit()
+  imgui.ShutDown()
+end
+
+function love.textinput(text)
+  imgui.TextInput(text)
+  if not imgui.GetWantCaptureKeyboard() then
+  end
+end
+
+function love.keypressed(key, scancode, isrepeat)
+  imgui.KeyPressed(key)
+  if not imgui.GetWantCaptureKeyboard() then
+  end
+end
+
+function love.keyreleased(key, scancode, isrepeat)
+  imgui.KeyReleased(key)
+  if not imgui.GetWantCaptureKeyboard() then
+  end
+end
+
+function love.mousemoved(x, y, dx, dy)
+  imgui.MouseMoved(x, y)
+  if not imgui.GetWantCaptureMouse() then
+  end
+end
+
+function love.mousepressed(x, y, button)
+  imgui.MousePressed(button)
+  if not imgui.GetWantCaptureMouse() then
+  end
+end
+
+function love.mousereleased(x, y, button)
+  imgui.MouseReleased(button)
+  if not imgui.GetWantCaptureMouse() then
+  end
+end
+
+function love.wheelmoved(x, y)
+  imgui.WheelMoved(y)
+  if not imgui.GetWantCaptureMouse() then
+  end
+end
+
+

In the if not imgui.GetWant... blocks you should wrap your usual code that handles these events. It's +there to ensure that the input does not propagate to the game if one of imgui's windows has focus.
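For instance, a sketch of what a filled-in keyboard callback could look like (the quit-on-escape handler is just a placeholder for your own logic):

+
+function love.keypressed(key, scancode, isrepeat)
+  imgui.KeyPressed(key)
+  if not imgui.GetWantCaptureKeyboard() then
+    -- Your usual game input handling goes here.
+    if key == "escape" then
+      love.event.quit()
+    end
+  end
+end
+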

Finally, we've reached the moment we can make some windows! You write the code for the interface inside +the love.draw callback. The general documentation for ImGui can be found in +imgui.cpp, each function has a short description +next to its declaration in imgui.h. One more resource +is worth reading: imgui_demo.cpp. But still, +those are for C++. The thing is, the Lua binding reflects all names and things like function parameters almost directly. Let's +walk through some examples to get the gist of it!

+local bg = { 0.3, 0.4, 0.2 }
+
+function love.draw()
+  if imgui.Button("Goodbye, world!") then
+    love.event.quit()
+  end
+
+  bg[1], bg[2], bg[3] = imgui.ColorEdit3("Background", bg[1], bg[2], bg[3])
+  love.graphics.clear(unpack(bg))
+  -- Background colour will be dynamically controlled by ColorEdit3 widget.
+
+  imgui.Render()
+end
+
+

Here's an example using two simple widgets: Button and ColorEdit3. It should illustrate how an +immediate-mode GUI works and interacts with the state. One important thing to note is imgui.Render() just +before the function reaches the end. It does exactly what you would expect from it: it renders ImGui's windows to the +screen. Now, let's control the behaviour of ImGui's windows:

+local show_fps = true
+
+function love.draw()
+  if show_fps then
+    show_fps = imgui.Begin("FPS", true, {"ImGuiWindowFlags_NoCollapse"})
+    imgui.Text(string.format("FPS: %.2f", love.timer.getFPS()))
+    imgui.SetWindowSize("FPS", 0, 0)
+    imgui.End()
+  end
+  imgui.Render()
+end
+
+

By default, if you create an element ImGui will create a "Debug" window for you to hold all of your stuff. Of course, +that's not always desired, and so Begin creates a new window. It accepts parameters: a title, which serves as +an id; a boolean that indicates whether the window should contain a button for closing it; and a set of flags. In C++ flags are +handled via enums and bit-wise operators. In the Lua binding, use a table and put in the full names of the enum values. +SetWindowSize makes the window unresizable, and passing zeroes shrinks the window to the size of its content.

+local is_a = false
+
+function love.draw()
+  if is_a then
+    imgui.PushStyleColor("ImGuiCol_Button", 0.7, 0.2, 0.2, 1)
+    imgui.PushStyleColor("ImGuiCol_ButtonHovered", 0.8, 0.3, 0.3, 1)
+    imgui.PushStyleColor("ImGuiCol_ButtonActive", 0.9, 0.1, 0.1, 1)
+    if imgui.Button("Change to B", 90, 0) then
+      is_a = false
+    end
+    imgui.PopStyleColor(3)
+  else
+    imgui.PushStyleColor("ImGuiCol_Button", 0.2, 0.7, 0.2, 1)
+    imgui.PushStyleColor("ImGuiCol_ButtonHovered", 0.3, 0.8, 0.3, 1)
+    imgui.PushStyleColor("ImGuiCol_ButtonActive", 0.1, 0.9, 0.1, 1)
+    if imgui.Button("Change to A", 90, 0) then
+      is_a = true
+    end
+    imgui.PopStyleColor(3)
+  end
+
+  if imgui.IsItemHovered() then
+    imgui.SetTooltip("Switches between A and B")
+  end
+
+  imgui.SameLine()
+  imgui.PushStyleColor("ImGuiCol_Button", 0.3, 0.3, 0.3, 1)
+  imgui.PushStyleColor("ImGuiCol_ButtonHovered", 0.3, 0.3, 0.3, 1)
+  imgui.PushStyleColor("ImGuiCol_ButtonActive", 0.3, 0.3, 0.3, 1)
+  imgui.Button("Disabled button")
+  imgui.PopStyleColor(3)
+
+  imgui.Text("Well, that's colorful")
+
+  imgui.Render()
+end
+
+

The ad hoc styling is handled with a stack-based interface. You just need to find the style name in the source and push +a colour or other property. When it's no longer needed, you pop it.

Quite a number of functions modify either the element that comes after them or the one before. Consider the usage of +IsItemHovered in the example above. It doesn't matter which button is drawn; the tool-tip will be +shown if the user hovers over the last element that the preceding if statement produced. Then there's +SameLine, which makes the next element remain on the same line (surprising, isn't it?).

+local windows = {true, true, true, true, true}
+
+local
+function draw_my_window(n)
+  if windows[n] then
+    windows[n] = imgui.Begin(string.format("Window #%d", n), true)
+    for i, v in ipairs(windows) do
+      windows[i] = imgui.Checkbox(string.format("Show 'Window #%d'", i), v)
+    end
+    imgui.End()
+  end
+end
+
+function love.draw()
+  for i=1, #windows do
+    draw_my_window(i)
+  end
+  imgui.Render()
+end
+
+

That's a bit useless, but yeah, you can link the state however you like and use the function in any way you imagine. +In C++, most elements like Checkbox take a pointer. In Lua you can't pass normal values like that in a +simple way, so you usually put the value from the previous frame in the place of the pointer and expect that the +function (the element) will return the new values that you can use in the next frame.

Aaand, that's about it without going into examples that'd make this post twice as long. +

+ diff --git a/difference_between_mnt_and_media-1.png b/difference_between_mnt_and_media-1.png new file mode 100644 index 0000000..d77f824 Binary files /dev/null and b/difference_between_mnt_and_media-1.png differ diff --git a/difference_between_mnt_and_media.html b/difference_between_mnt_and_media.html new file mode 100644 index 0000000..f9ffb01 --- /dev/null +++ b/difference_between_mnt_and_media.html @@ -0,0 +1,77 @@ + + + + + + + + + +Difference Between /mnt and /media + + + +
+

Difference Between /mnt and /media

+

Published on 2020-06-12 19:00:00+02:00 +

In this article I will try to answer questions like: What is the difference between /mnt and /media in FHS and +*nixes?, Why was /media added to FHS?, and perhaps What is the purpose of /mnt, and what is it for +/media? +

To be fair, I'm somewhat conflicted now. For some people the answer is so simple that they don't bother to write it +down. I'm serious. It's not like I've ever been optimistic about stackoverflow or other subsites of stackexchange, but +that one answer really gave me a good chuckle. I won't link it, because it almost feels like name shaming, but hell, as +long as it's there, you'll figure out your way, won't you?

Now, now. One could use no brain at all, and say that the descriptions in the standard are different, therefore they +are different. Completely discarding things like intentions, history, purposes or use cases. They are different, because +they are different. That is in fact true. It's hard to deny something that is clearly visible. Let's take a look at +them:

+
+3.11. /media : Mount point for removable media
+This directory contains subdirectories which are used as mount points for removable media such as floppy disks, cdroms +and zip disks. +
+
+3.12. /mnt : Mount point for a temporarily mounted filesystem
+This directory is provided so that the system administrator may temporarily mount a filesystem as needed. The content of +this directory is a local issue and should not affect the manner in which any program is run. +
+

Based on those, the usual answer is: /mnt is for the system administrator and /media is for the system itself. In other +words: /mnt is for you and /media is not. This answer is quite satisfying, unless you actually read it. Section 3.11 does +not say a word about who mounts stuff in /media. Even the rationale section doesn't. However, it does mention the reason +why /media was added: to stop adding mount points to the root (e.g. /cdrom, /floppy).

It also mentions that e.g. /mnt/cdrom and derivatives have been used as temporary mount points for removable media. +The only reason why /media was added is because of, and I quote, the tradition of using /mnt directly as a mount point.

This means that the standard partially acknowledges that /mnt and /media are interchangeable and actually can be +equivalent in how they are used. The wordings of both sections are different. That's undeniable. On the other hand, the +meaning is left to be discussed. Or is it?

Originally, there were voices that said something along the lines of: "Ay folks, ain't those the same?" Even in +the bug report from 2003 that introduced this change +to FHS in version 2.3. They have all been ignored. A year later, +the question was asked again, and the answer was what +we know now: /mnt is for temporary mount points which may not be media.

What is not media? Where do we draw the line between a mounted zip file and a floppy disk? How long does a gate +need to be to be considered a tunnel? What is the difference between "removable" and "temporary" in this particular +case? Is this really the moment we get into a discussion of semantics and of what a "medium" is? I feel like I already did +this pun on this blog.

Say I want to write a script that automatically and temporarily mounts a filesystem of one of my microcomputers +whenever it is available in the local network. Should that go to /media or /mnt? Is sshfs a medium or not? +I mean, it sits between my filesystem and the things actually mounted in the microcomputer's filesystem, but is it really? +Are media limited to physical things like cd, floppy or usb drives?

The difference is so ambiguous and unspecified that it just feels completely surreal to me. Moreover, this slight +shift in the understanding of what "removable media" is, what it perhaps could become, and what people will mount in +the future, was also pointed out originally. It feels like a good time to revisit those points.

+tunnel is a gate, but longer +

One more question remains. Is this separation needed in any sense? I think not. In my home system I swiftly use more +than one automatic manager for mounting various things: from USB drives to whole external devices with their own +operating system. I use the very same directory to manually manage temporary devices. I am yet to run into a +significant issue with this approach. +

To summarize, there is no particular difference between /media and /mnt in FHS. If anything, /media has a defined +structure of subdirectories a user can expect to find in it, and /mnt doesn't. In various implementations of FHS there +might be some conventions and traditions added on top of that, like: /media is for stuff that is mounted automatically +by the system, and /mnt is for the user or sysadmin to use. Overall, the use cases overlap, and it was known from the very +beginning, but it was ignored due to reasons.

Originally, /media was added in FHS 2.3 as a part of the 2.2-beta release that did not make it into 2.2. The goal was +to limit the creation of temporary mount points in /. Did it work out? I think not, considering that it could easily be +substituted by /mnt, or vice versa.

+ diff --git a/environments_in_lua_5_2_and_beyond-1.png b/environments_in_lua_5_2_and_beyond-1.png new file mode 100644 index 0000000..86d8b0c Binary files /dev/null and b/environments_in_lua_5_2_and_beyond-1.png differ diff --git a/environments_in_lua_5_2_and_beyond-2.png b/environments_in_lua_5_2_and_beyond-2.png new file mode 100644 index 0000000..b5977f8 Binary files /dev/null and b/environments_in_lua_5_2_and_beyond-2.png differ diff --git a/environments_in_lua_5_2_and_beyond.html b/environments_in_lua_5_2_and_beyond.html new file mode 100644 index 0000000..fc2b9b7 --- /dev/null +++ b/environments_in_lua_5_2_and_beyond.html @@ -0,0 +1,155 @@ + + + + + + + + + +Environments in Lua 5.2 and Beyond + + + +
+

Environments in Lua 5.2 and Beyond

+

Published on 2020-07-04 20:39:00+02:00 +

Environments are a way of dealing with various problems. Or creating them entirely on your own. Primarily, they are +used to isolate a selected part of a program. As Lua is meant to be used as an embedded language, you may find yourself +wanting to separate user created addons from more internal scripts. In short: sandboxing and overall security. +

While I will focus on environments in Lua I won't go too deeply into the implementation details. I'll try to focus +more on design and general overview with minor thoughts on syntax and the inner workings. I think you might find this +text interesting no matter if you know or are interested in Lua itself. +

Previously, we had setfenv and +related functions to do exactly that. With some tricks, the debug library, and faith, you could do real magic, which is kind +of cool. However, magic can easily become arcane, and thus unclear.

With Lua 5.2 setfenv and related were removed in favour of a new approach. This one uses a simple local +variable with name _ENV. Luckily, this approach can also be fun. It has other benefits over the old one, +but the goal here is not a comparison. +

One more thing before we hop into examples: I strongly encourage you to read some documents on Lambda Calculus. They +will give you quite a good overview of what is happening with closures/anonymous functions/lambda expressions/whatever +they are called. It's a good foundation and an entertaining exercise. This way you can quickly draw similarities, +considering that the Calculus is easier to grasp than most of the language-syntax-related shenanigans.

+lambda +

Terms

+

Now then, let's have a simple example to define a few selected terms:

+
+-- Start of chunk's body
+local OFFSET_ERROR = 0.97731
+local
+function calibrate (value, ratio, offset)
+	-- Start of function's body; not part of chunk's body
+	local real_offset = offset * OFFSET_ERROR
+	print("offset:", real_offset)
+	return value * ratio + real_offset
+	-- End of function's body
+end
+-- End of chunk's body
+
+

Surprisingly, there's a lot going on in here. First off, we have two scopes: "chunk's body", and "function's +body". "Chunk's body" has two local variables: OFFSET_ERROR (that acts as a constant), and +calibrate (a function). In turn, "function's body" has four local variables: value, +ratio, offset (those three are the arguments of the function), and real_offset +(a temporary variable I added just to show that a function body may also have explicit local variables). We will call all +of those variables exactly what I already called them: local variables.

In addition to the local variables, "function's body" also refers to two other names. The first one is +OFFSET_ERROR. We already know this one; it's a local variable from the chunk. A scope that is +inside another scope can refer to the outer scope's local variables as it wants. Such variables are then called upvalues. This works +on any level, no matter how deeply the scopes are nested, as long as it makes sense. It doesn't work the other way: +an outer scope referring to a local variable in an inner scope is a no-no.

The second external reference in "function's body" is print. We don't see it defined anywhere as a local +variable. Commonly such variables are called globals or global variables. That's what we will call them.

Here's the part that can be slightly ambiguous: the definition of "environment". I can see myself including in it all three +types of variables I mentioned, only upvalues and globals, or just globals. Depending on the situation, all of these +cases may be fitting, and looking at how other languages use the term, and how it's similar to bound variables in the +Lambda Calculus, they are all good explanations. Lua itself uses "environment" only to refer to the mechanism that +resolves references to globals.

Upvalues

+

That being said, let's talk about the upvalues for a moment before we go to the globals. Upvalues are the closest +thing you can have to a bound variable from the Lambda Calculus. They always bind "by reference", simply because the +variable name is a direct reference to the variable and its value. Whatever that means, see for yourself:

+
+local counter = 0
+local
+function increment ()
+	counter = counter + 1
+end
+increment()
+increment()
+increment()
+print(counter)
+
+

This example will print out "3". Upvalues also play a huge part in garbage collection due to their nature, but that's +not a concern of this article.

Environment

+

Back to the main topic! Quick reminder: in Lua "environment" is used to resolve references to names that are not +local variables or upvalues. In other words, it's a way to deal with free variables when the program executes. +

"Environments" are associative tables. They link global variable names to actual variables. The environments +themselves are bound to functions as upvalues called _ENV, whenever they are needed. It's done implicitly; +quietly in the background. This means that the calibrate function from the first example actually has two +upvalues: OFFSET_ERROR, and _ENV. _ENV by default took as its value the table that +was used as the global environment at that time. If calibrate didn't use print, +_ENV wouldn't be there at all.

This is quite important, so let me repeat. Environments are here to deal with free variables, but are bound variables +themselves.
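You can peek at the upvalues of calibrate from the first example with the debug library; a small sketch (the order of the upvalues is an implementation detail):

+
+for i = 1, 2 do
+  -- Prints the names of calibrate's upvalues: OFFSET_ERROR and _ENV.
+  print((debug.getupvalue(calibrate, i)))
+end
+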

+
+local
+function hello ()
+	print "Hello!"
+end
+local _ENV = {print = function () end}
+hello()
+
+

We create a simple local function that is meant to print out "Hello!" to standard output. After that we overwrite +the current environment with a new one that contains a print function that does nothing. If we call +hello, it still prints out "Hello!" like it was meant to. That's because it's bound to the original +environment, not the new one.

In the meantime you might have noticed that environments may appear somewhat interchangeable with upvalues. That's +correct to some extent, and it's because of the things I've already mentioned: ambiguity is one, dealing with free +variables while being a bound variable is two. In program execution, bound variables (in our terms: upvalues) are there +to deal with free variables, and here we are doing the same thing.

+fat bird +

Usage

+

Yeah, that's all of the explanation there was. I could sum it up in: "environments are tables bound to upvalues that +resolve free variables". Cool, how and when can we use them? +

The most common use case is sandboxing or, more generally, limiting the things available to scripts. Let's say we develop +a program that uses Lua as a scripting language. We load all the default modules from Lua for ourselves: io, +debug, string, whatever we want. However, we don't want to expose all of them to external +scripts. To do so, we prepare a table that will act as an environment for them and simply assign it as the +_ENV upvalue, most likely through the load or loadfile function:

+
+local end_user_env = {
+	print = print
+}
+local script = loadfile("external.lua", nil, end_user_env)
+script()
+
+

Of course, you can do that from the C API, too. This requires us to acknowledge one more thing: upvalues are stored as +a list and are indexed. For regular functions the _ENV upvalue might be in any place of this list. For main +chunks, loaded external scripts, or the "chunk" from the first example, _ENV is expected to be first on the +list.

+
+luaL_loadfile(L, "external.lua");
+lua_newtable(L);
+lua_pushliteral(L, "print");
+lua_getglobal(L, "print");
+lua_settable(L, -3);
+lua_setupvalue(L, -2, 1);
+lua_call(L, 0, 0);
+
+

It could also be done by prepending local _ENV = end_user_env to the external script before loading it, +but that's a hassle:

+
+local file = io.open "env.lua"
+local content = file:read "*a"
+file:close()
+content = "local _ENV = {print = print}\n" .. content
+local script = load(content)
+script()
+
+

This method of environment manipulation can be used in other cases for more in-line changes as seen in one of the +previous examples. This is the new way of making magic tricks after setfenv is gone. I'll leave this as a +topic for another time. I think the examples above are sufficient for now. From here I can expand to magic or sandboxing +details. +

+ diff --git a/faq-1.png b/faq-1.png new file mode 100644 index 0000000..5c90380 Binary files /dev/null and b/faq-1.png differ diff --git a/faq.html b/faq.html new file mode 100644 index 0000000..242b1f2 --- /dev/null +++ b/faq.html @@ -0,0 +1,29 @@ + + + + + + + + + +Frequently Asked Questions + + + +
+

Frequently Asked Questions

+

Last modified on 2021-03-07 03:44+01:00 + +

Why do the drawings have such horrendous quality?

+

I obliged myself to draw at least one of these for each page. They serve as a reminder for myself to keep enjoying +myself in the things I do. Oh, and the quality... I'm just bad.

Why do you keep rewriting or removing your articles?

+

I treat most of the content here as speech rather than text, and so I allow myself to correct stuff I published. +I would like to keep learning, experimenting, discussing, and reiterating on my past ideas, and so with additional +knowledge or a simple change of heart, I rewrite or remove my articles.

+ + + +
diff --git a/flashing_lolin_nodemcu_v3-1.png b/flashing_lolin_nodemcu_v3-1.png new file mode 100644 index 0000000..daceb97 Binary files /dev/null and b/flashing_lolin_nodemcu_v3-1.png differ diff --git a/flashing_lolin_nodemcu_v3.html b/flashing_lolin_nodemcu_v3.html new file mode 100644 index 0000000..e535f74 --- /dev/null +++ b/flashing_lolin_nodemcu_v3.html @@ -0,0 +1,46 @@ + + + + + + + + +Flashing LOLin NodeMCU v3 + + + +
+

Flashing LOLin NodeMCU v3

+

Published on 2020-06-29 17:58:00+02:00 +

Republishing old content. This is from when I bought and flashed my first NodeMCU clone (perhaps "loose +implementation"?) of NodeMCU v3 called LOLin v3. +

This little board uses a CH340G TTL to USB converter, so if you happen to work on Windows you must make sure you +have the driver installed beforehand. The mentioned chip is quite common, and I've seen it on various Arduinos, so there is +a high chance that you already have it.

Without further notes, here is the specification:

+ +
Baud rateFlash modeFlash sizeFlash frequency +
9600 QIO 4MiB 40MHz +
+

You will need a combined firmware binary. If you are compiling it by yourself, I suggest using one of the development +branches. If you don't have an environment set up, there are online build services available, e.g. +NodeMCU custom builds.

To write the firmware use e.g. esptool:

+
+$ esptool.py --port /dev/ttyUSB0 --baud 9600 erase_flash
+$ esptool.py --port /dev/ttyUSB0 --baud 9600 write_flash \
+  --flash_mode qio --flash_size 4MB --flash_freq 40m \
+  0x00000 combined-firmware-file.bin
+
+

To test it you can connect to it with e.g. screen:

+
+$ screen /dev/ttyUSB0 115200
+
+

Or use your favourite terminal emulator:

+
+$ st -l /dev/ttyUSB0 115200
+
+nodemcu drawing +
+ diff --git a/graveyard_of_the_drawings-1.png b/graveyard_of_the_drawings-1.png new file mode 100644 index 0000000..22dc6b5 Binary files /dev/null and b/graveyard_of_the_drawings-1.png differ diff --git a/graveyard_of_the_drawings-2.png b/graveyard_of_the_drawings-2.png new file mode 100644 index 0000000..ff3583c Binary files /dev/null and b/graveyard_of_the_drawings-2.png differ diff --git a/graveyard_of_the_drawings-3.png b/graveyard_of_the_drawings-3.png new file mode 100644 index 0000000..c85a6b5 Binary files /dev/null and b/graveyard_of_the_drawings-3.png differ diff --git a/graveyard_of_the_drawings-4.png b/graveyard_of_the_drawings-4.png new file mode 100644 index 0000000..6546ab1 Binary files /dev/null and b/graveyard_of_the_drawings-4.png differ diff --git a/graveyard_of_the_drawings-5.png b/graveyard_of_the_drawings-5.png new file mode 100644 index 0000000..97b3921 Binary files /dev/null and b/graveyard_of_the_drawings-5.png differ diff --git a/graveyard_of_the_drawings-6.png b/graveyard_of_the_drawings-6.png new file mode 100644 index 0000000..b7e6b6f Binary files /dev/null and b/graveyard_of_the_drawings-6.png differ diff --git a/graveyard_of_the_drawings-7.png b/graveyard_of_the_drawings-7.png new file mode 100644 index 0000000..fbe3040 Binary files /dev/null and b/graveyard_of_the_drawings-7.png differ diff --git a/graveyard_of_the_drawings-8.png b/graveyard_of_the_drawings-8.png new file mode 100644 index 0000000..f507934 Binary files /dev/null and b/graveyard_of_the_drawings-8.png differ diff --git a/graveyard_of_the_drawings-9.png b/graveyard_of_the_drawings-9.png new file mode 100644 index 0000000..c4d440b Binary files /dev/null and b/graveyard_of_the_drawings-9.png differ diff --git a/graveyard_of_the_drawings.html b/graveyard_of_the_drawings.html new file mode 100644 index 0000000..2e67da6 --- /dev/null +++ b/graveyard_of_the_drawings.html @@ -0,0 +1,29 @@ + + + + + + + + + +Graveyard of the Drawings + + + +
+

Graveyard of the Drawings

+

Last modified on 2021-03-19 19:53+01:00 +

Here are the drawings I made for articles that I decided to remove. No context, no nothing. Just images. Despite +the style, I still think that it'd be a little bit of a waste to just remove them along with the texts, and reusing them in +different articles is just lazy.

+ + + + + + + + + +
diff --git a/half_of_my_css_are_links-1.png b/half_of_my_css_are_links-1.png new file mode 100644 index 0000000..3f0f425 Binary files /dev/null and b/half_of_my_css_are_links-1.png differ diff --git a/half_of_my_css_are_links.html b/half_of_my_css_are_links.html new file mode 100644 index 0000000..8f0446a --- /dev/null +++ b/half_of_my_css_are_links.html @@ -0,0 +1,87 @@ + + + + + + + + + +Half of My CSS Are Links + + + +
+

Half of My CSS Are Links

+

Published on 2020-05-19 20:09:00+02:00 +

Lately, I've been tinkering with the stylesheet of this website. My main goals are readability and +minimalism, in that order. I do put in some little things to appeal to my taste, but let this one slide, please; I'm +confident that my stylesheet is one of the smallest there are.

Now, now. I noticed a little thing in the stylesheet that got me really interested. I started to look around in my +archives, mirrors of old websites, my own older websites, unused designs, and finally into other websites that are +currently available including google, wikipedia, github, youtube, 4chan, and so on. There is one little pattern that +can be found throughout lots and lots of pages:

+
+a:link,
+a:visited {
+	color: mediumturquoise;
+	text-decoration: none;
+}
+
+a:hover,
+a:active {
+	color: turquoise;
+	text-decoration: underline;
+}
+
+
+ +Example link +
+

Obviously, not all of the pages use turquoise as the link colour. They are usually blue-ish. There might be some small +variations to this, but it always comes down to getting rid of the underline, and then, when the user hovers over the link, +changing the colour slightly and bringing back the underline. Like I said - variations: colour changes w/o underline, +underline is always there, colour doesn't change but underline does the thing, and others.

There's a significant lack of feedback for the user in it. We have more tools than that, and much, much more to +express. Let's take Wikipedia as an example, because it does its job greatly. Not only does it use all of the usual CSS +pseudo-classes, it also adds on top of them, and properly communicates nonexistent pages. The user knows immediately +if they've already seen the referenced page, because it uses :visited. The user sees that they properly +clicked the link because it changes its appearance when it's :active, not only when it's hovered.
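A sketch of what a fuller version of the earlier snippet could look like, with all four pseudo-classes kept distinct (the colours are placeholders):

+
+a:link {
+	color: mediumturquoise;
+}
+
+a:visited {
+	color: darkcyan; /* already seen */
+}
+
+a:hover {
+	color: turquoise;
+	text-decoration: underline;
+}
+
+a:active {
+	color: white; /* the click registered */
+}
+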

Those two usually skipped pseudo-classes are really informative. Hey, they can even make the page look cooler. +Especially :active. At first, it didn't make much sense to me to use it. After all, if you click a link +the whole browser reacts. Rather than discarding it, I decided to check it out by adding it to my stylesheet. And then +it clicked.

+Booom! +

No, no, no. I didn't write this whole thing just to make a pun. Trust me. The "it looks kinda cool" is not my only +argument. I consider it a valid argument, subjective, but still valid. The other argument is again: feedback. This +pseudo-class is used when the element is clicked. This includes most actions, LMB, RMB, tab+menu, and MMB. It means +that in most browsers the element remains :active when someone has the context menu open. In some situations +it can clear up the context of the menu, as the position of the menu may be ambiguous.

Surprisingly, there are also situations in which the browser doesn't react to the click: opening some other application, +running a task in the background, or generally running some javascript code. Of course, things break down, and then +:active may act as "Yeah, it should be working" information.

I didn't have much trouble understanding :visited and its uses. My homepage is an old-school index, so +the usage is clear. Let's say that someone decides to read all of my posts, and that at that point there are plenty of +them (by the way, that's torture, don't do that). Assuming they won't clear their browser history, the webpage will +clearly inform them of what they have seen, and what they haven't.

Just to be clear, not all use cases require those pseudo-classes, but quite a number of them really could use them. +Look at youtube. At some point they stopped using :visited, but soon after they added similar information +below the thumbnail in the form of a red bar that displays your watch progress. It means that in the end they consider it +worth keeping. As always, think about your case, consider the needs of your users, their behaviour, possible workflows, and +whatever else. If it makes sense, then I encourage you to remember about :visited and :active.

+ diff --git a/how_to_write_a_minimal_html5_document-1.png b/how_to_write_a_minimal_html5_document-1.png new file mode 100644 index 0000000..01ccae3 Binary files /dev/null and b/how_to_write_a_minimal_html5_document-1.png differ diff --git a/how_to_write_a_minimal_html5_document.html b/how_to_write_a_minimal_html5_document.html new file mode 100644 index 0000000..6a70c3b --- /dev/null +++ b/how_to_write_a_minimal_html5_document.html @@ -0,0 +1,142 @@ + + + + + + + + + +How To Write a Minimal HTML5 Document + + + +
+

How To Write a Minimal HTML5 Document

+

Published on 2020-08-03 18:18:00+02:00 +

Yes, I know how it sounds to have both "HTML" and "minimal" in one sentence. I think it's possible to accomplish +that and I'll show you how. Before we start, let's set some rules:

+ + + +

Sounds good. Keeping those rules in mind, the shortest document we can produce is:

+ +
+<!doctype html>
+<title>Hello</title>
+
+ +

In case of doubt, consult W3's checker. You must insert the code yourself, +as they don't support this kind of link.

+ +html5 logo + +

Now then, that's not quite useful, but it clearly indicates that we can skip tags. We can skip a lot of tags and +the document will remain valid. First of all, you can skip head and body elements entirely, +as long as you keep the content in the document in a nice sequence and the division between the meta information and the +actual body is easily deducible: + +

+<!doctype html>
+<html lang="en">
+<meta charset="utf-8">
+<link rel="stylesheet" href="style.css">
+
+<title>Hello</title>
+
+<h1>Hello</h1>
+<p>Lorem ipsum dolor sit amet.
+
+ +

There are a few points of interest in this example:

+
<html lang="en"> +
The html element's tags can be omitted entirely, but it's suggested to keep the start tag for its lang attribute, which +specifies the intended language of the document. A missing lang="en" is not an error, only a warning, but the +attribute is quite helpful for browsers, search engines and therefore users.
<meta charset="utf-8"> +
meta element with charset is not needed, but is suggested. +
<p>Lorem... +
Ending tag for p may be omitted in the usual cases. +
</h1> +
The ending tag for h1 (and other headings) must be present, just like in the case of the title element.
+ +

The rule that applies to p also applies to e.g. li, dt or dd. +Generally, if I'm not sure if I can omit the ending tag, I ask myself two questions: + +

+ +

Answering those is kind of tricky at the start but one gets used to it. I did, at least. If you are still unsure, you +can refer directly to the standard. +

Let's walk through a longer example; the comments are inlined in its elements: + +

+<!doctype html>
+<html lang="en">
+<meta charset="utf-8">
+<link rel="stylesheet" href="style.css">
+
+<title>Title must have an ending tag</title>
+
+<article>
+<h1>h1 must have one as well</h1>
+
+<p>This paragraph is good as it is.
+<p>Same goes for this one. In-line text styles must have end tags pretty
+much <strong>always</strong>. But hey, it would be weird to
+<em>not have them</em> there, right?</p>
+<img src="image.png" alt="logo or something, dunno">
+<p>Img element can be a child of a paragraph. If you want to make sure that
+it is outside of p, then write end tag manually.
+<p>Following pre element is considered a block, so you can skip ending tag.
+
+<pre>
+On the other hand pre MUST have ending tag.
+</pre>
+
+<p>We're cool here, as everything is closed so far. Let's list some stuff:
+
+<ul>
+<li>ul is the same as pre in context of ending tag for the p element.
+<li>List elements may omit ending tags.
+<li>Same applies to dt, dd that are used in dl
+<li>dl and ol follow the same rules as pre and ul. Ending tag is needed.
+</ul>
+
+<table>
+<tr><td>As    <td>you     <td>see
+<tr><td>table <td>insides <td>are
+<tr><td>cool  <td>without <td>end tags.
+</table>
+
+<p>But only insides. Table itself must have end tag.
+Same goes for article element:
+
+</article>
+
+ +

That's about it. In the example above, the deepest element in the hierarchy was html/article/table/tr/td. That's 5 visible levels. Of course, behind the scenes there are more, including tbody and body, but that's acceptable in our case. As per the requirements, the presented document is valid. + +

I think that adding some more personal restrictions can make the document more readable in plain text. Some users may appreciate it. Consider adding empty lines where it feels necessary, adding or skipping indentation, and so on. + +

In case of problems refer to: +

+
+ diff --git a/hunt_for_lex_and_yacc_the_dinosaur-1.png b/hunt_for_lex_and_yacc_the_dinosaur-1.png new file mode 100644 index 0000000..92bcf66 Binary files /dev/null and b/hunt_for_lex_and_yacc_the_dinosaur-1.png differ diff --git a/hunt_for_lex_and_yacc_the_dinosaur-2.png b/hunt_for_lex_and_yacc_the_dinosaur-2.png new file mode 100644 index 0000000..420947b Binary files /dev/null and b/hunt_for_lex_and_yacc_the_dinosaur-2.png differ diff --git a/hunt_for_lex_and_yacc_the_dinosaur.html b/hunt_for_lex_and_yacc_the_dinosaur.html new file mode 100644 index 0000000..a783a20 --- /dev/null +++ b/hunt_for_lex_and_yacc_the_dinosaur.html @@ -0,0 +1,71 @@ + + + + + + + + + +Hunt for Lex and Yacc, the Dinosaur + + + +
+

Hunt for Lex and Yacc, the Dinosaur

+

Published on 2020-06-20 23:36:00+02:00 +

Everything is text, we were told. Imagine that with this assumption we plan to take on a dinosaur. Indeed, this is one of the greatest jokes ever made in the history of programming. The Lex manual page said that there is an asteroid that will kill it for us. There is none, and the dinosaur is what we created ourselves and what we are made of. +

Human and computer interaction is quite limited. It doesn't help that we express ourselves in very inefficient and quite bizarre ways. The basis for a chunk of our communication is language, and using it in combination with text in order to communicate with computers was undeniably a well-made choice, and perhaps even natural to some extent. Designing and creating languages entirely to express what a computer is supposed to do was an expected consequence. The creation of tools like Lex and Yacc was also predictable, and obviously there is nothing wrong with it.

+rifle +

With an introduction like this, it feels like I haven't got anything else to write. Regarding this level of abstraction, yeah, it's not like we can easily change ourselves as human beings. However, if we change the level, we have something to discuss. That thing, or rather those things, are the form our written language takes, the intermediate abstractions we use, and the tools or interfaces that sit between us and the machine. +

I'll focus here on a selected tiny bit of the problem, and if you feel interested in the whole thing, check out e.g. +Bret Victor's talks, especially The Humane +Representation of Thought. +

Now then, to me it looks like source code is predominantly structured similarly to books. It has a table of contents, maybe an index, and the content itself. Sometimes there might be some annotations or references to other books. In general, the content is one sequence. It could be divided into chapters or paragraphs, but it's still one book. Source code behaves the same way: it's one thing presented to the system that processes it. In some cases, the source is structured using files as a unit, but the strictness of this approach varies, and the goal of such structuring is to let the programmer understand it better. In the end, the computer usually receives the source as a whole anyway (like: "here, the program consists of these files; parse them, compile, link, whatever"). +

The abstract structure of the program is pretty much always explained to the computer using the features present in the language. It sounds obvious to do so. The thing is, we have more ways of expressing complex structures or hierarchies to a computer than just plain text. +

Files and databases. More generally, a dedicated thing that expresses only the structure of the program. Yes, even a text file, if you must. It doesn't matter. What's important is to have a dedicated way of structuring the program (not just the source), readable by both the programmer and the machine, that is closer to a graph resembling the abstract semantic graph. The closer we get to this graph representation, the easier it gets to maintain the source and understand the program it describes. Name or parameter position changes, division of classes into smaller pieces, movement of functions from one entity to another, separating entire functionality into an external module; all of those and possibly more become either trivial or non-existent tasks. +

Of course, some of the mentioned methods are less capable than the others. Filesystems aren't really a tool to create +graphs, and text files are terrible at referencing. Additionally, a language that wants to be structured like this +should provide tools for developers. They should be simple and specialized with an interface that lets them be easily +integrated into more verbose toolsets. One example that tried something similar is Smalltalk with its environment. +

You may ask now: "Isn't that what IDEs are?" Similar, but not quite the same. Modern IDEs are the essence of the law of the instrument: "I'm a text editor and the source is all text, therefore everything I do is change the text!" Some of them just got better at it. The other problem is that they are external to the environment of the language. An intruder that seeks information on its own. Look at LLVM and the amazing things it produced by exposing smaller and smaller things to developers. +

The goal is to extract part of the programming language into something new. The key is to find a balance between the representation, readability, ease of integration, and the tooling. Modern programming languages try to accomplish that through verbose text editing, which sooner or later might become a dead end. Exposing the representations that are used internally by compilers and interpreters to external tools and the user, either through data or small specialized tools, may help us avoid such a fate. Allowing the user to interact with more abstract representations in a meaningful way will prove beneficial. +

We are here not to kill a dinosaur. It's impossible for us to do as of now. We are here to reduce it to a smaller +animal. Perhaps a chicken. It will live on our farm, we will take care of it, and in exchange it will give us some eggs. +I believe we have more than just one way to describe the abstract programs that sit in our heads.

+chicken +
+ diff --git a/index.html b/index.html new file mode 100644 index 0000000..a89d9a2 --- /dev/null +++ b/index.html @@ -0,0 +1,98 @@ + + + + + + + + + +Ignore + +
+

Ignore

+
+
+

Birds and Programming

+

The sole reason why birds are not excellent programmers is that they can't use keyboards very well. At some point in the future they will be required to develop an interface to computers, be it fingers or something that will completely discard the concept of a keyboard. +

+ +
+

News

+

+Initialized website as git repository. Let's see if it will be useful. +

Rewritten parts of and updated Web Browsers Are No More. +

+Capitalized titles and fixed some links here and there. +

+Added Derelict homepage to the index. +

+Added subtitles with date of publication for each article. Last modified dates will be tracked from this point of time. +

+Removed structured sources in examples as a preparation to extend related topics. +

+Published FAQ and graveyard. +

+Published plop landing page. +

+Minor changes to Organizing your Lua project. +

+Published Organizing your Lua project. +

+Published LICENSE. Minor changes to index and style. +

+More minor changes done to the index, CSS and added index link in all of the articles. +

+Removed old standalone updates page and moved it to the index. Reindexed and removed some pages as they had no worth. +I plan to rewrite, restructure and expand some texts. I also have new experiments coming in to fail miserably. Let's see how it goes! +

diff --git a/integrating_browser_into_your_environment-1.png b/integrating_browser_into_your_environment-1.png new file mode 100644 index 0000000..4c2d87a Binary files /dev/null and b/integrating_browser_into_your_environment-1.png differ diff --git a/integrating_browser_into_your_environment.html b/integrating_browser_into_your_environment.html new file mode 100644 index 0000000..e67bfea --- /dev/null +++ b/integrating_browser_into_your_environment.html @@ -0,0 +1,81 @@ + + + + + + + + + +Integrating Browser Into Your Environment + + + +
+

Integrating Browser Into Your Environment

+

Published on 2020-08-12 23:15:00+02:00 +

Not so long ago I finally started to play around with a little idea I had when I was writing the rant about markdown. That little idea was to split the web browser into possibly several smaller utilities with distinct responsibilities. In other words, to apply a Unix-ish philosophy to a web browser. I touched on this idea in Web browsers are no more and then did some initial tinkering in Plumbing your own browser. Now the time has come to draw conclusions. Think of this post as a direct update to the plumbing one. +

I don't like IDEs. I have hand-crafted environments that I "live in" when I'm working on any of my computers. A window manager that I tinkered to my liking, my preferred utilities, my text editor, my shortcuts. The whole operating system is configured with one thing kept in mind: it belongs to me. IDEs invade this personal space of mine. And so do web browsers. Of course, you can configure both web browsers and IDEs to some extent. You can even integrate them closer to your normal environment, but in my experience sooner or later you'll run into limitations. Or you will end up with the IDE consuming your entire operating system (hello, emacs!). I didn't like that. +

Thanks to the amount of alternatives I can happily avoid using IDEs. I can't say that about browsers. Moreover, modern browsers are enormous and hermetic. Usually the only utility you have to interface with them is browse, which in turn is usually just a symbolic link to xdg-open. Not only that, but they only open links in their rendering engine and maybe allow saving a file, so that the user can use it once they leave the browser alone. +

Because of that, and because of the other reasons I described in the aforementioned articles, I decided to check whether splitting a browser into smaller utilities is a viable option, and to just play around with this idea. +

For now, I've split it into four parts, but I can see more utilities emerging: +

+
request solver +
Previously, I referred to it as the "browse" utility. But the way I have "browse" implemented now implies more than just one responsibility. On the other hand, the request solver is meant to only oversee a request. It means it has all the pieces of information and passes them to utilities in order to complete the request. It interacts with most of the other programs and may interact with the user.
+It's one of the most important parts of this system. Due to the nature of more verbose media like websites, it should support more than just "get this URI and show it in a view". For instance, it should be able to allow the user (or a view) to open the resource in the currently active window, or just retrieve files without opening them (in case of e.g. stylesheets). I believe that there is enough room in here to separate even more utilities. +
protocol demultiplexer +
This one is also a part of the "browse" as of now, just because at this stage it can be a simple switch case or even +non-existent, assuming I plan to support only one protocol (e.g. http). One could pass this responsibility to the file +system, if protocols were to be implemented at this level (the Hurd-ish way). +
protocol daemon +
Not really a daemon (but it can be one!). Retrieves and points to data needed by the request solver. +
opener/view demultiplexer +
Your usual xdg-open clone. A more verbose switch case that opens the resources in appropriate views. +
view/view engine +
Displays the retrieved resource to the user. It's aware of its content and may request secondary files through the request solver (again, e.g. a stylesheet or an image). It displays hyperlinks and redirects them to the request solver. It's almost completely agnostic to how they should be handled. It may suggest to the request solver that the link be opened in the current view, if the resource type is supported and the view is meant to handle this type of resource. +
+

Now then, the implementation currently has the request solver and the protocol demultiplexer in one utility called "browse". I see quite a lot of opportunities to split the request solver a little bit more, or at least move some of its tasks to already existing programs. Nonetheless, they're way more separated than in most modern browsers.

+demux, I really like this word +

The biggest pain in all of this is the HTML engine. The more verbose ones were never intended to be used like this. On the other hand, the limited one that I wrote just for this experiment is... well, way too limited. It allows me to browse simpler websites like my own, but has problems with those whose CSS is longer than the website content. Of course, I don't even mention modern web applications; obviously they won't work without JavaScript. +

Surprisingly, despite the enormity of problems mostly related to HTML, CSS or Javascript, I'm staying positive. It +works, it can be integrated in the environment and it's an interesting idea to explore. For some reason it feels like +I took xdg-open to extremes (that's why I keep mentioning it), but I think it's just because I am yet to +polish the concept. +

For now, the utilities are available publicly. You can use them to try out the idea. I've left there one simple example that uses dmenu for opening a URI either from a list of bookmarks or one entered by hand. Moving the base address and some MIME type handling to command-line options should give the utilities enough flexibility to use e.g. the opener to open local files as well. Then it can be used with lf or any file manager of your choice, and you'll have a single utility to handle all kinds of openings. +
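A sketch of such a dmenu helper could look like the following; the bookmarks path and the name of the request solver ("browse") are my assumptions here, not necessarily what the repository contains: +
+
+#!/bin/sh
+# Pick a URI from a bookmarks file (or type one in) and hand it over to the
+# request solver, assumed here to be installed as "browse".
+BOOKMARKS="${XDG_CONFIG_HOME:-$HOME/.config}/bookmarks"
+URI=$(dmenu -p "open:" <"$BOOKMARKS") || exit 1
+exec browse "$URI"
+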

I'll move now to other ideas that I left without any conclusion. However, I'm looking forward to seeing if this one +can bring more in the future and most certainly I'll return to it with full focus. + +

+ diff --git a/journey_home_application_deployment-1.png b/journey_home_application_deployment-1.png new file mode 100644 index 0000000..0d4f2ca Binary files /dev/null and b/journey_home_application_deployment-1.png differ diff --git a/journey_home_application_deployment.html b/journey_home_application_deployment.html new file mode 100644 index 0000000..d4b2e81 --- /dev/null +++ b/journey_home_application_deployment.html @@ -0,0 +1,95 @@ + + + + + + + + + +Journey /home - Application Deployment + + + +
+

Journey /home - Application Deployment

+

Published on 2020-05-29 01:27:00+02:00

+mountains and stuff +

File hierarchy in Linux is a mess. However, this time I won't discuss why it is so. Instead, I've mentioned it, so +that we don't feel bad after what we'll do in here. It's a mess, and it's our little mess that we can shape to our +needs. However we like. Especially, if we keep it consistent. +

I've been using various ways to put applications up and running on my server. I've let systemd handle init and service management for me for around three years now. As for files, I used different ways of structuring my public content that should be available via the HTTP or FTP server. It usually was an sftp jail somewhat like /{var/http,srv}/domain.com. +

Lately, I wanted to do something fresh, so I thought: "Let's move everything to /home!" I couldn't find any convincing reason against it, and there were a few nice points in favour of it. Now then, what does it look like? +

As usual, for each service or domain I create a new account. I have a skeleton for the home directory ready that sets it up to look similar to this:

+ +

It tries to look like it follows XDG Base +Directory Specification. Don't be fooled, though. It's close but the purposes are quite different (also +.ssh, grrr). This little structure allows me to assume that I have all needed directories already +in place, and my deployment script doesn't need to care about it. +

Speaking of deployment. Obviously, I automated it. Any binaries that are meant to be run go to .local/bin/, configuration files go to .config/application/, and cache and temporary files land in .cache/application/. Everything feels quite straightforward. The difference is in where the actual data goes. It's really up to you and how you configure the service. In case of HTTP I like to have a subdirectory called public/ which serves me as a root. For gitolite, I have the usual repositories subdirectory. For fossil, I have fossils, and so on. You get the idea. +
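The deployment script itself isn't reproduced here, but a minimal sketch of the idea could look like this; the application name, file names and remote host are made up for illustration: +
+
+#!/bin/sh
+# Hypothetical deployment sketch: copy a built binary and its configuration
+# into the service account's home, following the skeleton described above.
+set -e
+APP=myapp
+DEST=myapp@example.com
+scp "build/$APP" "$DEST:.local/bin/$APP"
+scp "conf/$APP.conf" "$DEST:.config/$APP/$APP.conf"
+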

Most of the time, I want to run some kind of application as a service. I use systemd's user services. I place unit files in .config/systemd/user/. It's not my personal preference; systemd expects them to be there. Once they are in place I enable and start them. To make them work properly as a service I enable lingering, so that the services are not bound to the presence of user sessions, and they act like we expect them to:

+
+# loginctl enable-linger username
+
+

My script handles deployment of the binary and associated unit file if needed. It's very convenient. Of course, +one could automate deployment to any file hierarchy, so what else do I get from this setup? +

First off, similarly to containers, the changes done by deployment don't propagate to the system. The application and the data associated with it are all bound to this single directory. It's not only that you might avoid a mess in the system; in case you want to get rid of the application, it's way easier. No need to keep track of your manual edits or files you added here and there. Delete the user, delete the directory, and it's clean. +

The deployment doesn't need elevated privileges. Once you have created the user and enabled lingering for it, there is no need for root anymore. One obstacle could be propagating the configuration files to nginx. I've solved it with a script that needs elevated privileges and can be used with sudo. To make it work I added the following to the global nginx config:

+
+http {
+	include /home/*/.config/nginx/*.conf;
+}
+
+

This is asking for trouble, so the script first runs nginx -t. If the configuration files are bad, it overwrites them with backed-up copies that are known to work. If there are none, it changes the names so that they won't match the include pattern. If the configuration files are all OK, then it reloads nginx and copies them as a backup to be used if the next deployment is unsuccessful. The users can run the script with: sudo nreload. +
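The script itself is short; its logic boils down to something like the following sketch, where the file layout and the backup naming are my assumptions: +
+
+#!/bin/sh
+# Hypothetical nreload: validate the per-user nginx configs, keep known-good
+# copies on success, restore or disable the configs on failure.
+CONF_DIR="/home/${SUDO_USER:?}/.config/nginx"
+if nginx -t; then
+	nginx -s reload
+	for f in "$CONF_DIR"/*.conf; do
+		[ -e "$f" ] || continue
+		cp "$f" "$f.ok"	# keep this version as the known-good copy
+	done
+else
+	for f in "$CONF_DIR"/*.conf; do
+		[ -e "$f" ] || continue
+		if [ -e "$f.ok" ]; then
+			cp "$f.ok" "$f"	# restore the last working copy
+		else
+			mv "$f" "$f.disabled"	# move it out of the include glob
+		fi
+	done
+fi
+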

It's kinda subjective, but for me it was easier to automate the processes for creating new users, deploying the applications, and taking those applications off the server. The file structure is trimmed compared to the usual mess with files all over the place. Don't get me wrong. It's not that /etc + /srv is highly complicated. It's just that I usually end up needing two or three different approaches to file hierarchy, and it becomes messy very soon. This way gives me a very pleasant experience when I need to quickly deploy something for a test and delete it soon after. I guess a container manager like Docker would do, but it feels like overkill for something that is dealt with using four 30-line shell scripts. +

All in all, it seems the points are: always automate your repeated activities; no matter where you put your stuff, try to keep it reasonably structured; and systemd has user services that can be used in various ways. I feel like I could do the same in /srv instead of /home. Does it really matter? This way I didn't need to modify adduser... +

+ diff --git a/markdown_is_bad_for_you-1.png b/markdown_is_bad_for_you-1.png new file mode 100644 index 0000000..4a2caa2 Binary files /dev/null and b/markdown_is_bad_for_you-1.png differ diff --git a/markdown_is_bad_for_you-2.png b/markdown_is_bad_for_you-2.png new file mode 100644 index 0000000..60f301d Binary files /dev/null and b/markdown_is_bad_for_you-2.png differ diff --git a/markdown_is_bad_for_you.html b/markdown_is_bad_for_you.html new file mode 100644 index 0000000..00e40bd --- /dev/null +++ b/markdown_is_bad_for_you.html @@ -0,0 +1,80 @@ + + + + + + + + + +Markdown Is Bad For You + + + +
+

Markdown Is Bad For You

+

Published on 2020-05-13 17:32:00+02:00 +

Markdown is a markup language. It wouldn't be misleading to say it's also a family of markup languages that are derived from the syntax created by John Gruber et alii. Gruber's creation was inspired by other markup languages but, as the author notes, it mostly follows the conventions used in plain-text e-mails. It is usually introduced as text-to-HTML formatting syntax, language with plain-text-formatting syntax or easy to read, write, and edit prose. (...) Markdown is a writing format. +

Now that we are on the same page with Markdown, let's make the case for the rant clear. We're considering the creation of either a static website or an application providing a dynamic set of pages. To be more exact, let's say we are preparing to publish this very article. Remember, we are keeping in mind just one principle: minimalism. I'll not define it, so I can have some more freedom in what I say. This also gives you more opportunities to complain about me. That makes us even. +

We have plenty of information and a goal: make a simple blog. Those of us who are best buddies with Markdown would be happy to just jump into it and get some static site generator or some dynamic blog engine up and running. Others, who are not yet that used to it, are most likely tempted by the convenient prospect offered by MD. And, of course, there is the last group, who read the title and are impatiently waiting for me to finally explain my point of view instead of continuing this foreplay. Ok, ok, ok. The first sin has already been written down. +

If you want to serve a website you need some kind of an HTTP server. Be it nginx, apache or something else. Surprisingly, that is all you need to serve static content. On the other hand, if you want to show Markdown that way, you first need to generate HTML out of it. It's only natural; this format has been created exactly for that. This means you will end up needing an entirely new piece of software: a Markdown-to-HTML generator. It goes the same way for the dynamic blog. The only difference is that instead of some standalone program you will need to integrate a generator with a parser, and possibly some additional HTML template engine to embed the results nicely. The usage of Markdown always comes with a cost in the form of an additional entity in the architecture or workflow. You can automate it, but it is always there, and should be acknowledged before you can decide to ignore it.

+One new format and one component needed in the flow +

The previous point implies an additional sin. Markdown not only requires you to add more stuff to your existing setup, but it also at some point stops being Markdown and starts being HTML. And like I've already said: it's only natural, because it was designed this particular way. Not only does it want to become HTML, it also allows the user to inline HTML. It means that the design is not minimal in its core. This is because the purpose was to create a syntax to simplify writing for the web. It never tried to replace anything, and that's probably the worst sin of all. Thanks to Markdown, you will always have HTML and something else. If you are going for minimalism you shouldn't want that additional thing at all. +

Markdown with its limited syntax encourages users to inline HTML if it's impossible for them to accomplish their goal in just MD. For that reason people who didn't want to use HTML started to extend the syntax. It resulted in the spawning of a lot of offshoot syntaxes. Discussing how a fragmented ecosystem with a whole lot of plugins for a single component is unhealthy in the long run is way out of the scope of this article. In short: you will not only need something to process the basic syntax, but you will also require a possibly great amount of plugins to handle the extended one. That makes even more components you need to integrate into your software. +

These points considered only the problem of increased resources, components or steps in processes or flows. Once you start using Markdown it may start tempting you to avoid using HTML at all. I've already mentioned the plugin madness it may lead to. But that's not all there is. Avoiding HTML in the context of the web is plain stupid. The reason is, HTML is the Web. Honestly, this one sentence could summarize all of the previous paragraphs. +

Sadly, HTML has its own problems. The whole family, together with XML, has been widely criticized for various reasons. There are very few people who would try to argue that they are minimal. I won't, because I think they are not. Adding one more layer of syntax on top of that won't solve the issue. It will only make it worse. HTML5 tries to accomplish commendable goals and one could use it to create rather minimal web pages. On how to write somewhat minimal HTML5, make it readable in plain text, properly mark content within it, extend it to your needs and break the specification in unimaginable ways... Let's leave it as a topic for another day. Or even more than one. +

In a very roundabout way we can take one more thing out of Markdown. Perhaps HTML shouldn't be the web? Why are we building everything on top of a single stack? It's convenient, yes. However, won't it crash if we try to reach too far? What if we try to break the concept of the browsers as they are now and make it more modular? Leave the HTTP, but allow more freedom in how the content is served to the user. +

All in all, I've fooled you from the very beginning. Markdown isn't actually bad for you. It really shines as a syntax to describe comments, short plain-text documents or messages like e-mails. As long as it is used in that role, it's nice and easy to use. It provides a great way for external users to post their content on your platform in a quite safe way with only minimal restrictions. However, the moment you use it as an HTML extension, or as an intermediate format to generate whole HTML pages, it crumbles. It starts to build up on top of your stack, throwing at it more and more inline HTML and layers upon layers of plugins. Keep it simple. +

+It may collapse any time now +
+ diff --git a/of_privacy_and_traffic_tracking-1.png b/of_privacy_and_traffic_tracking-1.png new file mode 100644 index 0000000..ef7ae6c Binary files /dev/null and b/of_privacy_and_traffic_tracking-1.png differ diff --git a/of_privacy_and_traffic_tracking.html b/of_privacy_and_traffic_tracking.html new file mode 100644 index 0000000..5589eaf --- /dev/null +++ b/of_privacy_and_traffic_tracking.html @@ -0,0 +1,42 @@ + + + + + + + + + +Of Privacy and Traffic Tracking + + + +
+

Of Privacy and Traffic Tracking

+

Published on 2020-07-11 21:11:00+02:00 +

Over the past weeks I wondered if anyone actually reads or visits this website. I kind of started worrying that +someone could want to leave some feedback in one way or another. I have plans to handle that, but I also have other +interests right now. I decided to quickly set up a method that will show me that there is no need to worry or hurry up. +

I deployed the very first version today. I think I spent more time deploying it to the server than I put into writing it (systemd had some life problems and I was extremely stubborn about preserving a 517-day-long uptime). Anyway, don't expect too much from it. +

The goals are quite clear: respect users' privacy and collect useful information. Filter the data as soon as possible to minimize what is stored. I'm not interested in some big data or hard-core traffic analysis across huge chunks of the Internet (sup, Google Analytics). I just want to know if there is someone who spent time reading what I wrote.

+magnifying glass +

Ok, so what data do I collect right now?

+ +

That's all. I don't collect any form of identification. Data that is stored is not even linked to the IP address that +sent it over. That's the point. +

In the future I would like to minimize data collection even further. I already mentioned early filtering, but there are also some other improvements I would like to make. The current approaches are quite naive. For example, the time a user spent looking at the page is calculated as the time from the load event to the beforeunload event. +

Source code is available via public git repository. +

+ diff --git a/organizing_your_lua_project-1.png b/organizing_your_lua_project-1.png new file mode 100644 index 0000000..aa71459 Binary files /dev/null and b/organizing_your_lua_project-1.png differ diff --git a/organizing_your_lua_project-2.png b/organizing_your_lua_project-2.png new file mode 100644 index 0000000..0917288 Binary files /dev/null and b/organizing_your_lua_project-2.png differ diff --git a/organizing_your_lua_project.html b/organizing_your_lua_project.html new file mode 100644 index 0000000..d6bae4a --- /dev/null +++ b/organizing_your_lua_project.html @@ -0,0 +1,239 @@ + + + + + + + + + +Organizing Your Lua Project + + + +
+

Organizing Your Lua Project

+

Published on 2021-01-07 15:45:00+01:00 +

From time to time I hear complaints about how Lua handles modules. Here and there I see, and sometimes even answer, questions regarding require and adjusting the paths in package to allow some desired behaviour, with the most prominent issue being relative imports that always work. +

Before we hop into the explanation of how to organize files in your Lua projects, let's talk about default importing +mechanism in Lua: require.

+ +lua hierarchy + +

How require handles paths

+

Both package and require are surprisingly interesting tools. At first glance they are +simple. When you look into them, they are still understandable while gaining some complexity that doesn't reach +unnecessary extremes. They are elegant. +

They use a mechanism to find desired files called path resolution or usually simply path. Its main component is a sequence of patterns that may become a pathname, e.g.: +

+/usr/lib/lua/?.lua;/usr/lib/lua/?/init.lua
+
+

What does it tell us? First off, ? is going to be replaced by the argument that was provided to require. All dots will be replaced by an appropriate path separator, so that a.b.c will become a/b/c on *nix systems. So, for a call like require "a.b.c", our path will look like this: +

+/usr/lib/lua/a/b/c.lua;/usr/lib/lua/a/b/c/init.lua
+
+

Now, each of these paths is tried and the first one that actually exists in the system will be used. If none of them match an existing file, the import fails. Simple as that. +

The path that is used in resolution is set in package.path. You can modify it in Lua, but it is intrusive and may depend on a single entry point. Generally, if you plan to release your project as a module for people to use, I encourage you to avoid modifying anything global. And that's a global. Anyway, package.path doesn't appear out of nowhere - it is populated by one of: +

    +
  1. Environmental variable LUA_PATH_x_x, where x_x is version such as 5_4 +
  2. Environmental variable LUA_PATH +
  3. Default as defined in luaconf.h +
+

Interestingly, if two separators ;; show up in the environmental variable path, they will be replaced by +the default path. Meaning /path/to/project/?.lua;; works as prepending your custom path to the default one. +

Of course, there is way more to it than just this, e.g. requiring modules written in C, searchers or preloads. However, in our case this knowledge will suffice. +

If you are curious how exactly path is loaded be sure to check out +setpath. + +

Endgame

+

To prepare for development, we need to know where we are heading. The first step is to consider the execution environment. Of course, this and packaging are journeys of their own, so let's just look at two common examples: an application that uses some framework that uses Lua (e.g. a game made in LÖVE) and a standalone module for others to use. +

In the first case, it's the duty of the framework to configure the path properly and inform you through the +documentation about it. Paths in LÖVE use their own file hierarchy that is managed by love.filesystem and +by default contains both the game's source (directory or the mounted .love archive) and the save directory. +This means that the structure in your source files is directly reflected in the calls to require, so that +require "module.submodule" will always try game/module/submodule.lua, no matter how you run +the game. This case usually doesn't involve any additional environment configuration for the development stage. +

In the second case, your project will end up in an already configured environment and will need to fit in. The installation of the package usually involves copying your files to a directory that is already included in the path, so that no further configuration is needed for execution (at least regarding the path). You can assume that a successful installation will make your modules available in the way you want them. +

This doesn't happen in the development stage, when you rarely install your package, and most certainly you don't install it each time you want to test it. This means that you need to adjust the path so that your modules appear in it as if they were installed in the system. The principle of minimizing intrusiveness remains, so the best option is to use the environment variables to prepare for development. If you run your application or any tool in such an environment, then Lua will have access to your modules no matter where it is run. Additionally, it will be consistent with the target environment and won't need any additional hacks. +

Development environment

+

All this talk comes down to: set LUA_PATH in your development environment so that it includes your project files even if they are not installed in the system. A simple approach is to source the following in each session: +

+export PROJECT=/path/to/project
+export LUA_PATH="$PROJECT/?.lua;$PROJECT/?/init.lua;;"
+
+

Note the double semicolon that will get replaced by the default path, so that other modules that are already installed are also available. +

Let's try it out: +

+$ source env.sh
+$ find .
+./env.sh
+./modulea/submodule.lua
+./modulea/init.lua
+./moduleb.lua
+$ cd modulea
+$ lua
+Lua 5.4.2
+> require "moduleb"
+table: 0x561e4b72fb20
+> require "modulea"
+table: 0x561e4b73aa40
+> require "modulea.submodule"
+table: 0x561e4b743030
+
+

As you can see, despite being in the subdirectory, you can still use modules with their fully qualified names, which will remain the same once you install the package. Note that you could require "init" or require "submodule" in this case, but I strongly recommend against it. Remain specific, follow the rules and pretend that you use an installed package from an unknown working directory. Don't depend on the current working directory, as it is not always the same. Using full names that consider the path setup guarantees results.

+ +a random whale + +

Organizing your files

+

Finally, this is what we've been waiting for. Assume you have a directory that is a parent of all of your project files. We'll call it the project root. Usually, this is also the root directory for your version control system, be it git or anything else, and for other tools such as build systems or even entire IDEs. +

Because it is such a central place to the project, I usually just go ahead and prepend it to LUA_PATH in +the very same way as in the section above: +

+export PROJECT=/path/to/project
+export LUA_PATH="$PROJECT/?.lua;$PROJECT/?/init.lua;;"
+
+

Just like previously, any Lua file that is a descendant of the root will be accessible to us through require. But what is that init.lua? +

It's there to create a way to improve the hierarchical structure of your project - to allow splitting bigger modules into smaller parts (or even submodules that could be included on their own), so that the module doesn't grow into a single million-lines-long file. In simpler words: you can create a directory named after the module, put an init.lua file there, and it will act just like a sole module.lua in the root. +

You could also create a directory named after the module and a module.lua file in the root at the same time, but this way you will have two entries per module in the root instead of just one. +

Additionally, you can then put any module-related files into that directory. You can also use init.lua as +a simple wrapper that calls require for each of its submodules and returns a table with them. +

Consider a verbose example: +

+$ find .
+./conf.lua
+./env.sh
+./main.lua
+./persistence/init.lua
+./persistence/tests.lua
+./version.lua
+./wave/init.lua
+./wave/sawtooth.lua
+./wave/sine.lua
+./wave/square.lua
+$ cat wave/init.lua
+-- This is a wrapper example.
+return {
+	sawtooth = require "wave.sawtooth",
+	sine = require "wave.sine",
+	square = require "wave.square",
+}
+$ cat persistence/init.lua
+-- This is a normal module example.
+return {}
+$ cat persistence/tests.lua
+-- This is a script that tests an example module.
+local p = require "persistence"
+assert(type(p) == "table")
+$ cat main.lua
+-- This is an example main of love application.
+local persistence = require "persistence"
+local wave = require "wave"
+$ cat version.lua
+-- This is an example module that acts as version string of the application.
+return "1.0.0"
+
+

Now, this is a mash-up of everything we've discussed. Despite pretending to be a LÖVE application, it has env.sh. Why? The reason is simple: the persistence and wave modules are not meant to be distributed alone, and they won't ever appear in the path of any environment other than LÖVE's. But LÖVE is not the only execution environment here: persistence/tests.lua is also meant to be executed, possibly alone, through the Lua interpreter. To allow that, env.sh is present and used. +

Let's have another example of a simple module meant for installation: +

+$ find .
+./env.sh
+./hello/Class.lua
+./hello/init.lua
+./hello/tests.lua
+./hello/version.lua
+./LICENSE
+./Makefile
+./README
+$ cat Makefile
+PREFIX?=/usr/local/lib/lua/5.4
+all:
+	@echo Nothing to be done
+install:
+	cp -r hello $(PREFIX)
+uninstall:
+	rm -fd $(PREFIX)/hello/* $(PREFIX)/hello
+
+

As you can see, the Makefile in this example has targets for installation and removal of the package. The structure again is simple: the root works as part of the resolution path, so our module is placed in its own directory named after it. +

The last example is a project of a single file module: +

+$ find .
+./env.sh
+./LICENSE
+./object.lua
+./README
+
+

Yes, it's that simple. +

Now, having env.sh in every single project might get bothersome, so I usually use a shell function for +managing them, similarly to what Python's venv does or LuaRocks' env. Speaking of, +LuaRocks is yet another interesting story to be told. + +
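That function is nothing fancy; a minimal sketch (the function name is made up), assuming the convention of keeping env.sh in the project root as in the examples above, could be: +
+
+# Add to your shell rc; sources the project's env.sh from the current directory.
+luaenv() {
+	if [ -f ./env.sh ]; then
+		. ./env.sh
+		echo "loaded $PWD/env.sh"
+	else
+		echo "no env.sh in $PWD" >&2
+		return 1
+	fi
+}
+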

Summary

+ + +

Alternatives

+

This is just one of the ways to handle structuring your Lua project. It's based on simple rules but has broad usage. +One tempting alternative is this little snippet: +

+local parents = (...):match "(.-)[^%.]+$"
+require(parents .. "sibling")
+
+

Another, already mentioned, alternative is adjusting package.path directly in Lua. However, I decided to skip it due to its intrusiveness. +

All in all, Lua is extremely customizable and adjustable. I would be surprised if these three were the only ways to organize projects in Lua. +

+ + diff --git a/plop.html b/plop.html new file mode 100644 index 0000000..c74df4a --- /dev/null +++ b/plop.html @@ -0,0 +1,44 @@ + + + + + + + + + + +Plop - Framework for Prototyping Protocols and Servers + + + +
+

plop

+ +

About

+

Plop is a framework for prototyping request-response protocols and servers. +

It was born in the heat of the moment, when I was angry at nginx for reasons I do not recall anymore. It started as a very basic HTTP/1.1 server, and quickly became a playground I used to test tools, approaches, and standards. After some time I decided to make a full-fledged piece of software out of it. This is where we are at now. +

Features

+ + +

Usage

+

plop is currently only available through its git repository. The only dependency is Lua 5.3. +

To build, install, and run plop follow these simple instructions: +

+$ git clone https://git.ignore.pl/plop
+$ cd plop
+$ make
+$ sudo make install
+$ plop
+
+

For more information consult plop(1) manual page, sources, or the output of plop -h. +

+ diff --git a/plop.png b/plop.png new file mode 100644 index 0000000..05b2301 Binary files /dev/null and b/plop.png differ diff --git a/plumbing_your_own_browser-1.png b/plumbing_your_own_browser-1.png new file mode 100644 index 0000000..bbfebec Binary files /dev/null and b/plumbing_your_own_browser-1.png differ diff --git a/plumbing_your_own_browser.html b/plumbing_your_own_browser.html new file mode 100644 index 0000000..4f9b999 --- /dev/null +++ b/plumbing_your_own_browser.html @@ -0,0 +1,99 @@ + + + + + + + + + +Plumbing Your Own Browser + + + +
+

Plumbing Your Own Browser

+

Published on 2020-08-01 21:38:00+02:00

+plumbing +

In the spirit of the previous post about web browsers, how about a little experiment? Let's write a simple tool that implements downloading, history management and displaying the content. This is intended as a trivial and fun experiment. +

Ideally, I think the architecture would divide into: protocol daemon, navigator, opener and view engines. However, even with this setup some of them would have wide responsibilities. I don't really like that, but I'll leave it for the future to deal with. Anyway, what do they do?

+
+
protocol daemon
Responsible for data acquisition and caching. For instance HTTP protocol daemon. +
navigator
The quickest way to explain it: the address bar. It handles history, probably sessions, windows, + initial requests to protocol daemon from the user. This one would need some attention to properly integrate it with + the environment and make sure that its responsibilities don't go too far. +
opener
Not really xdg-open or rifle, but something of this sort. Gets data marked for display from the + protocol server and acts as a demux for view engines. +
view engine
Your usual browser excluding things that already appeared earlier. It may also be something else, + like completely normal image viewer, hyperlinked markdown viewer or even less. Or more like sandboxed application + environment that is not a web application. +
+

Sounds like a complex system, but we can do it easily in a short shell script. I won't bother with view engines, as right now it's rather time consuming to get them to work, especially since browsers weren't written with this use case in mind. Even the minimal ones won't do. Generally, they would need to communicate with the protocol daemon to retrieve secondary data (like stylesheets or images) and communicate with the navigator when the user clicks some kind of link. +

Anyway, let's start with the protocol daemon! Our target is a web browser, so we need something to handle HTTP for us. What else could we use if not curl? Frankly speaking, just curl could be sufficient to view things:

+
+$ curl -sL https://ignore.pl/plumbing_your_own_browser.html
+...
+...
+...
+
+

Yeah, if you use st as a terminal emulator like I do, then you need to add | less at the end, so that you can read it. Honestly, with documents that are written in a way that allows people to read them as plain text, that's enough (posts on this website can be read in plain text). +

However, although it's tempting not to, I'll do more than that. Now that we have a protocol daemon that is not a daemon, the next one is the opener. Why not the navigator? For now the interactive shell will be the navigator. You'll see how. +

It's possible that you already have something that could act as an opener (like rifle from ranger file manager). +There are plenty of similar programs, including xdg-open. I believe that they could be configured to work nicely in this +setup, but let's write our own:

+
+#!/bin/sh
+TMP=$(mktemp -p /dev/shm) &&
+	{ TYPE=$(curl -sLw "%{content_type}\n" "$@" -o "$TMP") &&
+		case "$TYPE" in
+			application/pdf) zathura "$TMP";;
+			image/*) sxiv "$TMP";;
+			text/*) less "$TMP";;
+			*) hexdump "$TMP";;
+		esac }
+rm -f "$TMP"
+
+

That's a lot of things to explain! The first two lines, up to case "$TYPE" in, are actually the protocol daemon. The $@ is what comes from the navigator; in our case, these are the arguments from the shell that ran our command. Next up, the case statement is the opener. Based on the output of curl's write-out, the script selects a program to open the temporary file fetched from the web. After that, the file is removed; in other words, caching is not supported yet. +

Surprisingly, that's it, one hell of a minimal browser. It works nicely with PDF files, images and text formats that are not extremely bloated. Possibly, with some tinkering around xdg-open and X default applications, some hyperlinks between the formats could be made to work (e.g. a PDF linking to an external image). +

Now, I could go further and suggest an option like this:

+
+application/lua) lua_gui_sandbox "$TMP";;
+
+

I find it interesting and worth looking into. I'll leave it as an open thing to try out. +

There are some more things to consider. For instance, the views should know the base directory the file comes from, as some hyperlinks are relative. In other words, programs used as views should allow stating the base of the address in some way:

+
+{ curl -sLw "%{content_type}\n%{url_effective}\n" "$@" -o "$TMP" | {
+	read TYPE
+	read URL
+	BASE_URL=$(strip_filename_from_url "$URL") } &&
+		case "$TYPE" in
+			text/html) html_view --base "$BASE_URL" "$TMP";;
+			text/markdown) markdown --base "$BASE_URL" "$TMP";;
+			# ...
+		esac }
+
+

This way, the markdown viewer would know that if the user clicks some hyperlink with a relative path, then it should append the base path to it. It could also provide information that matters in e.g. CORS. +
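The strip_filename_from_url placeholder used in the script above could be as trivial as a parameter expansion; a rough sketch that ignores query strings and URLs without a path: +
+
+strip_filename_from_url() {
+	case "$1" in
+		*/) printf '%s\n' "$1";;	# already ends with a slash
+		*) printf '%s\n' "${1%/*}/";;	# drop the last path segment
+	esac
+}
+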

For now, that's it. The ideas are still unrefined, but at least they are moving somewhere. Hopefully, I will get myself to write something that could act as a view and respect this concept. My priority should be an HTML view, but I feel like starting with simplified Markdown (one without HTML). +

+ diff --git a/stupid_templating_with_shell_cat_and_envsubst-1.png b/stupid_templating_with_shell_cat_and_envsubst-1.png new file mode 100644 index 0000000..d390eac Binary files /dev/null and b/stupid_templating_with_shell_cat_and_envsubst-1.png differ diff --git a/stupid_templating_with_shell_cat_and_envsubst.html b/stupid_templating_with_shell_cat_and_envsubst.html new file mode 100644 index 0000000..7a84554 --- /dev/null +++ b/stupid_templating_with_shell_cat_and_envsubst.html @@ -0,0 +1,72 @@ + + + + + + + + + +Stupid Templating With Shell, cat and envsubst + + + +
+

Stupid Templating With Shell, cat and envsubst

+

Published on 2020-07-14 20:26:00+02:00 +

Now something trivial and fun: creating templates of documents or configurations in shell. There are two reasons why I considered doing that instead of using some more verbose utilities. First off, availability - I have a POSIX compliant shell pretty much everywhere I go. It's a big part of my usual environment and because of that I'm used to it. That's the second reason - the frequency I use it with. It's good to mention that the shell is itself one of the more verbose utilities out there. +

Let's start with plain shell. I actually use this method to serve the blog posts via fcgiwrap. I use it with a single file, but it can accept external files. However, those files won't have their content expanded in any way; if you keep everything in one script file, then it's possible. Basically, I'm talking about cat:

+
+#!/bin/sh
+cat /dev/fd/3 "$@" /dev/fd/4 3<<BEFORE 4<<AFTER
+<!doctype html>
+<html lang="en">
+BEFORE
+<script src=""></script>
+AFTER
+
+actual cat +

What you see here is a combination of heredoc and plain redirection. It's done like this to avoid calling cat more than once. The script simply concatenates the BEFORE heredoc with any arguments passed to the script, and finally with the AFTER heredoc. The output is similar to what you see in the source of this page. The template is trimmed for the sake of the example. +

Now, now. Heredoc can easily expand variables, but if we were to cat before.inl $@ after.inl it wouldn't work. For that I use envsubst. Consider a file called template.conf: +

+server {
+	listen 80;
+	server_name $DOMAIN$ALIASES;
+	root /srv/http/$DOMAIN/public;
+}
+
+

It could be wrapped in cat and a heredoc, but for me that's not desired. Let's say I want my configuration templates to be exactly that, not executable scripts. There are three different solutions that I've heard of: eval, sed and envsubst. Eval can lead to dangerous situations and minor problems with whitespace. Sed is, I believe, the most typical and straightforward solution. As for the last one, it goes like this:

+
+$ export DOMAIN=example.tld
+$ export ALIASES=' www.example.tld'
+$ envsubst '$DOMAIN$ALIASES' <template.conf
+server {
+	listen 80;
+	server_name example.tld www.example.tld;
+	root /srv/http/example.tld/public;
+}
+
+

First, you set the variables (if they are not yet there). Then you call envsubst with a single argument and redirect the template file into it. This argument is called SHELL-FORMAT. It restricts the variables that will be substituted in the input. It's completely optional, but if you have some $server_name you don't want to substitute, then it's kind of useful. It actually doesn't have any format, you just need to reference the variables in it: '$DOMAIN$ALIASES', '$DOMAIN $ALIASES', and '$DOMAIN,$ALIASES' all work the same. +

+ diff --git a/style.css b/style.css new file mode 100644 index 0000000..53bd3a0 --- /dev/null +++ b/style.css @@ -0,0 +1,68 @@ +body { + max-width: 43em; + margin: 1em auto; + padding: 0 1em 22vh 1em; +} + +p, dd, pre { + line-height: 129%; + text-align: justify; + margin-bottom: 1em; +} + +h1, h2, h3, h4, h5, h6 { + color: #3482a5; + margin-bottom: 0.2em; +} + +.subtitle { + color: #999; + margin-top: 0.2em; + font-size: 90%; + text-align: center; +} + +pre { + background-color: #f0f0f0; + padding: 1em; + overflow-x: auto; +} + +code { + font-weight: bold; +} + +img { + max-width: 100%; +} + +article > img { + display: block; + margin: 1em auto; +} + +article > header > *, article > h1 { + text-align: center; +} + +a:link { + color: #0e42ef; +} + +a:link:hover { + background: #0e42ef36; +} + +a:visited { + color: #6f0eef; +} + +a:visited:hover { + background: #6f0eef36; +} + +a:active, +a:active:hover { + color: #e0083b; + background: #e0083b36; +} diff --git a/the_gentlest_introduction_to_building_with_makefiles-1.png b/the_gentlest_introduction_to_building_with_makefiles-1.png new file mode 100644 index 0000000..768e52e Binary files /dev/null and b/the_gentlest_introduction_to_building_with_makefiles-1.png differ diff --git a/the_gentlest_introduction_to_building_with_makefiles-2.png b/the_gentlest_introduction_to_building_with_makefiles-2.png new file mode 100644 index 0000000..79250c7 Binary files /dev/null and b/the_gentlest_introduction_to_building_with_makefiles-2.png differ diff --git a/the_gentlest_introduction_to_building_with_makefiles-3.png b/the_gentlest_introduction_to_building_with_makefiles-3.png new file mode 100644 index 0000000..0c1afb9 Binary files /dev/null and b/the_gentlest_introduction_to_building_with_makefiles-3.png differ diff --git a/the_gentlest_introduction_to_building_with_makefiles.html b/the_gentlest_introduction_to_building_with_makefiles.html new file mode 100644 index 0000000..f4b2cac --- /dev/null +++ b/the_gentlest_introduction_to_building_with_makefiles.html @@ -0,0 +1,239 @@ + + + + + + + + + +The Gentlest Introduction to Building With Makefiles + + + +
+

The Gentlest Introduction to Building With Makefiles

+

Published on 2020-05-14 18:44:00+02:00 +

If you are here, you are most likely in need to build a C or C++ program. Chances are you were not even looking for a tutorial about GNU make or Makefiles in general. Chances are that you need to get your assignment done by yesterday, or you want to refresh your memory from back in the day when you used C for the last time. No matter your background, I'll try to walk you through the process of building your C or C++ program using the make command. +

Sadly, the tutorial will explain things, so that you have an overview after reading it. It won't go too deep. Anyway, if you are only interested in an example to copy, there is one below. +

If you have no idea why using a build system is nice, there are plenty of reasons. I will give you two. They automate the building process, so that you don't have to type the same things all over again, and you don't need to remember your configuration at all times. If used consistently, they try to rebuild only the parts of your project that were changed. It can affect the build time greatly. +

Building a single file project

+

You have just finished writing your first implementation of Hello, world!, you have a terminal open or some kind of prompt up and running, and now you would like to build the program and execute it. You've probably seen it somewhere, but let me remind you how to do it by hand using gcc:

+
+$ ls
+hello.c
+$ gcc hello.c -o hello
+$ ./hello
+Hello, world!
+
+

Nice! But writing gcc hello.c -o hello all the time when you want to rebuild the program sounds +bothersome even if you consider using command history. If you were to extend the program with libraries or additional +files it sounds even more tiresome. +

Let's put make to use! All you need to do is replace the gcc hello.c -o part with make, so that you have:

+
+$ ls
+hello.c
+$ make hello
+cc hello.c -o hello
+$ ls
+hello   hello.c
+$ ./hello
+Hello, world!
+
+

You probably noticed that make shamelessly prints out the command it used to build your program. How did it know? Make is a master of default variables, implicit rules, deduction, and hiding its secrets from the curious eyes of those who seek knowledge. Actually, no, the documentation is available to anyone in various forms. We won't discuss it in detail, that wouldn't be gentle, so assume for now that make will know how to compile and link your C or C++ program. Rules that describe how make does that are called implicit rules. We'll use and affect them extensively.

+colorful toy blocks +

Using libraries with implicit rules

+

Make and makefiles heavily rely on your environment. If you don't know what that is, for simplicity let's say that the environment is a set of variables associated with your current shell/terminal/prompt session. Make is so greedy that it takes all of your environment variables and copies them as its own. The implicit rules may use those copied variables, and they do exactly that. Those variables are usually called implicit variables. +

We can take advantage of that. Let's say we are building a game with SDL2. SDL2 requires an additional include directory, a flag, and a library in order to build against it. First, we'll set the relevant variables in our environment (via export VARIABLE=value), and then we'll build the program:

+
+$ ls
+hello-sdl.c
+$ export CFLAGS='-D_REENTRANT -I/usr/include/SDL2'
+$ export LDLIBS='-lSDL2'
+$ make hello-sdl
+gcc -D_REENTRANT -I/usr/include/SDL2  hello-sdl.c  -lSDL2  -o hello-sdl
+$ ./hello-sdl
+
+

The values I've used are specific to SDL2; don't mind them. What interests us in this example are the names of the variables: CFLAGS and LDLIBS. The first one is a set of parameters that describe how our code should be handled during compilation. CFLAGS is for the C language; for C++ programs there is an equivalent variable called CXXFLAGS. The second variable, LDLIBS, may contain a list of libraries that the linker should link into our program. In the example above there is no clear separation between compilation and linking, and thus make copies both variables into a single command. Luckily, it makes no difference to us, especially when the outcome is satisfying.
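
As a side note, if your system ships SDL2's sdl2-config helper, you don't even have to type those values by hand; it can print the appropriate flags for your installation (the exact output may differ on your machine):

+
+$ export CFLAGS="$(sdl2-config --cflags)"
+$ export LDLIBS="$(sdl2-config --libs)"
+$ make hello-sdl
+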

First Makefile

+

Obviously, we would need to repeat those exports each time we start a new session. This would bring us back to the level of repeating the whole gcc call over and over. We could put them in some kind of file, couldn't we? Luckily, make predicted that and can read the contents of so-called makefiles. Just put a file called Makefile in the project directory and insert the variables there:

+
+CFLAGS=-D_REENTRANT -I/usr/include/SDL2
+LDLIBS=-lSDL2
+
+

Now, if you run make, it will read the Makefile and use the variables that are defined in it:

+
+$ ls
+hello-sdl.c   Makefile
+$ make hello-sdl
+gcc -D_REENTRANT -I/usr/include/SDL2  hello-sdl.c  -lSDL2  -o hello-sdl
+
+

Less writing is always cool. How about getting rid of the hello-sdl from every call to make? That's also possible. hello-sdl is a target. Targets are associated with rules, any number of them, be they implicit or user-defined. If the user doesn't provide a target name as a command-line argument, make uses the first target that is specified in the makefile. We can create targets by writing rules. The syntax for doing so is rather straightforward and consists of: the names of the targets, the prerequisites needed, and a recipe, which may be a single-line command or may span the eternity. Knowing all of that, we can write a very peculiar rule. Everything will be handled by an implicit rule, and we'll only give make a hint about which thing we want it to build by default:

+
+CFLAGS=-D_REENTRANT -I/usr/include/SDL2
+LDLIBS=-lSDL2
+hello-sdl:
+
+

Surprisingly, that's enough. hello-sdl is the name of our target. It's the first target that appears in this makefile, therefore it will be the default one. The : (colon) is the required separator between the target and the prerequisites list. We didn't add any prerequisites, as the sole dependency on the hello-sdl.c file is acknowledged thanks to the implicit rule. And because we didn't write any recipe, the recipe from the implicit rule is used. When we use it, it looks like this:

+
+$ ls
+hello-sdl.c   Makefile
+$ make
+gcc -D_REENTRANT -I/usr/include/SDL2  hello-sdl.c  -lSDL2  -o hello-sdl
+
+
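
By the way, when you do start writing rules by hand, they all follow the same general shape; the only trap is that every recipe line must begin with a tab character, not spaces:

+
+targets: prerequisites
+	recipe command
+	another recipe command
+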

Adding more files

+more files +

In the long run, it would be more useful to have more than one file in a project. Make also predicted that and allows users to build programs from multiple sources. Amazing, isn't it? Now is the moment we finally split up compilation and linking in a visible manner. Let's say we have a project with three files: hello.c, sum.h and sum.c. Their contents are, respectively:

+
+#include <stdio.h>
+#include "sum.h"
+int main(int argc, char * argv[]) {
+	printf("2 + 3 = %d", sum(2, 3));
+}
+
+
+#pragma once
+int sum(int a, int b);
+
+
+int sum(int a, int b) {
+	return a + b;
+}
+
+

The structure of this project is easy to see. hello.c depends directly on sum.h due to the include, and it requires the sum function to be compiled and available when linking the final executable. The first dependency is so stupidly easy to write that you might actually be surprised by it: you just need to add the sum.h file to the prerequisites in the rule description. The other one is slightly more interesting. We could just add sum.c to the prerequisites, but we would die a horrible death after a while if we did that. Technically, it's not even the thing we're trying to accomplish, so don't do that.

Instead, let's use .o files, which are the products of compiling a single source file. We can link them together with libraries to form an executable. We are finally clearly dividing our building process into a compilation stage and a linking stage. Let's introduce two such files: hello.o and sum.o. They are built from their respective sources. This means we now have three files with compiled or linked code: hello, hello.o and sum.o. The latter doesn't depend on anything, so there is no need for us to write anything about it. hello.o depends on sum.h (again, due to the already mentioned include). Despite the fact that we call the sum() function in it, it doesn't depend on sum.o. Why? Because it is just an intermediate file. It never executes anything. On the other hand, hello executes it, so it needs all of the intermediate .o files in its prerequisites list.

All in all, the Makefile will look like this:

+
+hello: hello.o sum.o
+hello.o: sum.h
+
+

When we use it:

+
+$ make
+cc -c -o hello.o hello.c
+cc -c -o sum.o sum.c
+cc hello.o sum.o -o hello
+$ ./hello
+2 + 3 = 5
+
+
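
This is also where the promise of rebuilding only the changed parts pays off. If you now edit sum.c alone, make recompiles just that one file and relinks the executable, leaving hello.o untouched. A run could look like this:

+
+$ touch sum.c
+$ make
+cc -c -o sum.o sum.c
+cc hello.o sum.o -o hello
+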

Surprisingly, that's all you need to know. With that, you can build pretty much everything. With time, and when you gain some additional knowledge, you may want to write your makefiles more explicitly: things like CC=gcc to make sure the correct compiler is used, your own recipes for generating headers, or targets that don't generate any files but instead install the software or clean up the directory. Targets not associated with files are called .PHONY targets, and sooner or later you will encounter them. Actually...
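
As a small illustration, a slightly more explicit variant of the makefile above could pin those things down; the compiler name and flags here are only an example:

+
+CC=gcc
+CFLAGS=-Wall -Wextra -O2
+
+hello: hello.o sum.o
+hello.o: sum.h
+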

Clean up your project directory with make

+

For some reason, you may want to remove all built executables and intermediate files, or any other garbage files that your workflow involves. In the previous part, I already noted that you can accomplish that using .PHONY targets. Such a cleaning target is quite common and is usually called clean. Consider the following:

+
+hello: hello.o sum.o
+hello.o: sum.h
+
+clean:
+	$(RM) *.o hello
+
+.PHONY: clean
+
+

As it's not the default target, you must invoke it by name:

+
+$ make clean
+rm -f *.o hello
+
+

The indented line is called the recipe. It describes what the rule is supposed to do. Only one recipe per target is used; make discards previous recipes for a target if a new one is defined, so only the bottom-most one is effective. $(RM) is a default variable that is expected to hold a command that can safely remove files no longer needed by the project. You've probably also noticed that .PHONY appears as a target. We add clean to its prerequisites list to let make know that clean is not expected to create a file called clean.
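
To see why that declaration matters, imagine a file named clean appearing in the project directory. Without .PHONY, make would consider the target up to date and skip the recipe entirely; the exact message may differ between make versions:

+
+$ touch clean
+$ make clean
+make: 'clean' is up to date.
+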

+sweeping dust +

Example makefile for a C++ project

+

The following makefile is used to build a simple C++ pager, a program for opening and scrolling through a file in a command-line interface. Please note that, by default, cc is used as the linker. This means that, if we are building a C++ project, the standard C++ library will be missing. We can avoid that either by writing our own linking recipe or by adding -lstdc++ to the libraries manually. The latter approach is used in the example.

+
+CXXFLAGS=-std=c++17 -Wall -Wextra -Werror -O2
+LDLIBS=-lstdc++ -lncurses++
+
+pager: ansi.o content.o pager.o
+pager.o: ansi.h content.h
+
+clean:
+	$(RM) pager *.o
+
+.PHONY: clean
+
+
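
If you ever prefer the former approach instead, a hand-written linking recipe could look roughly like this; $(CXX) is the default variable for the C++ compiler, $^ expands to all prerequisites and $@ to the target name, and with such a recipe the -lstdc++ entry in LDLIBS would no longer be needed:

+
+pager: ansi.o content.o pager.o
+	$(CXX) $(LDFLAGS) $^ $(LDLIBS) -o $@
+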

What's next

+

Obviously, that's not everything there is to make. In my opinion, though, this is all you need for regular usage in small to medium projects. From this point you can extend your knowledge. I would suggest learning more about variables and built-in functions. They will help you create more extensible makefiles and write less in general. In case you end up needing to write a proper recipe and a more complicated rule, head to the sections on writing recipes or writing rules. The automatic variables are an amazingly useful tool when writing your own rules. Actually, these three things are usually the first to be mentioned by other tutorials about make. However, with the approach presented here, you should be able to avoid them for quite a long time. Be wary, though: don't be ignorant. You've been shown a basic usage of make that depends heavily on implicit rules and hidden mechanics. You know that they exist, and now it's your turn to go out there, use them, and slowly learn how they really work.

+
diff --git a/web_browsers_are_no_more-1.png b/web_browsers_are_no_more-1.png
new file mode 100644
index 0000000..6907535
Binary files /dev/null and b/web_browsers_are_no_more-1.png differ
diff --git a/web_browsers_are_no_more-2.png b/web_browsers_are_no_more-2.png
new file mode 100644
index 0000000..0c027ed
Binary files /dev/null and b/web_browsers_are_no_more-2.png differ
diff --git a/web_browsers_are_no_more.html b/web_browsers_are_no_more.html
new file mode 100644
index 0000000..c8e63c7
--- /dev/null
+++ b/web_browsers_are_no_more.html
@@ -0,0 +1,103 @@
+
+
+
+
+
+
+
+
+
+Web Browsers Are No More
+
+
+
+
+

Web Browsers Are No More

+

Published on 2020-07-28 19:39:00+02:00, last updated on 2021-07-25 12:56:00+02:00 + +

Browsers, we all know what those are, right? We use them every day, and it wouldn't be an exaggeration to say that they are one of the most used pieces of software all over the world. They have a tremendous impact not only on the industry, but also on the everyday life of regular people. We sure do know what they are, right?

By the power of Hinchliffe's Rule I say no (does it still apply if the question wasn't in the title?). We don't know what web browsers are. Rather, the name has tricked us. It no longer represents what those programs are. Before we go into details, let's talk about how things used to be.

It all started with the WorldWideWeb, followed by NCSA Mosaic, which in turn spawned Netscape Navigator. To wage a war against it, Internet Explorer was created. In the late '90s and early '00s all major players (with the exception of Chrome) made their appearances. By the time Chrome finally arrived, in 2008, the term "web browser" was widely understood and had established its meaning. The way we understand browsers comes from somewhere in that period.

Mozilla tells us that a web browser takes you anywhere on the internet, letting you see text, images and video from anywhere in the world. That's vague as hell, but Wikipedia isn't really any better with: a browser is a software program for accessing information on the World Wide Web. It then describes a simple use case: a user requests a page from a website, the browser retrieves the content from a web server and then displays the page on the user's device (quote slightly adjusted in size).

These descriptions should resonate with most of us when we think "web browser". At least to some degree.

If I were to describe it in my own words, I would probably say something of this sort: a program that downloads, prepares and displays the content of a website to the user. It may support stylesheets, history, caching and a scripting engine for dynamic content.

It's still vague, but it gives some more hints regarding the functionality. In all this, the key to the meaning is hidden inside the word website. Generally speaking, a website provides webpages, which are hypertext documents, where hypertext usually means HTML. It's fair to simplify it into: a website is a group of HTML documents with optional images or other files, all served via HTTP. We rarely expect a browser to support anything beyond HTML, images, video, plain text or maybe PDF in terms of formats, and HTTP in terms of protocols. Sometimes they implement things like FTP or maybe torrents, but that's not really their main purpose.

Browsers aren't really browsers of the entire web. They can just show you a selected part of it, and that's OK. And I think that's part of the reason it all went wrong. People didn't think it was OK, but it was. And then they wanted even more.

+ +drawing of surprised birds + +

Thus, functionalities were migrated to browsers. The number of use cases steadily rose. To be honest, it's very hard to draw a line between things that should be in a web browser and things that should not. It's slightly clearer when there is a protocol that supports the same functionality, as in the case of, e.g., e-mail reading and sending. Take git as an example. Is cgit a bad idea? I wouldn't call it "bad" straight away. I'm not sure I would call it "bad" even after months of contemplating it.

The second part of the reason for the downfall of the browsers is Javascript. Dynamic content was taken to the extreme. It was no longer there to support the user and extend the content; it started to become the main component of this new thing served over the web. Now we know what this new thing is called: the Web Application.

Coming back to my description of a web browser, I should probably reword it to: a program that downloads, caches and runs web applications for the user, or prepares and displays a webpage.

In other words, nowadays the main purpose of browsers is to run web applications. It doesn't sound very browser-ish now, does it?

Please, don't get me wrong. There's nothing ultimately evil about that. It's only natural that over the years the purpose and expectations have changed. I think what's important is acknowledging this change and being aware of the new meaning. I believe most people who work in web dev are aware of these changes, although I have never heard or seen anyone explain clearly where we stand now.

+ +drawing of a fox + +

Firefox, Chrome, Edge and the others are platforms for seamlessly acquiring and running applications on the user's computer. The applications are somewhat mixed in with webpages, but they are normal computer programs nonetheless.

In the end, I feel like I have stated the obvious. Still, what can we learn from it? I think these shower-thoughts can help us see our needs better. What we have is a platform that can seamlessly launch cross-platform programs in a sandboxed environment. Additionally, these programs and their views (sometimes with selected parameters) can be expressed as globally unique identifiers in the form of URLs.

The problem is that this can easily lead us into very unhealthy situations. The overall dependency on an Internet connection for running local programs will only increase. This, in turn, can make things like "Application as a Service" more and more prevalent. I don't necessarily think they are bad, but they may become bad for the user, depending on how they are implemented and sold.

Trends like Electron partially fight this. On the other hand, they create other problems, including astoundingly enormous duplication: each Electron application usually ships its own web engine. The good thing is that the user at least partially owns the software. The bad thing is that, generally speaking, it wasn't intentional. The fun thing is that these somewhat sandboxed applications are now distributed through sandboxed platforms like snap. I wonder how soon we will get yet another layer of sandboxing there.

Anyway, don't be angry, be thankful. We know where we stand and what we have in our hands. Let's face it: solutions like browser applications are extremely convenient. Let's create a way to handle them and integrate them into the currently existing environment in a healthy way, or create something entirely new that will make us all hide in shame because we didn't think of it earlier. Myself, I want to explore breaking the monolithic browsers down into smaller pieces.

+
-- cgit v1.1