+++
title = 'My Setup - p1'
subtitle = 'Containers!'
summary = "I go over the things I've learned about docker, and how I've used it to host my website."
date = 2024-10-25T23:11:17+03:00
draft = false
+++

# My Setup

In this 'series' I will be walking you through how I host everything on this server. On top of [my blog](https://emin.software), I'm currently running a [gogs instance](https://git.emin.software).

When I first created this website, I just had my blog. I generated it using [hugo](https://gohugo.io): a static site generator. Hugo allowed me to focus on writing whatever I wanted in Markdown format; it would take care of converting my writing into HTML and CSS.

I had a small issue with how I wrote my code and deployed it, though: whenever I made a small change to the page, I had to manually rebuild it, then upload the updated version to my server and put it in the web directory. This is a cumbersome process. The whole point of using hugo is to *focus on the writing*, so having to zip and reupload for every typo is... not great. I wanted to be able to do a simple `git push`, and not worry about the rest.

The "manual" approach also depends on me having already installed all the necessary software. If you have a dedicated server that you're running yourself, that's probably okay — you only have to set it up once — but I'm running this on a VPS that I'm not sure I'll keep forever. The ability to reproduce this exact setup within minutes actually matters.

After reading a bit on this topic, I decided I would use docker. Podman would work just as nicely (any containerization software would work, really), but I decided on docker because it's been the standard for a while now.

## Motivation

Basically, I'm already running a web server. Why shouldn't I also host several other services for friends and family while I'm at it? Why shouldn't I make the entire setup reproducible?
Here are some of the services I wanted to self-host:

- Web server: obviously, who doesn't want a website?
- Some git server: having my own place to show off all the things I've done is certainly really cool. For this, something like [Gitea](https://about.gitea.com/) would normally be great. I went with [Gogs](https://gogs.io/) instead, because it is far more lightweight.
- Wireguard: a free VPN along with the website? Sign me up.
- CI/CD: automatic testing and releases of my software is cool, and also incredibly useful.

Of course, there are always more things I could be self-hosting. So it makes sense to automate the setup, and that's where docker comes in.

## Basics of docker

Before we can get to the exciting stuff, we need to go over what docker is, and how to use it. Essentially, docker is a container engine: it lets you build and run applications in a containerized environment. Containers are useful because they provide security, easy setup and, most importantly, reproducibility. I'm not going to spend any more time explaining what containers are and why they're good; that's been done to death already. Right now, what matters is the actual setup, so let's get on with it.

(A quick aside: if you ever decide to switch to Podman, most of these commands are unchanged, making it a near drop-in replacement for docker. Some things like network setups tend to be a little different, but that won't matter too much right now.)

In case you're unfamiliar with docker, here are some basic commands (run these either as root, or as a user in the `docker` group):

```sh
# Search for container images (on docker.io unless you configure otherwise)
$ docker search <term>

# Download (pull) an image from a remote repo
$ docker pull <image>

# List the images you have pulled
$ docker images

# Run a container
$ docker run <image>

# Run a container, but with a LOT of flags. These are the most useful ones:
#   -i   interactive, so you can e.g. run a shell in the container
#   -t   allocates a tty; useful with -i so that shell completion etc. can work
#   -d   opposite of -i: detach and run in the background
#   -p <host_port>:<container_port>   port forwarding, for when you need a server
#   -v <host_dir>:<container_dir>     give the container access to some directory
$ docker run -it <image> sh   # want a shell?

# List running containers. Add -a to list ALL containers, running or stopped.
$ docker ps

# Stop a running container.
$ docker stop <container>

# Stopped containers don't automatically get removed. This command removes one.
$ docker rm <container>
```

## Compose is nice.

Docker compose is a nice way to essentially "group together" some containers, and ship them in an easy way. Usually, on a server, each application *isn't* totally separate from the others. For my own use case, I want my git server (e.g. gogs) to automatically build and update my website whenever I push to its git repository. That means my git server and web server can't be *totally* separate; there's some amount of relation.

At the same time... I don't really want to set up both containers, then their volumes, their ports etc. by hand. Sure, I could stick it all in a shell script, but that's hardly elegant. Docker compose helps with this: you can create a `compose.yaml` file, and define containers, ports, volumes and secrets all inside this file. Then, when you run `docker compose up`, this configuration is read, and all of it is processed as you would want it to be.
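To make that concrete, here is a minimal sketch of what such a `compose.yaml` might look like for a web server plus a git server. The image names, ports and volume paths are illustrative assumptions, not my actual configuration:

```yaml
services:
  web:
    image: nginx:alpine                      # serves the static files hugo generates
    ports:
      - "80:80"                              # host:container port mapping
    volumes:
      - ./public:/usr/share/nginx/html:ro    # hugo's output directory, mounted read-only

  git:
    image: gogs/gogs                         # the gogs image from Docker Hub
    ports:
      - "3000:3000"                          # gogs web UI
      - "2222:22"                            # ssh, remapped so it doesn't clash with the host's sshd
    volumes:
      - ./gogs-data:/data                    # persistent repositories and config
```

With a file like this in place, `docker compose up -d` brings everything up in the background, and `docker compose down` stops and removes the containers again — no hand-typed `docker run` incantations required.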