My personal infrastructure setup


I have some principles for setting up my development environment, which you can find here. In this post I will share how my production environment is set up.


Since I'm a one-man team, I have some principles for setting up my production environment. I will share the details here.

Never ship source code

There is no source code at all on my production server; everything is deployed by pulling Docker images from a private registry (spoiler: GitLab's).

Since most of my applications are built on Node.js, node_modules is always there, even if I pass --production when installing the dependencies.

My applications need their runtime dependencies, so the source code and its dependencies used to always sit there in my working directory. I use a cheap server, so running something like npm run build was an expensive task: my CPU screamed and sometimes the machine froze. I needed a better solution (other than just increasing my swapfile).

So I moved all of that to the CI server and run npm run build there. Obviously I can't ship node_modules as an artifact, and shipping only the generated artifact means installing the dependencies a second time on my server, which is not very effective.

Then I decided to never ship my source code to the production server. The workflow is: build a Docker image on CI, push it to the private registry, and pull the latest image on the server.
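Since no source code may reach the server, the image itself has to contain only what is needed at runtime. A multi-stage Dockerfile is one way to get there (a sketch only; the base images, paths, and entrypoint here are assumptions, not my actual file):

# Build stage: full dependencies, runs on CI only
FROM node:lts AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Runtime stage: production dependencies and build output only
FROM node:lts-slim
WORKDIR /app
COPY package*.json ./
RUN npm ci --production
COPY --from=build /app/dist ./dist
CMD ["node", "dist/index.js"]

The final image never contains the source tree, only the build output and the runtime dependencies.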

There is always a drawback, of course; in my case, there is no caching layer when building the image. I use Docker-in-Docker so that GitLab CI can execute docker push and my server can pull the latest image from the registry.
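The Docker-in-Docker part of the pipeline looks roughly like this in .gitlab-ci.yml (a sketch; the job name and tag are illustrative, while the CI_REGISTRY_* variables are predefined by GitLab):

build:
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:latest" .
    - docker push "$CI_REGISTRY_IMAGE:latest"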

Since this is not a big deal, I still use this workflow today. My df -h output shrank significantly since the source code and dependencies are no longer there. Another drawback is that I need to clean up unused images (I don't know how to delete old/unused images automatically).
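One thing I could try for the cleanup (an untested idea, and the schedule is arbitrary) is a cron entry that prunes images not used by any container:

# every Sunday at 03:00, delete unused images without prompting
0 3 * * 0 docker image prune -af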

1 app 1 database

I don't build microservices, but I do have many apps on my server. As of this post, I have 9 apps and 6 database instances (SQLite, MongoDB, MySQL, and Redis). I use this approach to reduce the "complexity" of setting up databases (also: noisy neighbours, security, etc.).

Everything should have a single source of truth.

In this context, that is docker-compose.yml.

Environment variables should be set at the application level.

Not at the OS level. Luckily I use Docker Compose, so I can pass environment keys to its services.

All operations should have a log

Because sometimes I'm stupid.
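Put together, a trimmed-down docker-compose.yml following these principles might look like this (the service name, image, and paths are illustrative, not my real file):

version: "3"
services:
  blog:
    image: registry.example.com/me/blog:latest  # pulled from the registry, never built here
    environment:
      - NODE_ENV=production  # environment at the application level, not the OS
    volumes:
      - ./data/blog:/var/lib/app  # all state lives under ./data
    logging:
      driver: "json-file"
      options:
        max-size: "10m"  # every operation leaves a log, capped in size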


This is my home directory on my production server:

user@hostname:~$ tree -L 1
.
|-- data
`-- docker-compose.yml

1 directory, 1 file

All application data is placed in the data directory, named after its service. For example:

user@hostname:~$ tree -L 2 data/faultable_blog/
data/faultable_blog/
`-- content
    |-- apps
    |-- data
    |-- images
    |-- logs
    |-- settings
    `-- themes

7 directories, 0 files

Yes, that is Ghost's data.

Basically, I store the "databases" in the data directory too. This is what the MySQL files look like:

user@hostname:~$ tree -L 1 data/concat/
data/concat/
|-- aria_log.00000001
|-- aria_log_control
|-- ib_buffer_pool
|-- ib_logfile0
|-- ib_logfile1
|-- ibdata1
|-- something
|-- something_production
|-- mysql
`-- performance_schema

I just need to mount that directory to /var/lib/mysql, and my MySQL data persists.
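In Docker Compose terms, that mount is a one-liner (the service name and password here are placeholders):

  concat_db:
    image: mysql:5.7
    environment:
      - MYSQL_ROOT_PASSWORD=changeme
    volumes:
      - ./data/concat:/var/lib/mysql  # database files persist on the host

Incidentally, the aria_log files in the listing above suggest the container may actually be running MariaDB rather than MySQL, but the mount works the same either way.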

This is how my docker ps output looks. Not bad, right?


I still face problems with data persistence; it's hard for me since I'm a relative newcomer to the Docker/container ecosystem. The same goes for backups, which I have talked about here.

And I still need to learn more about all of this. I must.


That was the personal infrastructure of my production environment. Not quite industry standard, but it works for me for now. Setting up a Dockerfile, .gitlab-ci.yml, and docker-compose.yml is not a big deal for me, since I'm more productive once those are in place.

All my requests are handled by Traefik, which I have talked about here. I'm no longer using Cloudflare as my CDN since it sometimes doesn't play well with my infrastructure.

Discussion here