Programs with custom services, virtual environments, config files in different locations, programs creating data in different locations…

I know a lot of stuff runs in Docker today, but how does a sysadmin remember what they have done on their system? Is it all about documenting and keeping your docs updated? Is there any other way?

(E.g., to install calibre-web I had to create a Python venv; the venv is owned by root in /opt, but the service starting calibre-web in /etc/systemd/system needs the User=<user> specifier because calibre-web wants to write in a user's home directory. At the same time, the database folder needs to be owned by www-data because I want to read/write it from Nextcloud… So calibre-web is installed as a custom root(?) program, running in a virtual env, can access a folder owned by someone else, but still needs to be executed by yet another user to store its data there…)
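For what it's worth, the unit ends up looking roughly like this. This is a sketch of my own setup, so the paths, the user name, and the `cps` entrypoint (which is what the pip install gave me) are specific to my machine, not anything canonical:

```ini
# /etc/systemd/system/calibre-web.service -- sketch of my setup;
# paths, User= and Group= values are specific to my machine
[Unit]
Description=Calibre-Web
After=network.target

[Service]
Type=simple
# runs as my user so it can write to the home directory...
User=myuser
# ...but in the www-data group so Nextcloud can share the db folder
Group=www-data
# the venv itself lives in /opt and is owned by root
ExecStart=/opt/calibre-web/venv/bin/cps
Restart=on-failure

[Install]
WantedBy=multi-user.target
```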

Leaving aside my current confusion about whether all of this is right in terms of security, syntax, and ownership, there's no fucking way I will remember all this stuff a week from now… So… what do you do, if you do anything? Do you use flowcharts? Simple text documents? Both?

Essentially, how do you keep track?

  • Ephera@lemmy.ml · 5 days ago
    “Infrastructure as code” is what the strategy is typically called. You use one of the many tools for orchestrating configuration of hosts (Ansible, OpenTofu, Puppet, Saltstack, Chef, etc.). These allow you to provide configuration files and code for setting up your hosts in a central place. This place is typically a Git repo, allowing you to keep track of when which change was made.
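    As a concrete sketch, your calibre-web example could be captured as an Ansible task file roughly like the one below. The module names are real Ansible builtins, but the paths, file names, and the assumption that calibre-web is pip-installable as "calibreweb" are placeholders to adapt to your actual setup:

    ```yaml
    # roles/calibre-web/tasks/main.yml -- minimal sketch; paths and names are placeholders
    - name: Create the venv in /opt and install calibre-web into it
      ansible.builtin.pip:
        name: calibreweb
        virtualenv: /opt/calibre-web/venv
        virtualenv_command: python3 -m venv

    - name: Ensure the database folder is owned by www-data
      ansible.builtin.file:
        path: /opt/calibre-web/db
        state: directory
        owner: www-data
        group: www-data
        mode: "0775"

    - name: Install the systemd unit from the repo
      ansible.builtin.copy:
        src: calibre-web.service
        dest: /etc/systemd/system/calibre-web.service

    - name: Enable and start the service
      ansible.builtin.systemd:
        name: calibre-web
        daemon_reload: true
        enabled: true
        state: started
    ```

    Every one of those decisions (why root owns the venv, why the group is www-data) can then live as a comment next to the line that implements it, instead of in your memory.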

    Depending on the tool you use, you either trigger applying the configuration from your dev PC, or a hosted CI/CD server automatically rolls out the changes when a new commit is pushed.
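    With Ansible, for example, applying from your dev PC is a single command run from the repo checkout (`inventory.ini` here stands in for whatever file lists your hosts):

    ```sh
    # push the current state of the repo out to the hosts in the inventory
    ansible-playbook -i inventory.ini site.yml
    ```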