I’m planning to set up proper backups for my server, but I’m not sure which software to use. I’ve been looking for a solution that offers encrypted, compressed, incremental backups. These seem to be the best options:

Does anyone have experience with these, and if so, what was your experience?

EDIT 2023-12-28:

It seems most people are using Restic, about half of whom mention using a wrapper such as resticprofile, creatic, or autorestic.

Borg: 3 · Restic: 7 · Kopia: 5
  • ShortN0te@lemmy.ml · 11 months ago

    I started out with Borg and basically had no problems with it. Then I moved to Restic; in the past few years of using it, I have never experienced any issue. Can only recommend Restic.

  • Lem453@lemmy.ca · 11 months ago

    Borg (specifically borgmatic) has been working very well for me. I run it on my main server, and on my NAS I have a Borg server Docker container as the repository location.

    I also have another repository location on my friend’s NAS. It’s super easy to set up multiple targets for the same data.

    I will probably also set up a BorgBase account for yet another backup.

    What I liked a lot here was how easy it is to set up automatic backups, a retention policy, and multiple backup locations.

    Open source was a requirement, so you can never get locked out of your data. Self-hosted. Finally, the ability to mount a backup as a volume/drive: if I want a specific file, I mount that snapshot and just copy that one file over.
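
    A borgmatic config covering that setup (multiple repositories plus retention) can be sketched roughly like this — all paths, hostnames, and retention numbers below are hypothetical, and the exact layout depends on your borgmatic version:

```yaml
# Sketch of /etc/borgmatic/config.yaml — hypothetical paths and hosts
source_directories:
  - /home
  - /etc

repositories:
  - path: ssh://borg@nas.local/./backups/server           # Borg server container on the NAS
  - path: ssh://borg@friend.example.org/./backups/server  # second target on a friend's NAS

# Retention policy applied when pruning
keep_daily: 7
keep_weekly: 4
keep_monthly: 6
```

    A single `borgmatic` run then backs up to every listed repository and applies the retention policy in one go.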

  • ptrck@lemmy.world · 11 months ago

    I’m using borgmatic, a wrapper around Borg that adds some extra functionality.

    Very happy with it, does exactly as advertised.

  • SeriousBug@infosec.pub · 11 months ago

    I’ve been using Kopia for all my backups for a couple of years, both for my desktop and my containers. It’s been very reliable, and it has nice features like being able to mount a backup.

    • qaz@lemmy.world (OP) · 11 months ago

      Why did you choose this option instead of directly syncing it with restic’s rclone backend?

      • mlaga97@lemmy.mlaga97.space · 11 months ago

        An external hard drive is a lot faster than my internet connection and helps fulfill 3-2-1 requirements.

        • Unchanged3656@infosec.pub · 11 months ago

          Does it though? I had a similar setup in the past, but I did not feel good about it: if your first backup gets corrupted, that corruption is then synced to your remote location. Since then I run two separate backups, one local and one remote, also with restic and resticprofile. The remote is an SFTP server. For restic I am using the rclone backend for SFTP, since I had some connection issues with the built-in SFTP backend (on connection resets it would just abort and not try to reconnect, though I think that has improved since then).
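
          The two-separate-runs approach can be sketched as a small script. The repo locations, password file, and rclone remote name are hypothetical; the `echo` prefix makes this a dry run that only prints the commands — remove it to actually back up:

```shell
#!/bin/sh
# Two independent restic backups: one to a local disk, one to a remote
# via restic's rclone backend. Paths and remote names are hypothetical.
PASS_FILE="$HOME/.restic-pass"
LOCAL_REPO="/mnt/external/restic-repo"
REMOTE_REPO="rclone:myremote:restic-repo"   # rclone backend (e.g. over SFTP)

# "echo" makes this a dry run; delete it to run the real backups
echo restic -r "$LOCAL_REPO"  --password-file "$PASS_FILE" backup /home
echo restic -r "$REMOTE_REPO" --password-file "$PASS_FILE" backup /home
```

          Because each run writes to its own independent repository, corruption in one can never be propagated to the other by a sync.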

          • mlaga97@lemmy.mlaga97.space · 11 months ago

            I only do an automated copy to B2 from the local archive, no automated sync, which as far as I understand should be non-destructive with versioning enabled.

            If I need to prune, etc., I manually run a sync and then immediately run restic check --read-data from a fast VPS to verify the B2 copy afterwards.

  • Matthias Liffers@social.tthi.as · 11 months ago

    I’m using Autorestic, a wrapper for Restic that lets you specify everything in a config file. It can fire hooks before/after backups so I’ve added it to my healthchecks instance to know if backups were completed successfully.

    One caveat with Restic: it relies on hostnames to work optimally (for incremental backups) so if you’re using Autorestic in a container, set the host: option in the config file. My backups took a few hours each night until I fixed this - now they’re less than 30 minutes.
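
    For reference, the relevant part of an autorestic config (.autorestic.yml) might look like this — the location name, backend, pinned hostname, and healthchecks URLs are all hypothetical:

```yaml
# Sketch of .autorestic.yml — names and URLs are hypothetical
locations:
  home:
    from: /home
    to: [offsite]
    options:
      backup:
        host: my-server   # pin the hostname so restic finds the parent snapshot
    hooks:
      success:
        - curl -fsS https://hc.example.org/ping/your-uuid
      failure:
        - curl -fsS https://hc.example.org/ping/your-uuid/fail

backends:
  offsite:
    type: b2
    path: my-bucket:restic
```

    With the hostname pinned, restic can locate the previous snapshot from the same host and back up incrementally even when the container’s hostname changes between runs.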

  • epyon22@programming.dev · 11 months ago

    I set up a script to back up my LVM volumes with Kopia. I’m about to purchase some cloud storage to send them off-site. It’s been running for a while and deduplication is working great; encryption is working as far as I can tell. The sync-to-another-repo option was the main selling point for me.
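
    Such a script might look roughly like this — the volume group, logical volume, snapshot size, and mount point are hypothetical, and the `echo` prefix makes it a dry run that only prints the commands:

```shell
#!/bin/sh
# Sketch: back up a consistent LVM snapshot with Kopia.
# VG/LV names and paths are hypothetical; remove "echo" to run for real.
VG=vg0
LV=data
SNAP="${LV}-backup-snap"

echo lvcreate --size 5G --snapshot --name "$SNAP" "/dev/$VG/$LV"
echo mount -o ro "/dev/$VG/$SNAP" /mnt/snap
echo kopia snapshot create /mnt/snap
echo umount /mnt/snap
echo lvremove -f "/dev/$VG/$SNAP"
```

    Backing up the read-only snapshot rather than the live volume keeps the backup internally consistent even while services keep writing.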

    • rambos@lemmy.world · 11 months ago

      Daily backups to Backblaze B2 and also to local storage with Kopia. It’s been running for a year, I think, with no issues at all. I haven’t needed a real restore yet, just some restore tests so far.

  • Nyfure@kbin.social · 11 months ago

    I was using Borg; it was a bit complicated and limited, so now I use Kopia.
    It’s supposed to support multiple machines in a single repository, so you can deduplicate synced data too, for example, but I haven’t tested that yet.

      • Nyfure@kbin.social · 11 months ago

        The index of repositories is held locally, so if you use the same repository with multiple machines, they have to rebuild their index every time they switch.
        I also have family PCs I wanted to back up, but Borg doesn’t support Windows, so only hacky WSL would have worked.
        But the worst might be Borg’s speed… I don’t know what it is, but it was incredibly slow when backing up.

        • lemmyvore@feddit.nl · 11 months ago

          if you use the same repository with multiple machines, they have to rebuild their index every time they switch

          I’m a beginner with Borg, so sorry in advance if I say something incorrect. I back up the same files to multiple distinct external HDDs, and my solution was to use a distinct repo for each one. They have different IDs, so the caches are different too. The include/exclude list is duplicated, but I can live with that.
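
          That per-drive layout can be sketched as follows — the mount points are hypothetical, and the `echo` prefix makes it a dry run that only prints the commands:

```shell
#!/bin/sh
# One independent Borg repo per external HDD: each repo has its own ID,
# so the local caches never clash. Mount points are hypothetical.
for repo in /mnt/hdd-a/borg-repo /mnt/hdd-b/borg-repo; do
  echo borg create "$repo::{hostname}-{now}" /home /etc
done
```

          The cost is repeating the include/exclude list per repo, but each drive stays fully self-contained.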

  • pacjo@lemmy.dbzer0.com · 11 months ago

    For me it’s restic with the creatic wrapper, apprise for notifications, and some bash/systemd scripts to tie it all together.

    Everything is in a config file, just as God intended.

  • ogarcia@lemmy.world · 11 months ago

    restic, without any doubt. I use it with an S3 backend and an SSH copy, and it performs excellently (with years of backups).

    I used Borg for a while (to compare) and I do not recommend it. It is not a bad product, but its performance is poor compared to restic.

    I didn’t know Kopia, but from what I have read it seems very similar to restic, with some additions to make it prettier (like having a UI).

    Some people say Kopia is faster at sending data to the repository (and others say restic). Unless you need a UI, I would use restic.

  • dfense@lemmy.world · 11 months ago

    I use restic with resticprofile (one config file), notifications via (self-hosted) ntfy.sh, and Wasabi as the backend. I’ve been very happy with it; it runs reliably and has all the features of a modern backup solution. I especially like the option to mount backups as if they were a filesystem with snapshots as folders, which makes finding that one file easy without having to restore.
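
    A resticprofile setup along those lines fits in one profiles.toml — the bucket URL, password file, and ntfy topic below are hypothetical:

```toml
# Sketch of profiles.toml for resticprofile — names and URLs are hypothetical
[default]
repository = "s3:https://s3.wasabisys.com/my-backup-bucket"
password-file = "password.txt"

[default.backup]
source = ["/home", "/etc"]
run-after = "curl -d 'backup finished' https://ntfy.example.org/backups"
run-after-fail = "curl -d 'backup FAILED' https://ntfy.example.org/backups"

[default.retention]
after-backup = true
keep-daily = 7
keep-weekly = 4
```

    With that in place, `resticprofile backup` runs the backup, applies retention afterwards, and pings ntfy either way.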

  • Toribor@corndog.social · 10 months ago

    I really like Kopia. I back up my containers and workstations with it, and replicate to S3 nightly. It’s great.

  • Droolio@feddit.uk · 11 months ago

    IMHO, Duplicacy is better than all of them at all those things - multi-machine, cross-platform, zstd compression, encryption, incrementals, de-duplication.

    • Atemu@lemmy.ml · 11 months ago

      Note that while they’re disingenuously proclaiming themselves to be a “free” tool, the license is actually an unfree proprietary custom license.

      • hersh@literature.cafe · 11 months ago

        Thank you for saving me the trouble of investigating this as an option.

        No reason to tolerate proprietary licenses when there are so many viable FLOSS solutions out there.

      • Droolio@feddit.uk · 11 months ago

        The licence is pretty clear - the CLI version is entirely free for personal use (commercial use requires a licence, and the GUI is optional). If you don’t like the licence, that’s fine, but it’s hardly ‘disingenuous’ when it is free for personal use, and has been for many years.

    • oDDmON@lemmy.world · 11 months ago

      The subscription model is a wee bit off-putting. I run old hardware and don’t wish to be frog-marched into an update/upgrade that could break it.

      I’ve seen it happen before; been in IT too fucking long not to.

      • Droolio@feddit.uk · 11 months ago

        Yes, I also work in IT.

        The paid GUI version is extremely cautious with auto-updates (it’s basically a wrapper for the CLI) - perhaps a bit too cautious. The free CLI version is also very cautious about not breaking your backup storage.

        For example, they recently added zstd compression, yet existing storage stays on lz4 unless you force it - and even then, the two compression methods can coexist in the same backup destination. It’s extremely robust in that regard (to the point that if you started forcing zstd compression, or created a new zstd backup destination, you can use the newest CLI to copy the data back to the older lz4 format and revert - just as an example). And of course you can compile it yourself years from now.

    • Nyfure@kbin.social · 11 months ago

      I mean, the tools mentioned also support these features - how do Duplicacy and its proprietary licence make them better?