I’m currently in the long process of rebuilding my declarative homelab using k3s, ArgoCD and NixOS.

I had previously used Keycloak, but that always seemed massively overqualified and way too complex for my purposes. With this rebuild I saw my chance to try out Authentik, which appears to be in good standing with the homelab community.
They have tons of documentation for pretty much everything, which was encouraging. Well, except for the documentation of their Helm chart, maybe…

I started off with version 2025.12.x, am now on 2026.02.x, and have spent most weekends in between just getting Authentik to deploy to the cluster at all.
It's partially my fault for initially attempting to use Secrets, but even now, with hardcoded keys in my git repo, the default example chart doesn't work:

values.yaml:

```yaml
authentik:
  existingSecret:
    secretName: authentik-secret

  postgresql: # None of this gets applied at all, so I do it manually below...
    password: "somepasswd"

server:
  replicas: 1

  env: # Manually apply all the configuration values. Why am I using Helm charts again?
    - name: AUTHENTIK_POSTGRESQL__HOST
      value: authentik-postgresql
    - name: AUTHENTIK_POSTGRESQL__USER
      value: authentik
    - name: AUTHENTIK_POSTGRESQL__PASSWORD
      value: "somepasswd"
    - name: AUTHENTIK_POSTGRESQL__NAME
      value: authentik

  route:
    main:
      # ...

postgresql:
  enabled: true

  auth: # And set everything here once again
    username: authentik
    password: "somepasswd"
    postgresPassword: "somepasswd"
    usePasswordFiles: false
    database: authentik

  primary:
    persistence:
      size: 4Gi
```

I started off with the official example, and after all these undocumented changes it still only deploys-ish:

With the defaults, authentik-server would always try to reach the DB at localhost, which doesn't work in the context of this chart/k8s.
After a while I figured out that the authentik: configuration block doesn't actually do anything, so I set all the values the chart should be setting by hand.

Now the DB connects, but the liveness probe on the authentik-server pod fails. It logs the incoming probe requests but apparently doesn't answer them (correctly), leading to k8s killing the pod.
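For reference, the probes the chart renders look roughly like this (the health paths are Authentik's documented endpoints; the port and timing values here are assumptions on my part and may differ per chart version):

```yaml
# Sketch of the server Deployment's probes as the chart roughly renders them.
# /-/health/live/ and /-/health/ready/ are Authentik's health endpoints;
# port and timings below are assumptions, not the chart's exact values.
livenessProbe:
  httpGet:
    path: /-/health/live/
    port: 9000
  initialDelaySeconds: 5
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /-/health/ready/
    port: 9000
  initialDelaySeconds: 5
  periodSeconds: 10
```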

Sorry for the ramble, but I've hit my motivational breaking point with Authentik.
Since the community seems to like it, I'm left wondering what I'm doing wrong to be having this many issues with it.

Did you people have this much trouble with Authentik and what have you switched to instead?

  • jrgd@lemmy.zip · 2 points · 19 hours ago

    Coming back and checking the values file posted. Not sure why the authentik block in your values file isn't getting used. Your current non-starting issue is likely the Authentik server container starting successfully but failing liveness while it waits for the worker container(s), which definitely aren't spooling up with your current configuration.

    Something to note about Authentik itself that isn't well explained by the Helm chart's quickstart is that Authentik is split into two containers: server and worker. For most environment variables and mounted secrets, both the server and worker definitions should have them applied. The chart tends to handle most of the essential shared stuff in the authentik block to prevent the duplication, but secrets will likely need to be mounted as volumes for both if you use file or env references in the shared config, and most env overrides will need to be applied to both as well.
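    As a sketch (reusing the names from your values file), an env override that has to reach both halves ends up duplicated like this unless the shared authentik block covers it:

    ```yaml
    # Sketch: the same override applied to both halves of the deployment.
    # The shared `authentik:` block normally exists to avoid this duplication.
    server:
      env:
        - name: AUTHENTIK_POSTGRESQL__HOST
          value: authentik-postgresql
    worker:
      env:
        - name: AUTHENTIK_POSTGRESQL__HOST
          value: authentik-postgresql
    ```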

    • Starfighter@discuss.tchncs.de (OP) · 2 points · edited · 2 hours ago

      Managed to get it working by passing in env vars from a secret now.

      ArgoCD has a really handy web UI that allows you to quickly see what kind of resources get deployed.
      Especially for learning k8s I found that much easier to visualize than raw kubectl outputs.

  • curled@lemmy.dbzer0.com · 2 points · 24 hours ago

    Maybe the example you posted is incomplete, but it looks like you haven't defined a secret key like the official example does, either via the chart's authentik.secret_key value or the env var AUTHENTIK_SECRET_KEY. For reference, here are all the env vars I define, maybe it helps: https://github.com/SquaredPotato/home-ops/blob/main/kubernetes%2Fapps%2Fsecurity%2Fauthentik%2Fapp%2Fsecret.sops.yaml

    Helmrelease is located in the same folder as you might’ve guessed :)
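    As a sketch, either route looks roughly like this (the value is a placeholder, generate your own random string; the exact values key name may vary by chart version):

    ```yaml
    # Option 1: via the chart's values
    authentik:
      secret_key: "<generated-random-string>"

    # Option 2: as an env var on server and worker
    # - name: AUTHENTIK_SECRET_KEY
    #   value: "<generated-random-string>"
    ```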

    • Starfighter@discuss.tchncs.de (OP) · 1 point · 2 hours ago

      Passing in the secrets once via the global: section is very neat. Got it working now with a few of the other tips and stole your trick for my secret handling. Thank you :)
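      For anyone finding this later, the trick is roughly this (the Secret name is mine; that the chart forwards global.envFrom to both server and worker is my understanding from its values):

      ```yaml
      # Sketch: inject all keys of one Secret as env vars into both
      # server and worker via the chart's global section.
      global:
        envFrom:
          - secretRef:
              name: authentik-secret
      ```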

  • jrgd@lemmy.zip · 3 points · 1 day ago

    When I did my Authentik setup through the Helm chart a while back, the only real problems I had were with learning blueprints, not so much with getting Authentik to do its thing.

    The main things you should be checking given a liveness probe failure are kubectl -n <namespace> describe pod <podname> to check the reason for the failure, and kubectl logs -p -n <namespace> <podname> [container], which will get you the logs of the last run of a pod that has already failed, rather than the current run that may soon fail. Those two commands should point you pretty directly to where your chart config has gone wrong. I can likely help as well if you're unsure what you're looking at.

    Additionally, once you get things working, please go back and use Secrets properly with the chart. Authentik lets you substitute many values with env vars or files, which, combined with mounting secrets, is how you can use them.

    • Starfighter@discuss.tchncs.de (OP) · 3 points · 1 day ago

      Did you also have to set all these env vars by hand?
      I am wondering if it might have something to do with how Helm charts get rendered under ArgoCD.

      I’ll give it another try with your recommendations.

      And should I get it working finally, I will obviously switch back to using Secrets.
      I only removed them to reduce possible points of failure.

      As for blueprints, that’s a task for future me xD
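      For context, the way ArgoCD renders a Helm chart as an Application looks roughly like this in my setup (repo URL is Authentik's public chart repo; version, namespace and inline values are placeholders):

      ```yaml
      # Sketch: an ArgoCD Application pulling the Authentik Helm chart
      # with inline values; targetRevision and values are placeholders.
      apiVersion: argoproj.io/v1alpha1
      kind: Application
      metadata:
        name: authentik
        namespace: argocd
      spec:
        project: default
        destination:
          server: https://kubernetes.default.svc
          namespace: authentik
        source:
          repoURL: https://charts.goauthentik.io
          chart: authentik
          targetRevision: "2026.2.x"
          helm:
            valuesObject:
              server:
                replicas: 1
      ```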

      • jrgd@lemmy.zip · 2 points · 22 hours ago

        In my case I’m running an external Postgres DB and external cache plus a handful of other settings. As such, I have a decently sized values file. All of the env vars I was looking for in my case are provided in the chart, so I didn’t need to set any directly, but just through their counterparts in the values file.

        I don’t use ArgoCD in my case, so I couldn’t really say if it would affect your deployment strategy in any way.

        • Starfighter@discuss.tchncs.de (OP) · 2 points · edited · 2 hours ago

          Got it working thanks to your troubleshooting tips now. Also found a very neat way to handle secrets from another comment.

          I tend to run a DB instance per service, as that makes backup restoration much easier for me. An idle Postgres sits at around 50 MB, which is a cost I'm willing to pay.

          Thank you again for your help :)