
Declarative Incus on NixOS with SSO


I started my homelabbing journey with Proxmox quite a few years ago, thinking I would have many VMs and LXC containers doing all sorts of things. In the end I ditched Proxmox for NixOS: I run one server with many bare-metal systemd services. I'm at 75 of those and only 3 containers, and I have yet to use an LXC container.

But lately I've been itching for some gaming, and I've got nothing to game on! I previously used QEMU and virsh to set up a VM with GPU passthrough on my server, and it worked nicely: I could connect a monitor to the GPU and sit down to game.

But since then another kid came into the picture, I lost that space, and the server had to move to another room. I haven't really had time to game anyway, so that was fine. In the meantime, I started using the GPU for my Home Assistant's Text-To-Speech functionality via wyoming-piper.

So now I can't dedicate the full GPU to my VM anymore, and neither can I just connect the monitor to the server, since it's now in another room.

I did a bit of searching and found out that an LXC container can share the GPU with other processes, or even with other LXC containers!

Then I found this guide and decided to implement it.

But since I don't run Proxmox, I had to look for alternatives, and that is how I found Incus. So that's what I'll show you how to set up on your NixOS machine, with SSO login via Authelia.

Phew, what an intro. Let's get started!

Prerequisites

This guide assumes you already have Caddy and Authelia configured.
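
For reference — and this is a minimal sketch of the assumed baseline, not my actual configuration — I'm assuming something along these lines already works on your machine:

# Minimal sketch of the assumed prerequisites (your real setup will differ)
{
  # Caddy as the reverse proxy
  services.caddy.enable = true;

  # Authelia as the SSO / OIDC provider
  services.authelia.instances.default = {
    enable = true;
    # Secrets, session storage, user database, OIDC issuer keys, etc.
    # are omitted here; see the Authelia and NixOS docs for those.
    settings = {
      # ...
    };
  };
}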

My Incus NixOS module

This post will be a bit different in that I'll simply provide the NixOS module I use. I'll try to comment it extensively, so that you know what values to put where for your setup.

What you don't see in this module is how I set up Caddy and Authelia, both of which this module uses. If you run into issues, I'll be happy to help you out if you reach out (my email is on the homepage).

You might need the following pages to help you if you get stuck; they sure helped me tremendously:

Keep in mind that anyone who can log into Incus can do everything: Incus does not yet have a concept of users or permissions. A bit of a shame, to be honest.

Anyway, here is the whole thing:

{
  config,
  lib,
  ...
}:
# This uses a modified standard lib that includes a `my` field for my-setup-specific values.
with lib;

let
  # For this config, for easier reference
  cfg = config.my.services.incus;
  # Module option `dataDir` is the root dir, so we expand it a bit to get the storage pool dir
  # Down the line I might need more than one
  defaultStoragePoolDir = "${cfg.dataDir}/storage-pools/default";
  # The OIDC Client ID I will use. You should make this a bit longer, and a bit more random than this example
  oidcClientId = "incus";
in
{
  # Here we define the available options consumers of this module can set
  options.my.services.incus = with lib.types; {
    enable = mkEnableOption "incus";

    # I place all my data in `/var/my`, so incus gets a place there too
    dataDir = mkOption {
      type = path;
      default = "/var/my/incus";
    };

    # The LISTEN address for the UI
    uiAddress = mkOption {
      type = str;
      default = "127.0.0.1";
    };

    # And its corresponding port
    uiPort = mkOption {
      type = int;
      default = 17171;
    };

    # The domain name to use
    # Don't forget to create this domain name on your router or wherever you create those
    domain = mkOption {
      type = str;
      default = "incus.${my.domain}"; # expands to something like `incus.example.com`
    };

    # The network Incus will use for containers
    # We'll create a bridge, so a `br` is in the name
    networkName = mkOption {
      type = str;
      default = "incusbr0";
    };
  };

  # Apply config only if the module is enabled
  config = mkIf cfg.enable {
    virtualisation.incus = {
      # Enable Incus
      enable = true;
      # Enable the UI
      ui.enable = true;
      # Specify the initial settings
      # Mind that if you change these imperatively via the `incus` command, they will not be overwritten by this declarative configuration.
      preseed =
        let
          poolName = "default";
        in
        {
          config = {
            # The address and port the UI will listen on
            "core.https_address" = "${cfg.uiAddress}:${toString cfg.uiPort}";
            # OIDC config
            "oidc.issuer" = "https://${config.my.services.authelia.domain}"; # This resolves to something like `https://authelia.example.com`
            "oidc.client.id" = oidcClientId;
            "oidc.audience" = "https://${cfg.domain}";
            # I went the SSO route, since I use that everywhere else as well
            # This disables certificate login
            "user.ui.sso_only" = "true";
          };
          networks = [
            {
              # VMs and containers will have IPs in the 10.10.10.X network
              config = {
                "ipv4.address" = "10.10.10.1/24";
                "ipv4.nat" = "true";
              };
              # Name the network
              name = cfg.networkName;
              # Specify its type
              type = "bridge";
            }
          ];
          # The profiles are the default settings for containers
          profiles = [
            {
              # Name this profile
              name = "default";
              devices = {
                # The network that will get attached to our container, that will use our bridge
                eth0 = {
                  name = "eth0";
                  network = cfg.networkName;
                  type = "nic";
                };
                # Create a disk mounted at root, using the 'default' pool (created below)
                root = {
                  path = "/";
                  pool = poolName;
                  type = "disk";
                };
              };
            }
          ];
          # Define the data pool, with the 'dir' driver, meaning it's just a directory at 'config.source'
          storage_pools = [
            {
              config.source = defaultStoragePoolDir;
              driver = "dir";
              name = poolName;
            }
          ];
        };
    };

    # Add my user to these groups so that it can use and configure Incus
    my.user = {
      extraGroups = [
        "incus"
        "incus-admin"
      ];
    };

    # As per the Wiki, Incus only works with nftables, so we enable that
    networking.nftables.enable = true;
    # And set the bridge to be trusted to allow traffic on this interface
    networking.firewall.trustedInterfaces = [ cfg.networkName ];

    # Create the folder if it does not exist
    systemd.tmpfiles.rules = [
      "d ${defaultStoragePoolDir} 750 root root"
    ];

    # This one uses the Impermanence module, skip it if you don't use it
    # Set the data and config directories to persist across reboots
    my.persisted.directories = [
      cfg.dataDir
      "/var/lib/incus"
    ];

    # Make it accessible
    my.services = {
      proxy.reverseProxies = [
        {
          from = cfg.domain;
          extraConfig = ''
            reverse_proxy https://${cfg.uiAddress}:${toString cfg.uiPort} {
              transport http {
                tls_insecure_skip_verify
              }
            }
          '';
        }
      ];
      # Above translates into this Caddy virtual host config
      # services.caddy.virtualHosts."${cfg.domain}" = {
      #  extraConfig = ''
      #    forward_auth ${config.my.services.authelia.domain} {
      #      uri /api/authz/forward-auth
      #      copy_headers Remote-User Remote-Groups Remote-Name Remote-Email
      #    }
      #
      #    reverse_proxy https://${cfg.uiAddress}:${toString cfg.uiPort} {
      #      transport http {
      #        tls_insecure_skip_verify
      #      }
      #    }
      #  '';
      #  hostName = cfg.domain;
      #  useACMEHost = my.domain;
      #}

      # This sets up Incus in Authelia
      # Again this uses my module
      authelia = {
        # This one is merged into `config.services.authelia.instances.default.settings.access_control.rules`
        accessRules = [
          {
            domain = cfg.domain;
            subject = [
              [ "user:luka" ]
            ];
            whenInternal = true;
            policy = "two_factor";
          }
          # becomes
          #{
          #  domain = cfg.domain;
          #  networks = "internal"; # I only want Incus to be accessible on my internal networks (my LAN, LAB, VPN, etc)
          #  policy = "two_factor";
          #  subject = [ [ "user:luka" ] ];
          #}
        ];
        # This one is simply mapped
        # config.services.authelia.instances.default.settings.identity_providers.oidc.clients = cfg.oidcClients;
        oidcClients = [
          {
            client_id = oidcClientId;
            client_name = "Incus";
            public = true;
            authorization_policy = "two_factor";
            require_pkce = false;
            pkce_challenge_method = "";
            redirect_uris = [
              "https://${cfg.domain}/oidc/callback"
            ];
            audience = [
              "https://${cfg.domain}"
            ];
            scopes = [
              "openid"
              "offline_access"
            ];
            response_types = [
              "code"
            ];
            grant_types = [
              "authorization_code"
              "refresh_token"
            ];
            access_token_signed_response_alg = "RS256";
            userinfo_signed_response_alg = "none";
            token_endpoint_auth_method = "none";
          }
        ];
      };
    };
  };
}
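
Since sharing the GPU was the whole reason for this detour, here is a hypothetical taste of where this is headed. This is not in my module yet, but an extra entry in the preseed's profiles list (or a device added to a single container) should be all it takes; unlike full VM passthrough, a plain gpu device leaves the card usable by the host and by other containers:

# Hypothetical extra profile for GPU sharing, not part of the module above
{
  name = "gpu";
  devices = {
    # A bare "gpu" device passes the host GPU through to containers
    # using this profile, while the host keeps using it too
    gpu0 = {
      type = "gpu";
    };
  };
}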

And there we go! Don't forget to git add modules/incus.nix, set my.services.incus.enable = true; in your host's configuration, and save. One nixos-rebuild later and you're good to go!
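
In practice, that amounts to something like this in your host's configuration (the file and import paths here are hypothetical, adjust them to your layout):

# hosts/myserver/default.nix (hypothetical path)
{
  imports = [
    # The module from this post
    ../../modules/incus.nix
  ];

  my.services.incus.enable = true;
}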

Conclusion

So I have Incus running now, and I've already been able to experiment with it. But I still haven't set up a headless gaming LXC container. Perhaps in another blog post.