Notes on NixOS

In the cold Uruguayan winter of 2019, I decided to try out NixOS. Now, a few years later, I can’t go back to a system without nix installed. These are the things I wish I knew at that time.

The tooling (nix) and the language (also named nix) might not be amazing. But crafting binaries in the most reproducible way with hermetic builds is a very powerful idea. The reusability of NixOS configuration is great for managing a few computers and VPSs. I also find the NixOS test framework great for preventing “configuration drift” and detecting upstream changes.

Many users of nix have a very particular kind of amnesia: after struggling with a nix feature, succeeding, and updating their mental model of how nix works, it’s very hard to imagine how it felt before. That makes nix particularly hard to teach (some are great at it though). So, these are things I know as of August 2025:

On nixpkgs

nixpkgs is the most populous and most up-to-date package repository of its kind, surpassing even Arch’s AUR and all Debian repositories combined (source: repology.org). It has strong automated testing against regressions. Packages are free to declare their own versions of dependencies, and patches to libraries can be local rather than global, allowing multiple versions of the same library to coexist.
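As a sketch of that flexibility, one package can be given its own version of a dependency without affecting the rest of the system. Here `somePackage` and `opensslOld` are hypothetical names, just to show the shape of the pattern:

```nix
{ pkgs, ... }: {
  environment.systemPackages = [
    # somePackage and opensslOld are hypothetical; the point is that
    # both openssl versions would coexist side by side in /nix/store.
    (pkgs.somePackage.override { openssl = pkgs.opensslOld; })
  ];
}
```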

Including several versions of libraries often leads to a large install size on disk (my workstation install takes about 60GB as of August 2025). But it also enables great coordination between maintainers. Take this 2010 thread on the Debian mailing list: a lot of work is required to coordinate versions of libraries and binaries. In contrast, NixOS release conversations are mostly about getting a green light from working groups that have their packages up to date and in good shape.

On systemd

systemd is at the core of NixOS. Search Kagi for critiques of it; I’m not going to dive into the subject right now, but I agree with the critique that the maintainers make user-hostile decisions.

Having said that, in practice, systemd has an amazing UX when using NixOS:

systemd.services.http-server = {
  wantedBy = [ "multi-user.target" ];
  script = "${pkgs.python3}/bin/python3 -m http.server 8080";
};

With only four lines, this defines a service without having to remember commands like systemctl enable and systemctl start, or the format of systemd unit files.

My mental model is that building a NixOS system generates (among other things) a systemd unit that sets up the whole environment for the new configuration. Running nixos-rebuild switch disables the previous unit and its dependencies, then activates the new one. It’s a clean and efficient way to mount and unmount filesystems, install programs, set up folders, configure services and networks, and so on.

On flakes

The whole issue of whether or not to use them is a real “flame war”, and can be reduced to unimportant ego battles. It’s quite easy to switch from one to the other, as we’ll see later.

A flake has two main fields: inputs and outputs. By convention, the important attributes of outputs are nixosConfigurations, devShells, packages, overlays and nixosModules, but new attributes can be created. nix flake check validates the structure and warns when something doesn’t follow the convention (warning: unknown flake output 'foobar').
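A minimal skeleton showing those conventional attributes could look like this (myhost and the file names are placeholders):

```nix
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
  outputs = { self, nixpkgs, ... }: {
    # A full system configuration, built with nixos-rebuild switch --flake .#myhost
    nixosConfigurations.myhost = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      modules = [ ./configuration.nix ];
    };
    # A package, built with `nix build`
    packages.x86_64-linux.default = nixpkgs.legacyPackages.x86_64-linux.hello;
    # A development environment, entered with `nix develop`
    devShells.x86_64-linux.default =
      nixpkgs.legacyPackages.x86_64-linux.mkShell { packages = [ ]; };
  };
}
```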

One thing flakes help with is avoiding overlays, which I find useful and which helps evaluation performance quite a bit. They also help a lot with managing external dependencies and avoiding channels. Just use flakes and avoid the drama.

NixOS tests

The NixOS test framework is a coordination mechanism between nix (a build tool like make), nixpkgs, and qemu.

The code for these tests is very similar to the code used to build packages. That’s because a test is a package (in nix lingo, a derivation). Here’s a self-contained example flake.nix that you can save into an empty folder and run with nix flake check. It defines a systemd service, spawns two virtual machines, and verifies that one machine can access the systemd service on the other. Note how succinct it is!

{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
  outputs = { nixpkgs, ... }:
    let
      system = "x86_64-linux";
      pkgs = nixpkgs.legacyPackages.${system};
    in
    {
      checks.${system}.default = pkgs.nixosTest {
        name = "custom-http-server-test";
        nodes = {
          server = {
            networking.firewall.allowedTCPPorts = [ 8080 ];
            systemd.services.http-server = {
              wantedBy = [ "multi-user.target" ];
              script = "${pkgs.python3}/bin/python3 -m http.server 8080";
            };
          };
          client = {
            environment.systemPackages = with pkgs; [ curl ];
          };
        };
        testScript = ''
          start_all()
          server.wait_for_open_port(8080)
          client.succeed("curl -f http://server:8080")
        '';
      };
    };
}

The NixOS test framework will control those virtual machines and make assertions about their correct operation. These tests are completely network-isolated to ensure reproducibility. Check out how the nginx service is tested in the official repository and you’ll find it’s very similar to this example.

NixOS draws a very clean distinction between what is a “package” and what is a “service”. Other distributions don’t: when on Debian you apt install nginx, you get both the nginx binary and the configuration files and automatic service setup under /etc and /etc/nginx. Thus, tools like dpkg-reconfigure and manual editing of system files are required to properly configure the “service” part once the “package” is installed.

In NixOS, the package (pkgs.nginx) and the NixOS service (services.nginx) are defined independently, and by convention most services let you override which binary package to use. package is generally available as an attribute of the service configuration – in the case of nginx that would be services.nginx.package.
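For example, swapping the nginx binary for the mainline branch while keeping the same service configuration:

```nix
{ pkgs, ... }: {
  services.nginx = {
    enable = true;
    # Swap the default pkgs.nginx for the mainline branch;
    # the rest of the service definition is untouched.
    package = pkgs.nginxMainline;
  };
}
```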

A NixOS system configuration is also a package (a derivation). Stretching words a little, it can be said that all NixOS users are package maintainers.

The repository for packages can be queried at search.nixos.org/packages, and how to configure services at search.nixos.org/options. For example, the package PeerTube can be configured in a NixOS system with services.peertube.

Extending NixOS

Check out the definition of the loki service in the official nixpkgs repo. It does a build-time validation of the configuration (with a command similar to nginx -t). Once you’re acquainted with the nix language, you’ll see that our simple example above, with its very simple systemd service, is not that far from how maintainers build theirs.
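The build-time validation pattern looks roughly like this (a hedged sketch; myapp and its --check-config flag are hypothetical stand-ins for the real validation command):

```nix
{ pkgs, ... }: let
  configFile = pkgs.writeText "app.conf" ''
    port = 8080
  '';
  # Run the validator inside a derivation: an invalid configuration
  # fails the build, not the deployment.
  checkedConfig = pkgs.runCommand "app.conf-checked" { } ''
    ${pkgs.myapp}/bin/myapp --check-config ${configFile}
    cp ${configFile} $out
  '';
in {
  systemd.services.myapp.serviceConfig.ExecStart =
    "${pkgs.myapp}/bin/myapp --config ${checkedConfig}";
}
```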

Also look at nginx, redis, and postgres to see how maintainers build their services. After declaring a service, it can be used in the same way as a NixOS service:

{...}: {
    imports = [
        ./modules/service-declaration.nix
    ];
    mymodules.service = {
        enable = true;
        dataDir = "/some/path";
    };
}

Namespacing custom services, like mymodules above, is a very personal choice. gvolpe’s configuration was super helpful for learning what NixOS can do, but he uses the global services namespace frequently, and I prefer to distinguish whether something is defined in nixpkgs or elsewhere. hlissner, known for doom-emacs, uses modules.services; kradalby, maintainer of headscale, uses my; mitchellh, of HashiCorp and ghostty fame, uses the global namespace like gvolpe. Mic92 opts for not defining modules at all (maybe he upstreams everything; he is a prolific nixpkgs contributor). Check out their configurations: they are quite different from each other, and each one has some valuable patterns.

Building a Service

Kiwix is a program that serves offline copies of Wikipedia and other wikis. It’s packaged in nixpkgs, but there is no NixOS service defined. Let’s build one, using the existing pkgs.kiwix, to create a services.kiwix. I like to host my own copy of Wikipedia: given the critiques of the Wikimedia Foundation and an understanding of “how the sausage is made”, I’d rather keep their logs clean of my browsing history.

Our objective is to configure a kiwix instance with this snippet:

services.kiwix = {
  enable = true;
  port = 1337;
  wikis = [ ./wikipedia.zim ];
};

We achieve that by configuring a set of options (those enable, port, and wikis fields in the snippet) and implementing how the service is run with systemd:

{ config, pkgs, lib, ... }:
with lib; let
  cfg = config.services.kiwix;
in {
  options.services.kiwix = {
    enable = mkEnableOption "kiwix service";

    port = mkOption {
      type = types.port;
      description = "Port for the kiwix service";
      default = 1337;
    };

    wikis = mkOption {
      type = types.listOf types.path;
      description = "List of wiki ZIM files to serve";
      default = [];
    };
  };

  config = mkIf cfg.enable {
    # Set up the user that will run the service
    users.users.kiwix = {
      isSystemUser = true;
      group = "kiwix";
    };
    users.groups.kiwix = {};

    systemd.services.kiwix = {
      description = "Kiwix Wiki Service";
      after = ["network.target"];
      wantedBy = ["multi-user.target"];

      serviceConfig = let
        files = lib.concatMapStringsSep " " toString cfg.wikis;
      in {
        ExecStart = "${pkgs.kiwix-tools}/bin/kiwix-serve -p ${toString cfg.port} " + files;
        Type = "simple";
        User = "kiwix";
        Group = "kiwix";
        Restart = "always";
        RestartSec = "10";

        ReadWritePaths = cfg.wikis;
      };
    };
  };
}

When creating systemd services, you should always harden them properly. There is a really good snippet in this comment from January 2025.
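As a hedged starting point (not a complete audit; systemd-analyze security kiwix will show what’s left), the hardening options for our service could look like:

```nix
systemd.services.kiwix.serviceConfig = {
  # Filesystem isolation: read-only system, no $HOME, private /tmp and /dev
  ProtectSystem = "strict";
  ProtectHome = true;
  PrivateTmp = true;
  PrivateDevices = true;
  # Privilege restrictions
  NoNewPrivileges = true;
  CapabilityBoundingSet = "";
  RestrictNamespaces = true;
  # Kernel protections
  ProtectKernelTunables = true;
  ProtectKernelModules = true;
  ProtectControlGroups = true;
  # Only plain IPv4/IPv6 sockets, and only common service syscalls
  RestrictAddressFamilies = [ "AF_INET" "AF_INET6" ];
  SystemCallFilter = [ "@system-service" ];
  SystemCallArchitectures = "native";
};
```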

Let’s write a test to make sure this works: a test.nix that can be referenced both by flakes and by the nix-build command. By exposing an argument pkgs with a default value of import <nixpkgs> {}, both nix build (the “flake way”) and nix-build (the “no-flakes” way) work seamlessly.

{ pkgs ? import <nixpkgs> {} }:
pkgs.nixosTest {
  name = "kiwix-service";
  nodes.machine = {pkgs, ...}: {
    imports = [./module.nix];

    services.kiwix = {
      enable = true;
      port = 1337;
      wikis = [
        ./bitcoin_en_all_maxi_2021-03.zim
      ];
    };
    environment.systemPackages = [pkgs.curl];
  };

  testScript = ''
    machine.wait_for_open_port(1337)
    machine.succeed("curl -f http://127.0.0.1:1337/")
  '';
}

Run the test from a shell using nix-build test.nix. It can also be included in our main flake for the repository, by adding checks.${system}.kiwix-service = import ./kiwix-service-path/test.nix { inherit pkgs; }; to the outputs attribute set. This callPackage convention is the main reason why I think the flakes-or-not debate is overblown.
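Wired into a flake, that could look like this (assuming test.nix sits next to flake.nix):

```nix
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
  outputs = { nixpkgs, ... }: let
    system = "x86_64-linux";
    pkgs = nixpkgs.legacyPackages.${system};
  in {
    # `nix flake check` will build (and therefore run) this test.
    checks.${system}.kiwix-service = import ./test.nix { inherit pkgs; };
  };
}
```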

At this point, we could even upstream this to nixpkgs.

How to manage data

./file-path without quotes is a nixlang path type; "./file-path" is a string that contains a path. By using wikis = [ ./bitcoin_en_all_maxi_2021-03.zim ];, the nix build system will copy that .zim file into the nix store (/nix/store/...). There are many patterns for including data for services in NixOS, each with its pros and cons:

  1. Adding the file to the repository and the store

    • Good: Easy management, full reproducibility of the build over time.
    • Bad: Binary files pollute the repository
    
    wikis = [
      ./bitcoin_en_all_maxi_2021-03.zim
    ];
    

    In our example, we need the file to be part of the working directory where we build our system. I think almost all users of nix and NixOS use git to manage their configurations, and git is a first-class citizen of the nix ecosystem. Having the file in the repository makes versioning and file management easy. The disadvantage is that storing binary blobs is a known anti-pattern for git repositories.

  2. Build-time creation of the file

    • Good: Full reproducibility, treats configuration and infrastructure as code
    • Bad: Not useful for binary files or content
    
    environment.etc."app/config.json".text = builtins.toJSON {
      port = 8080;
      debug = false;
    };
    

    Sometimes the file can be generated from the repository files (that’s not the case here); nginx configurations and VirtualHost definitions are examples. For our example this is not a relevant option, but it’s a frequent pattern, particularly in home-manager configurations. Users tend to have two options: write their configuration as plain files and include them (option 1 above), or write their configuration in nixlang and generate the file. One example of this is the nixos-generators project, which is used to generate disk and ISO images for different hardware/host platforms.

  3. Build-time network fetching

    • Good: Maintains reproducibility, allows for clean dependency management
    • Bad: Build fails if the resource goes offline

    An alternative to including the file in the repo is to fetch the resource at build time. Use nix-prefetch-url --type sha256 $URL to get the hash of a downloaded file, and builtins.fetchurl to reference it:

    
    wikis = [(
      builtins.fetchurl {
        url = "https://download.kiwix.org/zim/other/bitcoin_en_all_nopic_2021-03.zim";
        sha256 = "0w5588qj5l7z7fd5rhah8ss6a2mq74giavdm25q0glcwjqwp2gbf";
      }
    )];
    

    This maintains the hermetic property of the build by executing the network calls before the build stage. nix-prefetch-url will already have stored the file in the nix store, so downloads are not duplicated. The build will fail if the hash of the file changes, preventing upstream modifications from silently changing our final build. The fetched file is placed in the nix store, in a path like /nix/store/mz19raygl5v7ckhbcz66jgcsrcvp4m5x-bitcoin_en_all_nopic_2021-03.zim, just like the first alternative we discussed in this section.

  4. Run-time reference

    • Good: Very flexible, fast and easy to manage (it’s just files in your system). Allows changes on the file without having to do rebuilds
    • Bad: Breaks hermeticity, can’t be used for reproducible tests
    
    wikis = ["/opt/downloads/bitcoin.zim"];
    

    If the file is on a network drive, or needs to be modified frequently by users, or we otherwise don’t care about the build consistency of the file, we can just reference an absolute path outside the nix store. This is how many users in the i3/sway communities manage their files: changes in the configuration are usually monitored and the service restarted whenever they change, avoiding a full rebuild of the NixOS configuration.

Appendix: SSH at initrd to unlock LUKS

Custom services and tests shine when we have to deal with more complex pieces of software that change over time: unstable packages, very custom integrations of different pieces of software, or rapidly evolving projects. The kiwix example in this blogpost is quite simplistic, but I didn’t want to overextend.

Full-disk encryption with the ability to connect over SSH at boot time, to enter the password that unlocks a LUKS drive, is a feature I’ve always liked to have. In other distros, I always forget which particular files I need to change or update to make it work, and it’s particularly brittle across distribution upgrades. I built this about three years ago and have only had to review it once or twice (plus the occasional key rotation).

Beware: the SSH host key for the server gets written to the world-readable /nix/store, so any process on your NixOS system can read it. An attacker-in-the-middle who obtains it can impersonate the server and read your encryption password as you enter it.

So, what does that look like in practice?

{ config, lib, pkgs, ... }: let
  # Using the `boot.initrd` global attrSet
  cfg = config.boot.initrd.remoteUnlock;
in {
  options.boot.initrd.remoteUnlock = {
    enable = lib.mkEnableOption "remote LUKS disk unlocking via SSH";
    hostKeys = ...; 
    authorizedKeys = ...;
    network = {
      interface = ...;
      sshPort = ...;
      useDHCP = ...;
      address = ...;
      gateway = ...;
    };
  };

  config = lib.mkIf cfg.enable {
    assertions = [{
      assertion = cfg.authorizedKeys != [];
      message = "boot.initrd.remoteUnlock.authorizedKeys must not be empty";
    }];

    boot.initrd = {
      availableKernelModules = ["virtio_net"];
      network = {
        enable = true;
        ssh = {
          enable = true;
          port = cfg.network.sshPort;
          hostKeys = lib.mapAttrsToList (name: _: "/etc/secrets/initrd/${name}") cfg.hostKeys;
          authorizedKeys = cfg.authorizedKeys;
        };
      };

      systemd = {
        enable = true;
        network = lib.mkIf (!cfg.network.useDHCP) {
          enable = true;
          networks."50-static" = {
            matchConfig.Name = cfg.network.interface;
            networkConfig = {
              Address = cfg.network.address;
              Gateway = cfg.network.gateway;
            };
          };
        };
      };

      secrets = lib.mapAttrs' (
        name: value: lib.nameValuePair "/etc/secrets/initrd/${name}" (lib.mkForce value)
      ) cfg.hostKeys;
    };
  };
}

And then, in our flake:

{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-25.05";
  outputs = { self, nixpkgs, ... }: {
    checks.x86_64-linux.remote-unlock-test = let
    remoteUnlock = import ./module.nix;
    pkgs = import nixpkgs {system = "x86_64-linux";};
    inherit (pkgs) lib;
    # Small helper that creates an actual derivation with files for our test.
    # The result of calling `buildSSHKeys "ed25519" "my_key"` is a folder
    # in the /nix/store that looks very similar to any other package
    # Creating a package is sometimes very easy, as these five lines show,
    # and it's the equivalent of creating a .deb or .rpm package
    buildSSHKeys = type: name: pkgs.runCommand "ssh-keys" {} ''
      mkdir -p $out
      ${pkgs.openssh}/bin/ssh-keygen -t ${type} -N "" -f $out/${name}
      chmod 600 $out/*
    '';
    sshAccess = buildSSHKeys "ed25519" "id_ed25519";
    sshHost = buildSSHKeys "ed25519" "ssh_host_ed25519_key";

    # The actual configuration for our module in the test machine
    remoteUnlockConfig = {
      enable = true;
      hostKeys.ssh_host_ed25519_key = "${sshHost}/ssh_host_ed25519_key";
      authorizedKeys = [(builtins.readFile "${sshAccess}/id_ed25519.pub")];
    };
  in
    pkgs.nixosTest {
      name = "systemd-initrd-luks-password";

      nodes = {
        machine = {pkgs, ...}: {
          imports = [remoteUnlock];
          # Settings for qemu
          virtualisation = {
            # Creating two disks, to actually test that we only need to enter
            # our password once
            emptyDiskImages = [ 512 512 ];
            useBootLoader = true;
            # Booting off the encrypted disk requires an available init script
            mountHostNixStore = true;
            useEFIBoot = true;
          };
          # The driver for the network interface card emulated by qemu
          boot.initrd.availableKernelModules = ["e1000"];
          boot.loader.systemd-boot.enable = true;
          environment.systemPackages = [pkgs.cryptsetup];

          # Actually use our module configuration
          boot.initrd.remoteUnlock = remoteUnlockConfig;

          # Specialisations are "variations" that allow quickly switching
          # between different configurations. It's more useful in tests than
          # other scenarios, although some use cases include switching a lot
          # of configuration for notebooks when they are docked in a known
          # location, or on the go, setting display preferences, power saving
          # options, etc.
          specialisation.boot-luks.configuration = {
            boot.initrd.luks.devices = lib.mkVMOverride {
              cryptroot.device = "/dev/vdb";
              cryptroot2.device = "/dev/vdc";
            };
            virtualisation = {
              rootDevice = "/dev/mapper/cryptroot";
              fileSystems."/".autoFormat = true;
              # test mounting device unlocked in initrd after switching root
              fileSystems."/cryptroot2".device = "/dev/mapper/cryptroot2";
            };
          };
        };
        client = { pkgs, config, ... }: let
          serverIp = ...;
        in {
          environment = {
            systemPackages = with pkgs; [netcat];
            etc = {
              knownHosts.text = "machine,${serverIp} ${lib.readFile "${sshHost}/ssh_host_ed25519_key.pub"}";
              sshKey = {
                source = "${sshAccess}/id_ed25519";
                mode = "0600";
              };
            };
          };
        };
      };

      testScript = ''
        start_all()

        # Create encrypted volume
        machine.wait_for_unit("multi-user.target")
        machine.succeed("echo -n supersecret | cryptsetup luksFormat -q --iter-time=1 /dev/vdb -")
        machine.succeed("echo -n supersecret | cryptsetup luksFormat -q --iter-time=1 /dev/vdc -")
        machine.succeed("echo -n supersecret | cryptsetup luksOpen   -q               /dev/vdc cryptroot2")
        machine.succeed("mkfs.ext4 /dev/mapper/cryptroot2")

        # Boot from the encrypted disk
        machine.succeed("bootctl set-default nixos-generation-1-specialisation-boot-luks.conf")
        machine.succeed("sync")
        machine.crash()

        # Boot and decrypt the disk
        machine.start()
        machine.wait_for_console_text("Please enter passphrase for disk cryptroot")
        client.wait_until_succeed("nc -z machine 22")

        # Type the password
        client.succeed("echo supersecret | ssh machine -i ${sshAccess}/id_ed25519")
        machine.wait_for_unit("multi-user.target")

        assert "/dev/mapper/cryptroot on / type ext4" in machine.succeed("mount"),\
          "/dev/mapper/cryptroot does not appear in mountpoints list"
        assert "/dev/mapper/cryptroot2 on /cryptroot2 type ext4" in machine.succeed("mount")
      '';
    };
  };
}