Core Concepts

niso's architecture is built on three ideas: packages instead of images, systemd as the runtime, and dependency-first packaging with full process isolation.

Packages, not images

A Docker image is a layered filesystem with an OS base, system libraries, runtime, and your application — often 500 MB to 2 GB. A niso package is a compressed archive containing your application and all its dependencies.

The package format is tar+zstd with an Ed25519 signature. There are no layers, no base images, no Dockerfile. The manifest.toml inside the package declares everything: binary entrypoint, runtime, isolation, networking, and volumes.
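This page doesn't spell out the manifest's full schema; the sketch below is a hedged illustration of what such a file could look like. Only the `[runtime]` and `[copy]` sections appear elsewhere in this document — the `[package]`, `[network]`, and `[volumes]` names are hypothetical stand-ins for the fields the manifest is said to declare.

```toml
# Illustrative manifest.toml sketch — only [runtime] and [copy] are
# taken from this page; the other section/key names are hypothetical.
[package]
name = "my-api"
version = "1.0.0"

[runtime]
use = "nodejs:20"        # interpreter installed once on the host

[copy]
include = ["server.js", "package.json", "node_modules/"]

[network]
port = 8080              # hypothetical: declared service port

[volumes]
data = "/data"           # hypothetical: persistent volume mount point
```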

What goes in a package

A niso package contains everything your application needs to run — your code, its dependencies, and its configuration. The only thing not bundled is the language runtime itself (Node.js, Python, Ruby), which is installed once on the host.

```
Docker image                          niso package
------------                          ------------
Ubuntu 22.04 ......... 78 MB          Your app code ........ 2 MB
apt packages ......... 120 MB         node_modules ......... 8 MB
Node.js 20 ........... 180 MB         manifest.toml ........ 1 KB
node_modules ......... 90 MB
Your app code ........ 2 MB           Package: ~10 MB
                                      + Node.js runtime on host
Total: ~470 MB                          (18 MB, installed once)
                                      + No OS layer needed
```

Dependencies are always packaged
Your node_modules/, pip packages, Ruby gems, Go vendor directory — these all ship inside the .niso package. Only the runtime interpreter itself (the node, python3, ruby binary) lives on the host. Compiled languages (Rust, Go, C++) need no runtime at all — the binary is self-contained.

systemd is the runtime

Docker runs a daemon (dockerd) that manages container lifecycles. If the daemon crashes, all containers stop. niso has no daemon.

When you run niso activate, niso generates a systemd unit file and tells systemd to start it. systemd — the process manager already running on every Linux server — handles the rest: restarts, logging, resource limits, boot ordering.
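The generated unit itself is not shown in this document; a plausible sketch, assembled from the systemd directives this page mentions (RootDirectory, DynamicUser, PrivateTmp, and so on) — the file name, paths, and ExecStart line are assumptions, not the exact unit niso emits:

```ini
# /etc/systemd/system/niso-my-api.service — illustrative sketch only
[Unit]
Description=niso service my-api

[Service]
ExecStart=/usr/local/bin/node /app/server.js
RootDirectory=/opt/niso/rootfs/my-api    # hypothetical rootfs path
DynamicUser=yes
PrivateTmp=yes
PrivateDevices=yes
ProtectSystem=strict
ProtectHome=yes
Restart=on-failure

[Install]
WantedBy=multi-user.target
```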

Your application runs as a regular systemd service. You can manage it with niso status or directly with systemctl and journalctl.

```bash
# These are equivalent:
$ niso status my-api
$ systemctl status niso-my-api

$ niso logs my-api --follow
$ journalctl -u niso-my-api -f
```

Runtime deduplication

In Docker, every image bundles its own copy of the runtime. Running three Node.js apps means three copies of Node.js — ~540 MB of redundant data on disk.

niso separates your code + dependencies (packaged) from the runtime interpreter (installed once on the host). When you declare use = "nodejs:20", niso installs Node.js 20 once at /opt/niso/runtimes/nodejs/20/ and bind-mounts it into each service's isolated rootfs at activation time.

manifest.toml

```toml
[runtime]
use = "nodejs:20"    # Node.js binary installed once on host
                     # Your node_modules ship inside the package

[copy]
include = ["server.js", "package.json", "node_modules/"]
```

The process itself sees a normal filesystem — it has no idea the runtime was bind-mounted from the host. Full isolation is maintained.

Compiled languages skip the runtime entirely
Rust, Go, and C++ binaries are self-contained. No [runtime] section needed. The package contains just the binary and the manifest — often under 10 MB.

Assembled rootfs and full isolation

At activation time, niso assembles a complete, isolated filesystem for the service. The process runs inside this rootfs and cannot see or access the host filesystem.

Assembled rootfs for niso-my-api:

```
/
├── app/                      ← Your code + node_modules (from package)
│   ├── server.js
│   ├── package.json
│   └── node_modules/         ← Dependencies shipped in .niso package
├── usr/local/bin/node        ← Runtime binary (bind-mounted from host)
├── data/                     ← Persistent volume (bind-mounted)
├── tmp/                      ← Private tmpfs
└── etc/
    └── hosts                 ← DNS for service discovery
```

The host filesystem is invisible. The process thinks this IS the entire system.

Every component is isolated by systemd:

  • RootDirectory — pivot_root into the assembled rootfs, host invisible
  • PrivateTmp — isolated /tmp per service
  • PrivateDevices — no access to physical devices
  • ProtectSystem=strict — rootfs is read-only
  • ProtectHome — /home is inaccessible

Even though the Node.js runtime lives at /opt/niso/runtimes/ on the host, the process cannot access that path. It only sees /usr/local/bin/node inside its own rootfs.

Isolation model

niso provides Docker-equivalent isolation using Linux kernel primitives, all configured via systemd directives:

| Layer        | Docker            | niso                                 |
|--------------|-------------------|--------------------------------------|
| Process      | PID namespace     | PID namespace (PrivateUsers)         |
| Filesystem   | overlayfs layers  | Assembled rootfs (RootDirectory)     |
| Network      | veth + iptables   | veth + nftables (kernel DNAT)        |
| Resources    | cgroups v1/v2     | cgroups v2 (MemoryMax, CPUQuota)     |
| Syscalls     | seccomp profile   | seccomp profile (@system-service)    |
| Capabilities | Drop most         | Drop all (zero by default)           |
| Users        | root in container | DynamicUser (unique UID per service) |
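Each entry in the niso column is an ordinary systemd directive; a hedged fragment showing how those rows could appear inside a generated unit — the limits here are illustrative values, not niso defaults:

```ini
[Service]
MemoryMax=512M                     # cgroups v2 memory ceiling
CPUQuota=50%                       # cgroups v2 CPU limit
SystemCallFilter=@system-service   # seccomp allow-list
CapabilityBoundingSet=             # empty set: drop all capabilities
DynamicUser=yes                    # unique unprivileged UID per service
PrivateUsers=yes                   # user-namespace isolation
```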

Package lifecycle

A package goes through these stages:

  1. Pack — niso pack creates a .niso archive from your binary/code, dependencies, and manifest.
  2. Distribute — niso push uploads to a registry; niso pull downloads it.
  3. Install — niso install extracts the archive to /opt/niso/packages/name/version/.
  4. Activate — niso activate assembles the rootfs (code + deps + runtime + volumes), generates a systemd unit with full isolation, sets up networking, and starts the service.
  5. Rollback — niso rollback switches to the previous version instantly.
```
/opt/niso/packages/my-api/
├── 1.0.0/              # Extracted package
│   ├── manifest.toml
│   ├── bin/my-api      # or app code + node_modules
│   └── ...
├── 1.1.0/
│   ├── manifest.toml
│   └── ...
├── current → 1.1.0     # Active version
└── previous → 1.0.0    # Rollback target
```
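The current/previous links suggest that switching versions is a symlink flip; a minimal shell sketch of that mechanism, runnable anywhere — an illustration of the idea, not necessarily niso's exact implementation:

```bash
# Illustrative sketch of symlink-based version switching
mkdir -p /tmp/pkgdemo/my-api/1.0.0 /tmp/pkgdemo/my-api/1.1.0
cd /tmp/pkgdemo/my-api
ln -sfn 1.1.0 current                     # activate 1.1.0
ln -sfn 1.0.0 previous                    # remember the rollback target
ln -sfn "$(readlink previous)" current    # rollback: re-point "current"
readlink current
# prints: 1.0.0
```

Because the symlink is replaced in a single rename, readers of `current` never observe a half-switched state — which is what makes the rollback instant.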

State database

niso stores all state in a single SQLite database at /opt/niso/state/niso.db. This tracks installed packages, active versions, volumes, networks, and port mappings. Zero configuration, ACID transactions, no external database required.
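The database schema is not documented here; the sketch below uses a hypothetical `packages` table purely to illustrate the single-file, transactional model — the table and column names are invented, not niso's actual schema:

```bash
# Hypothetical schema — illustrative only, not niso's actual tables
db=/tmp/niso-demo.db
rm -f "$db"
sqlite3 "$db" "CREATE TABLE packages (name TEXT, version TEXT, active INTEGER);"
sqlite3 "$db" "BEGIN; INSERT INTO packages VALUES ('my-api','1.1.0',1); COMMIT;"
sqlite3 "$db" "SELECT version FROM packages WHERE name = 'my-api' AND active = 1;"
# prints: 1.1.0
```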

Configuration files

niso uses three configuration files:

  • manifest.toml — per-package configuration (binary, runtime, isolation, networking, volumes)
  • stack.toml — multi-service composition (like docker-compose.yml)
  • fleet.toml — multi-host deployment configuration

All configuration is TOML. There is no Dockerfile, no docker-compose.yml, no YAML.