I run Docker every day. It’s the backbone of my local development workflow: databases, services, APIs, the whole stack. Docker Desktop does a lot of things well, and it’s improved significantly over the years. But it also does a lot of things I don’t need. For my workflow, I just need docker run to work with minimal overhead.

OrbStack is a strong alternative, and it’s genuinely good software—fast, lightweight, and well-designed. However, it’s subscription-based. I wanted something I could own outright, with no recurring cost and less dependency on someone else’s business model.

This is why Apex exists.

What Is Apex?

Apex is a native macOS application that runs Docker containers inside a lightweight Alpine Linux VM, using Apple’s Virtualization.framework. No Electron. No QEMU. No subscription. Just a Swift app that boots a small Linux kernel, starts Docker, and aims to stay out of your way.

It has three interfaces:

  • A menu bar app for viewing running containers, starting/stopping the VM, and quick actions
  • A full GUI for container management, image browsing, volume inspection, live stats, and an interactive terminal
  • A CLI with apex start, apex stop, apex ssh, apex docker, and other expected commands

Architecture

The system has three layers: the macOS host (Swift), the VM boundary (Apple Virtualization.framework), and the guest OS (Alpine Linux with a Go agent).

graph TB
    subgraph "macOS Host"
        GUI["Apex.app<br/>(SwiftUI)"]
        CLI["apex CLI<br/>(13 commands)"]
        DAEMON["apexd<br/>(Daemon)"]
        HELPER["Privileged Helper<br/>(XPC, root ops)"]

        GUI --> DAEMON
        CLI --> DAEMON
        DAEMON --> HELPER
    end

    subgraph "VM Boundary"
        VF["Virtualization.framework"]
        VSOCK["vsock<br/>(VirtioSocket)"]
        DAEMON --> VF
        VF --> VSOCK
    end

    subgraph "Alpine Linux VM"
        AGENT["apex-agent<br/>(Go, JSON-RPC)"]
        DOCKER["dockerd +<br/>containerd"]
        NFS["NFS Server"]
        MACHINE["Machine Manager<br/>(namespaces)"]

        VSOCK --> AGENT
        AGENT --> DOCKER
        AGENT --> NFS
        AGENT --> MACHINE
    end

Each component communicates with the next through defined protocols. The GUI and CLI both talk to the daemon. The daemon manages the VM. The VM runs a Go agent that controls Docker. This separation is intended to make the system easier to test and reason about.

The vsock Decision

This is one of the most important architectural choices in Apex.

Most VM managers communicate between the host and guest over TCP. The VM gets an IP address, and the host connects to it like any other network service. This works, but can be disrupted by VPNs, network changes, or proxy reconfigurations, which may cause Docker to stop responding until things are reset.

Apex uses vsock (VirtioSocket) instead. It’s a direct communication channel between the host and guest at the hypervisor level, without depending on IP addresses, routing tables, or normal host/guest IP networking. This approach is less affected by VPN changes, network switches, or proxy reconfigurations. Even if the VM has no network connectivity, vsock can still function.

graph LR
    subgraph "Host (macOS)"
        DS["Docker Socket<br/>~/.apex/run/docker.sock"]
        PF["Port Forwarder"]
        DNS["DNS Server<br/>*.apex.local"]
        TLS["TLS Proxy<br/>HTTPS termination"]
    end

    subgraph "vsock channels"
        V1["port 52222<br/>Agent RPC"]
        V2["port 2375<br/>Docker API"]
        V3["port 2049<br/>NFS"]
        V4["port 3000+<br/>Container ports"]
    end

    subgraph "Guest (Alpine)"
        AG["apex-agent"]
        DK["dockerd"]
        NF["NFS exports"]
        CT["Containers"]
    end

    DS --> V2 --> DK
    PF --> V4 --> CT
    DNS --> V1 --> AG
    TLS --> V4
    AG --> NF

Every service (Docker API forwarding, NFS file sharing, port forwarding, agent RPC) runs over vsock on dedicated ports. The result is a container runtime that is less sensitive to host network changes.

Two Languages, One System

Apex is written in two languages, each chosen for where it runs:

Swift runs on the host. It’s a natural choice for a macOS app: native UI with SwiftUI, integration with Virtualization.framework, system APIs for DNS and keychain access, and Swift-NIO for async networking. The daemon, CLI, and GUI share a common ApexCore library for configuration, models, and IPC.

Go runs inside the VM. The apex-agent is a lightweight JSON-RPC server that listens on a vsock port. It manages Docker’s lifecycle, handles NFS exports, creates machine namespaces, and forwards ports. Go compiles to a single static binary, cross-compiles easily to linux/arm64, and has a small runtime footprint—well-suited for a minimal Alpine guest.

graph TB
    subgraph "Swift"
        CORE["ApexCore<br/>Config, Models, IPC"]
        APP["ApexApp<br/>SwiftUI GUI"]
        CLISRC["ApexCLI"]
        DMN["ApexDaemon<br/>VM lifecycle, networking"]

        CORE --> APP
        CORE --> CLISRC
        CORE --> DMN
    end

    subgraph "Go"
        AGNT["apex-agent<br/>JSON-RPC server"]
        MACH["Machine Manager<br/>Namespaces, overlay FS"]
        DOCK["Docker Control<br/>Lifecycle, config"]
        NET["Networking<br/>iptables, bridge"]

        AGNT --> MACH
        AGNT --> DOCK
        AGNT --> NET
    end

    DMN -- "vsock" --> AGNT

Protocol-Driven Design

Every major subsystem in Apex is defined by a Swift protocol:

  • VMProvider: start, stop, pause, resume a virtual machine
  • ContainerRuntime: provision, start, stop Docker
  • StorageProvider: manage disk images and file shares
  • NetworkProvider: DNS, port forwarding, proxy detection
  • Runner: execute commands (on host or in guest)

This isn’t abstraction for its own sake. Each protocol has two implementations: a real one and a mock. The real VZVMProvider uses Virtualization.framework. The mock lets me run all 38 tests without booting a VM. The real HostRunner spawns processes via Foundation. The GuestRunner dispatches commands over vsock to the agent.

Multi-Machine Architecture

Docker Desktop runs one VM. If you want isolated environments, you run separate Docker Compose stacks and hope they don’t conflict.

Apex takes a different approach. A single shared Alpine VM boots once, and inside it, the agent creates isolated machines, each with its own overlay filesystem, network namespace, veth pair, and Docker socket. One VM, multiple isolated environments, with minimal overhead.

graph TB
    subgraph "Single Alpine VM"
        AGENT["apex-agent"]

        subgraph "Machine: dev"
            M1_NS["Network Namespace"]
            M1_OV["Overlay FS"]
            M1_DK["dockerd"]
        end

        subgraph "Machine: staging"
            M2_NS["Network Namespace"]
            M2_OV["Overlay FS"]
            M2_DK["dockerd"]
        end

        subgraph "Bridge Network (10.0.0.0/8)"
            BR["br-apex"]
            BR --> M1_NS
            BR --> M2_NS
        end

        AGENT --> M1_DK
        AGENT --> M2_DK
    end

Each machine gets its own Docker daemon, filesystem layer, and network. You can switch between them with apex start --profile staging. The shared VM kernel means near-instant machine creation, since there’s no need to boot a second VM. This design is still being validated in early alpha builds against larger workloads.

File Sharing with NFS

One of the trickiest parts of running containers in a VM is making your local files available inside it. You need your project directories mounted in the guest so that Docker volume mounts actually point to real files on your Mac.

Apex uses NFS for this. The Go agent inside the VM runs an NFS server that serves shared directories to the guest through a vsock-backed transport layer.

graph LR
    subgraph "macOS"
        DOCKER_CLI["docker run -v ~/project:/app"]
        NFS_PROXY["NFS Proxy<br/>localhost:12049"]
        MOUNTD["Mountd Proxy<br/>localhost:20048"]
    end

    subgraph "vsock"
        V1["port 2049"]
        V2["port 20048"]
    end

    subgraph "Alpine VM"
        NFS_SRV["NFS Server"]
        EXPORTS["Export Manager"]
        FS["Guest Filesystem"]

        NFS_SRV --> EXPORTS
        EXPORTS --> FS
    end

    DOCKER_CLI --> NFS_PROXY
    NFS_PROXY --> V1 --> NFS_SRV
    MOUNTD --> V2 --> EXPORTS

The result is that docker run -v ~/project:/app works as expected. Your home directory is available inside the VM through NFS, and Docker volume mounts resolve correctly. The NFS proxy uses raw Darwin sockets instead of NIO to avoid thread-safety issues with high-throughput file operations, and the export manager handles per-directory projections so only the paths you actually use get exposed.

Networking That Actually Works

Beyond vsock, Apex handles several networking concerns that developers often encounter:

DNS: A custom UDP DNS server resolves *.apex.local to localhost. Run nginx in a container named web, and https://web.apex.local just works. The resolver installs to /etc/resolver/apex.local via a privileged helper, and macOS picks it up automatically.

TLS: An SNI-aware TLS termination proxy generates per-domain certificates on the fly, signed by a local root CA that gets installed in your system keychain. This avoids the need for --insecure flags or self-signed cert warnings.

Port forwarding: Apex can automatically detect and forward configured container ports, reducing manual port-mapping setup.

VPN resilience: An SCDynamicStore watcher detects network changes (VPN connect/disconnect, Wi-Fi switch, proxy reconfiguration) and re-injects the correct proxy settings into the VM. This helps containers keep working when your network changes.

Takeaways

Building Apex taught me a few things:

Virtualization.framework is solid. Apple doesn’t get enough credit for this API. It’s clean, well-documented, and performant. Booting a full Linux VM takes under 3 seconds on an M-series Mac in my experience. The vsock support is robust. If you’re building anything that needs a Linux environment on macOS, it’s worth considering.

JSON-RPC over vsock was a good choice. I considered gRPC for the host-agent protocol. It would have meant protobuf code generation for both Swift and Go, a more complex build pipeline, and harder debugging. JSON-RPC with length-prefixed frames is simple: 4 bytes for the message length, then JSON. I can hexdump the vsock traffic and read it.

Graceful degradation matters. Apex’s startup has critical steps and non-critical steps. If the core boot (VM, agent, Docker) succeeds, the system is usable. DNS, TLS proxy, health monitoring, and network watchers all layer on top. If any of them fail, the core still works. Users care most about docker run working.

What’s Next

Apex is currently in active development! The project is evolving rapidly as I refine features, improve stability, and gather feedback from early testers.

Some features currently in consideration:

  • Portable machines
  • Kubernetes
  • Importing/Exporting config

If you’re interested in following Apex’s progress or want to be notified when it’s available for broader testing or release, I encourage you to sign up for updates. Thanks for sticking around!