Introduction

Mesh Hypervisor is a streamlined system for managing diskless Linux nodes over a network. Running on Alpine Linux, it lets you boot and configure remote nodes from a single flash drive on a central orchestration node, using PXE and custom APKOVLs. Whether you’re spinning up KVM virtual machines or running bare-metal tasks, Mesh Hypervisor delivers fast, deterministic control—perfect for home labs, edge setups, or clusters.

With Mesh Hypervisor, you get a lightweight, CLI-driven toolkit: deploy nodes in minutes, tweak configs in /host0/, and connect them with VXLAN meshes. It’s built for sysadmins who want flexibility—add storage, tune networks, or script custom setups—all from one hub. This guide walks you through setup, usage, and advanced tweaks to make your cluster hum.

Mesh Hypervisor is an MVP—still growing, but rock-solid for what it does. Start with Getting Started to launch your first node.

Key features include:

  • Centralized Control: A single orchestration node manages all remote nodes via a flash drive image.
  • Network Resilience: Automatic network scanning and proxy DHCP ensure adaptability to complex topologies.
  • Deterministic and Individually Addressable Configurations: Hardware-derived IDs give every node a consistent, reproducible, individually targetable setup.
  • Workload Support: KVM virtual machines (workloads) run on remote nodes, extensible to other formats.
  • VXLAN Networking: Mesh networking with IPv6 for isolated, scalable communication.

Who is it For?

Mesh Hypervisor is designed for Linux system administrators who need a lightweight, distributed hypervisor solution. It assumes familiarity with Linux primitives, networking concepts (e.g., DHCP, PXE), and CLI tools. If you’re comfortable managing servers via SSH and crafting configuration files, Mesh Hypervisor provides the tools to build a robust virtualization cluster with minimal overhead.

This guide will walk you through setup, usage, and advanced operations, starting with Getting Started.

Getting Started

New to Mesh Hypervisor? This section gets you from zero to a running cluster. Check hardware needs, flash the central node, and boot your first remote node—all in a few steps. Dive in with Prerequisites.

Prerequisites

This section outlines the requirements for deploying Mesh Hypervisor on your systems. Mesh Hypervisor is a distributed hypervisor built on Alpine Linux, designed for diskless operation via PXE booting. Ensure your environment meets these hardware, software, and network specifications before proceeding to Installation.

Hardware Requirements

  • Central Orchestration Node:

    • Architecture: x86_64.
    • CPU: Minimum 2 cores.
    • RAM: Minimum 2 GB (4 GB recommended).
    • Storage: USB flash drive (at least 8 GB) for the boot image; no local disk required.
    • Network: At least one Ethernet port.
  • Remote Nodes:

    • Architecture: x86_64.
    • CPU: Minimum 2 cores.
    • RAM: Minimum 2 GB (4 GB recommended).
    • Storage: Optional local disks for workloads; operates disklessly by default.
    • Network: At least one Ethernet port, configured for network booting (PXE).

Mesh Hypervisor runs lightweight Alpine Linux, so resource demands are minimal. The central node can be an old laptop or embedded device, while remote nodes must support PXE in their BIOS/UEFI.

Software Requirements

  • Familiarity with Linux system administration, including:
    • Command-line operations (e.g., SSH, basic file editing).
    • Alpine Linux package management with apk (helpful but not mandatory).
  • A system to prepare the flash drive image (e.g., a Linux host with dd or similar tools).

No additional software is pre-installed on target systems; Mesh Hypervisor provides all necessary components via the orchestration node.

Network Requirements

  • A physical Ethernet network connecting the central node and remote nodes.
  • No pre-existing PXE server on the network (Mesh Hypervisor will host its own).
  • Optional DHCP server:
    • If present, Mesh Hypervisor will proxy it.
    • If absent, Mesh Hypervisor will provide DHCP on detected subnets.
  • Sufficient IP address space for automatic subnet assignment (e.g., from 10.11.0.0/16 or similar pools).

Mesh Hypervisor scans networks using ARP to detect topology and avoid conflicts, so no specific topology is assumed. Ensure remote nodes can reach the central node over Ethernet.

Knowledge Prerequisites

Mesh Hypervisor targets Linux sysadmins comfortable with:

  • Network booting (PXE) concepts.
  • Editing configuration files (e.g., manifest files, network configs).
  • Debugging via logs and CLI tools.

If you can set up a PXE-booted Linux server and manage it via SSH, you’re ready for Mesh Hypervisor.

Proceed to Installation once these requirements are met.

Installation

This section covers installing Mesh Hypervisor by preparing and booting the central orchestration node from a flash drive image. Mesh Hypervisor uses a prebuilt Alpine Linux image that includes the orchestration software and a package mirror for remote nodes. Remote nodes connect via PXE once the central node is running. Confirm you’ve met the Prerequisites before proceeding.

Step 1: Download the Image

The Mesh Hypervisor image is available as a single file: fragmentation.latest.img.

  • Download it from: https://fragmentation.dev/fragmentation.latest.img.
  • Size: Approximately 7 GB (includes a prebuilt package mirror for MVP simplicity).

No checksum is provided currently—verify the download completes without errors.
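
If you are downloading from a Linux shell, wget with resume support is a convenient way to fetch the image and retry an interrupted transfer (the URL is the one listed above):

wget -c https://fragmentation.dev/fragmentation.latest.img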

Step 2: Write the Image to a Flash Drive

Use a USB flash drive with at least 8 GB capacity. This process overwrites all data on the device.

On a Linux system:

  1. Identify the flash drive’s device path (e.g., /dev/sdb):
    lsblk
    
    Look for the USB device by size; avoid writing to your system disk (e.g., /dev/sda).
  2. Write the image:
    sudo dd if=fragmentation.latest.img of=/dev/sdb bs=4M status=progress
    
    Replace /dev/sdb with your device path.
  3. Sync the write operation:
    sync
    
  4. Safely eject the drive:
    sudo eject /dev/sdb
    

For Windows or macOS, use tools like Rufus or Etcher—refer to their documentation.

Step 3: Boot the Orchestration Node

  1. Plug the flash drive into the system designated as the central orchestration node.
  2. Access the BIOS/UEFI (e.g., press F2, Del, or similar during boot).
  3. Set the USB drive as the first boot device.
  4. Save and reboot.

The system boots Alpine Linux and starts Mesh Hypervisor services automatically—no input needed.

Step 4: Verify the Node is Running

After booting, log in at the console:

  • Username: root
  • Password: toor

Run this command to check status:

mesh system logview

This opens logs in lnav. If you see DHCP and PXE activity, the node is up and serving remote nodes.

Next Steps

The orchestration node is now active. Connect remote nodes and deploy workloads in Quick Start.

Quick Start

This section guides you through booting a remote node and running a workload with Mesh Hypervisor after installing the orchestration node (see Installation). Using the default configuration, you’ll boot a remote node via PXE and start a KVM workload with VNC access.

Step 1: Connect a Remote Node

  1. Verify the orchestration node is active:
    mesh system logview
    
    Look for DHCP and PXE activity in the logs.
  2. Connect a remote node to the same Ethernet network as the orchestration node.
  3. Power on the remote node and enter its BIOS/UEFI:
    • Set network/PXE as the first boot device.
    • Save and reboot.

The remote node pulls its kernel and initramfs from the orchestration node, then boots Alpine Linux with a default APKOVL. A unique 8-character UUID is generated during boot using genid machine 8, based on hardware DMI data.
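
For intuition, the UUID is derived purely from hardware identity (Step 2 below notes it is the first 8 characters of a SHA-512 hash of the node's DMI modalias). A rough shell equivalent is sketched here; the exact sysfs path and any normalization genid applies are assumptions, so use genid machine 8 itself rather than this one-liner:

# Illustrative only: approximate the 8-character hardware ID.
cat /sys/class/dmi/id/modalias | sha512sum | cut -c1-8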

Step 2: Verify Remote Node Boot

On the orchestration node’s console (logged in as root/toor):

  1. List online nodes:
    mesh node info
    
    This shows each node’s UUID (e.g., a1b2c3d4). The UUID is the first 8 characters of a SHA-512 hash of the node’s DMI modalias.
  2. Note the UUID of your remote node.

If no nodes appear, check logs (mesh system logview) for DHCP requests or HTTP downloads (kernel, initramfs). Ensure the remote node’s network port is connected.

Step 3: Start a Workload

Mesh Hypervisor includes a default KVM workload configuration named qemutest1. To launch it:

  1. Use the remote node’s UUID from Step 2 (e.g., a1b2c3d4).
  2. Run:
    mesh workload start -n a1b2c3d4 -w qemutest1
    
    • -n: Node UUID.
    • -w qemutest1: Workload name (preconfigured in /host0/machines/default).

This starts a KVM virtual machine with 500 MB RAM, 2 CPU threads, and an Alpine ISO (alpine-virt-3.21.3-x86_64.iso) for installation.

Step 4: Access the Workload

The qemutest1 workload uses VNC for console access:

  1. Identify the remote node’s IP with
    mesh node info
    
  2. From any system with a VNC client (e.g., vncviewer or TigerVNC):
    vncviewer 192.168.x.y:5905
    
    • Port 5905 is derived from runtime.console.id=5 (5900 + 5).
  3. The VNC session displays the Alpine installer running in the workload.

Next Steps

You’ve booted a remote node and accessed a workload via VNC. See Usage for managing nodes, customizing workloads, or configuring networks.

Core Concepts

Mesh Hypervisor is built on a few key ideas: a central node, diskless remotes, VXLAN networking, and workloads. This section explains how they fit together—start with Architecture Overview.

Architecture Overview

Mesh Hypervisor is a distributed hypervisor based on Alpine Linux, designed to manage a cluster of diskless servers for virtualization. It uses a single central orchestration node to boot and configure remote nodes over a network. This section provides a high-level view of the system’s structure and primary components.

System Structure

Mesh Hypervisor consists of two main parts:

  • Central Orchestration Node: A system booted from a flash drive, hosting all control services and configuration data.
  • Remote Nodes: Diskless servers that boot from the central node and run workloads (currently KVM virtual machines).

The central node drives the system, distributing boot images and configurations to remote nodes, which operate as a unified cluster.

graph TD
    A[Central Orchestration Node<br>Flash Drive] -->|Boots| B[Remote Node 1]
    A -->|Boots| C[Remote Node 2]
    A -->|Boots| D[Remote Node N]
    B -->|Runs| E[Workload: KVM]
    C -->|Runs| F[Workload: KVM]
    D -->|Runs| G[Workload: KVM]

Core Components

  • Flash Drive Image: The central node runs from a 7 GB Alpine Linux image, embedding Mesh Hypervisor software and a package mirror.
  • Network Booting: Remote nodes boot via PXE, fetching their OS from the central node without local storage.
  • Configurations: Node-specific setups are delivered as Alpine APKOVLs, tied to hardware UUIDs for consistency.
  • Workloads: KVM virtual machines execute on remote nodes, managed via CLI from the central node.

Basic Flow

  1. The central node boots from the flash drive and starts network services.
  2. Remote nodes boot over the network, using configurations from the central node.
  3. Workloads launch on remote nodes as directed by the central node.

Design Notes

Mesh Hypervisor emphasizes minimalism and determinism:

  • Diskless remote nodes reduce hardware dependencies.
  • A single control point simplifies management.
  • Networking adapts to detected topology (details in Networking).

For deeper dives, see Central Orchestration Node, Remote Nodes, Networking, and Workloads.

Central Orchestration Node

The central orchestration node is the backbone of Mesh Hypervisor. It boots from a flash drive and delivers the services required to initialize and manage remote nodes in the cluster. This section outlines its core functions and mechanics.

Role

The central node handles:

  • Boot Services: Provides PXE to start remote nodes over the network.
  • Configuration Delivery: Distributes Alpine APKOVLs for node setups.
  • System Control: Runs the mesh CLI tool to manage the cluster.

It operates as an Alpine Linux system with Mesh Hypervisor software preinstalled, serving as the single point of control.

Operation

  1. Startup: Boots from the flash drive, launching Mesh Hypervisor services.
  2. Network Scanning: Uses ARP to detect connected Ethernet ports and map network topology dynamically.
  3. Service Deployment:
    • TFTP server delivers kernel and initramfs for PXE booting.
    • DHCP server (or proxy) assigns IPs to remote nodes.
    • HTTP server hosts APKOVLs and a package mirror.
  4. Control: Executes mesh commands from the console to oversee nodes and workloads.

Diagram

graph LR
    A[Central Node<br>Flash Drive] -->|TFTP: PXE| B[Remote Nodes]
    A -->|HTTP: APKOVLs| B
    A -->|DHCP| B

Key Features

  • Flash Drive Storage: Stores configs and data in /host0—back this up for recovery.
  • ARP Scanning: Sequentially sends ARP packets across ports, listening for replies to identify network connections (e.g., same switch detection).
  • Package Mirror: Hosts an offline Alpine package repository for remote nodes, ensuring consistent boots without internet.
  • Network Flexibility: Starts DHCP on networks lacking it, proxies existing DHCP elsewhere.

Configuration

Core settings live in /host0 on the flash drive, including:

  • Subnet pools for DHCP (e.g., 10.11.0.0/16).
  • Default package lists for the mirror.
  • Network configs for topology adaptation.

See Configuration Reference for details.

Notes

The central node requires no local disk—just an Ethernet port and enough RAM/CPU to run Alpine (see Prerequisites). It’s built for plug-and-play operation. Setup steps are in Installation.

Next, see Remote Nodes for how they rely on the central node.

Remote Nodes

Remote nodes are the operational units of Mesh Hypervisor, executing tasks in the cluster. They boot from the central orchestration node over the network and typically operate disklessly. This section outlines their role and mechanics.

Role

Remote nodes handle:

  • Workload Execution: Run KVM virtual machines (workloads) using configurations from the central node.
  • Bare-Metal Tasks: Can execute scripts or applications directly (e.g., HPC setups like MPI or Slurm) via prebuilt configuration groups.

They depend on the central node for booting and initial setup, offering versatility in deployment.

Operation

  1. Boot: Initiates via PXE, pulling kernel and initramfs from the central node’s TFTP server.
  2. Identification: Generates an 8-character UUID from hardware data.
  3. Configuration: Requests an APKOVL from the central node’s HTTP server:
    • Fetches a custom APKOVL matching the UUID if available.
    • Uses a default APKOVL for unrecognized nodes.
  4. Runtime: Boots Alpine Linux, joins the cluster, and runs workloads or bare-metal tasks.

Diagram

sequenceDiagram
    participant R as Remote Node
    participant C as Central Node
    R->>C: PXE Boot Request
    C-->>R: Kernel + Initramfs (TFTP)
    Note over R: Generate UUID
    R->>C: Request APKOVL (HTTP)
    C-->>R: APKOVL (Default or Custom)
    Note over R: Boot Alpine + Run Tasks

Key Features

  • Flexible Execution: Supports KVM workloads or direct application runs (e.g., MPI groups preinstalled via /host0/machines/).
  • UUID-Based Identity: Uses genid machine 8 for a deterministic UUID, linking configs consistently.
  • APKOVLs: Delivers node-specific overlays (files, scripts, permissions) from the central node.
  • Storage Option: Boots and runs the OS without local storage; local disks are optional for workloads or data.

Notes

Remote nodes need PXE-capable hardware and minimal resources (x86_64, 2+ GB RAM—see Prerequisites). Without local storage, they’re stateless, rebooting fresh each time. Local disks, if present, don’t affect the boot process or OS—only workload or task storage. See Workloads for VM details and Networking for connectivity.

Next, explore Networking for cluster communication.

Networking

Networking in Mesh Hypervisor enables remote nodes to boot from the central orchestration node and connect as a cluster. It uses PXE for booting and VXLAN for flexible, overlapping node networks. This section outlines the key networking elements.

Boot Networking

Remote nodes boot over Ethernet via PXE:

  • Central Node Services:
    • TFTP: Serves kernel and initramfs.
    • DHCP: Assigns IPs, proxying existing servers or providing its own.
  • Detection: ARP scanning maps network topology by sending packets across ports and tracking responses.

This ensures nodes boot reliably across varied network setups.

Cluster Networking

Post-boot, nodes join VXLAN-based IPv6 networks:

  • VXLAN Meshes: Virtual Layer 2 networks link nodes, with a default mesh for all and custom meshes configurable.
  • Membership: Nodes can belong to multiple meshes, defined in /host0/network/.
  • Addressing: Deterministic IPv6 addresses are derived from node UUIDs.

Workloads on nodes can attach to these meshes for communication.

Diagram

graph TD
    C[Central Node]
    subgraph Mesh1[Default IPv6 Mesh]
        subgraph R1[Remote Node 1]
            W1a[Workload 1a]
            W1b[Workload 1b]
        end
        subgraph R2[Remote Node 2]
            W2a[Workload 2a]
            W2b[Workload 2b]
        end
    end
    subgraph Mesh2[Custom IPv6 Mesh]
        subgraph R2[Remote Node 2]
            W2a[Workload 2a]
            W2b[Workload 2b]
        end
        subgraph R3[Remote Node 3]
            W3a[Workload 3a]
            W3b[Workload 3b]
        end
    end
    C -->|Manages| Mesh1
    C -->|Manages| Mesh2

Key Features

  • ARP Scanning: Adapts to topology (e.g., multiple ports to one switch) dynamically.
  • IPv6: Powers VXLAN meshes with UUID-based addresses, avoiding collisions.
  • Multi-Mesh: Nodes can join multiple VXLAN networks as needed.
  • Time Sync: Chrony aligns clocks to the central node for consistent state reporting.

Notes

Mesh Hypervisor requires Ethernet for PXE (see Prerequisites). VXLAN meshes use IPv6 over this base network. Custom setups are detailed in Network Configuration.

Next, see Workloads for what runs on the nodes.

Workloads

Workloads in Mesh Hypervisor are the tasks executed on remote nodes. They include KVM virtual machines and bare-metal applications, managed via configurations from the central orchestration node. This section explains their role and operation.

Role

Workloads enable Mesh Hypervisor’s flexibility:

  • KVM Virtual Machines: Virtualized instances (e.g., Alpine VMs) running on remote nodes.
  • Bare-Metal Tasks: Direct execution of scripts or applications (e.g., MPI for HPC) on node OS.

Both types leverage the same boot and config system, tailored to node needs.

Operation

  1. Configuration: Defined in files under /host0/machines/ on the central node, distributed as APKOVLs.
  2. Deployment: Launched via mesh workload commands (VMs) or preinstalled groups (bare-metal).
  3. Execution: Runs on remote nodes, accessing node resources (RAM, CPU, optional disks).
  4. Access: VMs offer VNC/serial consoles; bare-metal tasks use node-level tools (e.g., SSH).

Diagram

graph TD
    C[Central Node<br>/host0/machines/]
    R[Remote Node]
    C -->|APKOVL| R
    R -->|KVM| V[VM Workload<br>e.g., Alpine VM]
    R -->|Bare-Metal| B[Task<br>e.g., MPI Process]
    V -->|VNC/Serial| A[Access]

Key Features

  • KVM Support: VMs use QEMU/KVM with configurable RAM, CPU, and disk images (e.g., QCOW2, ISO).
  • Bare-Metal Groups: Prebuilt scripts or apps (e.g., MPI, Slurm) run directly on Alpine, no virtualization.
  • Config Files: Specify VM settings (e.g., platform.memory.size) or bare-metal files/permissions.
  • Determinism: UUID-based APKOVLs ensure consistent deployment across reboots.

Notes

Workloads require remote nodes to be booted (see Remote Nodes). KVM VMs are the default focus, with bare-metal as an alternative for specialized use cases. Local storage is optional for VM disks or task data—see Prerequisites. Full setup details are in Configuring Workloads.

This concludes the Core Concepts. Next, explore Usage.

Usage

Ready to run Mesh Hypervisor? This section shows you how: manage nodes, configure workloads, tweak networks—all via the mesh CLI. Kick off with The mesh Command.

The mesh Command

The mesh command is the primary CLI tool for managing Mesh Hypervisor. It runs on the central orchestration node, controlling the system, nodes, and workloads. This section explains its structure and basic usage.

Overview

mesh is a wrapper script grouping commands into three categories: system, node, and workload. Each category has subcommands for specific tasks, executed from the central node’s console.

Usage

Run mesh <category> <action> [options] from the console. Full syntax:

Usage: mesh <category> <action> [options]

Categories and Actions:
  system:
    stop                 Stop the system
    configure            Configure system settings
    start                Start the system
    logview              View system logs
    download             Download system components

  node:
    info                 Display node information
    ctl                  -n <node> [command]

  workload:
    list                 [-n <node>]
    start                -n <node> -w <workload>
    hard-stop            -n <node> -w <workload>
    soft-stop            -n <node> -w <workload>
    pause                -n <node> -w <workload>
    resume               -n <node> -w <workload>
    download             -n <node> -w <workload> [-f]
    createimg            -n <node> -w <workload> [-f]
    snapshot-take        -n <node> -w <workload> [-s <snapshot name>]
    snapshot-list        -n <node> -w <workload>
    snapshot-revert      -n <node> -w <workload> -s <snapshot name>
    snapshot-delete      -n <node> -w <workload> -s <snapshot name>

Options:
  -n, --node <uuid>             Node UUID
  -w, --workload <uuid>         Workload UUID
  -s, --snapshot-name <string>  Snapshot name
  -f, --force                   Force the operation
  -V, --version                 Show version information
  -h, --help                    Show this help message

Key Commands

  • System:
    • mesh system start: Launches PXE, DHCP, and HTTP services.
    • mesh system logview: Opens logs in lnav for debugging.
  • Node:
    • mesh node info: Lists online nodes with UUIDs.
    • mesh node ctl -n <uuid>: Runs a shell command (e.g., apk upgrade) or logs in via SSH.
  • Workload:
    • mesh workload start -n <uuid> -w <name>: Starts a KVM VM.
    • mesh workload list -n <uuid>: Shows running workloads on a node.

Notes

  • Commands execute over SSH for remote nodes, using preinstalled root keys (configurable in /host0).
  • Node UUIDs come from mesh node info; workload names match config files (e.g., qemutest1).
  • See subsequent sections for specifics: Managing Nodes, Configuring Workloads.

Next, explore Managing Nodes.

Managing Nodes

Remote nodes in Mesh Hypervisor are managed from the central orchestration node using the mesh node commands. This section covers how to list nodes, control them, and handle basic operations.

Prerequisites

  • Central node is running (mesh system start executed).
  • Remote nodes are booted via PXE (see Quick Start).

Commands run from the central node’s console.

Listing Nodes

To see online nodes:

mesh node info

Output shows each node’s UUID (e.g., a1b2c3d4), generated from hardware data. Use these UUIDs for other commands.

If no nodes appear, check logs with mesh system logview for PXE or DHCP issues.

Controlling Nodes

The mesh node ctl command interacts with a specific node:

mesh node ctl -n <uuid> [command]
  • -n <uuid>: Targets a node by its UUID from mesh node info.

Examples

  • Run a Command:

    mesh node ctl -n a1b2c3d4 "apk upgrade"
    

    Updates packages on the node, pulling from the central node’s mirror.

  • Shell Access:

    mesh node ctl -n a1b2c3d4
    

    Opens an SSH session as root to the node. Exit with Ctrl+D or logout.

  • Reboot:

    mesh node ctl -n a1b2c3d4 "reboot"
    

    Restarts the node, triggering a fresh PXE boot.

Batch Operations

For all nodes at once, use the all keyword:

mesh node ctl -n all "apk upgrade"

Executes sequentially across all online nodes.

Notes

  • SSH uses preinstalled root keys from /host0—configurable if needed.
  • Nodes are stateless without local storage; reboots reset to their APKOVL config.
  • Local storage (if present) persists data but doesn’t affect boot (see Remote Nodes).
  • Full node config details are in Manifest Files.

Next, see Configuring Workloads for running tasks on nodes.

Configuring Workloads

Workloads in Mesh Hypervisor are KVM virtual machines running on remote nodes, configured and managed from the central orchestration node. This section covers creating and controlling KVM workloads using config files and mesh workload commands. The orchestration node is CLI-only (no GUI); access VMs via tools like VNC from another machine. For bare-metal tasks, see Configuring Nodes.

Prerequisites

  • Remote nodes are online (check with mesh node info).
  • Central node is running (mesh system start executed).

Commands and edits run from the central node’s console.

Creating a Workload Config

KVM workload configs are stored in /var/pxen/monoliths/ on the central node. Each file defines one VM and is uploaded to the target node when the workload is started.

Example: Create /var/pxen/monoliths/qemutest1.conf:

name=qemutest1
uuid=qweflmqwe23
platform.volume.1.source=http://host0/isos/alpine-virt-3.21.3-x86_64.iso
platform.volume.1.type=iso
platform.volume.1.path=/tmp/alpine.iso
platform.memory.size=500M
platform.cpu.threads=2
platform.cpu.mode=Penryn-v1
runtime.console.type=vnc
runtime.console.id=5
  • Sets up a VM with an Alpine ISO, 500 MB RAM, 2 threads, and VNC access.

Managing Workloads

Use mesh workload with a node UUID (from mesh node info) and workload name matching the config file (e.g., qemutest1).

Commands

  • Download Resources:

    mesh workload download -n a1b2c3d4 -w qemutest1
    

    Fetches the ISO to /tmp/alpine.iso on the node. Add -f to force redownload.

  • Start:

    mesh workload start -n a1b2c3d4 -w qemutest1
    

    Uploads qemutest1.conf to the node and launches the VM.

  • Access:

    vncviewer <node-ip>:5905
    

    From a separate machine with a VNC viewer (e.g., TigerVNC), connect to the console (port 5900 + id=5). Find <node-ip> in mesh system logview.

  • Stop:

    mesh workload soft-stop -n a1b2c3d4 -w qemutest1
    

    Gracefully stops the VM. Use hard-stop to force it.

  • List:

    mesh workload list -n a1b2c3d4
    

    Lists running workloads on the node.

Notes

  • Configs are uploaded on start and stored in /var/pxen/monoliths/ on the node afterward.
  • Workloads need node resources (RAM, CPU); check Prerequisites.
  • Snapshot commands (e.g., snapshot-take) are KVM-only—see The mesh Command.
  • Full syntax is in Workload Config.

Next, see Configuring Nodes or Network Configuration.

Configuring Nodes

In Mesh Hypervisor, remote nodes are configured to run bare-metal tasks directly on their Alpine OS using machine folders and group folders in /host0/. Machine folders target specific nodes with custom settings, while group folders provide reusable configurations across multiple nodes. These are combined into node-specific APKOVLs by the compile-apkovl script, applied during PXE boot. This section walks through simple setup steps, run from the central orchestration node’s CLI. For KVM workloads, see Configuring Workloads.

Prerequisites

Before starting, ensure:

  • Remote nodes are online (check with mesh node info).
  • Central node is running (mesh system start executed).

All actions occur on the central node’s console.

Understanding Node Configuration

Node configurations are stored in /host0/ subdirectories. Here are the basics:

  • Machine Folders: In /host0/machines/ with user-chosen names (e.g., my-server, default). Each ties to a node via a uuid file.
  • Group Folders: In /host0/groups/ (e.g., timezone-est, baseline). These are shared setups applied to machines.
  • Compilation: The mesh system configure command merges group and machine settings into an APKOVL, stored in /srv/pxen/http/ for nodes to fetch.

Files in Machine and Group Folders

Both folder types can include:

  • manifest (Required): Defines files to install or modify on the node.
  • packages (Optional): Lists Alpine packages (e.g., chrony) to add to the node’s /etc/apk/world.

Machine folders also use:

  • uuid (Required): Holds the node’s 8-character UUID (e.g., 10eff964) or default for new nodes.
  • groups (Optional): Lists group folders (e.g., baseline, timezone-est) to apply, in order—only these groups are included.
  • SKIP (Optional): An empty file; if present, skips APKOVL compilation for that machine.
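
Putting these together, a typical machine folder might look like the layout below (the my-server name and its files mirror the walkthrough later in this section; packages is optional):

/host0/machines/my-server/
    uuid        # node's 8-character UUID, e.g. 10eff964
    groups      # ordered list of groups, e.g. baseline, timezone-est
    manifest    # file actions for this node
    packages    # optional extra Alpine packages
    hostname    # source file referenced by the manifest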

Special Machine Folders

Two special cases exist:

  • default: Its uuid file contains default. It configures new nodes until a folder with a matching UUID exists.
  • initramfs: Builds the initramfs APKOVL, used system-wide during boot.

Configuring a Group

Set up a reusable group to set the EST timezone on multiple nodes:

  1. Create the folder /host0/groups/timezone-est/.
  2. Add /host0/groups/timezone-est/manifest:
    L SRC=/usr/share/zoneinfo/EST TGT=/etc/localtime
    
    This creates a soft link to set the node’s timezone to EST.

Configuring a Machine

Configure a node with UUID 10eff964 to use a manual hostname and the EST timezone:

  1. Create the folder /host0/machines/my-server/ (name is arbitrary).
  2. Add /host0/machines/my-server/uuid:
    10eff964
    
    This links the folder to the node’s UUID (discovered via mesh node info).
  3. Add /host0/machines/my-server/groups:
    baseline
    timezone-est
    
    This applies baseline (for essential setup) and timezone-est. Only listed groups are included.
  4. Add /host0/machines/my-server/manifest:
    O MODE=root:root:0644 SRC=/host0/machines/my-server/hostname TGT=/etc/hostname
    
    This sets a manual hostname (e.g., node1).
  5. Add /host0/machines/my-server/hostname:
    node1
    
    This is the source file for the hostname.

Applying the Configuration

Apply the setup to the node:

  1. Compile the APKOVLs:
    mesh system configure
    
    This builds an APKOVL for each machine folder in /host0/machines/.
  2. Reboot the node:
    mesh node ctl -n 10eff964 "reboot"
    
    The node fetches its APKOVL and applies the settings.

Running Tasks

Verify the setup:

mesh node ctl -n 10eff964 "cat /etc/hostname"

This should output node1.

Notes

  • Groups do not nest—only machine folders use a groups file to reference top-level /host0/groups/ folders, and only those listed are applied (e.g., baseline must be explicit).
  • The order of entries in the groups file sets the order group manifests are applied; the machine’s own manifest is applied last and overrides them.
  • If packages are added, run mesh system download first to update the mirror.
  • For manifest syntax, see Manifest Files; for node control, see Managing Nodes.

Next, explore Network Configuration.

Network Configuration

Networking in Mesh Hypervisor spans PXE booting and cluster communication via VXLAN meshes. This section shows how to configure node networking—static IPs, bridges, or custom VXLAN networks—using machine folders and group folders in /host0/, applied via APKOVLs. All steps are run from the central orchestration node’s CLI. For an overview, see Networking.

Prerequisites

Ensure the following:

  • Remote nodes are online (check with mesh node info).
  • Central node is running (mesh system start executed).

Configuring a Static IP

Nodes boot with DHCP by default, but you can set a static IP using a machine or group folder’s manifest. For a node with UUID 10eff964:

  1. In /host0/machines/my-server/ (from Configuring Nodes), add to manifest:
    O MODE=root:root:0644 SRC=/host0/machines/my-server/interfaces TGT=/etc/network/interfaces
    
    This installs the static IP config.
  2. Create /host0/machines/my-server/interfaces:
    auto eth0
    iface eth0 inet static
        address 192.168.1.100
        netmask 255.255.255.0
        gateway 192.168.1.1
    
    This sets eth0 to a static IP.
  3. Compile and apply:
    mesh system configure
    mesh node ctl -n 10eff964 "reboot"
    
    The node reboots with the new IP.

Use a group folder (e.g., /host0/groups/net-static/) to apply to multiple nodes.

Configuring a Network Bridge

Bridges connect physical interfaces to workloads or VXLANs. For node 10eff964 with a bridge br0:

  1. In /host0/machines/my-server/manifest, add:
    O MODE=root:root:0644 SRC=/host0/machines/my-server/interfaces TGT=/etc/network/interfaces
    A MODE=root:root:0644 SRC=/host0/machines/my-server/modules TGT=/etc/modules
    A MODE=root:root:0644 SRC=/host0/machines/my-server/packages TGT=/etc/apk/world
    A MODE=root:root:0644 SRC=/host0/machines/my-server/bridge.conf TGT=/etc/qemu/bridge.conf
    A MODE=root:root:0644 SRC=/host0/machines/my-server/bridging.conf TGT=/etc/sysctl.d/bridging.conf
    
    This sets up the bridge and QEMU support.
  2. Create these files in /host0/machines/my-server/:
    • interfaces:
      auto eth0
      iface eth0 inet manual
      
      auto br0
      iface br0 inet static
          address 192.168.1.100
          netmask 255.255.255.0
          gateway 192.168.1.1
          bridge_ports eth0
      
      This bridges eth0 to br0 with a static IP.
    • modules:
      tun
      tap
      
      This loads KVM bridge modules.
    • packages:
      bridge
      
      This installs bridge tools.
    • bridge.conf:
      allow br0
      
      This permits QEMU to use br0.
    • bridging.conf:
      net.ipv4.conf.br0_bc_forwarding=1
      net.bridge.bridge-nf-call-iptables=0
      
      This enables bridge forwarding and skips iptables.
  3. Compile and apply:
    mesh system download
    mesh system configure
    mesh node ctl -n 10eff964 "reboot"
    
    The node reboots with br0 ready for workloads.

Workloads can attach to br0—see Configuring Workloads.

Configuring a VXLAN Network

Custom VXLAN meshes extend the default network. Define them in /host0/network/ and install via a machine’s manifest:

  1. Create /host0/network/manage.conf:
    name=manage
    prefix=fd42:2345:1234:9abc::/64
    vni=456
    key=456
    
    • name: Identifies the mesh.
    • prefix: IPv6 ULA prefix.
    • vni: Virtual Network Identifier.
    • key: Seed for addressing. The central node (host0) is the default reflector.
  2. In /host0/machines/my-server/manifest, add:
    O MODE=root:root:0644 SRC=/host0/network/manage.conf TGT=/var/pxen/networks/manage.conf
    
    This installs the config.
  3. Compile and apply:
    mesh system configure
    mesh node ctl -n 10eff964 "reboot"
    
    The node joins the manage mesh, creating bridge br456 (format: br<vni>).

Workloads attach to br456. Add more nodes by repeating step 2 in their manifest.

Next, explore Manifest Files.

Manifest Files

Manifest files in Mesh Hypervisor define how files are installed or modified on remote nodes. Stored in /host0/machines/ or /host0/groups/, they’re compiled into APKOVLs by mesh system configure and applied on node boot. This section shows practical examples for common tasks, run from the central orchestration node’s CLI. For node setup, see Configuring Nodes.

Prerequisites

Ensure:

  • Remote nodes are online (check with mesh node info).
  • Central node is running (mesh system start executed).

Using Manifest Files

Each manifest file is a list of entries, one per line, specifying an action, permissions, source, and target. Lines starting with # are comments. Here’s how to use them:

Installing a File

To set a custom hostname on node 10eff964:

  1. In /host0/machines/my-server/manifest (UUID 10eff964):
    O MODE=root:root:0644 SRC=/host0/machines/my-server/hostname TGT=/etc/hostname
    
    • O: Overwrites the target.
    • MODE: Sets permissions (root:root:0644).
    • SRC: Source file on the central node.
    • TGT: Target path on the remote node.
  2. Create /host0/machines/my-server/hostname:
    node1
    
  3. Apply:
    mesh system configure
    mesh node ctl -n 10eff964 "reboot"
    

To set the EST timezone (e.g., in a group):

  1. In /host0/groups/timezone-est/manifest:
    L SRC=/usr/share/zoneinfo/EST TGT=/etc/localtime
    
    • L: Creates a soft link.
    • No MODE—links inherit target perms.
  2. Link to a machine’s groups file (e.g., /host0/machines/my-server/groups):
    baseline
    timezone-est
    
  3. Apply:
    mesh system configure
    mesh node ctl -n 10eff964 "reboot"
    

Appending to a File

To add a package (e.g., chrony):

  1. In /host0/machines/my-server/manifest:
    A MODE=root:root:0644 SRC=/host0/machines/my-server/packages TGT=/etc/apk/world
    
    • A: Appends to the target.
  2. Create /host0/machines/my-server/packages:
    chrony
    
  3. Update the mirror and apply:
    mesh system download
    mesh system configure
    mesh node ctl -n 10eff964 "reboot"
    

Creating a Directory

To make a mount point:

  1. In /host0/machines/my-server/manifest:
    D MODE=root:root:0755 TGT=/mnt/data
    
    • D: Creates a directory.
  2. Apply:
    mesh system configure
    mesh node ctl -n 10eff964 "reboot"
    

Removing a File

To disable dynamic hostname:

  1. In /host0/machines/my-server/manifest:
    R TGT=/etc/init.d/hostname
    
    • R: Removes the target.
    • No MODE or SRC needed.
  2. Apply:
    mesh system configure
    mesh node ctl -n 10eff964 "reboot"
    

Notes

Order matters—later entries override earlier ones (e.g., group manifest then machine manifest). Use A to append safely, O to replace. For full syntax and actions, see Manifest Syntax. For applying configs, see Configuring Nodes; for node control, see Managing Nodes.

Next, explore Troubleshooting.

Configuration Reference

Mesh Hypervisor configs live in files—/host0/, workloads, networks, manifests. This section breaks down every key and option. Begin with Orchestration Node Config.

Orchestration Node Config

The central orchestration node in Mesh Hypervisor is configured primarily via /host0/ on its flash drive, with static defaults in /etc/pxen/host0.conf. This section details /host0/ structure and mentions tweakable options in host0.conf. For usage, see Configuring Nodes.

Primary Config: /host0/

/host0/ drives PXE booting, DHCP, package mirroring, and node setups. Changes require mesh system configure to rebuild APKOVLs, applied on node reboot.

Directory Structure

  • /host0/machines/: Node-specific configs.
    • Subfolders (e.g., my-server, default) named arbitrarily.
    • Files:
      • uuid (Required): Node’s 8-char UUID (e.g., 10eff964) or default.
      • manifest (Required): File actions (see Manifest Syntax).
      • groups (Optional): List of group folders (e.g., baseline).
      • packages (Optional): Alpine packages (e.g., chrony).
      • SKIP (Optional): Empty; skips APKOVL build.
  • /host0/groups/: Reusable configs for multiple nodes.
    • Subfolders (e.g., timezone-est).
    • Files:
      • manifest (Required): File actions.
      • packages (Optional): Alpine packages.
  • /host0/network/: VXLAN configs.
  • /host0/packages: Top-level package list for the offline mirror (e.g., mdadm).

Example Configs

  • Machine: /host0/machines/my-server/
    • uuid:
      10eff964
      
    • groups:
      baseline
      timezone-est
      
    • manifest:
      O MODE=root:root:0644 SRC=/host0/machines/my-server/hostname TGT=/etc/hostname
      
    • hostname:
      node1
      
  • Group: /host0/groups/timezone-est/
    • manifest:
      L SRC=/usr/share/zoneinfo/EST TGT=/etc/localtime
      
  • Top-Level: /host0/packages
    chrony
    bridge
    

Static Config: /etc/pxen/host0.conf

/etc/pxen/host0.conf sets static defaults for the central node—paths, DHCP, and networking. It’s rarely edited; comments in the file explain options. Key tweakable settings include:

  • Subnet Pools: subnet_pool (e.g., "10.11.0.0/16" "192.168.0.0/23")—defines DHCP auto-assigned ranges.
  • Default Subnet Size: default_subnet_size (e.g., "25")—sets subnet mask for new networks.
  • Manual Subnets: manual_subnets (e.g., { {demo9} {10.0.43.0/25} })—assigns fixed subnets by interface or MAC.
  • DHCP Retries: dhcp_retries (e.g., "5") and dhcp_retry_pause (e.g., "3")—tunes DHCP request attempts.
  • DNS Settings: dns_servers (e.g., "1.1.1.1" "8.8.8.8") and host0_dns_hostname (e.g., "host0")—configures DNS behavior.

Edit with caution—defaults are optimized for most setups.
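
For orientation, the keys above might appear in host0.conf along these lines; the grouping and quoting here simply restate the examples given, so check the comments in the shipped file for the authoritative syntax:

subnet_pool="10.11.0.0/16" "192.168.0.0/23"
default_subnet_size="25"
manual_subnets={ {demo9} {10.0.43.0/25} }
dhcp_retries="5"
dhcp_retry_pause="3"
dns_servers="1.1.1.1" "8.8.8.8"
host0_dns_hostname="host0"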

Notes

/host0/machines/default/ and /host0/machines/initramfs/ are special—see Configuring Nodes. Group manifests apply first, machine manifests override. Backup /host0/ before changes—see Upgrading the System. For node control, see Managing Nodes; for manifest details, see Manifest Syntax.
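
A minimal backup sketch, run from the central node’s console; it assumes enough space under /tmp for the archive and that scp is available on the machine you copy to (adjust paths and the <central-ip> placeholder as needed):

# On the central node: archive the configuration tree.
tar czf /tmp/host0-backup.tar.gz /host0
# From another machine: copy the archive somewhere safe.
scp root@<central-ip>:/tmp/host0-backup.tar.gz .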

Next, explore Workload Config.

Workload Config

Workloads in Mesh Hypervisor are KVM virtual machines defined by config files in /var/pxen/monoliths/ on the central orchestration node, uploaded to remote nodes via mesh workload start. This section fully explains the config structure, keys, and validation, based on the QEMU start script. For usage, see Configuring Workloads.

Overview

Each config file is a key-value text file (e.g., qemutest1.conf) specifying a VM’s name, UUID, resources, disks, network, and console. It’s parsed on the remote node to build a qemu-system-x86_64 command. Keys use dot notation (e.g., platform.cpu.threads), and invalid configs halt startup with errors.

Structure and Keys

Required Keys

  • name:
    • Format: String (e.g., qemutest1).
    • Purpose: VM identifier, matches -w in mesh workload commands.
    • Validation: Must be set or startup fails.
  • uuid:
    • Format: Unique string (e.g., qweflmqwe23).
    • Purpose: Internal VM ID, used for monitor files (e.g., /tmp/pxen/<uuid>/).
    • Validation: Must be unique and set, or startup fails.

Platform Settings

  • platform.memory.size:
    • Format: Number + unit (K, M, G, T) (e.g., 4G, 500M).
    • Purpose: Sets VM RAM as QEMU’s -m arg.
    • Validation: Must match ^[0-9]+[KMGT]$, fit node’s available memory (MemAvailable in /proc/meminfo), or fail.
  • platform.cpu.threads:
    • Format: Integer (e.g., 2).
    • Purpose: Sets vCPU threads as QEMU’s -smp arg.
    • Validation: Must be a positive number, ≤ node’s CPU threads (nproc), or fail.
  • platform.cpu.mode:
    • Format: QEMU CPU model (e.g., Penryn-v1) or host.
    • Purpose: Sets CPU emulation as QEMU’s -cpu arg.
    • Validation: Must match qemu-system-x86_64 -cpu help models or be host (uses KVM), defaults to host if unset.
  • platform.volume.<id>.*:
    • Subkeys:
      • type (Required): qcow2 or iso.
      • path (Required): Node-local path (e.g., /tmp/alpine.iso).
      • source (Optional): URL or path to download from. For example, http://host0/isos/alpine.iso serves the file from /srv/pxen/http/isos/ on the central node, provided it has been placed there; if the remote node has internet access, the URL can point directly to an ISO download, e.g., https://dl-cdn.alpinelinux.org/alpine/v3.21/releases/x86_64/alpine-virt-3.21.3-x86_64.iso.
      • writable (Optional): 1 (read-write) or 0 (read-only), defaults to 0.
      • ephemeral (Optional): 1 (delete on stop/start) or 0 (persist), defaults to 1.
    • Format: <id> is a number (e.g., 0, 1).
    • Purpose: Defines disks as QEMU -drive args.
    • Validation:
      • path must exist on the node (via mesh workload download).
      • type=qcow2: writable toggles readonly.
      • type=iso: writable=1 fails (CDROMs are read-only).
      • ephemeral=0 fails if path exists pre-start.

Network Settings

  • network.<id>.*:
    • Subkeys:
      • type (Required): bridge or nat.
      • bridge (Required for bridge): Bridge name (e.g., br0).
      • mac (Optional): MAC address (e.g., 52:54:00:12:34:56).
    • Format: <id> is a number (e.g., 0).
    • Purpose: Configures NICs as QEMU -netdev and -device args.
    • Validation:
      • type=bridge: bridge must exist on the node (e.g., /sys/class/net/br0/).
      • mac: If unset, generated from uuid, id, and type (e.g., 52:54:00:xx:xx:xx).
      • Uses virtio-net-pci device.

Runtime Settings

  • runtime.boot:
    • Format: Comma-separated list (e.g., 1,0,n0).
    • Purpose: Sets boot order—numbers reference platform.volume.<id> or network.<id> (with n prefix).
    • Validation: Must match existing volume/network IDs or fail. Unset defaults to first volume (c) or network (n).
  • runtime.console.*:
    • Subkeys:
      • type (Optional): serial-tcp, serial-socket, vnc, defaults to serial-tcp.
      • id (Optional for serial-tcp, required for vnc): Integer (e.g., 5).
    • Purpose: Configures console access.
    • Validation:
      • serial-tcp: Port 7900 + id (e.g., 7905), must be free.
      • serial-socket: Uses /tmp/pxen/<uuid>/serial.sock.
      • vnc: Port 5900 + id (e.g., 5905), must be free.

Example Config

/var/pxen/monoliths/qemutest1.conf:

name=qemutest1
uuid=qweflmqwe23
platform.memory.size=4G
platform.cpu.threads=2
platform.cpu.mode=host
platform.volume.0.source=http://host0/isos/alpine-virt-3.21.3-x86_64.iso
platform.volume.0.type=iso
platform.volume.0.path=/tmp/alpine.iso
network.0.type=bridge
network.0.bridge=br0
runtime.console.type=vnc
runtime.console.id=5
  • 4GB RAM, 2 vCPUs, Alpine ISO, bridged to br0, VNC on port 5905.
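
For intuition, this config maps roughly to a QEMU invocation like the one below. It is assembled from the key descriptions above, not taken from the start script, so the exact flags the script emits will differ:

qemu-system-x86_64 -enable-kvm \
    -m 4G -smp 2 -cpu host \
    -drive file=/tmp/alpine.iso,media=cdrom,readonly=on \
    -netdev bridge,id=net0,br=br0 \
    -device virtio-net-pci,netdev=net0 \
    -vnc :5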

Notes

Configs are validated on the node—errors (e.g., missing path, invalid threads) halt startup with logs in /tmp/pxen/<uuid>/output.log. Volumes must be downloaded (mesh workload download) before start. Only KVM is supported now; future types may expand options. For bridge setup, see Network Configuration; for node control, see Managing Nodes.

Next, explore Network Config.

Network Config

Network configs in Mesh Hypervisor define custom VXLAN meshes for node connectivity, stored in /host0/network/ on the central orchestration node. These files (e.g., manage.conf) are installed to remote nodes via manifests and used to build VXLAN bridges like br456. This section explains the keys and how they work, assuming you know basics like ssh and cat. For setup steps, see Network Configuration.

Overview

Each config file sets up a VXLAN mesh—a virtual network linking nodes over your physical Ethernet. On each node, a script reads the config, creates a vxlan<vni> interface (the tunnel) and a br<vni> bridge (where workloads connect), and assigns IPv6 addresses. The central node, called host0, runs an HTTP API inside each VXLAN (e.g., on port 8000) to list all nodes in that mesh. Nodes use this API to find and connect to each other, updating their neighbor lists dynamically.

Here’s the tricky part: nodes need host0’s address to join the VXLAN, but they’re not in it yet. Mesh Hypervisor solves this by giving host0 a fixed IPv6 address—always ending in 0001 (e.g., fd42:1234::1). Nodes start by connecting to that, fetch the API data, then link up with everyone else. If a node disappears, the API updates, and others drop it. Simple, right?

Structure and Keys

Configs are plain text files with key=value lines. Here’s what each key does:

  • name:
    • Format: Any word (e.g., manage).
    • Purpose: Names the VXLAN mesh—helps generate unique addresses and IDs.
    • Example: name=manage—just a label you pick.
    • Must Have: Yes—if missing, the script fails with an error.
  • prefix:
    • Format: IPv6 address with /64 (e.g., fd42:2345:1234:9abc::/64).
    • Purpose: Sets the IPv6 range for the mesh—like a big address pool starting with fd42:2345:1234:9abc:. Every node gets a unique address from this.
    • Example: prefix=fd42:2345:1234:9abc::/64—host0 gets fd42:2345:1234:9abc::1, other nodes get deterministic, per-node endings.
    • Must Have: Yes—needs to be /64 (64-bit network part), or the script chokes.
  • vni:
    • Format: A number (e.g., 456).
    • Purpose: Virtual Network Identifier—makes vxlan456 and br456. Keeps meshes separate.
    • Example: vni=456—creates br456 on nodes for workloads to join.
    • Must Have: Yes—duplicate VNIs crash the script; each mesh needs its own.
  • key:
    • Format: A number (e.g., 456).
    • Purpose: A seed number—feeds into genid to make unique IPv6 and MAC addresses for each node.
    • Example: key=456—ensures addresses like fd42:2345:1234:9abc:1234:5678:9abc:def0 are predictable.
    • Must Have: Yes—if missing, addressing fails. Same key across meshes might overlap, so mix it up.

Example Config

/host0/network/manage.conf:

name=manage
prefix=fd42:2345:1234:9abc::/64
vni=456
key=456
  • Sets up a mesh called manage with bridge br456, IPv6 starting fd42:2345:1234:9abc:, and key=456 for address generation.

How It Works

When a node boots, it copies this config to /var/pxen/networks/ (via a manifest) and runs a script. Here’s what happens, step-by-step:

  1. VXLAN Interface: Creates vxlan<vni> (e.g., vxlan456)—a tunnel over your Ethernet.
    • Uses port 4789, MTU 1380 (hardcoded).
    • Gets a MAC like 02:12:34:56:78:9a from genid(name+vni).
  2. Bridge Interface: Creates br<vni> (e.g., br456)—a virtual switch.
    • Gets a MAC like 02:ab:cd:ef:01:23 from genid(bridge+name+vni).
    • Links vxlan456 to br456 so traffic flows through.
  3. IPv6 Address: Assigns the node an address like fd42:2345:1234:9abc:1234:5678:9abc:def0.
    • Uses prefix plus a genid(name+vni) suffix—unique per node.
    • host0 always gets prefix:0000:0000:0000:0001 (e.g., fd42:2345:1234:9abc::1).
  4. Connect to host0: Adds host0’s IPv4 (from PXE boot URL) and MAC to the VXLAN’s neighbor list.
    • Starts talking to host0 at fd42:2345:1234:9abc::1:8000.
  5. Fetch Neighbors: Grabs a list of other nodes from host0’s HTTP API.
    • Format: hostname ipv4 mac ipv6 per line.
    • Updates every 3 seconds—adds new nodes, drops missing ones.
  6. Stay Alive: Pings host0’s IPv6 to keep the mesh active.

Workloads (e.g., VMs) plug into br<vni>—like a virtual LAN cable.
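
To peek at the neighbor data a node is using, you can query host0’s in-mesh API from that node. The root path and the use of wget here are assumptions based on the description above, so adjust as needed:

mesh node ctl -n <uuid> "wget -qO- http://[fd42:2345:1234:9abc::1]:8000/"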

Notes

Install configs with a manifest (e.g., O MODE=root:root:0644 SRC=/host0/network/manage.conf TGT=/var/pxen/networks/manage.conf). The HTTP API runs only inside the VXLAN—nodes bootstrap via host0’s 0001 address, not external access. Overlapping prefix or vni values break the mesh—check logs (mesh system logview) if nodes don’t connect. For workload bridges, see Workload Config; for node control, see Managing Nodes.

Next, explore Manifest Syntax.

Manifest Syntax

Manifest files in Mesh Hypervisor tell the system how to set up files on remote nodes. They live in /host0/machines/ or /host0/groups/ on the central orchestration node, alongside source files they reference, and get compiled into APKOVLs during mesh system configure. This section explains the syntax—actions, fields, and rules—so you can tweak nodes even if you just know ssh and cat. For usage, see Manifest Files.

Overview

A manifest is a text file named manifest inside a folder like /host0/machines/my-server/. It lists actions—one per line—like copying a file or making a link. Each line starts with a letter (the action) and has fields (like permissions or paths). That same folder holds the files it calls (e.g., hostname next to manifest). When you run mesh system configure, Mesh Hypervisor reads these lines, applies them in order, and builds the node’s filesystem. If a machine folder uses groups (e.g., baseline), their manifests run first, then the machine’s overrides.

Think of it like a recipe: “Copy hostname to /etc/hostname,” “Make /mnt/data.” The compile-apkovl script checks every line—miss a file or botch the syntax, and the whole mesh system configure stops with an error. That’s on purpose: no silent failures allowed. You fix it, then rerun.

Syntax

Each line starts with an action letter, followed by space-separated fields (FIELD=value). Paths must be full (e.g., /host0/machines/my-server/hostname) since relative paths aren’t supported yet.

Actions

  • O (Overwrite):
    • Purpose: Copies a file from the folder (e.g., /host0/machines/my-server/) to the remote node, replacing what’s there.
    • Fields:
      • MODE=<user:group:perms> (Required): Sets ownership and permissions (e.g., root:root:0644—read-write for owner, read for others).
      • SRC=<full-path> (Required): Source file in the folder (e.g., /host0/machines/my-server/hostname).
      • TGT=<full-path> (Required): Target on the remote node (e.g., /etc/hostname).
    • Example: O MODE=root:root:0644 SRC=/host0/machines/my-server/hostname TGT=/etc/hostname
      • Copies hostname from my-server/ to /etc/hostname with rw-r--r--.
  • A (Append):
    • Purpose: Adds a file’s contents from the folder to the end of a target file (creates it if missing).
    • Fields: Same as OMODE, SRC, TGT.
    • Example: A MODE=root:root:0644 SRC=/host0/machines/my-server/packages TGT=/etc/apk/world
      • Appends packages to /etc/apk/world, sets rw-r--r--.
  • D (Directory):
    • Purpose: Makes a directory on the remote node.
    • Fields:
      • MODE=<user:group:perms> (Required): Sets ownership and permissions (e.g., root:root:0755—read-write-execute for owner, read-execute for others).
      • TGT=<full-path> (Required): Directory path (e.g., /mnt/data).
    • Example: D MODE=root:root:0755 TGT=/mnt/data
      • Creates /mnt/data with rwxr-xr-x.
  • L (Soft Link):
    • Purpose: Creates a symbolic link on the remote node.
    • Fields:
      • SRC=<full-path> (Required): What to link to (e.g., /usr/share/zoneinfo/EST).
      • TGT=<full-path> (Required): Link location (e.g., /etc/localtime).
    • Example: L SRC=/usr/share/zoneinfo/EST TGT=/etc/localtime
      • Links /etc/localtime to /usr/share/zoneinfo/EST—perms come from the target.
  • R (Remove):
    • Purpose: Deletes a file or directory on the remote node.
    • Fields:
      • TGT=<full-path> (Required): Path to remove (e.g., /etc/init.d/hostname).
    • Example: R TGT=/etc/init.d/hostname
      • Removes /etc/init.d/hostname.

Fields Explained

  • MODE=<user:group:perms>:
    • Format: user:group:octal (e.g., root:root:0644).
    • Purpose: Sets who owns the file and what they can do—0644 is owner read-write, others read; 0755 adds execute.
    • Used In: O, A, D—not L (links use target perms) or R (no perms to set).
    • Must Have: Yes for O, A, D—skip it, and the script errors out.
  • SRC=<full-path>:
    • Format: Complete path on the central node (e.g., /host0/machines/my-server/hostname).
    • Purpose: Points to a file in the same folder as manifest—must exist when mesh system configure runs.
    • Used In: O, A, L—not D (no source) or R (nothing to copy).
    • Must Have: Yes for O, A, L—missing file stops the build.
  • TGT=<full-path>:
    • Format: Complete path on the remote node (e.g., /etc/hostname).
    • Purpose: Where the action happens—parent dirs are auto-created.
    • Used In: All actions (O, A, D, L, R).
    • Must Have: Yes—every action needs a target.

Example Manifest

/host0/machines/my-server/manifest:

# Set a custom hostname from this folder
O MODE=root:root:0644 SRC=/host0/machines/my-server/hostname TGT=/etc/hostname
# Add packages from this folder
A MODE=root:root:0644 SRC=/host0/machines/my-server/packages TGT=/etc/apk/world
# Make a mount point
D MODE=root:root:0755 TGT=/mnt/data
# Remove default hostname service
R TGT=/etc/init.d/hostname
  • Source files hostname and packages live in /host0/machines/my-server/ with manifest.

How It Works

When you run mesh system configure, the compile-apkovl script processes every manifest:

  1. Compilation: Reads group manifests (from groups) first, then the machine’s—later lines override earlier ones.
    • Example: baseline sets /etc/motd, my-server overwrites it, my-server wins.
  2. Actions: For each line:
    • O: Copies SRC to TGT, sets MODE.
    • A: Appends SRC to TGT (creates if missing), sets MODE, adds a newline if needed.
    • D: Makes TGT dir, sets MODE.
    • L: Links TGT to SRC.
    • R: Deletes TGT (recursive for dirs).
  3. Validation: Checks SRC exists (for O, A), MODE is valid, and paths are full—any error (e.g., missing hostname) stops the entire build.
    • Fix it, rerun mesh system configure, or no APKOVLs get made.

The result lands in /srv/pxen/http/, and nodes grab it on boot.
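
To confirm a rebuild produced fresh output, list the HTTP root on the central node and check the timestamps (the exact APKOVL file names are not documented here):

ls -lt /srv/pxen/http/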

Notes

Source files sit in the same folder as manifest (e.g., /host0/machines/my-server/hostname)—keep them together. Paths must be full (e.g., /host0/...)—relative paths like hostname won’t work until a future update. A is safer than O for files like /etc/apk/world—it adds, doesn’t wipe. For examples, see Configuring Nodes; for network configs, see Network Config.

This concludes the Configuration Reference.

Troubleshooting

When Mesh Hypervisor hits a snag, this section helps you diagnose and fix common issues. All steps are run from the central orchestration node’s CLI unless noted. For setup details, see Usage.

Prerequisites

Ensure:

  • Central node is booted (see Installation).
  • You’re logged in (root/toor at the console).

Checking Logs

Start with logs—they’re your first clue:

mesh system logview

This opens lnav in /var/log/pxen/, showing DHCP, PXE, HTTP, and service activity. Scroll with arrows, filter with / (e.g., /error), exit with q.

Common Log Issues

  • DHCP Requests Missing: No nodes booting—check network cables or PXE settings.
  • HTTP 403 Errors: Permissions issue on /srv/pxen/http/—make the tree world-readable with chmod -R a+rX /srv/pxen/http/ (a blanket chmod -R 644 strips the execute bit directories need for traversal).
  • Kernel Downloaded, Then Stops: APKOVL fetch failed—verify the UUID matches /host0/machines/<folder>/UUID. Check permissions on /srv/pxen/http/.

Node Not Booting

If mesh node info shows no nodes:

  1. Verify PXE: On the node, ensure BIOS/UEFI is set to network boot.
  2. Check Logs: In mesh system logview, look for DHCP leases and kernel downloads.
  3. Test Network: From the central node:
    ping <node-ip>
    
    Find <node-ip> in logs (e.g., DHCP lease). No response? Check cables or switches.

Workload Not Starting

If mesh workload start -n <uuid> -w <name> fails:

  1. Check Logs: Run mesh system logview—look for QEMU or KVM errors.
  2. Verify Config: Ensure /var/pxen/monoliths/<name>.conf exists and matches -w <name>—see Configuring Workloads.
  3. Resources: SSH to the node:
    mesh node ctl -n <uuid>
    free -m; cat /proc/cpuinfo
    
    Confirm RAM and CPU suffice (e.g., 500M RAM for qemutest1).
  4. Restart: Retry:
    mesh workload soft-stop -n <uuid> -w <name>
    mesh workload start -n <uuid> -w <name>
    

Network Issues

If a node’s IP or VXLAN isn’t working:

  1. Check IP: On the node:
    mesh node ctl -n <uuid> "ip addr"
    
    No static IP? Verify interfaces in manifest—see Network Configuration.
  2. VXLAN Bridge: Check bridge existence:
    mesh node ctl -n <uuid> "ip link show br456"
    
    Missing? Ensure /var/pxen/networks/manage.conf is installed.
  3. Ping Test: From the node:
    mesh node ctl -n <uuid> "ping6 -c 4 fd42:2345:1234:9abc::1"
    
    No reply? Check VXLAN config in /host0/network/.

Time Sync Problems

If nodes show as offline in mesh node info:

  1. Check Time: On the node:
    mesh node ctl -n <uuid> "date"
    
    Off by hours? Time sync failed.
  2. Fix Chrony: Ensure ntp_sync group is applied (e.g., in groups file)—see Configuring Nodes.
  3. Restart Chrony: On the node:
    mesh node ctl -n <uuid> "rc-service crond restart"
    

Notes

  • Logs are verbose—most errors trace back to permissions, network, or config mismatches.
  • If stuck, rebuild configs with mesh system configure and reboot nodes.
  • For manifest tweaks, see Manifest Files; for node control, see Managing Nodes.

Next, explore Advanced Topics.

Advanced Topics

Take Mesh Hypervisor further—upgrades, recoveries, and more. This section digs into the deep end, starting with Setting Up RAID Storage.

Setting Up RAID Storage

Mesh Hypervisor nodes are diskless by default, but you can add local RAID storage for data persistence—like for backups or file shares. This guide shows how to set up a RAID array with encryption on a remote node, using the storage group and a custom machine config. Commands run via SSH to the central orchestration node’s CLI (e.g., ssh root@<central-ip>). For node basics, see Configuring Nodes.

Prerequisites

Ensure:

  • A remote node with spare disks (e.g., /dev/sda, /dev/sdb) is online (mesh node info).
  • You’ve got a machine folder (e.g., /host0/machines/storage-node/).
  • The storage group is in /host0/groups/storage/—it’s prebuilt with RAID and encryption tools.

Step 1: Boot and Inspect the Node

  1. Add Storage Group: In /host0/machines/storage-node/groups:
    baseline
    storage
    
    • baseline sets essentials; storage adds mdadm, cryptsetup, etc.
  2. Set UUID: In /host0/machines/storage-node/UUID, use the node’s UUID (e.g., 10eff964) from mesh node info.
  3. Apply: Rebuild and reboot:
    mesh system configure
    mesh node ctl -n 10eff964 "reboot"
    
  4. SSH In: Connect to the node:
    mesh node ctl -n 10eff964
    
  5. Check Disks: List available drives:
    lsblk
    
    • Example: See /dev/sda and /dev/sdb—unpartitioned, ready for RAID.

Step 2: Create the RAID Array

  1. Build RAID: Make a RAID1 array (mirrored):
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
    
    • /dev/md0: Array name.
    • --level=1: Mirror (RAID1)—swap for 5 or 10 if you’ve got more disks.
    • Adjust /dev/sda, /dev/sdb to your drives.
  2. Save Config: Write the array details:
    mdadm --detail --scan > /etc/mdadm.conf
    
    • Example output: ARRAY /dev/md0 metadata=1.2 name=q-node:0 UUID=abcd1234:5678...
  3. Monitor: Check progress:
    cat /proc/mdstat
    
    • Wait for [UU]—array’s synced.

Step 3: Encrypt the Array

  1. Create LUKS: Encrypt /dev/md0:
    cryptsetup luksFormat /dev/md0
    
    • Enter a passphrase (e.g., mysecret)—you’ll generate a keyfile next.
  2. Generate Keyfile: Make a random key:
    dd if=/dev/urandom of=/etc/data.luks bs=4096 count=1
    chmod 600 /etc/data.luks
    
    • 4KB keyfile, locked to root.
  3. Add Key: Link it to LUKS:
    cryptsetup luksAddKey /dev/md0 /etc/data.luks
    
    • Enter the passphrase again—keyfile’s now an alternate unlock.
  4. Open LUKS: Unlock the array:
    cryptsetup luksOpen /dev/md0 data --key-file /etc/data.luks
    
    • Creates /dev/mapper/data.
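
Before moving on, you can optionally confirm that both the passphrase and the keyfile are registered by dumping the LUKS header (a quick sanity check, not part of the required steps):

cryptsetup luksDump /dev/md0

Two populated key slots means either unlock method will open the array.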

Step 4: Format and Mount

  1. Format: Use ext4 (or xfs, etc.):
    mkfs.ext4 /dev/mapper/data
    
  2. Mount: Test it:
    mkdir /mnt/data
    mount /dev/mapper/data /mnt/data
    df -h
    
    • See /mnt/data listed—unmount with umount /mnt/data after.

Step 5: Configure the Machine

  1. Exit Node: Back to the central node:
    Ctrl+D
    
  2. Update Manifest: In /host0/machines/storage-node/manifest:
    # RAID config
    O MODE=root:root:0644 SRC=/host0/machines/storage-node/mdadm.conf TGT=/etc/mdadm.conf
    # Encryption
    A MODE=root:root:0644 SRC=/host0/machines/storage-node/dmcrypt TGT=/etc/conf.d/dmcrypt
    O MODE=root:root:0600 SRC=/host0/machines/storage-node/data.luks TGT=/etc/data.luks
    # Filesystem mount
    A MODE=root:root:0644 SRC=/host0/machines/storage-node/fstab TGT=/etc/fstab
    D MODE=root:root:0755 TGT=/mnt/data
    
  3. Add Files: In /host0/machines/storage-node/:
    • mdadm.conf: Copy from node (scp root@<node-ip>:/etc/mdadm.conf .).
    • dmcrypt:
      target=data
      source=/dev/md0
      key=/etc/data.luks
      
    • data.luks: Copy from node (scp root@<node-ip>:/etc/data.luks .).
    • fstab:
      /dev/mapper/data /mnt/data ext4 defaults,nofail 0 2
      
  4. Apply: Rebuild and reboot:
    mesh system configure
    mesh node ctl -n 10eff964 "reboot"
    
  5. Verify: SSH in, check:
    mesh node ctl -n 10eff964 "df -h"
    
    • /mnt/data should be mounted.

Notes

The storage group handles boot-time RAID assembly and LUKS unlocking—your machine config locks in the specifics. RAID setup is manual first; configs make it persistent. For multi-disk setups (e.g., RAID5), adjust --level and add drives—update dmcrypt and fstab too. See Managing Nodes for CLI tips; Recovery Procedures for RAID fixes.
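
As a concrete example of the multi-disk case mentioned above, a three-disk RAID5 array could be created like this (device names are illustrative, substitute your own):

mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc
mdadm --detail --scan > /etc/mdadm.conf

The dmcrypt and fstab entries stay the same, since they reference /dev/md0 and /dev/mapper/data rather than the individual member disks.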

Next, explore Running Docker.

Running Docker

Mesh Hypervisor nodes can run Docker containers bare-metal—great for lightweight services like a web server or app. This guide shows how to enable Docker on a remote node and spin up a test container. Commands run via SSH to the central orchestration node’s CLI (e.g., ssh root@<central-ip>). For node setup, see Configuring Nodes.

Prerequisites

Ensure:

  • A remote node is online (mesh node info).
  • You’ve got a machine folder (e.g., /host0/machines/docker-node/).
  • The docker group exists in /host0/groups/docker/—it’s prebuilt with Docker tools.

Step 1: Enable Docker

  1. Add Docker Group: In /host0/machines/docker-node/groups:
    baseline
    docker
    
    • baseline sets essentials; docker installs Docker and tweaks services.
  2. Set UUID: In /host0/machines/docker-node/UUID, use the node’s UUID (e.g., 10eff964) from mesh node info.
  3. Apply: Update the mirror, rebuild, and reboot:
    mesh system download
    mesh system configure
    mesh node ctl -n 10eff964 "reboot"
    
    • mesh system download grabs docker packages; docker group adds them to /etc/apk/world.

Step 2: Run a Test Container

  1. SSH In: Connect to the node:
    mesh node ctl -n 10eff964
    
  2. Verify Docker: Check it’s running:
    docker version
    
    • Should show Docker client and server versions—services auto-start via the group.
  3. Pull an Image: Grab a simple container:
    docker pull hello-world
    
    • Downloads the hello-world image from Docker Hub.
  4. Run It: Start the container:
    docker run hello-world
    
    • Prints a “Hello from Docker!” message and exits—proof it works.
  5. Exit Node: Back to the central node:
    Ctrl+D
    

Notes

The docker group handles everything: installs docker, docker-openrc, etc., links services to /etc/runlevels/default/, and sets rc_cgroup_mode=unified in /etc/rc.conf for cgroups. No manual CLI tweaks needed—just add the group and go. For persistent containers, add a manifest entry (e.g., O MODE=root:root:0755 SRC=/host0/machines/docker-node/run.sh TGT=/usr/local/bin/run.sh, where 0755 keeps the script executable) and script your docker run. See Managing Nodes for CLI tips; Configuring Nodes for group details.
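
As a starting point for that, a minimal run.sh sketch might look like this (the image, container name, and port are placeholders, not anything Mesh Hypervisor ships):

#!/bin/sh
# Hypothetical /usr/local/bin/run.sh: start a web container on demand.
docker pull nginx:alpine                       # placeholder image
docker rm -f web 2>/dev/null                   # clear any stale container with the same name
docker run -d --name web --restart unless-stopped -p 8080:80 nginx:alpine

Call it over the mesh with mesh node ctl -n <uuid> "/usr/local/bin/run.sh" once the node is up.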

Next, explore Configuring Samba.

Configuring Samba

Mesh Hypervisor nodes can run Samba bare-metal to share files over the network—like a folder for backups. This guide sets up Samba to share /mnt/data on a remote node, showing how configs work in either a group or machine folder. Commands run via SSH to the central orchestration node’s CLI (e.g., ssh root@<central-ip>). For node basics, see Configuring Nodes.

Prerequisites

Ensure:

  • A remote node is online (mesh node info)—e.g., one with RAID at /mnt/data (see Setting Up RAID Storage).
  • User nobody exists—Samba uses it for guest access (see Adding a User Account).
  • You’ve got a machine folder (e.g., /host0/machines/terrible-tuneup/).

Step 1: Configure Samba

You can put Samba settings in a group (reusable) or directly in the machine folder (one-off)—both work the same. Here’s how:

Option 1: As a Group

  1. Make the Group: Create /host0/groups/samba/:
    mkdir /host0/groups/samba
    
  2. Add Manifest: Create /host0/groups/samba/manifest:
    A MODE=root:root:0644 SRC=/host0/groups/samba/packages TGT=/etc/apk/world
    L SRC=/etc/init.d/samba TGT=/etc/runlevels/default/samba
    O MODE=root:root:0644 SRC=/host0/groups/samba/smb.conf TGT=/etc/samba/smb.conf
    
  3. Add Packages: Create /host0/groups/samba/packages:
    samba
    samba-server
    samba-server-openrc
    
  4. Add Config: Create /host0/groups/samba/smb.conf:
    [global]
    workgroup = WORKGROUP
    server string = network file share
    netbios name = MESH-SAMBA
    security = user
    map to guest = Bad User
    guest account = nobody
    
    [data]
    path = /mnt/data
    browsable = yes
    writable = yes
    guest ok = yes
    read only = no
    force user = nobody
    create mask = 0777
    directory mask = 0777
    
  5. Link to Machine: In /host0/machines/terrible-tuneup/groups:
    baseline
    storage
    samba
    

Option 2: In the Machine Folder

  1. Add to Machine: In /host0/machines/terrible-tuneup/manifest:
    A MODE=root:root:0644 SRC=/host0/machines/terrible-tuneup/samba-packages TGT=/etc/apk/world
    L SRC=/etc/init.d/samba TGT=/etc/runlevels/default/samba
    O MODE=root:root:0644 SRC=/host0/machines/terrible-tuneup/smb.conf TGT=/etc/samba/smb.conf
    
  2. Add Files: In /host0/machines/terrible-tuneup/:
    • samba-packages:
      samba
      samba-server
      samba-server-openrc
      
    • smb.conf: Same as above.

Step 2: Apply to the Node

  1. Check the Hostname: Mesh Hypervisor generates hostnames like terrible-tuneup with genid from adjective/noun word lists; check yours in mesh node info.
  2. Set UUID: In /host0/machines/terrible-tuneup/UUID, use the node’s UUID (e.g., 10eff964) from mesh node info.
  3. Apply: Update, rebuild, reboot:
    mesh system download
    mesh system configure
    mesh node ctl -n 10eff964 "reboot"
    

Step 3: Test Samba

  1. SSH In: Connect to the node:
    mesh node ctl -n 10eff964
    
  2. Verify Service: Check Samba’s running:
    rc-service samba status
    
    • Should say started—if not, rc-service samba start.
  3. Exit Node: Back to your desktop:
    Ctrl+D
    
  4. Test Access: From another machine:
    • Linux: smbclient -L //terrible-tuneup -U nobody%
      • Lists the data share.
    • Windows: Open \\terrible-tuneup in Explorer—use node’s hostname or IP from mesh node info.
    • Mount: mount -t cifs //terrible-tuneup/data /mnt -o username=nobody,password=
      • Empty password for guest.

Notes

Groups and machines are interchangeable—use a group to reuse Samba across nodes, or stick it in the machine folder for a one-off. The nobody user is required—add it first. This example shares /mnt/data with open perms (0777)—tighten create mask (e.g., 0644) or guest ok = no for security. For RAID setup, see Setting Up RAID Storage; for users, see Adding a User Account.
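
If you do tighten things up, a locked-down [data] section might look like the sketch below; valid users = mal assumes the account from Adding a User Account, and with security = user that account also needs a Samba password (smbpasswd -a mal), which this guide doesn't cover:

[data]
path = /mnt/data
browsable = yes
writable = yes
guest ok = no
valid users = mal
create mask = 0644
directory mask = 0755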

Next, explore Adding a User Account.

Adding a User Account

Mesh Hypervisor doesn’t manage user accounts natively; Linux’s /etc/passwd, /etc/group, and /etc/shadow are flat files with no clean, declarative way to layer in a single user. This guide shows a workaround: append user data via a group to add a user (e.g., mal) across nodes consistently. Commands run via SSH to the central orchestration node’s CLI (e.g., ssh root@<central-ip>). For node basics, see Configuring Nodes.

Prerequisites

Ensure:

  • A remote node is online (mesh node info—e.g., hostname terrible-tuneup).
  • You’ve got a machine folder (e.g., /host0/machines/terrible-tuneup/).

Step 1: Create the User on the Central Node

To avoid errors in mesh system configure, add the user on the central node first—its compile-apkovl script needs the UID/GID to exist.

  1. Add User: On the central node:
    adduser mal
    
    • Set password (e.g., mysecret), fill optional fields (e.g., “Linux User” for full name), pick /bin/bash.
    • UID (e.g., 1000) auto-increments—note it from /etc/passwd.
  2. Copy Lines: Extract user data:
    grep "^mal:" /etc/passwd > /tmp/passwd-mal
    grep "^mal:" /etc/group > /tmp/group-mal
    grep "^mal:" /etc/shadow > /tmp/shadow-mal
    
    • Saves mal’s lines—e.g., mal:x:1000:1000:Linux User,,,:/home/mal:/bin/bash.

Step 2: Build the User Group

  1. Make the Group: Create /host0/groups/useracct-mal/:
    mkdir /host0/groups/useracct-mal
    
  2. Add Manifest: Create /host0/groups/useracct-mal/manifest:
    A MODE=root:root:0644 SRC=/host0/groups/useracct-mal/passwd TGT=/etc/passwd
    A MODE=root:root:0644 SRC=/host0/groups/useracct-mal/group TGT=/etc/group
    A MODE=root:root:0640 SRC=/host0/groups/useracct-mal/shadow TGT=/etc/shadow
    D MODE=1000:1000:0755 TGT=/home/mal
    
    • Appends user data, makes /home/mal with UID:GID (not mal, as it might not exist yet on nodes).
  3. Add Files: In /host0/groups/useracct-mal/:
    • passwd: Copy from /tmp/passwd-mal (e.g., mal:x:1000:1000:Linux User,,,:/home/mal:/bin/bash).
    • group: Copy from /tmp/group-mal (e.g., mal:x:1000:).
    • shadow: Copy from /tmp/shadow-mal (e.g., mal:$6$...hash of mysecret...:20021:0:99999:7:::).
    cp /tmp/passwd-mal /host0/groups/useracct-mal/passwd
    cp /tmp/group-mal /host0/groups/useracct-mal/group
    cp /tmp/shadow-mal /host0/groups/useracct-mal/shadow
    

Step 3: Apply to the Node

  1. Link Group: In /host0/machines/terrible-tuneup/groups:
    baseline
    useracct-mal
    
    • baseline for essentials, useracct-mal adds the user.
  2. Set UUID: In /host0/machines/terrible-tuneup/UUID, use the node’s UUID (e.g., 10eff964) from mesh node info.
  3. Apply: Rebuild and reboot:
    mesh system configure
    mesh node ctl -n 10eff964 "reboot"
    

Step 4: Test the User

  1. SSH In: Connect to the node:
    mesh node ctl -n 10eff964
    
  2. Verify User: Check mal exists:
    grep "^mal:" /etc/passwd
    ls -ld /home/mal
    
    • Should show mal:x:1000:1000... and drwxr-xr-x 1000 1000 /home/mal.
  3. Test Login: Switch user:
    su - mal
    
    • Enter mysecret—drops you to /home/mal with /bin/bash.
  4. Exit: Back to root, then out:
    exit
    Ctrl+D
    

Notes

This hack appends to /etc/passwd, /etc/group, and /etc/shadow with no conflict detection, so pick unique UIDs (e.g., 1000) manually across groups to avoid clashes. Create users on the central node first—compile-apkovl fails if UIDs/GIDs don’t exist there. Hashes come from adduser—copy them, don’t guess. Reuse this group (e.g., useracct-mal) on multiple nodes for consistency. For RAID shares needing nobody, see Configuring Samba.
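
A quick ad-hoc check on the central node (not a built-in mesh command, and it assumes you stick to the useracct-<name> folder convention) can catch duplicate UIDs before a rebuild:

cut -d: -f1,3 /host0/groups/useracct-*/passwd | sort -t: -k2 -n

Any UID that appears twice in the second column is a clash waiting to happen.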

Next, check the FAQ.

FAQ

This section answers common questions about Mesh Hypervisor. For detailed guides, see Usage.

How do I back up the system?

Copy /host0 and keys from the central node via SSH:

scp -r root@<central-ip>:/host0 ./backup/
scp root@<central-ip>:/var/pxen/pxen.repo.rsa ./backup/
scp root@<central-ip>:/var/pxen/pxen.repo.rsa.pub ./backup/

This grabs configs and keys—restore with scp to a new flash drive. See Upgrading the System.
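
Restoring is the same pattern in reverse once the freshly flashed central node is booted (a rough sketch; adjust paths to wherever your backup lives):

scp -r ./backup/host0 root@<central-ip>:/
scp ./backup/pxen.repo.rsa ./backup/pxen.repo.rsa.pub root@<central-ip>:/var/pxen/
mesh system configure

Run mesh system configure on the central node itself so the restored configs are compiled into fresh APKOVLs.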

Why IPv6 only for VXLAN?

IPv6 ensures deterministic, collision-free addressing using UUIDs (e.g., genid machine 8). It’s simpler than NAT-heavy IPv4 meshes. See Networking.

What’s the default config?

New nodes boot with /host0/machines/default/ (UUID default) until a machine folder in /host0/machines/ contains a matching UUID file. Edit it for baseline settings—see Configuring Nodes.

How do I update configs?

Edit /host0/ locally, then upload:

scp -r ./host0 root@<central-ip>:/
mesh system configure

Reboot nodes to apply (mesh node ctl -n ALL "reboot"). See Managing Nodes.

What if a node’s hardware fails?

Boot new hardware, grab its UUID from mesh node info, update the old machine’s UUID file, rebuild APKOVLs, and reboot. See Recovery Procedures.
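
In command form, the swap might look like this (the machine folder name and the new UUID 20abc123 are placeholders; read the real UUID from mesh node info):

mesh node info                                  # find the replacement hardware's UUID
echo "20abc123" > /host0/machines/storage-node/UUID
mesh system configure                           # rebuild APKOVLs for the new UUID
mesh node ctl -n 20abc123 "reboot"              # boot the replacement into the old config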

Next Steps

Explore Roadmap & Limitations.

Roadmap & Limitations

Mesh Hypervisor is a Minimum Viable Product (MVP)—a solid core with room to grow. This section covers its current limits and planned enhancements. For usage, see Usage.

Current Limitations

Mesh Hypervisor’s MVP status means some trade-offs:

  • Security: No encryption. Root SSH uses the default credentials (root/toor), and configs and data transfer over plain, unencrypted HTTP.
  • Workloads: KVM-only for now—other virtualization (e.g., Xen, containers) isn’t supported yet.
  • Networking: Manual VXLAN setup; no dynamic routing or GUI management.
  • Interface: CLI-only on the central node—no TUI or web dashboard.
  • Storage: Diskless by default; local RAID/LUKS needs manual config (e.g., storage group).

These keep Mesh Hypervisor simple and deterministic but limit its polish.

Roadmap

Future releases aim to address these:

  • Encryption: Add SSH key management, HTTPS for APKOVLs, and VXLAN encryption (e.g., IPSec).
  • Virtualization: Support Xen, LXC, or Docker alongside KVM for broader workload options.
  • Network Automation: Dynamic VXLAN config, IPv6 routing, and bridge management tools.
  • User Interface: Introduce a curses-based TUI for the central node, with a web UI later.
  • Storage: Simplify RAID/LUKS setup with prebuilt groups or scripts.
  • User Management: Replace root-only access with role-based accounts.

No timelines yet—focus is on stability first. Feedback drives priorities.

Notes

Mesh Hypervisor’s MVP trades features for simplicity—security and flexibility are next. For current workarounds, see Configuring Nodes and Network Configuration. Questions? Check FAQ.