Introduction
Mesh Hypervisor is a streamlined system for managing diskless Linux nodes over a network. Running on Alpine Linux, it lets you boot and configure remote nodes from a single flash drive on a central orchestration node, using PXE and custom APKOVLs. Whether you’re spinning up KVM virtual machines or running bare-metal tasks, Mesh Hypervisor delivers fast, deterministic control—perfect for home labs, edge setups, or clusters.
With Mesh Hypervisor, you get a lightweight, CLI-driven toolkit: deploy nodes in minutes, tweak configs in /host0/, and connect them with VXLAN meshes. It’s built for sysadmins who want flexibility—add storage, tune networks, or script custom setups—all from one hub. This guide walks you through setup, usage, and advanced tweaks to make your cluster hum.
Mesh Hypervisor is an MVP—still growing, but rock-solid for what it does. Start with Getting Started to launch your first node.
Key features include:
- Centralized Control: A single orchestration node manages all remote nodes via a flash drive image.
- Network Resilience: Automatic network scanning and proxy DHCP ensure adaptability to complex topologies.
- Deterministic and Individually Addressable Configurations: Hardware IDs provide consistent, reproducible and individualized node setups.
- Workload Support: KVM virtual machines (workloads) run on remote nodes, extensible to other formats.
- VXLAN Networking: Mesh networking with IPv6 for isolated, scalable communication.
Who is it For?
Mesh Hypervisor is designed for Linux system administrators who need a lightweight, distributed hypervisor solution. It assumes familiarity with Linux primitives, networking concepts (e.g., DHCP, PXE), and CLI tools. If you’re comfortable managing servers via SSH and crafting configuration files, Mesh Hypervisor provides the tools to build a robust virtualization cluster with minimal overhead.
This guide will walk you through setup, usage, and advanced operations, starting with Getting Started.
Getting Started
New to Mesh Hypervisor? This section gets you from zero to a running cluster. Check hardware needs, flash the central node, and boot your first remote node—all in a few steps. Dive in with Prerequisites.
Prerequisites
This section outlines the requirements for deploying Mesh Hypervisor on your systems. Mesh Hypervisor is a distributed hypervisor built on Alpine Linux, designed for diskless operation via PXE booting. Ensure your environment meets these hardware, software, and network specifications before proceeding to Installation.
Hardware Requirements
- Central Orchestration Node:
- Architecture: x86_64.
- CPU: Minimum 2 cores.
- RAM: Minimum 2 GB (4 GB recommended).
- Storage: USB flash drive (at least 8 GB) for the boot image; no local disk required.
- Network: At least one Ethernet port.
- Remote Nodes:
- Architecture: x86_64.
- CPU: Minimum 2 cores.
- RAM: Minimum 2 GB (4 GB recommended).
- Storage: Optional local disks for workloads; operates disklessly by default.
- Network: At least one Ethernet port, configured for network booting (PXE).
Mesh Hypervisor runs lightweight Alpine Linux, so resource demands are minimal. The central node can be an old laptop or embedded device, while remote nodes must support PXE in their BIOS/UEFI.
Software Requirements
- Familiarity with Linux system administration, including:
- Command-line operations (e.g., SSH, basic file editing).
- Alpine Linux package management (apk) is helpful but not mandatory.
- A system to prepare the flash drive image (e.g., a Linux host with dd or similar tools).
No additional software is pre-installed on target systems; Mesh Hypervisor provides all necessary components via the orchestration node.
Network Requirements
- A physical Ethernet network connecting the central node and remote nodes.
- No pre-existing PXE server on the network (Mesh Hypervisor will host its own).
- Optional DHCP server:
- If present, Mesh Hypervisor will proxy it.
- If absent, Mesh Hypervisor will provide DHCP on detected subnets.
- Sufficient IP address space for automatic subnet assignment (e.g., from 10.11.0.0/16 or similar pools).
Mesh Hypervisor scans networks using ARP to detect topology and avoid conflicts, so no specific topology is assumed. Ensure remote nodes can reach the central node over Ethernet.
Knowledge Prerequisites
Mesh Hypervisor targets Linux sysadmins comfortable with:
- Network booting (PXE) concepts.
- Editing configuration files (e.g., manifest files, network configs).
- Debugging via logs and CLI tools.
If you can set up a PXE-booted Linux server and manage it via SSH, you’re ready for Mesh Hypervisor.
Proceed to Installation once these requirements are met.
Installation
This section covers installing Mesh Hypervisor by preparing and booting the central orchestration node from a flash drive image. Mesh Hypervisor uses a prebuilt Alpine Linux image that includes the orchestration software and a package mirror for remote nodes. Remote nodes connect via PXE once the central node is running. Confirm you’ve met the Prerequisites before proceeding.
Step 1: Download the Image
The Mesh Hypervisor image is available as a single file: fragmentation.latest.img.
- Download it from: https://fragmentation.dev/fragmentation.latest.img
- Size: Approximately 7 GB (includes a prebuilt package mirror for MVP simplicity).
No checksum is provided currently—verify the download completes without errors.
Step 2: Write the Image to a Flash Drive
Use a USB flash drive with at least 8 GB capacity. This process overwrites all data on the device.
On a Linux system:
- Identify the flash drive’s device path (e.g., /dev/sdb):
lsblk
Look for the USB device by size; avoid writing to your system disk (e.g., /dev/sda).
- Write the image:
sudo dd if=fragmentation.latest.img of=/dev/sdb bs=4M status=progress
Replace /dev/sdb with your device path.
- Sync the write operation:
sync
- Safely eject the drive:
sudo eject /dev/sdb
For Windows or macOS, use tools like Rufus or Etcher—refer to their documentation.
Step 3: Boot the Orchestration Node
- Plug the flash drive into the system designated as the central orchestration node.
- Access the BIOS/UEFI (e.g., press F2, Del, or similar during boot).
- Set the USB drive as the first boot device.
- Save and reboot.
The system boots Alpine Linux and starts Mesh Hypervisor services automatically—no input needed.
Step 4: Verify the Node is Running
After booting, log in at the console:
- Username: root
- Password: toor
Run this command to check status:
mesh system logview
This opens logs in lnav. If you see DHCP and PXE activity, the node is up and serving remote nodes.
Next Steps
The orchestration node is now active. Connect remote nodes and deploy workloads in Quick Start.
Quick Start
This section guides you through booting a remote node and running a workload with Mesh Hypervisor after installing the orchestration node (see Installation). Using the default configuration, you’ll boot a remote node via PXE and start a KVM workload with VNC access.
Step 1: Connect a Remote Node
- Verify the orchestration node is active:
mesh system logview
Look for DHCP and PXE activity in the logs.
- Connect a remote node to the same Ethernet network:
- Requires x86_64, 2+ GB RAM, PXE support (see Prerequisites).
- Power on the remote node and enter its BIOS/UEFI:
- Set network/PXE as the first boot device.
- Save and reboot.
The remote node pulls its kernel and initramfs from the orchestration node, then boots Alpine Linux with a default APKOVL. A unique 8-character UUID is generated during boot using genid machine 8, based on hardware DMI data.
Step 2: Verify Remote Node Boot
On the orchestration node’s console (logged in as root/toor):
- List online nodes:
mesh node info
This shows each node’s UUID (e.g., a1b2c3d4). The UUID is the first 8 characters of a SHA-512 hash of the node’s DMI modalias.
- Note the UUID of your remote node.
If no nodes appear, check logs (mesh system logview) for DHCP requests or HTTP downloads (kernel, initramfs). Ensure the remote node’s network port is connected.
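For intuition, the UUID corresponds roughly to hashing the node’s DMI modalias and keeping the first 8 hex characters. The one-liner below is only a sketch—the real genid tool may use different inputs or options:
# Illustrative only; not the actual genid implementation
sha512sum /sys/class/dmi/id/modalias | cut -c1-8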
Step 3: Start a Workload
Mesh Hypervisor includes a default KVM workload configuration named qemutest1. To launch it:
- Use the remote node’s UUID from Step 2 (e.g., a1b2c3d4).
- Run:
mesh workload start -n a1b2c3d4 -w qemutest1
- -n: Node UUID.
- -w qemutest1: Workload name (preconfigured in /host0/machines/default).
This starts a KVM virtual machine with 500 MB RAM, 2 CPU threads, and an Alpine ISO (alpine-virt-3.21.3-x86_64.iso) for installation.
Step 4: Access the Workload
The qemutest1 workload uses VNC for console access:
- Identify the remote node’s IP with mesh node info.
- From any system with a VNC client (e.g., vncviewer or TigerVNC):
vncviewer 192.168.x.y:5905
- Port 5905 is derived from runtime.console.id=5 (5900 + 5).
- The VNC session displays the Alpine installer running in the workload.
Next Steps
You’ve booted a remote node and accessed a workload via VNC. See Usage for managing nodes, customizing workloads, or configuring networks.
Core Concepts
Mesh Hypervisor is built on a few key ideas: a central node, diskless remotes, VXLAN networking, and workloads. This section explains how they fit together—start with Architecture Overview.
Architecture Overview
Mesh Hypervisor is a distributed hypervisor based on Alpine Linux, designed to manage a cluster of diskless servers for virtualization. It uses a single central orchestration node to boot and configure remote nodes over a network. This section provides a high-level view of the system’s structure and primary components.
System Structure
Mesh Hypervisor consists of two main parts:
- Central Orchestration Node: A system booted from a flash drive, hosting all control services and configuration data.
- Remote Nodes: Diskless servers that boot from the central node and run workloads (currently KVM virtual machines).
The central node drives the system, distributing boot images and configurations to remote nodes, which operate as a unified cluster.
graph TD
  A[Central Orchestration Node<br>Flash Drive] -->|Boots| B[Remote Node 1]
  A -->|Boots| C[Remote Node 2]
  A -->|Boots| D[Remote Node N]
  B -->|Runs| E[Workload: KVM]
  C -->|Runs| F[Workload: KVM]
  D -->|Runs| G[Workload: KVM]
Core Components
- Flash Drive Image: The central node runs from a 7 GB Alpine Linux image, embedding Mesh Hypervisor software and a package mirror.
- Network Booting: Remote nodes boot via PXE, fetching their OS from the central node without local storage.
- Configurations: Node-specific setups are delivered as Alpine APKOVLs, tied to hardware UUIDs for consistency.
- Workloads: KVM virtual machines execute on remote nodes, managed via CLI from the central node.
Basic Flow
- The central node boots from the flash drive and starts network services.
- Remote nodes boot over the network, using configurations from the central node.
- Workloads launch on remote nodes as directed by the central node.
Design Notes
Mesh Hypervisor emphasizes minimalism and determinism:
- Diskless remote nodes reduce hardware dependencies.
- A single control point simplifies management.
- Networking adapts to detected topology (details in Networking).
For deeper dives, see Central Orchestration Node, Remote Nodes, Networking, and Workloads.
Central Orchestration Node
The central orchestration node is the backbone of Mesh Hypervisor. It boots from a flash drive and delivers the services required to initialize and manage remote nodes in the cluster. This section outlines its core functions and mechanics.
Role
The central node handles:
- Boot Services: Provides PXE to start remote nodes over the network.
- Configuration Delivery: Distributes Alpine APKOVLs for node setups.
- System Control: Runs the mesh CLI tool to manage the cluster.
It operates as an Alpine Linux system with Mesh Hypervisor software preinstalled, serving as the single point of control.
Operation
- Startup: Boots from the flash drive, launching Mesh Hypervisor services.
- Network Scanning: Uses ARP to detect connected Ethernet ports and map network topology dynamically.
- Service Deployment:
- TFTP server delivers kernel and initramfs for PXE booting.
- DHCP server (or proxy) assigns IPs to remote nodes.
- HTTP server hosts APKOVLs and a package mirror.
- Control: Executes mesh commands from the console to oversee nodes and workloads.
Diagram
graph LR
  A[Central Node<br>Flash Drive] -->|TFTP: PXE| B[Remote Nodes]
  A -->|HTTP: APKOVLs| B
  A -->|DHCP| B
Key Features
- Flash Drive Storage: Stores configs and data in /host0—back this up for recovery.
- ARP Scanning: Sequentially sends ARP packets across ports, listening for replies to identify network connections (e.g., same switch detection).
- Package Mirror: Hosts an offline Alpine package repository for remote nodes, ensuring consistent boots without internet.
- Network Flexibility: Starts DHCP on networks lacking it, proxies existing DHCP elsewhere.
Configuration
Core settings live in /host0 on the flash drive, including:
- Subnet pools for DHCP (e.g., 10.11.0.0/16).
- Default package lists for the mirror.
- Network configs for topology adaptation.
See Configuration Reference for details.
Notes
The central node requires no local disk—just an Ethernet port and enough RAM/CPU to run Alpine (see Prerequisites). It’s built for plug-and-play operation. Setup steps are in Installation.
Next, see Remote Nodes for how they rely on the central node.
Remote Nodes
Remote nodes are the operational units of Mesh Hypervisor, executing tasks in the cluster. They boot from the central orchestration node over the network and typically operate disklessly. This section outlines their role and mechanics.
Role
Remote nodes handle:
- Workload Execution: Run KVM virtual machines (workloads) using configurations from the central node.
- Bare-Metal Tasks: Can execute scripts or applications directly (e.g., HPC setups like MPI or Slurm) via prebuilt configuration groups.
They depend on the central node for booting and initial setup, offering versatility in deployment.
Operation
- Boot: Initiates via PXE, pulling kernel and initramfs from the central node’s TFTP server.
- Identification: Generates an 8-character UUID from hardware data.
- Configuration: Requests an APKOVL from the central node’s HTTP server:
- Fetches a custom APKOVL matching the UUID if available.
- Uses a default APKOVL for unrecognized nodes.
- Runtime: Boots Alpine Linux, joins the cluster, and runs workloads or bare-metal tasks.
Diagram
sequenceDiagram
  participant R as Remote Node
  participant C as Central Node
  R->>C: PXE Boot Request
  C-->>R: Kernel + Initramfs (TFTP)
  Note over R: Generate UUID
  R->>C: Request APKOVL (HTTP)
  C-->>R: APKOVL (Default or Custom)
  Note over R: Boot Alpine + Run Tasks
Key Features
- Flexible Execution: Supports KVM workloads or direct application runs (e.g., MPI groups preinstalled via /host0/machines/).
- UUID-Based Identity: Uses genid machine 8 for a deterministic UUID, linking configs consistently.
- APKOVLs: Delivers node-specific overlays (files, scripts, permissions) from the central node.
- Storage Option: Boots and runs the OS without local storage; local disks are optional for workloads or data.
Notes
Remote nodes need PXE-capable hardware and minimal resources (x86_64, 2+ GB RAM—see Prerequisites). Without local storage, they’re stateless, rebooting fresh each time. Local disks, if present, don’t affect the boot process or OS—only workload or task storage. See Workloads for VM details and Networking for connectivity.
Next, explore Networking for cluster communication.
Networking
Networking in Mesh Hypervisor enables remote nodes to boot from the central orchestration node and connect as a cluster. It uses PXE for booting and VXLAN for flexible, overlapping node networks. This section outlines the key networking elements.
Boot Networking
Remote nodes boot over Ethernet via PXE:
- Central Node Services:
- TFTP: Serves kernel and initramfs.
- DHCP: Assigns IPs, proxying existing servers or providing its own.
- Detection: ARP scanning maps network topology by sending packets across ports and tracking responses.
This ensures nodes boot reliably across varied network setups.
Cluster Networking
Post-boot, nodes join VXLAN-based IPv6 networks:
- VXLAN Meshes: Virtual Layer 2 networks link nodes, with a default mesh for all and custom meshes configurable.
- Membership: Nodes can belong to multiple meshes, defined in /host0/network/.
- Addressing: Deterministic IPv6 addresses are derived from node UUIDs.
Workloads on nodes can attach to these meshes for communication.
Diagram
graph TD
  C[Central Node]
  subgraph Mesh1[Default IPv6 Mesh]
    subgraph R1[Remote Node 1]
      W1a[Workload 1a]
      W1b[Workload 1b]
    end
    subgraph R2[Remote Node 2]
      W2a[Workload 2a]
      W2b[Workload 2b]
    end
  end
  subgraph Mesh2[Custom IPv6 Mesh]
    subgraph R2[Remote Node 2]
      W2a[Workload 2a]
      W2b[Workload 2b]
    end
    subgraph R3[Remote Node 3]
      W3a[Workload 3a]
      W3b[Workload 3b]
    end
  end
  C -->|Manages| Mesh1
  C -->|Manages| Mesh2
Key Features
- ARP Scanning: Adapts to topology (e.g., multiple ports to one switch) dynamically.
- IPv6: Powers VXLAN meshes with UUID-based addresses, avoiding collisions.
- Multi-Mesh: Nodes can join multiple VXLAN networks as needed.
- Time Sync: Chrony aligns clocks to the central node for consistent state reporting.
Notes
Mesh Hypervisor requires Ethernet for PXE (see Prerequisites). VXLAN meshes use IPv6 over this base network. Custom setups are detailed in Network Configuration.
Next, see Workloads for what runs on the nodes.
Workloads
Workloads in Mesh Hypervisor are the tasks executed on remote nodes. They include KVM virtual machines and bare-metal applications, managed via configurations from the central orchestration node. This section explains their role and operation.
Role
Workloads enable Mesh Hypervisor’s flexibility:
- KVM Virtual Machines: Virtualized instances (e.g., Alpine VMs) running on remote nodes.
- Bare-Metal Tasks: Direct execution of scripts or applications (e.g., MPI for HPC) on node OS.
Both types leverage the same boot and config system, tailored to node needs.
Operation
- Configuration: Defined in files under /host0/machines/ on the central node, distributed as APKOVLs.
- Deployment: Launched via mesh workload commands (VMs) or preinstalled groups (bare-metal).
- Execution: Runs on remote nodes, accessing node resources (RAM, CPU, optional disks).
- Access: VMs offer VNC/serial consoles; bare-metal tasks use node-level tools (e.g., SSH).
Diagram
graph TD
  C[Central Node<br>/host0/machines/]
  R[Remote Node]
  C -->|APKOVL| R
  R -->|KVM| V[VM Workload<br>e.g., Alpine VM]
  R -->|Bare-Metal| B[Task<br>e.g., MPI Process]
  V -->|VNC/Serial| A[Access]
Key Features
- KVM Support: VMs use QEMU/KVM with configurable RAM, CPU, and disk images (e.g., QCOW2, ISO).
- Bare-Metal Groups: Prebuilt scripts or apps (e.g., MPI, Slurm) run directly on Alpine, no virtualization.
- Config Files: Specify VM settings (e.g., platform.memory.size) or bare-metal files/permissions.
- Determinism: UUID-based APKOVLs ensure consistent deployment across reboots.
Notes
Workloads require remote nodes to be booted (see Remote Nodes). KVM VMs are the default focus, with bare-metal as an alternative for specialized use cases. Local storage is optional for VM disks or task data—see Prerequisites. Full setup details are in Configuring Workloads.
This concludes the Core Concepts. Next, explore Usage.
Usage
Ready to run Mesh Hypervisor? This section shows you how: manage nodes, configure workloads, tweak networks—all via the mesh CLI. Kick off with The mesh Command.
The mesh Command
The mesh command is the primary CLI tool for managing Mesh Hypervisor. It runs on the central orchestration node, controlling the system, nodes, and workloads. This section explains its structure and basic usage.
Overview
mesh is a wrapper script grouping commands into three categories: system, node, and workload. Each category has subcommands for specific tasks, executed from the central node’s console.
Usage
Run mesh <category> <action> [options] from the console. Full syntax:
Usage: mesh <category> <action> [options]
Categories and Actions:
system:
stop Stop the system
configure Configure system settings
start Start the system
logview View system logs
download Download system components
node:
info Display node information
ctl -n <node> [command]
workload:
list [-n <node>]
start -n <node> -w <workload>
hard-stop -n <node> -w <workload>
soft-stop -n <node> -w <workload>
pause -n <node> -w <workload>
resume -n <node> -w <workload>
download -n <node> -w <workload> [-f]
createimg -n <node> -w <workload> [-f]
snapshot-take -n <node> -w <workload> [-s <snapshot name>]
snapshot-list -n <node> -w <workload>
snapshot-revert -n <node> -w <workload> -s <snapshot name>
snapshot-delete -n <node> -w <workload> -s <snapshot name>
Options:
-n, --node <uuid> Node UUID
-w, --workload <uuid> Workload UUID
-s, --snapshot-name <string> Snapshot name
-f, --force Force the operation
-V, --version Show version information
-h, --help Show this help message
Key Commands
- System:
  - mesh system start: Launches PXE, DHCP, and HTTP services.
  - mesh system logview: Opens logs in lnav for debugging.
- Node:
  - mesh node info: Lists online nodes with UUIDs.
  - mesh node ctl -n <uuid>: Runs a shell command (e.g., apk upgrade) or logs in via SSH.
- Workload:
  - mesh workload start -n <uuid> -w <name>: Starts a KVM VM.
  - mesh workload list -n <uuid>: Shows running workloads on a node.
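As a quick illustration, a typical session combining these commands might look like this (the UUID and workload name are the examples used elsewhere in this guide):
mesh node info
mesh workload start -n a1b2c3d4 -w qemutest1
mesh workload list -n a1b2c3d4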
Notes
- Commands execute over SSH for remote nodes, using preinstalled root keys (configurable in /host0).
- Node UUIDs come from mesh node info; workload names match config files (e.g., qemutest1).
- See subsequent sections for specifics: Managing Nodes, Configuring Workloads.
Next, explore Managing Nodes.
Managing Nodes
Remote nodes in Mesh Hypervisor are managed from the central orchestration node using the mesh node commands. This section covers how to list nodes, control them, and handle basic operations.
Prerequisites
- Central node is running (mesh system start executed).
- Remote nodes are booted via PXE (see Quick Start).
Commands run from the central node’s console.
Listing Nodes
To see online nodes:
mesh node info
Output shows each node’s UUID (e.g., a1b2c3d4), generated from hardware data. Use these UUIDs for other commands.
If no nodes appear, check logs with mesh system logview for PXE or DHCP issues.
Controlling Nodes
The mesh node ctl command interacts with a specific node:
mesh node ctl -n <uuid> [command]
- -n <uuid>: Targets a node by its UUID from mesh node info.
Examples
- Run a Command:
mesh node ctl -n a1b2c3d4 "apk upgrade"
Updates packages on the node, pulling from the central node’s mirror.
- Shell Access:
mesh node ctl -n a1b2c3d4
Opens an SSH session as root to the node. Exit with Ctrl+D or logout.
- Reboot:
mesh node ctl -n a1b2c3d4 "reboot"
Restarts the node, triggering a fresh PXE boot.
Batch Operations
For all nodes at once, use the all keyword:
mesh node ctl -n all "apk upgrade"
Executes sequentially across all online nodes.
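For example, after rebuilding configs with mesh system configure, you can reboot every online node so they fetch their new APKOVLs:
mesh node ctl -n all "reboot"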
Notes
- SSH uses preinstalled root keys from /host0—configurable if needed.
- Nodes are stateless without local storage; reboots reset to their APKOVL config.
- Local storage (if present) persists data but doesn’t affect boot (see Remote Nodes).
- Full node config details are in Manifest Files.
Next, see Configuring Workloads for running tasks on nodes.
Configuring Workloads
Workloads in Mesh Hypervisor are KVM virtual machines running on remote nodes, configured and managed from the central orchestration node. This section covers creating and controlling KVM workloads using config files and mesh workload commands. The orchestration node is CLI-only (no GUI); access VMs via tools like VNC from another machine. For bare-metal tasks, see Configuring Groups.
Prerequisites
- Remote nodes are online (check with mesh node info).
- Central node is running (mesh system start executed).
Commands and edits run from the central node’s console.
Creating a Workload Config
KVM workload configs are stored in /var/pxen/monoliths/ on the central node. Each file defines a VM, uploaded to nodes when started.
Example: Create /var/pxen/monoliths/qemutest1.conf:
name=qemutest1
uuid=qweflmqwe23
platform.volume.1.source=http://host0/isos/alpine-virt-3.21.3-x86_64.iso
platform.volume.1.type=iso
platform.volume.1.path=/tmp/alpine.iso
platform.memory.size=500M
platform.cpu.threads=2
platform.cpu.mode=Penryn-v1
runtime.console.type=vnc
runtime.console.id=5
- Sets up a VM with an Alpine ISO, 500 MB RAM, 2 threads, and VNC access.
Managing Workloads
Use mesh workload with a node UUID (from mesh node info) and a workload name matching the config file (e.g., qemutest1).
Commands
- Download Resources:
mesh workload download -n a1b2c3d4 -w qemutest1
Fetches the ISO to /tmp/alpine.iso on the node. Add -f to force redownload.
- Start:
mesh workload start -n a1b2c3d4 -w qemutest1
Uploads qemutest1.conf to the node and launches the VM.
- Access:
vncviewer <node-ip>:5905
From a separate machine with a VNC viewer (e.g., TigerVNC), connect to the console (port 5900 + id=5). Find <node-ip> in mesh system logview.
- Stop:
mesh workload soft-stop -n a1b2c3d4 -w qemutest1
Gracefully stops the VM. Use hard-stop to force it.
- List:
mesh workload list -n a1b2c3d4
Lists running workloads on the node.
Notes
- Configs upload on start and are stored in /var/pxen/monoliths/ on the node afterwards.
- Workloads need node resources (RAM, CPU); check Prerequisites.
- Snapshot commands (e.g., snapshot-take) are KVM-only—see The mesh Command.
- Full syntax is in Workload Config.
Next, see Network Configuration or Configuring Groups.
Configuring Nodes
In Mesh Hypervisor, remote nodes are configured to run bare-metal tasks directly on their Alpine OS using machine folders and group folders in /host0/. Machine folders target specific nodes with custom settings, while group folders provide reusable configurations across multiple nodes. These are combined into node-specific APKOVLs by the compile-apkovl script, applied during PXE boot. This section walks through simple setup steps, run from the central orchestration node’s CLI. For KVM workloads, see Configuring Workloads.
Prerequisites
Before starting, ensure:
- Remote nodes are online (check with mesh node info).
- Central node is running (mesh system start executed).
All actions occur on the central node’s console.
Understanding Node Configuration
Node configurations are stored in /host0/ subdirectories. Here’s the basics:
- Machine Folders: In /host0/machines/ with user-chosen names (e.g., my-server, default). Each ties to a node via a uuid file.
- Group Folders: In /host0/groups/ (e.g., timezone-est, baseline). These are shared setups applied to machines.
- Compilation: The mesh system configure command merges group and machine settings into an APKOVL, stored in /srv/pxen/http/ for nodes to fetch.
Files in Machine and Group Folders
Both folder types can include:
- manifest (Required): Defines files to install or modify on the node.
- packages (Optional): Lists Alpine packages (e.g., chrony) to add to the node’s /etc/apk/world.
Machine folders also use:
- uuid (Required): Holds the node’s 8-character UUID (e.g., 10eff964) or default for new nodes.
- groups (Optional): Lists group folders (e.g., baseline, timezone-est) to apply, in order—only these groups are included.
- SKIP (Optional): An empty file; if present, skips APKOVL compilation for that machine.
Special Machine Folders
Two special cases exist:
- default: UUID is default. Configures new nodes until a matching UUID folder is set.
- initramfs: Builds the initramfs APKOVL, used system-wide during boot.
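Putting it together, a typical layout for the examples in this section might look like this (folder names are arbitrary; only the uuid file ties a folder to a node):
/host0/machines/my-server/
    uuid        # contains: 10eff964
    groups      # contains: baseline timezone-est
    manifest
    hostname    # contains: node1
/host0/groups/timezone-est/
    manifest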
Configuring a Group
Set up a reusable group to set the EST timezone on multiple nodes:
- Create the folder /host0/groups/timezone-est/.
- Add /host0/groups/timezone-est/manifest:
L SRC=/usr/share/zoneinfo/EST TGT=/etc/localtime
This creates a soft link to set the node’s timezone to EST.
Configuring a Machine
Configure a node with UUID 10eff964 to use a manual hostname and the EST timezone:
- Create the folder /host0/machines/my-server/ (name is arbitrary).
- Add /host0/machines/my-server/uuid:
10eff964
This links the folder to the node’s UUID (discovered via mesh node info).
- Add /host0/machines/my-server/groups:
baseline timezone-est
This applies baseline (for essential setup) and timezone-est. Only listed groups are included.
- Add /host0/machines/my-server/manifest:
O MODE=root:root:0644 SRC=/host0/machines/my-server/hostname TGT=/etc/hostname
This sets a manual hostname (e.g., node1).
- Add /host0/machines/my-server/hostname:
node1
This is the source file for the hostname.
Applying the Configuration
Apply the setup to the node:
- Compile the APKOVLs:
mesh system configure
This builds an APKOVL for each machine folder in /host0/machines/.
- Reboot the node:
mesh node ctl -n 10eff964 "reboot"
The node fetches its APKOVL and applies the settings.
Running Tasks
Verify the setup:
mesh node ctl -n 10eff964 "cat /etc/hostname"
This should output node1.
Notes
- Groups do not nest—only machine folders use a groups file to reference top-level /host0/groups/ folders, and only those listed are applied (e.g., baseline must be explicit).
- The groups file order sets manifest application, with the machine’s manifest overriding last.
- If packages are added, run mesh system download first to update the mirror.
- For manifest syntax, see Manifest Files; for node control, see Managing Nodes.
Next, explore Network Configuration.
Network Configuration
Networking in Mesh Hypervisor spans PXE booting and cluster communication via VXLAN meshes. This section shows how to configure node networking—static IPs, bridges, or custom VXLAN networks—using machine folders and group folders in /host0/, applied via APKOVLs. All steps are run from the central orchestration node’s CLI. For an overview, see Networking.
Prerequisites
Ensure the following:
- Remote nodes are online (check with mesh node info).
- Central node is running (mesh system start executed).
Configuring a Static IP
Nodes boot with DHCP by default, but you can set a static IP using a machine or group folder’s manifest. For a node with UUID 10eff964:
- In /host0/machines/my-server/ (from Configuring Nodes), add to manifest:
O MODE=root:root:0644 SRC=/host0/machines/my-server/interfaces TGT=/etc/network/interfaces
This installs the static IP config.
- Create /host0/machines/my-server/interfaces:
auto eth0
iface eth0 inet static
    address 192.168.1.100
    netmask 255.255.255.0
    gateway 192.168.1.1
This sets eth0 to a static IP.
- Compile and apply:
mesh system configure
mesh node ctl -n 10eff964 "reboot"
The node reboots with the new IP.
Use a group folder (e.g., /host0/groups/net-static/) to apply to multiple nodes.
Configuring a Network Bridge
Bridges connect physical interfaces to workloads or VXLANs. For node 10eff964 with a bridge br0:
- In /host0/machines/my-server/manifest, add:
O MODE=root:root:0644 SRC=/host0/machines/my-server/interfaces TGT=/etc/network/interfaces
A MODE=root:root:0644 SRC=/host0/machines/my-server/modules TGT=/etc/modules
A MODE=root:root:0644 SRC=/host0/machines/my-server/packages TGT=/etc/apk/world
A MODE=root:root:0644 SRC=/host0/machines/my-server/bridge.conf TGT=/etc/qemu/bridge.conf
A MODE=root:root:0644 SRC=/host0/machines/my-server/bridging.conf TGT=/etc/sysctl.d/bridging.conf
This sets up the bridge and QEMU support.
- Create these files in /host0/machines/my-server/:
interfaces:
auto eth0
iface eth0 inet manual
auto br0
iface br0 inet static
    address 192.168.1.100
    netmask 255.255.255.0
    gateway 192.168.1.1
    bridge_ports eth0
This bridges eth0 to br0 with a static IP.
modules:
tun
tap
This loads KVM bridge modules.
packages:
bridge
This installs bridge tools.
bridge.conf:
allow br0
This permits QEMU to use br0.
bridging.conf:
net.ipv4.conf.br0_bc_forwarding=1
net.bridge.bridge-nf-call-iptables=0
This enables bridge forwarding and skips iptables.
- Compile and apply:
mesh system download
mesh system configure
mesh node ctl -n 10eff964 "reboot"
The node reboots with br0 ready for workloads.
Workloads can attach to br0—see Configuring Workloads.
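For example, a workload config can attach a VM NIC to this bridge with keys like the following (see Workload Config for the full key reference):
network.0.type=bridge
network.0.bridge=br0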
Configuring a VXLAN Network
Custom VXLAN meshes extend the default network. Define them in /host0/network/ and install via a machine’s manifest:
- Create /host0/network/manage.conf:
name=manage
prefix=fd42:2345:1234:9abc::/64
vni=456
key=456
- name: Identifies the mesh.
- prefix: IPv6 ULA prefix.
- vni: Virtual Network Identifier.
- key: Seed for addressing. The central node (host0) is the default reflector.
- In /host0/machines/my-server/manifest, add:
O MODE=root:root:0644 SRC=/host0/network/manage.conf TGT=/var/pxen/networks/manage.conf
This installs the config.
- Compile and apply:
mesh system configure
mesh node ctl -n 10eff964 "reboot"
The node joins the manage mesh, creating bridge br456 (format: br<vni>).
Workloads attach to br456. Add more nodes by repeating step 2 in their manifest.
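To confirm a node has joined the mesh, you can inspect the bridge from the central node (using the example UUID from above):
mesh node ctl -n 10eff964 "ip -6 addr show dev br456"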
Notes
- The central node adapts to existing DHCP or provides it—see Central Orchestration Node.
- Static IPs and bridges override PXE DHCP after boot. VXLAN bridge names follow br<vni> (e.g., br456 for VNI 456), auto-created on nodes with the config.
- For syntax, see Manifest Files and Network Config; for node control, see Managing Nodes.
Next, explore Manifest Files.
Manifest Files
Manifest files in Mesh Hypervisor define how files are installed or modified on remote nodes. Stored in /host0/machines/ or /host0/groups/, they’re compiled into APKOVLs by mesh system configure and applied on node boot. This section shows practical examples for common tasks, run from the central orchestration node’s CLI. For node setup, see Configuring Nodes.
Prerequisites
Ensure:
- Remote nodes are online (check with mesh node info).
- Central node is running (mesh system start executed).
Using Manifest Files
Each manifest file is a list of entries, one per line, specifying an action, permissions, source, and target. Lines starting with # are comments. Here’s how to use them:
Installing a File
To set a custom hostname on node 10eff964:
- In /host0/machines/my-server/manifest (UUID 10eff964):
O MODE=root:root:0644 SRC=/host0/machines/my-server/hostname TGT=/etc/hostname
- O: Overwrites the target.
- MODE: Sets permissions (root:root:0644).
- SRC: Source file on the central node.
- TGT: Target path on the remote node.
- Create /host0/machines/my-server/hostname:
node1
- Apply:
mesh system configure
mesh node ctl -n 10eff964 "reboot"
Creating a Soft Link
To set the EST timezone (e.g., in a group):
- In /host0/groups/timezone-est/manifest:
L SRC=/usr/share/zoneinfo/EST TGT=/etc/localtime
- L: Creates a soft link.
- No MODE—links inherit target perms.
- Reference the group in a machine’s groups file (e.g., /host0/machines/my-server/groups):
baseline timezone-est
- Apply:
mesh system configure
mesh node ctl -n 10eff964 "reboot"
Appending to a File
To add a package (e.g., chrony):
- In /host0/machines/my-server/manifest:
A MODE=root:root:0644 SRC=/host0/machines/my-server/packages TGT=/etc/apk/world
- A: Appends to the target.
- Create /host0/machines/my-server/packages:
chrony
- Update the mirror and apply:
mesh system download
mesh system configure
mesh node ctl -n 10eff964 "reboot"
Creating a Directory
To make a mount point:
- In /host0/machines/my-server/manifest:
D MODE=root:root:0755 TGT=/mnt/data
- D: Creates a directory.
- Apply:
mesh system configure
mesh node ctl -n 10eff964 "reboot"
Removing a File
To disable dynamic hostname:
- In /host0/machines/my-server/manifest:
R TGT=/etc/init.d/hostname
- R: Removes the target.
- No MODE or SRC needed.
- Apply:
mesh system configure
mesh node ctl -n 10eff964 "reboot"
Notes
Order matters—later entries override earlier ones (e.g., group manifest then machine manifest). Use A to append safely, O to replace. For full syntax and actions, see Manifest Syntax. For applying configs, see Configuring Nodes; for node control, see Managing Nodes.
Next, explore Troubleshooting.
Configuration Reference
Mesh Hypervisor configs live in files—/host0/, workloads, networks, manifests. This section breaks down every key and option. Begin with Orchestration Node Config.
Orchestration Node Config
The central orchestration node in Mesh Hypervisor is configured primarily via /host0/ on its flash drive, with static defaults in /etc/pxen/host0.conf. This section details the /host0/ structure and mentions tweakable options in host0.conf. For usage, see Configuring Nodes.
Primary Config: /host0/
/host0/ drives PXE booting, DHCP, package mirroring, and node setups. Changes require mesh system configure to rebuild APKOVLs, applied on node reboot.
Directory Structure
- /host0/machines/: Node-specific configs.
  - Subfolders (e.g., my-server, default) named arbitrarily.
  - Files:
    - UUID (Required): Node’s 8-char UUID (e.g., 10eff964) or default.
    - manifest (Required): File actions (see Manifest Syntax).
    - groups (Optional): List of group folders (e.g., baseline).
    - packages (Optional): Alpine packages (e.g., chrony).
    - SKIP (Optional): Empty; skips APKOVL build.
- /host0/groups/: Reusable configs for multiple nodes.
  - Subfolders (e.g., timezone-est).
  - Files:
    - manifest (Required): File actions.
    - packages (Optional): Alpine packages.
- /host0/network/: VXLAN configs.
  - Files (e.g., manage.conf): Network settings (see Network Config).
- /host0/packages: Top-level package list for the offline mirror (e.g., mdadm).
Example Configs
- Machine: /host0/machines/my-server/
  - UUID: 10eff964
  - groups: baseline timezone-est
  - manifest: O MODE=root:root:0644 SRC=/host0/machines/my-server/hostname TGT=/etc/hostname
  - hostname: node1
- Group: /host0/groups/timezone-est/
  - manifest: L SRC=/usr/share/zoneinfo/EST TGT=/etc/localtime
- Top-Level: /host0/packages
chrony bridge
Static Config: /etc/pxen/host0.conf
/etc/pxen/host0.conf sets static defaults for the central node—paths, DHCP, and networking. It’s rarely edited; comments in the file explain options. Key tweakable settings include:
- Subnet Pools: subnet_pool (e.g., "10.11.0.0/16" "192.168.0.0/23")—defines DHCP auto-assigned ranges.
- Default Subnet Size: default_subnet_size (e.g., "25")—sets the subnet mask for new networks.
- Manual Subnets: manual_subnets (e.g., { {demo9} {10.0.43.0/25} })—assigns fixed subnets by interface or MAC.
- DHCP Retries: dhcp_retries (e.g., "5") and dhcp_retry_pause (e.g., "3")—tunes DHCP request attempts.
- DNS Settings: dns_servers (e.g., "1.1.1.1" "8.8.8.8") and host0_dns_hostname (e.g., "host0")—configures DNS behavior.
Edit with caution—defaults are optimized for most setups.
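For orientation, an excerpt combining these keys might look like the following. Values here are illustrative, not the shipped defaults, and the comments in the real file are the authoritative reference for exact syntax:
subnet_pool="10.11.0.0/16" "192.168.0.0/23"
default_subnet_size="25"
dhcp_retries="5"
dhcp_retry_pause="3"
dns_servers="1.1.1.1" "8.8.8.8"
host0_dns_hostname="host0"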
Notes
/host0/machines/default/ and /host0/machines/initramfs/ are special—see Configuring Nodes. Group manifests apply first, machine manifests override. Back up /host0/ before changes—see Upgrading the System. For node control, see Managing Nodes; for manifest details, see Manifest Syntax.
Next, explore Workload Config.
Workload Config
Workloads in Mesh Hypervisor are KVM virtual machines defined by config files in /var/pxen/monoliths/ on the central orchestration node, uploaded to remote nodes via mesh workload start. This section fully explains the config structure, keys, and validation, based on the QEMU start script. For usage, see Configuring Workloads.
Overview
Each config file is a key-value text file (e.g., qemutest1.conf) specifying a VM’s name, UUID, resources, disks, network, and console. It’s parsed on the remote node to build a qemu-system-x86_64 command. Keys use dot notation (e.g., platform.cpu.threads), and invalid configs halt startup with errors.
Structure and Keys
Required Keys
- name:
  - Format: String (e.g., qemutest1).
  - Purpose: VM identifier, matches -w in mesh workload commands.
  - Validation: Must be set or startup fails.
- uuid:
  - Format: Unique string (e.g., qweflmqwe23).
  - Purpose: Internal VM ID, used for monitor files (e.g., /tmp/pxen/<uuid>/).
  - Validation: Must be unique and set, or startup fails.
Platform Settings
- platform.memory.size:
  - Format: Number + unit (K, M, G, T) (e.g., 4G, 500M).
  - Purpose: Sets VM RAM as QEMU’s -m arg.
  - Validation: Must match ^[0-9]+[KMGT]$ and fit the node’s available memory (MemAvailable in /proc/meminfo), or fail.
- platform.cpu.threads:
  - Format: Integer (e.g., 2).
  - Purpose: Sets vCPU threads as QEMU’s -smp arg.
  - Validation: Must be a positive number, ≤ the node’s CPU threads (nproc), or fail.
- platform.cpu.mode:
  - Format: QEMU CPU model (e.g., Penryn-v1) or host.
  - Purpose: Sets CPU emulation as QEMU’s -cpu arg.
  - Validation: Must match a model from qemu-system-x86_64 -cpu help or be host (uses KVM); defaults to host if unset.
- platform.volume.<id>.*:
  - Subkeys:
    - type (Required): qcow2 or iso.
    - path (Required): Node-local path (e.g., /tmp/alpine.iso).
    - source (Optional): URL or path to download from. For example, http://host0/isos/alpine.iso fetches alpine.iso from /srv/pxen/http/isos/ if it has been placed there; otherwise, if the remote host has internet access, the URL can point directly to an ISO download, e.g. https://dl-cdn.alpinelinux.org/alpine/v3.21/releases/x86_64/alpine-virt-3.21.3-x86_64.iso.
    - writable (Optional): 1 (read-write) or 0 (read-only), defaults to 0.
    - ephemeral (Optional): 1 (delete on stop/start) or 0 (persist), defaults to 1.
  - Format: <id> is a number (e.g., 0, 1).
  - Purpose: Defines disks as QEMU -drive args.
  - Validation:
    - path must exist on the node (via mesh workload download).
    - type=qcow2: writable toggles readonly.
    - type=iso: writable=1 fails (CDROMs are read-only).
    - ephemeral=0 fails if path exists pre-start.
Network Settings
- network.<id>.*:
  - Subkeys:
    - type (Required): bridge or nat.
    - bridge (Required for bridge): Bridge name (e.g., br0).
    - mac (Optional): MAC address (e.g., 52:54:00:12:34:56).
  - Format: <id> is a number (e.g., 0).
  - Purpose: Configures NICs as QEMU -netdev and -device args.
  - Validation:
    - type=bridge: bridge must exist on the node (e.g., /sys/class/net/br0/).
    - mac: If unset, generated from uuid, id, and type (e.g., 52:54:00:xx:xx:xx).
    - Uses the virtio-net-pci device.
Runtime Settings
- runtime.boot:
  - Format: Comma-separated list (e.g., 1,0,n0).
  - Purpose: Sets boot order—numbers reference platform.volume.<id> or network.<id> (with an n prefix).
  - Validation: Must match existing volume/network IDs or fail. Unset defaults to the first volume (c) or network (n).
- runtime.console.*:
  - Subkeys:
    - type (Optional): serial-tcp, serial-socket, or vnc; defaults to serial-tcp.
    - id (Optional for serial-tcp, required for vnc): Integer (e.g., 5).
  - Purpose: Configures console access.
  - Validation:
    - serial-tcp: Port 7900 + id (e.g., 7905), must be free.
    - serial-socket: Uses /tmp/pxen/<uuid>/serial.sock.
    - vnc: Port 5900 + id (e.g., 5905), must be free.
Example Config
/var/pxen/monoliths/qemutest1.conf:
name=qemutest1
uuid=qweflmqwe23
platform.memory.size=4G
platform.cpu.threads=2
platform.cpu.mode=host
platform.volume.0.source=http://host0/isos/alpine-virt-3.21.3-x86_64.iso
platform.volume.0.type=iso
platform.volume.0.path=/tmp/alpine.iso
network.0.type=bridge
network.0.bridge=br0
runtime.console.type=vnc
runtime.console.id=5
- 4 GB RAM, 2 vCPUs, Alpine ISO, bridged to br0, VNC on port 5905.
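On the node, a config like this is translated into a QEMU command roughly along these lines. This is a sketch only—the actual start script builds its own argument list, device names, and monitor/log plumbing:
qemu-system-x86_64 \
  -enable-kvm -cpu host -m 4G -smp 2 \
  -drive file=/tmp/alpine.iso,media=cdrom,readonly=on \
  -netdev bridge,id=net0,br=br0 \
  -device virtio-net-pci,netdev=net0 \
  -vnc :5 \
  -daemonize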
Notes
Configs are validated on the node—errors (e.g., missing path, invalid threads) halt startup with logs in /tmp/pxen/<uuid>/output.log. Volumes must be downloaded (mesh workload download) before start. Only KVM is supported now; future types may expand options. For bridge setup, see Network Configuration; for node control, see Managing Nodes.
Next, explore Network Config.
Network Config
Network configs in Mesh Hypervisor define custom VXLAN meshes for node connectivity, stored in /host0/network/ on the central orchestration node. These files (e.g., manage.conf) are installed to remote nodes via manifests and used to build VXLAN bridges like br456. This section explains the keys and how they work, assuming you know basics like ssh and cat. For setup steps, see Network Configuration.
Overview
Each config file sets up a VXLAN mesh—a virtual network linking nodes over your physical Ethernet. On each node, a script reads the config, creates a vxlan<vni> interface (the tunnel) and a br<vni> bridge (where workloads connect), and assigns IPv6 addresses. The central node, called host0, runs an HTTP API inside each VXLAN (e.g., on port 8000) to list all nodes in that mesh. Nodes use this API to find and connect to each other, updating their neighbor lists dynamically.
Here’s the tricky part: nodes need host0’s address to join the VXLAN, but they’re not in it yet. Mesh Hypervisor solves this by giving host0 a fixed IPv6 address—always ending in 0001 (e.g., fd42:1234::1). Nodes start by connecting to that, fetch the API data, then link up with everyone else. If a node disappears, the API updates, and others drop it. Simple, right?
Structure and Keys
Configs are plain text files with key=value lines. Here’s what each key does:
- name:
  - Format: Any word (e.g., manage).
  - Purpose: Names the VXLAN mesh—helps generate unique addresses and IDs.
  - Example: name=manage—just a label you pick.
  - Must Have: Yes—if missing, the script fails with an error.
- prefix:
  - Format: IPv6 address with /64 (e.g., fd42:2345:1234:9abc::/64).
  - Purpose: Sets the IPv6 range for the mesh—like a big address pool starting with fd42:2345:1234:9abc:. Every node gets a unique address from this.
  - Example: prefix=fd42:2345:1234:9abc::/64—host0 gets fd42:2345:1234:9abc::1, others get random endings.
  - Must Have: Yes—needs to be /64 (64-bit network part), or the script chokes.
- vni:
  - Format: A number (e.g., 456).
  - Purpose: Virtual Network Identifier—makes vxlan456 and br456. Keeps meshes separate.
  - Example: vni=456—creates br456 on nodes for workloads to join.
  - Must Have: Yes—duplicate VNIs crash the script; each mesh needs its own.
- key:
  - Format: A number (e.g., 456).
  - Purpose: A seed number—feeds into genid to make unique IPv6 and MAC addresses for each node.
  - Example: key=456—ensures addresses like fd42:2345:1234:9abc:1234:5678:9abc:def0 are predictable.
  - Must Have: Yes—if missing, addressing fails. The same key across meshes might overlap, so mix it up.
Example Config
/host0/network/manage.conf:
name=manage
prefix=fd42:2345:1234:9abc::/64
vni=456
key=456
- Sets up a mesh called manage with bridge br456, IPv6 starting fd42:2345:1234:9abc:, and key=456 for address generation.
How It Works
When a node boots, it copies this config to /var/pxen/networks/ (via a manifest) and runs a script. Here’s what happens, step by step:
- VXLAN Interface: Creates vxlan<vni> (e.g., vxlan456)—a tunnel over your Ethernet.
  - Uses port 4789, MTU 1380 (hardcoded).
  - Gets a MAC like 02:12:34:56:78:9a from genid(name+vni).
- Bridge Interface: Creates br<vni> (e.g., br456)—a virtual switch.
  - Gets a MAC like 02:ab:cd:ef:01:23 from genid(bridge+name+vni).
  - Links vxlan456 to br456 so traffic flows through.
- IPv6 Address: Assigns the node an address like fd42:2345:1234:9abc:1234:5678:9abc:def0.
  - Uses prefix plus a genid(name+vni) suffix—unique per node.
  - host0 always gets prefix:0000:0000:0000:0001 (e.g., fd42:2345:1234:9abc::1).
- Connect to host0: Adds host0’s IPv4 (from the PXE boot URL) and MAC to the VXLAN’s neighbor list.
  - Starts talking to host0 at fd42:2345:1234:9abc::1:8000.
- Fetch Neighbors: Grabs a list of other nodes from host0’s HTTP API.
  - Format: hostname ipv4 mac ipv6 per line.
  - Updates every 3 seconds—adds new nodes, drops missing ones.
- Stay Alive: Pings host0’s IPv6 to keep the mesh active.
Workloads (e.g., VMs) plug into br<vni>—like a virtual LAN cable.
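For a sense of what that script does, the interface setup is roughly equivalent to iproute2 commands like these (a sketch with placeholder addresses—the real script also sets MACs, manages neighbor entries, and runs the API polling loop):
# Approximate shape of the setup for vni=456 (illustrative only)
ip link add vxlan456 type vxlan id 456 dstport 4789 nolearning
ip link set vxlan456 mtu 1380 up
ip link add br456 type bridge
ip link set vxlan456 master br456
ip link set br456 up
ip -6 addr add fd42:2345:1234:9abc:1234:5678:9abc:def0/64 dev br456
# Flood unknown traffic toward host0's IPv4 (taken from the PXE boot URL)
bridge fdb append 00:00:00:00:00:00 dev vxlan456 dst 10.11.0.1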
Notes
Install configs with a manifest (e.g., O MODE=root:root:0644 SRC=/host0/network/manage.conf TGT=/var/pxen/networks/manage.conf). The HTTP API runs only inside the VXLAN—nodes bootstrap via host0’s 0001 address, not external access. Overlapping prefix or vni values break the mesh—check logs (mesh system logview) if nodes don’t connect. For workload bridges, see Workload Config; for node control, see Managing Nodes.
Next, explore Manifest Syntax.
Manifest Syntax
Manifest files in Mesh Hypervisor tell the system how to set up files on remote nodes. They live in /host0/machines/ or /host0/groups/ on the central orchestration node, alongside the source files they reference, and get compiled into APKOVLs during mesh system configure. This section explains the syntax—actions, fields, and rules—so you can tweak nodes even if you just know ssh and cat. For usage, see Manifest Files.
Overview
A manifest is a text file named manifest inside a folder like /host0/machines/my-server/. It lists actions—one per line—like copying a file or making a link. Each line starts with a letter (the action) and has fields (like permissions or paths). That same folder holds the files it calls (e.g., hostname next to manifest). When you run mesh system configure, Mesh Hypervisor reads these lines, applies them in order, and builds the node’s filesystem. If a machine folder uses groups (e.g., baseline), their manifests run first, then the machine’s overrides.
Think of it like a recipe: “Copy hostname to /etc/hostname,” “Make /mnt/data.” The compile-apkovl script checks every line—miss a file or botch the syntax, and the whole mesh system configure stops with an error. That’s on purpose: no silent failures allowed. You fix it, then rerun.
Syntax
Each line starts with an action letter, followed by space-separated fields (FIELD=value). Paths must be full (e.g., /host0/machines/my-server/hostname) since relative paths aren’t supported yet.
Actions
- O (Overwrite):
  - Purpose: Copies a file from the folder (e.g., /host0/machines/my-server/) to the remote node, replacing what’s there.
  - Fields:
    - MODE=<user:group:perms> (Required): Sets ownership and permissions (e.g., root:root:0644—read-write for owner, read for others).
    - SRC=<full-path> (Required): Source file in the folder (e.g., /host0/machines/my-server/hostname).
    - TGT=<full-path> (Required): Target on the remote node (e.g., /etc/hostname).
  - Example:
O MODE=root:root:0644 SRC=/host0/machines/my-server/hostname TGT=/etc/hostname
    Copies hostname from my-server/ to /etc/hostname with rw-r--r--.
- A (Append):
  - Purpose: Adds a file’s contents from the folder to the end of a target file (creates it if missing).
  - Fields: Same as O—MODE, SRC, TGT.
  - Example:
A MODE=root:root:0644 SRC=/host0/machines/my-server/packages TGT=/etc/apk/world
    Appends packages to /etc/apk/world, sets rw-r--r--.
- D (Directory):
  - Purpose: Makes a directory on the remote node.
  - Fields:
    - MODE=<user:group:perms> (Required): Sets ownership and permissions (e.g., root:root:0755—read-write-execute for owner, read-execute for others).
    - TGT=<full-path> (Required): Directory path (e.g., /mnt/data).
  - Example:
D MODE=root:root:0755 TGT=/mnt/data
    Creates /mnt/data with rwxr-xr-x.
- L (Soft Link):
  - Purpose: Creates a symbolic link on the remote node.
  - Fields:
    - SRC=<full-path> (Required): What to link to (e.g., /usr/share/zoneinfo/EST).
    - TGT=<full-path> (Required): Link location (e.g., /etc/localtime).
  - Example:
L SRC=/usr/share/zoneinfo/EST TGT=/etc/localtime
    Links /etc/localtime to /usr/share/zoneinfo/EST—perms come from the target.
- R (Remove):
  - Purpose: Deletes a file or directory on the remote node.
  - Fields:
    - TGT=<full-path> (Required): Path to remove (e.g., /etc/init.d/hostname).
  - Example:
R TGT=/etc/init.d/hostname
    Removes /etc/init.d/hostname.
Fields Explained
- MODE=<user:group:perms>:
  - Format: user:group:octal (e.g., root:root:0644).
  - Purpose: Sets who owns the file and what they can do—0644 is owner read-write, others read; 0755 adds execute.
  - Used In: O, A, D—not L (links use target perms) or R (no perms to set).
  - Must Have: Yes for O, A, D—skip it, and the script errors out.
- SRC=<full-path>:
  - Format: Complete path on the central node (e.g., /host0/machines/my-server/hostname).
  - Purpose: Points to a file in the same folder as manifest—must exist when mesh system configure runs.
  - Used In: O, A, L—not D (no source) or R (nothing to copy).
  - Must Have: Yes for O, A, L—a missing file stops the build.
- TGT=<full-path>:
  - Format: Complete path on the remote node (e.g., /etc/hostname).
  - Purpose: Where the action happens—parent dirs are auto-created.
  - Used In: All actions (O, A, D, L, R).
  - Must Have: Yes—every action needs a target.
Example Manifest
/host0/machines/my-server/manifest:
# Set a custom hostname from this folder
O MODE=root:root:0644 SRC=/host0/machines/my-server/hostname TGT=/etc/hostname
# Add packages from this folder
A MODE=root:root:0644 SRC=/host0/machines/my-server/packages TGT=/etc/apk/world
# Make a mount point
D MODE=root:root:0755 TGT=/mnt/data
# Remove default hostname service
R TGT=/etc/init.d/hostname
- Source files hostname and packages live in /host0/machines/my-server/ with manifest.
How It Works
When you run mesh system configure, the compile-apkovl script processes every manifest:
- Compilation: Reads group manifests (from groups) first, then the machine’s—later lines override earlier ones.
  - Example: baseline sets /etc/motd, my-server overwrites it, my-server wins.
- Actions: For each line:
  - O: Copies SRC to TGT, sets MODE.
  - A: Appends SRC to TGT (creates if missing), sets MODE, adds a newline if needed.
  - D: Makes the TGT dir, sets MODE.
  - L: Links TGT to SRC.
  - R: Deletes TGT (recursive for dirs).
- Validation: Checks SRC exists (for O, A), MODE is valid, and paths are full—any error (e.g., a missing hostname) stops the entire build.
  - Fix it, then rerun mesh system configure, or no APKOVLs get made.
The result lands in /srv/pxen/http/, and nodes grab it on boot.
Notes
Source files sit in the same folder as manifest (e.g., /host0/machines/my-server/hostname)—keep them together. Paths must be full (e.g., /host0/...)—relative paths like hostname won’t work until a future update. A is safer than O for files like /etc/apk/world—it adds, doesn’t wipe. For examples, see Configuring Nodes; for network configs, see Network Config.
This concludes the Configuration Reference.
Troubleshooting
When Mesh Hypervisor hits a snag, this section helps you diagnose and fix common issues. All steps are run from the central orchestration node’s CLI unless noted. For setup details, see Usage.
Prerequisites
Ensure:
- Central node is booted (see Installation).
- You’re logged in (root/toor at the console).
Checking Logs
Start with logs—they’re your first clue:
mesh system logview
This opens lnav in /var/log/pxen/, showing DHCP, PXE, HTTP, and service activity. Scroll with arrows, filter with / (e.g., /error), exit with q.
Common Log Issues
- DHCP Requests Missing: No nodes booting—check network cables or PXE settings.
- HTTP 403 Errors: Permissions issue on /srv/pxen/http/—run chmod -R 644 /srv/pxen/http/*.
- Kernel Downloaded, Then Stops: APKOVL fetch failed—verify the UUID matches in /host0/machines/<folder>/uuid. Check permissions on /srv/pxen/http.
Node Not Booting
If mesh node info shows no nodes:
- Verify PXE: On the node, ensure BIOS/UEFI is set to network boot.
- Check Logs: In mesh system logview, look for DHCP leases and kernel downloads.
- Test Network: From the central node:
ping <node-ip>
Find <node-ip> in logs (e.g., DHCP lease). No response? Check cables or switches.
Workload Not Starting
If mesh workload start -n <uuid> -w <name> fails:
- Check Logs: Run mesh system logview—look for QEMU or KVM errors.
- Verify Config: Ensure /var/pxen/monoliths/<name>.conf exists and matches -w <name>—see Configuring Workloads.
- Resources: SSH to the node:
mesh node ctl -n <uuid> "free -m; cat /proc/cpuinfo"
Confirm RAM and CPU suffice (e.g., 500M RAM for qemutest1).
- Restart: Retry:
mesh workload soft-stop -n <uuid> -w <name>
mesh workload start -n <uuid> -w <name>
Network Issues
If a node’s IP or VXLAN isn’t working:
- Check IP: On the node:
mesh node ctl -n <uuid> "ip addr"
No static IP? Verify interfaces in manifest—see Network Configuration.
- VXLAN Bridge: Check bridge existence:
mesh node ctl -n <uuid> "ip link show br456"
Missing? Ensure /var/pxen/networks/manage.conf is installed.
- Ping Test: From the node:
mesh node ctl -n <uuid> "ping6 -c 4 fd42:2345:1234:9abc::1"
No reply? Check the VXLAN config in /host0/network/.
Time Sync Problems
If nodes show as offline in mesh node info:
- Check Time: On the node:
      mesh node ctl -n <uuid> "date"
  Off by hours? Time sync failed.
- Fix Chrony: Ensure the ntp_sync group is applied in the node’s groups file (a minimal example follows this list)—see Configuring Nodes.
- Restart Chrony: On the node:
      mesh node ctl -n <uuid> "rc-service chronyd restart"
Notes
- Logs are verbose—most errors trace back to permissions, network, or config mismatches.
- If stuck, rebuild configs with mesh system configure and reboot nodes.
- For manifest tweaks, see Manifest Files; for node control, see Managing Nodes.
Next, explore Advanced Topics.
Advanced Topics
Take Mesh Hypervisor further—upgrades, recoveries, and more. This section digs into the deep end, starting with Setting Up RAID Storage.
Setting Up RAID Storage
Mesh Hypervisor nodes are diskless by default, but you can add local RAID storage for data persistence—like for backups or file shares. This guide shows how to set up a RAID array with encryption on a remote node, using the storage group and a custom machine config. Commands run via SSH to the central orchestration node’s CLI (e.g., ssh root@<central-ip>). For node basics, see Configuring Nodes.
Prerequisites
Ensure:
- A remote node with spare disks (e.g., /dev/sda, /dev/sdb) is online (mesh node info).
- You’ve got a machine folder (e.g., /host0/machines/storage-node/).
- The storage group is in /host0/groups/storage/—it’s prebuilt with RAID and encryption tools.
Step 1: Boot and Inspect the Node
- Add Storage Group: In /host0/machines/storage-node/groups:
      baseline
      storage
  - baseline sets essentials; storage adds mdadm, cryptsetup, etc.
- Set UUID: In /host0/machines/storage-node/UUID, use the node’s UUID (e.g., 10eff964) from mesh node info.
- Apply: Rebuild and reboot:
      mesh system configure
      mesh node ctl -n 10eff964 "reboot"
- SSH In: Connect to the node:
      mesh node ctl -n 10eff964
- Check Disks: List available drives:
      lsblk
  - Example: See /dev/sda and /dev/sdb—unpartitioned, ready for RAID.
Step 2: Create the RAID Array
- Build RAID: Make a RAID1 array (mirrored):
      mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
  - /dev/md0: Array name.
  - --level=1: Mirror (RAID1)—swap for 5 or 10 if you’ve got more disks.
  - Adjust /dev/sda, /dev/sdb to your drives.
- Save Config: Write the array details:
      mdadm --detail --scan > /etc/mdadm.conf
  - Example output: ARRAY /dev/md0 metadata=1.2 name=q-node:0 UUID=abcd1234:5678...
- Monitor: Check progress:
      cat /proc/mdstat
  - Wait for [UU]—array’s synced (see the sample output after this list).
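As a rough illustration, a fully synced mirror looks something like the following in /proc/mdstat; the device names and block count here are hypothetical:

    Personalities : [raid1]
    md0 : active raid1 sdb[1] sda[0]
          976630464 blocks super 1.2 [2/2] [UU]

    unused devices: <none>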
Step 3: Encrypt the Array
- Create LUKS: Encrypt /dev/md0:
      cryptsetup luksFormat /dev/md0
  - Enter a passphrase (e.g., mysecret)—you’ll generate a keyfile next.
- Generate Keyfile: Make a random key:
      dd if=/dev/urandom of=/etc/data.luks bs=4096 count=1
      chmod 600 /etc/data.luks
  - 4KB keyfile, locked to root.
- Add Key: Link it to LUKS:
      cryptsetup luksAddKey /dev/md0 /etc/data.luks
  - Enter the passphrase again—keyfile’s now an alternate unlock.
- Open LUKS: Unlock the array:
      cryptsetup luksOpen /dev/md0 data --key-file /etc/data.luks
  - Creates /dev/mapper/data.
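Optionally, you can sanity-check that both the passphrase and the keyfile registered by dumping the LUKS header; the exact output depends on your cryptsetup version, but two key slots should show as in use:

    # Inspect the LUKS header on the array
    cryptsetup luksDump /dev/md0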
Step 4: Format and Mount
- Format: Use ext4 (or xfs, etc.):
      mkfs.ext4 /dev/mapper/data
- Mount: Test it:
      mkdir /mnt/data
      mount /dev/mapper/data /mnt/data
      df -h
  - See /mnt/data listed—unmount with umount /mnt/data after the quick check below.
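Before unmounting, a quick write test confirms the encrypted filesystem is actually usable; the .write-test filename is arbitrary:

    echo ok > /mnt/data/.write-test
    cat /mnt/data/.write-test
    rm /mnt/data/.write-test
    umount /mnt/data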
Step 5: Configure the Machine
- Exit Node: Back to the central node:
      Ctrl+D
- Update Manifest: In /host0/machines/storage-node/manifest:
      # RAID config
      O MODE=root:root:0644 SRC=/host0/machines/storage-node/mdadm.conf TGT=/etc/mdadm.conf
      # Encryption
      A MODE=root:root:0644 SRC=/host0/machines/storage-node/dmcrypt TGT=/etc/conf.d/dmcrypt
      O MODE=root:root:0600 SRC=/host0/machines/storage-node/data.luks TGT=/etc/data.luks
      # Filesystem mount
      A MODE=root:root:0644 SRC=/host0/machines/storage-node/fstab TGT=/etc/fstab
      D MODE=root:root:0755 TGT=/mnt/data
- Add Files: In /host0/machines/storage-node/:
  - mdadm.conf: Copy from the node (scp root@<node-ip>:/etc/mdadm.conf .).
  - dmcrypt:
        target=data
        source=/dev/md0
        key=/etc/data.luks
  - data.luks: Copy from the node (scp root@<node-ip>:/etc/data.luks .).
  - fstab:
        /dev/mapper/data /mnt/data ext4 defaults,nofail 0 2
- Apply: Rebuild and reboot:
      mesh system configure
      mesh node ctl -n 10eff964 "reboot"
- Verify: SSH in, check:
      mesh node ctl -n 10eff964 "df -h"
  /mnt/data should be mounted.
Notes
The storage group handles boot-time RAID assembly and LUKS unlocking—your machine config locks in the specifics. RAID setup is manual first; configs make it persistent. For multi-disk setups (e.g., RAID5), adjust --level and add drives (sketched below)—update dmcrypt and fstab too. See Managing Nodes for CLI tips; Recovery Procedures for RAID fixes.
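For instance, a minimal sketch of the RAID5 variant with three hypothetical drives; the LUKS, filesystem, and manifest steps stay the same apart from the device list:

    mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc
    mdadm --detail --scan > /etc/mdadm.conf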
Next, explore Running Docker.
Running Docker
Mesh Hypervisor nodes can run Docker containers bare-metal—great for lightweight services like a web server or app. This guide shows how to enable Docker on a remote node and spin up a test container. Commands run via SSH to the central orchestration node’s CLI (e.g., ssh root@<central-ip>). For node setup, see Configuring Nodes.
Prerequisites
Ensure:
- A remote node is online (mesh node info).
- You’ve got a machine folder (e.g., /host0/machines/docker-node/).
- The docker group exists in /host0/groups/docker/—it’s prebuilt with Docker tools.
Step 1: Enable Docker
- Add Docker Group: In /host0/machines/docker-node/groups:
      baseline
      docker
  - baseline sets essentials; docker installs Docker and tweaks services.
- Set UUID: In /host0/machines/docker-node/UUID, use the node’s UUID (e.g., 10eff964) from mesh node info.
- Apply: Update the mirror, rebuild, and reboot:
      mesh system download
      mesh system configure
      mesh node ctl -n 10eff964 "reboot"
  - mesh system download grabs docker packages; the docker group adds them to /etc/apk/world.
Step 2: Run a Test Container
- SSH In: Connect to the node:
      mesh node ctl -n 10eff964
- Verify Docker: Check it’s running:
      docker version
  - Should show Docker client and server versions—services auto-start via the group.
- Pull an Image: Grab a simple container:
      docker pull hello-world
  - Downloads the hello-world image from Docker Hub.
- Run It: Start the container:
      docker run hello-world
  - Prints a “Hello from Docker!” message and exits—proof it works.
- Exit Node: Back to the central node:
      Ctrl+D
Notes
The docker group handles everything: installs docker, docker-openrc, etc., links services to /etc/runlevels/default/, and sets rc_cgroup_mode=unified in /etc/rc.conf for cgroups. No manual CLI tweaks needed—just add the group and go. For persistent containers, add a manifest entry (e.g., O MODE=root:root:0755 SRC=/host0/machines/docker-node/run.sh TGT=/usr/local/bin/run.sh) and script your docker run, as sketched below. See Managing Nodes for CLI tips; Configuring Nodes for group details.
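A minimal sketch of such a run.sh, assuming a hypothetical nginx container named webapp; adjust the image, name, and ports to your service. How the script gets invoked at boot is up to you; the manifest entry only puts it in place.

    #!/bin/sh
    # Remove any stale container with the same name, then start it detached.
    docker rm -f webapp 2>/dev/null || true
    docker run -d --name webapp --restart unless-stopped -p 8080:80 nginx:alpine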
Next, explore Configuring Samba.
Configuring Samba
Mesh Hypervisor nodes can run Samba bare-metal to share files over the network—like a folder for backups. This guide sets up Samba to share /mnt/data on a remote node, showing how configs work in either a group or machine folder. Commands run via SSH to the central orchestration node’s CLI (e.g., ssh root@<central-ip>). For node basics, see Configuring Nodes.
Prerequisites
Ensure:
- A remote node is online (mesh node info)—e.g., one with RAID at /mnt/data (see Setting Up RAID Storage).
- User nobody exists—Samba uses it for guest access (see Adding a User Account).
- You’ve got a machine folder (e.g., /host0/machines/terrible-tuneup/).
Step 1: Configure Samba
You can put Samba settings in a group (reusable) or directly in the machine folder (one-off)—both work the same. Here’s how:
Option 1: As a Group
- Make the Group: Create /host0/groups/samba/:
      mkdir /host0/groups/samba
- Add Manifest: Create /host0/groups/samba/manifest:
      A MODE=root:root:0644 SRC=/host0/groups/samba/packages TGT=/etc/apk/world
      L SRC=/etc/init.d/samba TGT=/etc/runlevels/default/samba
      O MODE=root:root:0644 SRC=/host0/groups/samba/smb.conf TGT=/etc/samba/smb.conf
- Add Packages: Create /host0/groups/samba/packages:
      samba
      samba-server
      samba-server-openrc
- Add Config: Create /host0/groups/samba/smb.conf:
      [global]
      workgroup = WORKGROUP
      server string = network file share
      netbios name = Mesh Hypervisor-SAMBA
      security = user
      map to guest = Bad User
      guest account = nobody

      [data]
      path = /mnt/data
      browsable = yes
      writable = yes
      guest ok = yes
      read only = no
      force user = nobody
      create mask = 0777
      directory mask = 0777
- Link to Machine: In /host0/machines/terrible-tuneup/groups:
      baseline
      storage
      samba
Option 2: In the Machine Folder
- Add to Machine: In /host0/machines/terrible-tuneup/manifest:
      A MODE=root:root:0644 SRC=/host0/machines/terrible-tuneup/samba-packages TGT=/etc/apk/world
      L SRC=/etc/init.d/samba TGT=/etc/runlevels/default/samba
      O MODE=root:root:0644 SRC=/host0/machines/terrible-tuneup/smb.conf TGT=/etc/samba/smb.conf
- Add Files: In /host0/machines/terrible-tuneup/:
  - samba-packages:
        samba
        samba-server
        samba-server-openrc
  - smb.conf: Same as above.
Step 2: Apply to the Node
- Set Hostname: Mesh Hypervisor generates hostnames like terrible-tuneup from genid using noun/adjective lists—check yours in mesh node info.
- Set UUID: In /host0/machines/terrible-tuneup/UUID, use the node’s UUID (e.g., 10eff964) from mesh node info.
- Apply: Update, rebuild, reboot:
      mesh system download
      mesh system configure
      mesh node ctl -n 10eff964 "reboot"
Step 3: Test Samba
- SSH In: Connect to the node:
      mesh node ctl -n 10eff964
- Verify Service: Check Samba’s running:
      rc-service samba status
  - Should say started—if not, run rc-service samba start.
- Exit Node: Back to your desktop:
      Ctrl+D
- Test Access: From another machine:
  - Linux:
        smbclient -L //terrible-tuneup -U nobody%
    - Lists the data share.
  - Windows: Open \\terrible-tuneup in Explorer—use the node’s hostname or IP from mesh node info.
  - Mount:
        mount -t cifs //terrible-tuneup/data /mnt -o username=nobody,password=
    - Empty password for guest.
Notes
Groups and machines are interchangeable—use a group to reuse Samba across nodes, or stick it in the machine folder for a one-off. The nobody user is required—add it first. This example shares /mnt/data with open perms (0777)—tighten create mask (e.g., 0644) or set guest ok = no for security, as sketched below. For RAID setup, see Setting Up RAID Storage; for users, see Adding a User Account.
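For example, a minimal sketch of a tightened [data] section; it assumes you want authenticated access instead of guests, which also needs a Samba user set up (e.g., via smbpasswd) beyond what this guide covers:

    [data]
    path = /mnt/data
    browsable = yes
    writable = yes
    guest ok = no
    read only = no
    create mask = 0644
    directory mask = 0755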
Next, explore Adding a User Account.
Adding a User Account
Mesh Hypervisor nodes don’t handle user accounts gracefully—Linux’s /etc/passwd isn’t atomic. This guide shows a workaround: append user data via a group to add a user (e.g., mal) across nodes consistently. Commands run via SSH to the central orchestration node’s CLI (e.g., ssh root@<central-ip>). For node basics, see Configuring Nodes.
Prerequisites
Ensure:
- A remote node is online (mesh node info)—e.g., hostname terrible-tuneup.
- You’ve got a machine folder (e.g., /host0/machines/terrible-tuneup/).
Step 1: Create the User on the Central Node
To avoid errors in mesh system configure, add the user on the central node first—its compile-apkovl script needs the UID/GID to exist.
- Add User: On the central node:
      adduser mal
  - Set password (e.g., mysecret), fill optional fields (e.g., “Linux User” for full name), pick /bin/bash.
  - UID (e.g., 1000) auto-increments—note it from /etc/passwd.
- Copy Lines: Extract user data:
      grep "^mal:" /etc/passwd > /tmp/passwd-mal
      grep "^mal:" /etc/group > /tmp/group-mal
      grep "^mal:" /etc/shadow > /tmp/shadow-mal
  - Saves mal’s lines—e.g., mal:x:1000:1000:Linux User,,,:/home/mal:/bin/bash.
Step 2: Build the User Group
- Make the Group: Create /host0/groups/useracct-mal/:
      mkdir /host0/groups/useracct-mal
- Add Manifest: Create /host0/groups/useracct-mal/manifest:
      A MODE=root:root:0644 SRC=/host0/groups/useracct-mal/passwd TGT=/etc/passwd
      A MODE=root:root:0644 SRC=/host0/groups/useracct-mal/group TGT=/etc/group
      A MODE=root:root:0640 SRC=/host0/groups/useracct-mal/shadow TGT=/etc/shadow
      D MODE=1000:1000:0755 TGT=/home/mal
  - Appends user data, makes /home/mal with UID:GID (not mal, as it might not exist yet on nodes).
- Add Files: In /host0/groups/useracct-mal/:
  - passwd: Copy from /tmp/passwd-mal (e.g., mal:x:1000:1000:Linux User,,,:/home/mal:/bin/bash).
  - group: Copy from /tmp/group-mal (e.g., mal:x:1000:).
  - shadow: Copy from /tmp/shadow-mal (e.g., mal:$6$...hashed...mysecret...:20021:0:99999:7:::).
      cp /tmp/passwd-mal /host0/groups/useracct-mal/passwd
      cp /tmp/group-mal /host0/groups/useracct-mal/group
      cp /tmp/shadow-mal /host0/groups/useracct-mal/shadow
Step 3: Apply to the Node
- Link Group: In /host0/machines/terrible-tuneup/groups:
      baseline
      useracct-mal
  - baseline for essentials, useracct-mal adds the user.
- Set UUID: In /host0/machines/terrible-tuneup/UUID, use the node’s UUID (e.g., 10eff964) from mesh node info.
- Apply: Rebuild and reboot:
      mesh system configure
      mesh node ctl -n 10eff964 "reboot"
Step 4: Test the User
- SSH In: Connect to the node:
      mesh node ctl -n 10eff964
- Verify User: Check mal exists:
      grep "^mal:" /etc/passwd
      ls -ld /home/mal
  - Should show mal:x:1000:1000... and drwxr-xr-x 1000 1000 /home/mal.
- Test Login: Switch user:
      su - mal
  - Enter mysecret—drops you to /home/mal with /bin/bash.
- Exit: Back to root, then out:
      exit
      Ctrl+D
Notes
This hack appends to /etc/passwd, /etc/group, and /etc/shadow—it’s not atomic, so pick unique UIDs (e.g., 1000) manually across groups to avoid clashes (one way to pin a UID up front is sketched below). Create users on the central node first—compile-apkovl fails if UIDs/GIDs don’t exist there. Hashes come from adduser—copy them, don’t guess. Reuse this group (e.g., useracct-mal) on multiple nodes for consistency. For RAID shares needing nobody, see Configuring Samba.
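If you would rather choose the UID up front instead of taking whatever auto-increments, Alpine’s busybox adduser accepts an explicit UID and shell; a minimal sketch, where 1100 is an arbitrary example value:

    # Create the user on the central node with a fixed UID so it stays unique across groups
    adduser -u 1100 -s /bin/bash mal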
Next, explore the FAQ.
FAQ
This section answers common questions about Mesh Hypervisor. For detailed guides, see Usage.
How do I back up the system?
Copy /host0 and keys from the central node via SSH:
    scp -r root@<central-ip>:/host0 ./backup/
    scp root@<central-ip>:/var/pxen/pxen.repo.rsa ./backup/
    scp root@<central-ip>:/var/pxen/pxen.repo.rsa.pub ./backup/
This grabs configs and keys—restore with scp to a new flash drive (sketched below). See Upgrading the System.
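To restore onto a new flash drive, a rough sketch of the reverse direction once the new central node is booted; the <new-central-ip> placeholder and backup layout are assumptions:

    scp -r ./backup/host0 root@<new-central-ip>:/
    scp ./backup/pxen.repo.rsa ./backup/pxen.repo.rsa.pub root@<new-central-ip>:/var/pxen/

Run mesh system configure afterward to rebuild APKOVLs from the restored configs.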
Why IPv6 only for VXLAN?
IPv6 ensures deterministic, collision-free addressing using UUIDs (e.g., genid machine 8). It’s simpler than NAT-heavy IPv4 meshes. See Networking.
What’s the default config?
New nodes boot with /host0/machines/default/ (UUID default) until a matching UUID folder exists in /host0/machines/. Edit it for baseline settings—see Configuring Nodes.
How do I update configs?
Edit /host0/ locally, then upload:
    scp -r ./host0 root@<central-ip>:/
    mesh system configure
Reboot nodes to apply (mesh node ctl -n ALL "reboot"). See Managing Nodes.
What if a node’s hardware fails?
Boot new hardware, grab its UUID from mesh node info, update the old machine’s UUID file, rebuild APKOVLs, and reboot. See Recovery Procedures.
Next Steps
Explore Roadmap & Limitations.
Roadmap & Limitations
Mesh Hypervisor is a Minimum Viable Product (MVP)—a solid core with room to grow. This section covers its current limits and planned enhancements. For usage, see Usage.
Current Limitations
Mesh Hypervisor’s MVP status means some trade-offs:
- Security: No encryption—root SSH uses default credentials (root/toor). Configs and data transferred over HTTP are unencrypted.
- Networking: Manual VXLAN setup; no dynamic routing or GUI management.
- Interface: CLI-only on the central node—no TUI or web dashboard.
- Storage: Diskless by default; local RAID/LUKS needs manual config (e.g., the storage group).
These keep Mesh Hypervisor simple and deterministic but limit its polish.
Roadmap
Future releases aim to address these:
- Encryption: Add SSH key management, HTTPS for APKOVLs, and VXLAN encryption (e.g., IPSec).
- Virtualization: Support Xen, LXC, or Docker alongside KVM for broader workload options.
- Network Automation: Dynamic VXLAN config, IPv6 routing, and bridge management tools.
- User Interface: Introduce a curses-based TUI for the central node, with a web UI later.
- Storage: Simplify RAID/LUKS setup with prebuilt groups or scripts.
- User Management: Replace root-only access with role-based accounts.
No timelines yet—focus is on stability first. Feedback drives priorities.
Notes
Mesh Hypervisor’s MVP trades features for simplicity—security and flexibility are next. For current workarounds, see Configuring Nodes and Network Configuration. Questions? Check FAQ.