Remote Nodes
Remote nodes are the operational units of Mesh Hypervisor, executing tasks in the cluster. They boot from the central orchestration node over the network and typically operate disklessly. This section outlines their role and mechanics.
Role
Remote nodes handle:
- Workload Execution: Run KVM virtual machines (workloads) using configurations from the central node.
- Bare-Metal Tasks: Can execute scripts or applications directly (e.g., HPC setups like MPI or Slurm) via prebuilt configuration groups.
They depend on the central node only for booting and initial setup, which keeps individual nodes interchangeable and deployment flexible.
Operation
- Boot: Initiates via PXE, pulling kernel and initramfs from the central node’s TFTP server.
- Identification: Derives a deterministic 8-character UUID from hardware data.
- Configuration: Requests an APKOVL from the central node’s HTTP server:
  - Fetches a custom APKOVL matching the UUID if available.
  - Uses a default APKOVL for unrecognized nodes.
- Runtime: Boots Alpine Linux, joins the cluster, and runs workloads or bare-metal tasks.
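The APKOVL selection step above boils down to a simple fallback rule. A minimal sketch in Python, assuming the central node serves overlays as `<uuid>.apkovl.tar.gz` files (the filename convention here is illustrative, not Mesh Hypervisor's actual layout):

```python
def choose_apkovl(uuid: str, available: set[str]) -> str:
    """Pick the overlay a booting node should request from the central node.

    A node with a matching custom overlay gets its own config; any
    unrecognized node falls back to the shared default overlay.
    (Filename convention is hypothetical.)
    """
    custom = f"{uuid}.apkovl.tar.gz"
    return custom if custom in available else "default.apkovl.tar.gz"


# A known node gets its custom overlay; an unknown one gets the default.
known = choose_apkovl("ab12cd34", {"ab12cd34.apkovl.tar.gz"})
unknown = choose_apkovl("ffffffff", {"ab12cd34.apkovl.tar.gz"})
```

Because the fallback happens on the server side of a plain HTTP request, a node needs no local state to end up with a usable configuration.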
Diagram
```mermaid
sequenceDiagram
    participant R as Remote Node
    participant C as Central Node
    R->>C: PXE Boot Request
    C-->>R: Kernel + Initramfs (TFTP)
    Note over R: Generate UUID
    R->>C: Request APKOVL (HTTP)
    C-->>R: APKOVL (Default or Custom)
    Note over R: Boot Alpine + Run Tasks
```
Key Features
- Flexible Execution: Supports KVM workloads or direct application runs (e.g., MPI groups preinstalled via `/host0/machines/`).
- UUID-Based Identity: Uses `genid machine 8` for a deterministic UUID, linking configs consistently.
- APKOVLs: Delivers node-specific overlays (files, scripts, permissions) from the central node.
- Storage Option: Boots and runs the OS without local storage; local disks are optional for workloads or data.
Notes
Remote nodes need PXE-capable hardware and minimal resources (x86_64, 2+ GB RAM; see Prerequisites). Without local storage they are stateless, booting fresh on every restart. Local disks, if present, are used only for workload or task data and do not affect the boot process or the OS. See Workloads for VM details and Networking for connectivity.
Next, explore Networking for cluster communication.