Architecture

Voltis follows a client-server model with an emphasis on being lightweight, portable, and edge-friendly. The daemon runs on target devices, while the CLI provides remote control. This section details Voltis's internal components, data flow, and design.

High-Level Design

Voltis follows a declarative model: you define the desired state in workloads, and the daemon reconciles the actual system to that state. Key principles:

  • Lightweight: No heavy dependencies; uses SQLite for storage and systemd for services.
  • Idempotent: Operations check current status before acting, so they are safe to retry.
  • Portable: Workloads as tarballs for easy transfer; cross-platform taskfiles.
  • Observable: Structured logging and reconciliation loops for monitoring.
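
As a rough illustration of the declarative model, the sketch below describes a workload's desired state in Go. The type and field names (ServiceState, Workload, and so on) are assumptions for illustration, not Voltis's actual definitions.

```go
package main

import "fmt"

// ServiceState is a hypothetical enumeration of the states the daemon
// reconciles toward; the names are illustrative, not Voltis's actual values.
type ServiceState string

const (
	StateActive   ServiceState = "active"
	StateInactive ServiceState = "inactive"
)

// Service pairs a desired state with the last observed state.
type Service struct {
	Name    string
	Desired ServiceState
	Current ServiceState
}

// Workload groups the services, packages, and jobs that should exist on the
// device once the workload is active.
type Workload struct {
	Name     string
	Services []Service
	Packages []string
	Jobs     []string
}

func main() {
	// The daemon's job is to make Current converge on Desired for every entry.
	w := Workload{
		Name:     "sensor-stack",
		Services: []Service{{Name: "collector", Desired: StateActive, Current: StateInactive}},
		Packages: []string{"collector-bin"},
	}
	for _, s := range w.Services {
		if s.Current != s.Desired {
			fmt.Printf("%s: %s -> %s\n", s.Name, s.Current, s.Desired)
		}
	}
}
```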

Components:

  • CLI: User interface; builds/sends workloads via API.
  • Daemon: Core server; orchestrates reconciliation.
  • API: HTTP endpoints for control.
  • Store: Persistence for workloads/services/jobs.
  • Controllers: Manage component states and workloads.
  • Format: Table output for CLI.

Daemon Components

The daemon is the central orchestrator, initialized on startup.

Core Structure

  • Reconciliation Ticker: Triggers reconciliation every few seconds; checks the active workload and service states, and runs tasks when drift is detected.
  • Server: HTTP API for remote control.
  • Reporter: Handles structured logging and potential metrics export.
  • Workload Controller: Manages active workload; handles push/activate/reset.
  • State Controller: Reconciles service states (current vs. desired).
  • Database: SQLite for storing system state.
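
The sketch below shows one way these components might compose in Go. The type and field names are illustrative assumptions, not the actual Voltis source.

```go
package main

import (
	"database/sql"
	"log/slog"
	"net/http"
	"time"
)

// Daemon composes the pieces listed above. The fields are assumptions
// chosen to mirror the description, not real Voltis code.
type Daemon struct {
	db       *sql.DB             // SQLite-backed store
	server   *http.Server        // HTTP API for remote control
	reporter *slog.Logger        // structured logging / metrics reporter
	ticker   *time.Ticker        // drives the reconciliation loop
	workload *WorkloadController // push / activate / reset of workloads
	state    *StateController    // current vs. desired service state
}

// WorkloadController manages the active workload.
type WorkloadController struct{ db *sql.DB }

// StateController reconciles service states toward their desired values.
type StateController struct{ db *sql.DB }

func main() {
	d := &Daemon{ticker: time.NewTicker(5 * time.Second)}
	defer d.ticker.Stop()
	slog.Info("daemon components wired", "reconcile_interval", "5s")
}
```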

Startup flow:

  1. Initialize database and run any necessary migrations.
  2. Start API server.
  3. Start logging reporter.
  4. Enter reconciliation loop to maintain desired state.
  5. Handle graceful shutdown on termination signals.
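
A minimal Go sketch of this startup sequence, assuming a five-second reconciliation interval and an HTTP port that Voltis itself does not specify:

```go
package main

import (
	"context"
	"log/slog"
	"net/http"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	// 1. Open the SQLite database and run migrations (omitted in this sketch).

	// 2. Start the API server in the background; the port is an assumption.
	srv := &http.Server{Addr: ":8080", Handler: http.NewServeMux()}
	go srv.ListenAndServe()

	// 3. The structured logger acts as the reporter.
	slog.Info("voltis daemon started")

	// 4 + 5. Reconcile on a ticker until a termination signal arrives,
	// then shut down gracefully.
	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM)
	defer stop()

	ticker := time.NewTicker(5 * time.Second) // "every few seconds"
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			// a single reconciliation pass would run here
		case <-ctx.Done():
			slog.Info("shutting down")
			srv.Shutdown(context.Background())
			return
		}
	}
}
```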

Reconciliation Loop

The heart of Voltis is a periodic loop that ensures the system matches the desired state.

  1. Load Active Workload: Retrieve the currently active workload from storage.
  2. Check Services: For each service in the workload:
    • Compare current state to desired state.
    • If there is a mismatch (e.g., desired “active” but currently “inactive”), run activation tasks.
    • If service is not installed, run installation tasks.
  3. Check Packages: Verify installation status; reinstall if needed.
  4. Check Jobs: Verify completion; rerun if necessary.
  5. Update Storage: Log changes and set status messages.
  6. Error Handling: On failure, record errors and retry in the next cycle.

The controllers drive this loop to detect and correct drift; a sketch of a single pass follows.
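
The Go sketch below walks a single pass over an in-memory workload to show the drift-detection logic; task execution is stubbed out, and the type names are illustrative assumptions rather than Voltis's own.

```go
package main

import "fmt"

// Service and Workload reuse the illustrative shapes from the earlier sketch.
type Service struct {
	Name, Current, Desired string
	Installed              bool
}

type Workload struct {
	Name     string
	Services []Service
}

// reconcileOnce walks one pass of the loop described above. Task execution is
// stubbed with log lines; the real daemon would run the workload's tasks and
// record errors for the next cycle to retry.
func reconcileOnce(w *Workload) {
	for i := range w.Services {
		s := &w.Services[i]
		switch {
		case !s.Installed:
			fmt.Printf("%s not installed: running installation tasks\n", s.Name)
			s.Installed = true
		case s.Current != s.Desired:
			fmt.Printf("%s drifted (%s != %s): running activation tasks\n",
				s.Name, s.Current, s.Desired)
			s.Current = s.Desired
		}
	}
	// Packages and jobs would be checked in the same way (omitted).
}

func main() {
	w := &Workload{
		Name:     "sensor-stack",
		Services: []Service{{Name: "collector", Desired: "active", Current: "inactive", Installed: true}},
	}
	reconcileOnce(w) // corrects the drift
	reconcileOnce(w) // finds nothing to do: the pass is idempotent
}
```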

Database

SQLite tables provide persistence across restarts:

  • Application State: Tracks the active workload.
  • Workloads: Stores workload metadata and data (as blobs).
  • Services: Tracks installation, current/desired states, and errors.
  • Packages: Tracks installation status and history.
  • Jobs: Tracks completion status for one-time or continuous tasks.

All components are linked to their parent workload for organization.

The database uses a single file for simplicity; backups can be made by copying the file.
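
An illustrative schema along these lines is sketched below. The table and column names are assumptions inferred from the descriptions above, as is the choice of SQLite driver.

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "modernc.org/sqlite" // pure-Go SQLite driver; the driver choice is an assumption
)

// schema is an illustrative guess at tables matching the description above;
// the real Voltis schema may differ in names and columns.
const schema = `
CREATE TABLE IF NOT EXISTS application_state (
    id                 INTEGER PRIMARY KEY CHECK (id = 1),
    active_workload_id INTEGER
);
CREATE TABLE IF NOT EXISTS workloads (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL UNIQUE,
    data BLOB -- workload tarball stored as a blob
);
CREATE TABLE IF NOT EXISTS services (
    id            INTEGER PRIMARY KEY,
    workload_id   INTEGER NOT NULL REFERENCES workloads(id),
    name          TEXT NOT NULL,
    installed     INTEGER NOT NULL DEFAULT 0,
    current_state TEXT,
    desired_state TEXT,
    last_error    TEXT
);
CREATE TABLE IF NOT EXISTS packages (
    id          INTEGER PRIMARY KEY,
    workload_id INTEGER NOT NULL REFERENCES workloads(id),
    name        TEXT NOT NULL,
    installed   INTEGER NOT NULL DEFAULT 0
);
CREATE TABLE IF NOT EXISTS jobs (
    id          INTEGER PRIMARY KEY,
    workload_id INTEGER NOT NULL REFERENCES workloads(id),
    name        TEXT NOT NULL,
    completed   INTEGER NOT NULL DEFAULT 0
);`

func main() {
	// An in-memory database is used here purely so the sketch runs; the
	// daemon itself would point at a single on-disk file.
	db, err := sql.Open("sqlite", ":memory:")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	if _, err := db.Exec(schema); err != nil {
		log.Fatal(err)
	}
	fmt.Println("schema created")
}
```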

API Layer

  • Server: HTTP server with endpoints for managing workloads, services, and more.
  • Models: Data structures for request/response serialization to JSON.
  • Handlers:
    • GET endpoints for listing resources by querying storage.
    • POST endpoints for pushing new workloads, parsing inputs, and triggering installations.
    • Streaming support for real-time logs.

Security for this simple API focuses on input validation, such as limiting payload sizes.
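
A minimal Go sketch of such a handler, with an assumed /workloads path, response model, and payload cap:

```go
package main

import (
	"encoding/json"
	"net/http"
)

// WorkloadSummary is an illustrative response model; the JSON fields are
// assumptions, not Voltis's actual API schema.
type WorkloadSummary struct {
	Name   string `json:"name"`
	Active bool   `json:"active"`
}

func main() {
	mux := http.NewServeMux()

	// A single endpoint handling both listing (GET) and pushing (POST)
	// workloads; the path is an assumption.
	mux.HandleFunc("/workloads", func(w http.ResponseWriter, r *http.Request) {
		switch r.Method {
		case http.MethodGet:
			// List resources by querying storage (stubbed here).
			w.Header().Set("Content-Type", "application/json")
			json.NewEncoder(w).Encode([]WorkloadSummary{{Name: "sensor-stack", Active: true}})
		case http.MethodPost:
			// Cap the request body as a basic input-validation measure
			// before storing and extracting the pushed tarball.
			r.Body = http.MaxBytesReader(w, r.Body, 512<<20) // 512 MB cap is an assumed limit
			w.WriteHeader(http.StatusAccepted)
		default:
			http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
		}
	})

	http.ListenAndServe(":8080", mux)
}
```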

CLI Layer

  • Commands: Subcommands for workloads, services, etc., using a command-line framework.
  • Client: API client for building tarballs from directories and interacting with the daemon.
  • Output: Formatted tables for displaying lists and statuses.
  • Configuration: Flags for server address and per-command options.
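
A stripped-down Go sketch of the CLI's shape, using only the standard library and an assumed /workloads endpoint and default address:

```go
package main

import (
	"flag"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// Global flag for the daemon address; the default is an assumption.
	server := flag.String("server", "http://localhost:8080", "daemon API address")
	flag.Parse()

	// Minimal subcommand dispatch; a real CLI framework would add help text,
	// nested commands, and formatted table output.
	switch flag.Arg(0) {
	case "workloads":
		resp, err := http.Get(*server + "/workloads")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer resp.Body.Close()
		io.Copy(os.Stdout, resp.Body)
	default:
		fmt.Fprintln(os.Stderr, "usage: voltis [--server ADDR] workloads")
		os.Exit(1)
	}
}
```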

Data Flow Example: Deploying a Workload

  1. CLI Build: Create a tarball from a directory containing the workload definition.
  2. CLI Push: Send the tarball to the daemon via API, specifying name and status.
  3. API Processing: Store the workload, extract contents, parse configuration, and execute installation tasks in order.
  4. Controller Activation: Set as active and run state management tasks (e.g., starting services).
  5. Reconciliation: Ongoing loop detects and corrects any state drifts.
  6. CLI Query: Retrieve and display status via formatted output.
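
The sketch below covers steps 1 and 2 from the client side: packing a directory into a gzipped tarball and pushing it to the daemon. The directory path, endpoint, query parameter, and address are all illustrative assumptions.

```go
package main

import (
	"archive/tar"
	"bytes"
	"compress/gzip"
	"fmt"
	"io"
	"net/http"
	"os"
	"path/filepath"
)

// buildTarball packs a workload directory into a gzipped tarball in memory,
// mirroring step 1 of the flow above. The layout the daemon expects inside
// the tarball is not shown here.
func buildTarball(dir string) (*bytes.Buffer, error) {
	buf := &bytes.Buffer{}
	gz := gzip.NewWriter(buf)
	tw := tar.NewWriter(gz)
	err := filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() {
			return err
		}
		rel, err := filepath.Rel(dir, path)
		if err != nil {
			return err
		}
		hdr, err := tar.FileInfoHeader(info, "")
		if err != nil {
			return err
		}
		hdr.Name = rel
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		_, err = io.Copy(tw, f)
		return err
	})
	if err != nil {
		return nil, err
	}
	if err := tw.Close(); err != nil {
		return nil, err
	}
	return buf, gz.Close()
}

func main() {
	// Step 1: build the tarball (the directory name is only an example).
	tarball, err := buildTarball("./my-workload")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Step 2: push the tarball to the daemon. The endpoint and query
	// parameter are illustrative assumptions.
	resp, err := http.Post("http://localhost:8080/workloads?name=my-workload",
		"application/gzip", tarball)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	fmt.Println("push status:", resp.Status)
}
```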

Extensibility

  • Custom Tasks: Extend with additional task definitions or variables.
  • Plugins: Potential for adding functionality via workload packages.
  • Storage: SQLite can be swapped for other backends by implementing the storage interfaces (see the sketch after this list).
  • Controllers: Add new controllers for additional resource types.
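
For example, the storage abstraction mentioned above might look like the following interface; the method set is an assumption, not the real Voltis interface.

```go
package main

import "fmt"

// Store is an illustrative interface for the storage abstraction mentioned
// above; the methods are assumptions chosen to match the description.
type Store interface {
	ActiveWorkload() (string, error)
	SetServiceState(workload, service, state string) error
}

// memoryStore is a toy in-memory implementation standing in for an
// alternative backend swapped in behind the same interface.
type memoryStore struct {
	active string
	states map[string]string
}

func (m *memoryStore) ActiveWorkload() (string, error) { return m.active, nil }

func (m *memoryStore) SetServiceState(workload, service, state string) error {
	m.states[workload+"/"+service] = state
	return nil
}

func main() {
	var s Store = &memoryStore{active: "sensor-stack", states: map[string]string{}}
	s.SetServiceState("sensor-stack", "collector", "active")
	name, _ := s.ActiveWorkload()
	fmt.Println("active workload:", name)
}
```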

Performance Considerations

  • Edge Optimized: SQLite is well suited to the low-volume operations of a single node.
  • Memory: The daemon uses minimal resources and supports workloads up to several hundred MB.
  • Scaling: For multi-node setups, run the CLI against each node or integrate with orchestration tools.

Next: Getting Started