Architecture
KubeSolo is a single-node Kubernetes distribution with clustering and HA logic removed, ingress simplified, and defaults tuned for resource-limited environments.
There is no etcd quorum logic, no leader election, no multi-node networking overlay, and no control plane components that assume peer nodes exist. The Kubernetes API server, scheduler, controller manager, and kubelet all run in a single process. State is persisted locally using a lightweight embedded datastore.
The result is a full Kubernetes control loop that is operationally equivalent to a standard Kubernetes cluster for all workload-facing APIs, but with a fraction of the resource footprint, because none of the clustering machinery is present even in dormant form.
Configuration
KubeSolo supports the following command-line flags and environment variable equivalents. Every flag can be set either on the command line or via the corresponding environment variable listed in the table below.
| Flag | Env var | Default | Description |
|---|---|---|---|
| --version | N/A | N/A | Show the version and exit |
| --path | KUBESOLO_PATH | /var/lib/kubesolo | Path to the directory containing KubeSolo configuration files |
| --portainer-edge-id | KUBESOLO_PORTAINER_EDGE_ID | "" | Portainer Edge ID |
| --portainer-edge-key | KUBESOLO_PORTAINER_EDGE_KEY | "" | Portainer Edge Key |
| --portainer-edge-async | KUBESOLO_PORTAINER_EDGE_ASYNC | false | Enable Portainer Edge Async Mode |
| --apiserver-extra-sans | KUBESOLO_APISERVER_EXTRA_SANS | "" | Comma-separated list of additional SANs (IP addresses or DNS names) for the API server TLS certificate |
| --local-storage | KUBESOLO_LOCAL_STORAGE | true | Enable local storage |
| --local-storage-shared-path | KUBESOLO_LOCAL_STORAGE_SHARED_PATH | "" | Path to the shared filesystem for local storage |
| --debug | KUBESOLO_DEBUG | false | Enable debug logging |
| --pprof-server | KUBESOLO_PPROF_SERVER | false | Enable pprof server for profiling |
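As a sketch, the flags above can be combined at startup like this. The `kubesolo` binary name is an assumption; adjust to the path your install uses. The flag and environment variable names come from the table.

```shell
# Start KubeSolo with an extra TLS SAN and debug logging enabled.
# The binary name "kubesolo" is assumed; flag names are from the table above.
kubesolo \
  --path /var/lib/kubesolo \
  --apiserver-extra-sans 192.168.1.50,edge-device.local \
  --debug

# Equivalently, using the environment-variable forms:
KUBESOLO_APISERVER_EXTRA_SANS=192.168.1.50,edge-device.local \
KUBESOLO_DEBUG=true \
kubesolo
```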
Portainer integration via install script
To connect KubeSolo to a Portainer server at install time, pass your Edge credentials as environment variables:
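A minimal sketch of that invocation, assuming a one-line install script (the `get.kubesolo.io` URL is an assumption; substitute the install command for your environment). The environment variable names are taken from the configuration table above.

```shell
# Hypothetical install invocation; the script URL is an assumption.
# KUBESOLO_PORTAINER_EDGE_ID / _KEY are the env-var forms of the
# --portainer-edge-id / --portainer-edge-key flags.
curl -sfL https://get.kubesolo.io | \
  KUBESOLO_PORTAINER_EDGE_ID=<your-edge-id> \
  KUBESOLO_PORTAINER_EDGE_KEY=<your-edge-key> \
  sh -
```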
Networking
KubeSolo includes an integrated CNI plugin that handles pod networking on a single node. Because there are no remote nodes to route between, the networking configuration is substantially simpler than a multi-node cluster.
All three standard Kubernetes service types are supported: ClusterIP, NodePort, and LoadBalancer. Each is handled natively without requiring an external controller or cloud provider integration.
LoadBalancer services
KubeSolo ships with a built-in LoadBalancer service class that exposes services directly on the host's IP address. Unlike NodePort, which binds only on high ports (30000–32767), the LoadBalancer service class lets you expose services on any port, including low ports (below 1024): ports 80, 443, and 53 all work without NAT or port-mapping workarounds.
When you create a LoadBalancer service, KubeSolo assigns the host's IP as the EXTERNAL-IP and binds the declared port directly on that interface. There is no cloud provider, no external load balancer, and no additional configuration required.
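A minimal sketch of such a service, applied as a heredoc. The `my-web` selector label and the `targetPort` value are illustrative assumptions; match them to your own pods.

```shell
# Create a LoadBalancer service named my-web that exposes port 80 on the host.
# The selector label (app: my-web) and targetPort 8080 are illustrative;
# adjust them to match the pods you want to expose.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: my-web
spec:
  type: LoadBalancer
  selector:
    app: my-web
  ports:
  - port: 80
    targetPort: 8080
EOF
```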
After applying, kubectl get svc my-web will show the node's IP in the EXTERNAL-IP column. Traffic arriving on port 80 of the host is forwarded directly to the matching pods.
Service type comparison
| Type | Port range | External IP | Use when |
|---|---|---|---|
| ClusterIP | Any (internal only) | None | Internal service-to-service communication |
| NodePort | 30000–32767 | Node IP + high port | Simple external access, port doesn't matter |
| LoadBalancer | Any, including <1024 | Node IP on declared port | Exposing HTTP/S, DNS, or any well-known port directly |
CNI
KubeSolo uses the default CNI plugins provided by containerd: bridge, host-local, portmap, and loopback, along with kube-proxy for Kubernetes networking.
DNS
KubeSolo includes CoreDNS, configured and optimized for single-node operation. It runs as a pod in kube-system and handles both internal service discovery (service.namespace.svc.cluster.local) and external DNS resolution for pods. The default configuration strips out cluster federation and multi-zone settings that only add overhead on a single node.
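A quick way to confirm both internal and external resolution from inside a pod; the image choice is illustrative.

```shell
# Resolve the API server's service name from a throwaway pod,
# exercising the CoreDNS cluster.local zone.
kubectl run dns-check --rm -it --restart=Never \
  --image=busybox:1.36 -- \
  nslookup kubernetes.default.svc.cluster.local
```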
Storage
By default, KubeSolo includes Rancher's local-path-provisioner to support both static and dynamic PersistentVolume provisioning. If more advanced storage features are required, KubeSolo also supports full CSI (Container Storage Interface) drivers, including the option to install the Local Storage CSI driver.
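With the default provisioner, a dynamically provisioned volume can be requested as follows. The `local-path` storage class name is the usual default shipped with Rancher's local-path-provisioner, but that is an assumption here; confirm the actual class name with `kubectl get storageclass`.

```shell
# Request a 1Gi dynamically provisioned volume.
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-path   # assumed default class; verify on your node
  resources:
    requests:
      storage: 1Gi
EOF
```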
Portainer integration
KubeSolo is a Portainer product. The two integrate via the Portainer Edge Agent, which runs as a pod inside KubeSolo and establishes an outbound connection to your Portainer server. This means KubeSolo nodes behind NAT or restrictive firewalls can still be managed centrally: they initiate the connection outbound, so no inbound firewall rules are needed.
Portainer adds: centralized fleet management, GitOps deployments via the Portainer operator, RBAC across nodes, an application template library, and a full UI for container lifecycle management.
See the Portainer documentation for Edge Agent setup.
FAQ
What is KubeSolo actually designed for?
KubeSolo was built explicitly for extremely resource-constrained environments: devices with less than 1 GB of RAM, basic CPUs, and simple SD card storage. If your hardware is a single-core ARM device with 512 MB of RAM, KubeSolo was made for it. Normal operation sits around 200 MB of RAM.
If my devices have 1 GB or more RAM, should I still use KubeSolo?
Probably not. With 1 GB or more, standard lightweight distributions like K3s, K0s, or MicroK8s are generally a better fit. They are CNCF-certified and more widely adopted for general-purpose use. That said, K3s and K0s typically need around 400 MB to run comfortably, and MicroK8s around 2 GB. The gauge below illustrates where each distribution sits:
RAM footprint comparison
Is KubeSolo CNCF-certified?
Not currently. The optimizations required to get the footprint to 200 MB mean it doesn't fully meet CNCF certification criteria today. That said, the plan is to engage with CNCF and make the case for recognizing these constraints as valid and necessary for the embedded/edge use case.
Why use KubeSolo instead of Docker or Podman?
Docker and Podman remain the lightest way to run containers, ideal if absolute minimal resource use is the only priority. But Kubernetes has become the industry standard, particularly in IoT, Industrial IoT, and Industry 4.0. Many off-the-shelf industrial software packages explicitly require it. KubeSolo is the compromise: Kubernetes within the tightest constraints.
Why does KubeSolo sometimes appear to use more than 200 MB?
Linux aggressively uses available memory for caching to optimize performance. On a device with more than 1 GB RAM, monitoring tools may show KubeSolo using more. That reported figure includes cache that the OS allocated speculatively. When memory contention occurs, KubeSolo releases it and operates within its actual ~200 MB target. The chart below shows the difference between reported and actual working set:
KubeSolo memory breakdown
Why use Kubernetes at the edge at all?
The Kubernetes ecosystem gives you mature tooling for deploying and operating software with real complexity. Operators, Helm charts, and the wider community output remove the need to build and maintain your own automation. A few hundred megabytes of RAM buys access to everything the ecosystem already provides and continues to improve; that is usually a very reasonable trade.
Why not run the device as a worker node with cloud-based control plane nodes?
Worker nodes depend on a remote control plane, which breaks when edge locations experience intermittent or intentional loss of connectivity. When the connection drops, kubelet can restart containers but the rest of the automation stops. A single-node cluster keeps the control plane on the device, so scheduling, updates, and reconciliation continue even when the site is offline.
Does KubeSolo support multi-node or multi-cluster configurations?
Single-node by design. To centrally manage many standalone KubeSolo instances (for example, hundreds of edge devices), you need a multi-cluster management solution. Portainer gives you centralized visibility, configuration, and control over each KubeSolo instance through a single UI. CNCF Open Cluster Management is another option.
How does KubeSolo differ from KubeEdge?
KubeEdge requires edge devices to act as worker nodes in a centrally managed, network-connected Kubernetes cluster. That architecture isn't suitable for environments with unreliable or intentionally offline connectivity. KubeSolo provides fully autonomous, self-contained single-node clusters designed for standalone and offline edge deployments.
How do I manage KubeSolo?
KubeSolo is standard Kubernetes. Any Kubernetes client works: VSCode with the Kubernetes extension, OpenLens, Headlamp, K9s, or Portainer. ArgoCD can also connect remotely via the Kubernetes API. You are not locked into any specific tooling.
What hardware was tested?
The following hardware has been confirmed to run KubeSolo: NVIDIA Jetson with Ubuntu, Atom X5 with Arch Linux, Zimaboard with Ubuntu, Raspberry Pi CM5 with Armbian Bookworm, and Siemens IOT2050 with Siemens Industrial OS. KubeSolo is also well suited to industrial compute hardware from Wago (CC100 series), Bosch, Siemens (SIMATIC IOT2000), Phoenix Contact (PLCnext AXC F 1152), Beckhoff (CX7000), and Advantech compact edge series.
Troubleshooting
Node not reaching Ready
Check the KubeSolo service logs; most startup failures are caused by port conflicts or insufficient permissions.
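On a systemd host, that typically looks like the following. The unit name `kubesolo` is an assumption; check with `systemctl list-units` if it differs on your install.

```shell
# Tail the service logs and watch for bind or permission errors.
# The unit name "kubesolo" is assumed.
journalctl -u kubesolo -f --no-pager

# Check whether another process already holds the standard Kubernetes
# API server (6443) or kubelet (10250) ports:
ss -tlnp | grep -E ':(6443|10250)'
```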
kubectl: connection refused
Verify the kubeconfig exists and the server address is reachable.
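A sketch of those checks. The kubeconfig lives under the KubeSolo data directory (`/var/lib/kubesolo` by default, per the `--path` flag); the exact filename below is an assumption, so adjust it to what you find on disk.

```shell
# Point kubectl at the kubeconfig under the KubeSolo data directory.
# The filename is an assumption; locate the actual file under /var/lib/kubesolo.
export KUBECONFIG=/var/lib/kubesolo/pki/admin/admin.kubeconfig
kubectl get nodes

# Confirm the API server address recorded in the kubeconfig is reachable:
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
```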
Pods stuck in Pending
Check that your manifests do not reference nodeSelector or tolerations that exclude the local node. Describe the pod for details:
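For example:

```shell
# Inspect scheduling events for a Pending pod; the Events section at the
# bottom usually names the unsatisfied constraint.
kubectl describe pod <pod-name> -n <namespace>

# Compare any nodeSelector or tolerations in the pod spec against the
# labels actually present on the local node:
kubectl get node -o jsonpath='{.items[0].metadata.labels}'
```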