Error from server: Get https://10.16.1.19:10250/containerLogs/kube-system/kubernetes-dashboard-57c9bfc8c8-fxk9s/kubernetes-dashboard: dial tcp 10.16.1.19:10250: connect: no route to host
If you’re getting that error (or something similar), and you’re running your nodes as VMs on Proxmox, the issue may well be that you don’t have VXLAN support implemented correctly at the physical host. The easiest way to fix that is to just use Open vSwitch: https://pve.proxmox.com/wiki/Open_vSwitch
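For reference, a minimal sketch of what that looks like on the Proxmox host, along the lines of the wiki page above. The NIC name (eno1) and addresses here are placeholders, not taken from my setup, and you’ll need the openvswitch-switch package installed first:

    # /etc/network/interfaces on the Proxmox host (sketch only; adjust names/addresses)
    # prerequisite: apt install openvswitch-switch

    auto eno1
    iface eno1 inet manual
        ovs_type OVSPort
        ovs_bridge vmbr0

    auto vmbr0
    iface vmbr0 inet static
        address 10.16.1.2/24
        gateway 10.16.1.1
        ovs_type OVSBridge
        ovs_ports eno1

Point your VMs’ virtual NICs at vmbr0 as usual; nothing changes on the guest side.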
In my case, I spent a week(!) trying to build a PoC Kubernetes cluster for my employer and absolutely could not get the dashboard to work, nor could I get inter-container networking to work.
None of the support articles / threads / forums mentioned this issue in a nested virtualization scenario (and, frankly, it never occurred to me that the underlying issue might be at the hypervisor layer).
On the official Kubernetes GitHub project, most of the related issues are from Kubernetes newbies (like me), and most of the replies are from developers and/or seasoned admins offering very little in the way of useful information (and in many cases, the responses boil down to, “It Just Works™; you must be an idiot.”).
Very frustrating all the way around.
Finally, I recalled a Proxmox article (which I’m unable to find now) that cited limitations in the default Linux bridge. Playing a hunch, I quickly built a (smaller) Kubernetes cluster on VMware on my laptop (which only has 4 cores / 8 threads, which is why I do most of my work on the Proxmox server); my new 3-node cluster did, in fact, Just Work™.
A quick DuckDuckGo session pointed to one of two fixes: a fairly complex iptables workaround on top of native Linux bridging at the physical Proxmox host (which, without great care, would impact all hosted VMs and containers)…
OR…
Just use OVS for bridging with no muss, fuss, or custom configuration.
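If you want to sanity-check the result afterwards, the quickest test I know of (assuming the node IP and pod name from the error at the top, and that netcat is installed) is to confirm the kubelet port is reachable and then re-run the request that originally failed:

    # from the node that was getting "no route to host"
    nc -vz 10.16.1.19 10250

    # then retry the log request that produced the original error
    kubectl -n kube-system logs kubernetes-dashboard-57c9bfc8c8-fxk9s

If the port check succeeds and the logs come back instead of the dial error, the overlay is doing its job.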

