kubeone doesn't update NO_PROXY and no_proxy in kube-proxy DaemonSet and static pods #3310

Comments
But we don't set any proxy environment variables in the kube-proxy DaemonSet, nor in static pods. The only change we make to static pods is adding the /etc/ssl/certs volume and the SSL_CERT_FILE env variable to kube-controller-manager.yaml.
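For context, here is a sketch of roughly what that change to kube-controller-manager.yaml looks like; only the /etc/ssl/certs volume and the SSL_CERT_FILE variable are confirmed above, while the volume name, mount details, and cert file path are assumptions:

```yaml
# excerpt from /etc/kubernetes/manifests/kube-controller-manager.yaml (sketch)
spec:
  containers:
    - name: kube-controller-manager
      env:
        - name: SSL_CERT_FILE
          value: /etc/ssl/certs/ca-certificates.crt   # assumed value
      volumeMounts:
        - name: ssl-certs                             # assumed volume name
          mountPath: /etc/ssl/certs
  volumes:
    - name: ssl-certs
      hostPath:
        path: /etc/ssl/certs
```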
But it seems like kubeadm is doing this...
In that case, I suppose I should open an issue in kubeadm. Besides that, is there a chance you can, or plan to, do something about it?
Let us investigate the possibilities. And please link the future kubeadm issue here in case you create one.
Hello, I created a new issue in the kubeadm repo: kubernetes/kubeadm#3099. Besides that, there is a workaround: you can patch the no-proxy env variables in the static pods and the kube-proxy DaemonSet.
EDIT: I followed kubernetes/kubeadm#2771 (comment)
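For readers looking for the concrete shape of that workaround, here is a hedged sketch, assuming the new node's IPs just need to be appended to the existing lists; the `<...>` values are placeholders, not values from the original report:

```sh
# Append the new node's IPs to kube-proxy's proxy env variables.
kubectl -n kube-system set env daemonset/kube-proxy \
  NO_PROXY="<existing-no-proxy-list>,<worker2-private-IP>,<worker2-public-IP>" \
  no_proxy="<existing-no-proxy-list>,<worker2-private-IP>,<worker2-public-IP>"

# Static pods aren't managed through the API server, so their env
# variables have to be edited directly in the manifests on each
# control-plane node (e.g. /etc/kubernetes/manifests/kube-apiserver.yaml);
# the kubelet recreates the pod when the manifest changes.
```

Afterwards, `kubectl -n kube-system describe daemonset kube-proxy` should show the extended lists in the Environment section.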
Thanks for updating!
Issues go stale after 90d of inactivity. If this issue is safe to close now, please do so with /lifecycle stale.
/remove-lifecycle stale
What happened?
KubeOne didn't update the `NO_PROXY` and `no_proxy` env variables in static pods and the `kube-proxy` DaemonSet.

I built a cluster with 1 master and 1 worker node, using an HTTP proxy in the process. The cluster was built as expected. Then I added another worker node to the `staticWorkers.hosts` array and its public and private IPs to the `proxy.noProxy` attribute. The new node was added to the Kubernetes cluster as expected; however, its public and private IPs weren't added to the `NO_PROXY` and `no_proxy` env variables in the `kube-proxy` DaemonSet and the static pods in the cluster.

Expected behavior
The `NO_PROXY` and `no_proxy` env variables should be updated every time the user changes the `proxy.noProxy` configuration in the KubeOne YAML file and runs `kubeone apply`.

How to reproduce the issue?
1. Create 3 VMs in any cloud provider. They have to be connected through the private network and have public IPs.
2. Replace all the `<>` placeholders with real values. Run `kubeone apply -m <path-to-the-below-config>` to build a cluster using your HTTP proxy server.
3. When the previous command finishes, run `kubectl describe daemonsets.apps -n kube-system kube-proxy` and check the `NO_PROXY` and `no_proxy` values in the `Environment`. They will have `<master-private-IP>,<worker1-private-IP>,<master-public-IP>,<worker1-public-IP>` at the end, as expected. The same goes for all the static pods (kube-apiserver, kube-controller-manager, kube-scheduler).
4. Take the configuration below (a sketch follows this list), because it adds a new static worker node, and replace `<>` with the real values again. Then run `kubeone apply -m <path-to-the-below-config>`.
5. When KubeOne finishes, run `kubectl describe daemonsets.apps -n kube-system kube-proxy`. You should see that `<worker2-private-IP>,<worker2-public-IP>` isn't in the `NO_PROXY` and `no_proxy` values.
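Since the original manifest isn't preserved in this thread, here is a minimal sketch of what such a KubeOne config might look like. The apiVersion, Kubernetes version, SSH settings, proxy URL/port, and all `<...>` placeholders are assumptions, not values from the original report:

```yaml
apiVersion: kubeone.k8c.io/v1beta2
kind: KubeOneCluster
versions:
  kubernetes: "1.29.0"              # assumed version
cloudProvider:
  none: {}                          # assumed provider-less static cluster
controlPlane:
  hosts:
    - publicAddress: <master-public-IP>
      privateAddress: <master-private-IP>
      sshUsername: ubuntu
      sshPrivateKeyFile: ~/.ssh/id_rsa
staticWorkers:
  hosts:
    - publicAddress: <worker1-public-IP>
      privateAddress: <worker1-private-IP>
      sshUsername: ubuntu
      sshPrivateKeyFile: ~/.ssh/id_rsa
    # the node added before the second `kubeone apply` run:
    - publicAddress: <worker2-public-IP>
      privateAddress: <worker2-private-IP>
      sshUsername: ubuntu
      sshPrivateKeyFile: ~/.ssh/id_rsa
proxy:
  http: http://<proxy-IP>:3128      # assumed Squid default port
  https: http://<proxy-IP>:3128
  noProxy: "localhost,127.0.0.1,<master-private-IP>,<worker1-private-IP>,<worker2-private-IP>,<master-public-IP>,<worker1-public-IP>,<worker2-public-IP>"
```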
What KubeOne version are you using?
What cloud provider are you running on?
In this example I spawned the VMs in Azure, but the same goes for Hetzner and AWS; I don't think it depends on the cloud provider.
What operating system are you running in your cluster?
Ubuntu 22.04
Additional information
I used Squid as the HTTP proxy while building the k8s cluster.