disable gke deployment for clusters with installation by default if GKE deployment is not requested #679
Conversation
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: verult. The full list of commands accepted by this bot can be found here; the pull request process is described here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing /approve in a comment.
@@ -379,3 +394,10 @@ func getKubeClient() (kubernetes.Interface, error) {
 	}
 	return kubeClient, nil
 }
+
+func isGKEDeploymentInstalledByDefault(clusterVersion string) bool {
Is it possible to query the cluster to see if it's already enabled, instead of encoding versions?
We could parse the output of gcloud beta container clusters describe. It might be good to test these version bounds for defaulting though, and it's also a teeny bit less work.
This function should check whether we detect a pdcsi-node DaemonSet in the kube-system namespace (as this indirectly indicates that the component is enabled). Thoughts?
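For context, a minimal sketch of what that check could look like with client-go (assuming a client-go version whose getters take a context; the helper name is hypothetical, not code from this PR):

```go
import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// isPDCSINodeDaemonSetPresent reports whether the pdcsi-node DaemonSet
// exists in kube-system, which indirectly indicates that the GKE-managed
// driver is enabled. Hypothetical helper sketching the suggestion above.
func isPDCSINodeDaemonSetPresent(client kubernetes.Interface) (bool, error) {
	_, err := client.AppsV1().DaemonSets("kube-system").Get(
		context.TODO(), "pdcsi-node", metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return false, nil
	}
	if err != nil {
		return false, err
	}
	return true, nil
}
```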
I think it's still good to use version bounds so that defaulting is only done in the versions we expect, but it's not a huge benefit since other test suites already cover defaulting testing. No strong preference either way.
Calling kubectl to get DaemonSet info also requires parsing, in which case I would prefer getting the addonsConfig from clusters describe, which is the definitive state of whether the GKE deployment should be running. WDYT?
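For reference, a rough sketch of that clusters describe approach: shell out to gcloud with JSON output and decode only the addon state. The gcePersistentDiskCsiDriverConfig field name is my assumption about the describe output, not something taken from this PR:

```go
import (
	"encoding/json"
	"os/exec"
)

// describeOutput decodes just the addonsConfig portion of
// `gcloud beta container clusters describe --format=json`.
type describeOutput struct {
	AddonsConfig struct {
		GcePersistentDiskCsiDriverConfig struct {
			Enabled bool `json:"enabled"`
		} `json:"gcePersistentDiskCsiDriverConfig"`
	} `json:"addonsConfig"`
}

// isGKEDriverAddonEnabled asks GKE directly whether the managed PD CSI
// driver addon is on. Hypothetical helper with simplified error handling.
func isGKEDriverAddonEnabled(cluster, zone string) (bool, error) {
	out, err := exec.Command("gcloud", "beta", "container", "clusters", "describe",
		cluster, "--zone", zone, "--format=json").Output()
	if err != nil {
		return false, err
	}
	var d describeOutput
	if err := json.Unmarshal(out, &d); err != nil {
		return false, err
	}
	return d.AddonsConfig.GcePersistentDiskCsiDriverConfig.Enabled, nil
}
```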
Agree that checking addonsConfig is the right way to do it.
The only issue with checking the k8s version is that we would need to revisit and update the logic again if, for example, on-by-default is enabled for 1.17 clusters, and it may also get complicated with all the patch version comparisons.
For immediate relief from the test failures I am OK with the k8s version check, but eventually we should check addonsConfig.
WDYT @msau42 ?
Long term, I agree checking addonsConfig is the right way. This is fine for now.
Ah right, yeah, all the backports. Will add a TODO here.
Actually, once the GKE go client refactor happens, we could just always explicitly enable/disable on cluster create. Defaulting logic is already tested elsewhere, so not having coverage here is OK.
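Once that refactor lands, the explicit setting could look roughly like the sketch below, using google.golang.org/api/container/v1beta1. The GcePersistentDiskCsiDriverConfig field is my reading of that API surface and the cluster spec is heavily trimmed; this is not code from this PR:

```go
import (
	"context"

	container "google.golang.org/api/container/v1beta1"
)

// createClusterWithDriverAddon sketches always setting the managed-driver
// addon explicitly at create time instead of relying on version defaults.
func createClusterWithDriverAddon(ctx context.Context, parent, name string, enabled bool) error {
	svc, err := container.NewService(ctx)
	if err != nil {
		return err
	}
	req := &container.CreateClusterRequest{
		Cluster: &container.Cluster{
			Name: name,
			AddonsConfig: &container.AddonsConfig{
				GcePersistentDiskCsiDriverConfig: &container.GcePersistentDiskCsiDriverConfig{
					Enabled: enabled,
				},
			},
		},
	}
	_, err = svc.Projects.Locations.Clusters.Create(parent, req).Context(ctx).Do()
	return err
}
```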
Local Prow run passing
@msau42 @saikat-royc Created #680 to track the refactor to use the GKE go client. Updating regional clusters takes way too long through gcloud.
test/k8s-integration/cluster.go
@@ -379,3 +394,10 @@ func getKubeClient() (kubernetes.Interface, error) {
 	}
 	return kubeClient, nil
 }
+
+func isGKEDeploymentInstalledByDefault(clusterVersion string) bool {
+	cv := apimachineryversion.MustParseSemantic(clusterVersion)
Since this is a GKE version, should we use this?
That's the one used here, I believe.
Check this: https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver/blob/master/test/k8s-integration/main.go#L530 (generateGKETestSkip())
ah gotcha, updated
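For reference, the version-bounds approach discussed above could be sketched like this with k8s.io/apimachinery/pkg/util/version. The -gke. boundary versions below are illustrative placeholders, not the bounds this PR actually ships:

```go
import (
	apimachineryversion "k8s.io/apimachinery/pkg/util/version"
)

// isGKEDeploymentInstalledByDefault sketches the version-bounds check: the
// managed driver is on by default starting at some -gke. patch of a minor.
// Placeholder bounds; see the PR diff for the real ones.
func isGKEDeploymentInstalledByDefault(clusterVersion string) bool {
	cv := apimachineryversion.MustParseSemantic(clusterVersion)
	// On by default for 1.18 clusters at or past a given -gke. patch.
	return cv.AtLeast(apimachineryversion.MustParseSemantic("1.18.10-gke.2100")) &&
		cv.LessThan(apimachineryversion.MustParseSemantic("1.19.0"))
}
```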
Force-pushed (…KE deployment is not requested) from 423ae53 to 090ad76
/lgtm
tagging @mattcary to be in the loop
What type of PR is this?
/kind failing-test
What this PR does / why we need it: 1.18 driver tests are failing when an overlay is used in a GKE deployment, because the GKE-managed deployment is enabled by default on those versions. This PR explicitly disables it when a GKE deployment is not requested.
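Mechanically, an explicit disable can be expressed through gcloud's update-addons surface, as in the rough sketch below (the GcePersistentDiskCsiDriver addon key and flag spelling are my assumptions from the gcloud beta CLI, not copied from this PR):

```go
import "os/exec"

// disableGKEDriverAddon sketches explicitly turning the managed PD CSI
// driver off for an existing cluster via gcloud. Hypothetical helper.
func disableGKEDriverAddon(cluster, zone string) error {
	return exec.Command("gcloud", "beta", "container", "clusters", "update",
		cluster, "--zone", zone,
		"--update-addons=GcePersistentDiskCsiDriver=DISABLED").Run()
}
```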
Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
Does this PR introduce a user-facing change?:
/assign @saikat-royc