[envTest] Feature request: Enable kube-controller-manager #3083
Comments
I understand this can become a slippery slope, but if a conscious decision is made that no "runtime-bound" controllers are enabled, like the deployment-controller (we don't really want pods running for real, you know), and only the satellite controllers like namespace-controller and garbage-collector-controller are, this would already help heaps! Another possibility is to document the use of EnvTest "UseExistingCluster": https://github.com/kubernetes-sigs/controller-runtime/blob/v0.20.0/pkg/envtest/server.go#L157
I think this would be a nice thing to have. I'm not sure how hard it is to implement (or if there are any showstoppers). In general, it should be possible to use "UseExistingCluster" with kind.
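For reference, a minimal sketch of that combination (an illustration, not from the thread), assuming KUBECONFIG already points at a running kind cluster:

```go
import (
	"k8s.io/utils/ptr"
	"sigs.k8s.io/controller-runtime/pkg/envtest"
)

// With UseExistingCluster set and no explicit Config, envtest derives its
// rest.Config from the usual kubeconfig lookup (e.g. the KUBECONFIG env var)
// instead of spinning up its own kube-apiserver and etcd.
testEnv := &envtest.Environment{
	UseExistingCluster: ptr.To(true),
}
cfg, err := testEnv.Start()
```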
My take here is that if you want a functional cluster, you should be using kind. Maybe we can try to make it easy to use kind from envtest? I am not a fan of teaching envtest how to set up a controller-manager (and soon after a scheduler, because that will be the next ask), because then we have to deal with keeping its config up to date with upstream changes and generally functioning, but kind already does all of that, and likely better than we would.
Technically you could call kind as a library to create a kind cluster. We do this in CAPI. But I don't want a Go dependency on kind in the CR Go module.
I was browsing yesterday and found that the E2E Framework worked around this by wrapping kind. That got me thinking that the E2E framework would be more suitable for this kind of setup, although I would still miss the simplicity of envtest. I'd be happy if some more examples and docs could be made to provide a clearer path here. Providing an example in the docs on how one could use kind together with envtest would already help.
I've got a snippet on how to control kind via Golang that can help with setting this up:

```go
package utils

import (
	"errors"
	"os"

	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
	kindapidefaults "sigs.k8s.io/kind/pkg/apis/config/defaults"
	kindapiv1alpha4 "sigs.k8s.io/kind/pkg/apis/config/v1alpha4"
	kindcluster "sigs.k8s.io/kind/pkg/cluster"
	kindcmd "sigs.k8s.io/kind/pkg/cmd"
	"sigs.k8s.io/yaml"
)

var kindDockerProvider = kindcluster.NewProvider(
	kindcluster.ProviderWithDocker(),
	kindcluster.ProviderWithLogger(
		kindcmd.NewLogger(),
	),
)

// SetupKindCluster creates a kind cluster with one control-plane node and
// two workers, unless a cluster with the given name already exists.
func SetupKindCluster(clusterName string) error {
	activeClusters, err := kindDockerProvider.List()
	if err != nil {
		return err
	}
	for _, activeClusterName := range activeClusters {
		if activeClusterName == clusterName {
			// Cluster already exists, nothing to do.
			return nil
		}
	}

	clusterConfig := &kindapiv1alpha4.Cluster{
		TypeMeta: kindapiv1alpha4.TypeMeta{
			APIVersion: "kind.x-k8s.io/v1alpha4",
			Kind:       "Cluster",
		},
		Nodes: []kindapiv1alpha4.Node{
			{
				Role:  kindapiv1alpha4.ControlPlaneRole,
				Image: kindapidefaults.Image,
			},
			{
				Role:  kindapiv1alpha4.WorkerRole,
				Image: kindapidefaults.Image,
			},
			{
				Role:  kindapiv1alpha4.WorkerRole,
				Image: kindapidefaults.Image,
			},
		},
	}
	kindapiv1alpha4.SetDefaultsCluster(clusterConfig)

	yamlBytes, err := yaml.Marshal(clusterConfig)
	if err != nil {
		return err
	}

	// Write the cluster config to a temp file so kind can consume it.
	tmpFile, err := os.CreateTemp(os.TempDir(), "kind-config-*.yaml")
	if err != nil {
		return err
	}
	defer tmpFile.Close()

	bytesWritten, err := tmpFile.Write(yamlBytes)
	if err != nil {
		return err
	}
	if bytesWritten != len(yamlBytes) {
		return errors.New("failed to write the expected number of bytes to kind cluster config")
	}
	if err := tmpFile.Sync(); err != nil {
		return err
	}

	return kindDockerProvider.Create(
		clusterName,
		kindcluster.CreateWithConfigFile(tmpFile.Name()),
	)
}

// GetRestConfigForKindCluster returns a rest.Config for the named kind cluster.
func GetRestConfigForKindCluster(clusterName string) (*rest.Config, error) {
	kubeConfigStr, err := kindDockerProvider.KubeConfig(clusterName, false)
	if err != nil {
		return nil, err
	}
	return clientcmd.RESTConfigFromKubeConfig([]byte(kubeConfigStr))
}
```

You can feed this to:

```go
testEnv := &envtest.Environment{
	Config:             restConfig,
	UseExistingCluster: ptr.To(true),
}
```
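Putting the pieces together, a hedged end-to-end sketch (the import path for the utils package above is hypothetical):

```go
package main

import (
	"log"

	"k8s.io/utils/ptr"
	"sigs.k8s.io/controller-runtime/pkg/envtest"

	"example.com/project/utils" // hypothetical import path for the snippet above
)

func main() {
	const clusterName = "envtest-kind" // any name works; an assumption, not from the thread

	// Create (or reuse) the kind cluster and point envtest at it.
	if err := utils.SetupKindCluster(clusterName); err != nil {
		log.Fatal(err)
	}
	restConfig, err := utils.GetRestConfigForKindCluster(clusterName)
	if err != nil {
		log.Fatal(err)
	}

	testEnv := &envtest.Environment{
		Config:             restConfig,
		UseExistingCluster: ptr.To(true),
	}
	cfg, err := testEnv.Start()
	if err != nil {
		log.Fatal(err)
	}
	defer func() { _ = testEnv.Stop() }()

	_ = cfg // hand the rest.Config to the manager / client under test
}
```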
I'm still having some issues with webhooks now refusing connections, causing errors in all of my tests.
I've checked the ports and they are the correct ones, dynamically configured by envtest itself. I have a feeling this has something to do with the certs used to configure the webhooks: since there is now a real cluster with a real API server running, there might be some extra bits that need to be configured. What I had working previously with plain envtest no longer works.
I'm not sure if anything different needs to be done when using an external cluster. After removing the registration of the webhooks, all tests pass again. Webhooks can be made to work if you override the host the webhook server is served on.
@migueleliasweb I think we made this work in Cluster API (not sure if I remember correctly, it was a while ago). Can you please check if there is something useful here? https://github.com/kubernetes-sigs/cluster-api/blob/f9cd33fa58926b73cb31beb335c75a41c80e4181/internal/test/envtest/environment.go#L279-L286
In my case, I was forced to use my Docker gateway address so that kind, which runs on its own Docker network, can reach back to my controller-runtime manager, which runs in a separate container inside VSCode's devcontainer (running with network=host). 😂 It's a bit all over the place. I tried
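Based on the Cluster API link above, a hedged sketch of that kind of override (the gateway address is an assumption and depends on your Docker network; restConfig is reused from the earlier snippet):

```go
import (
	"path/filepath"

	"k8s.io/utils/ptr"
	"sigs.k8s.io/controller-runtime/pkg/envtest"
)

// When the API server runs inside kind, the webhook server that envtest
// starts on the host must advertise an address the kind nodes can reach.
testEnv := &envtest.Environment{
	Config:             restConfig,
	UseExistingCluster: ptr.To(true),
	WebhookInstallOptions: envtest.WebhookInstallOptions{
		// Assumption: 172.17.0.1 is the Docker bridge gateway as seen
		// from the kind nodes; substitute the right address for your setup.
		LocalServingHost: "172.17.0.1",
		Paths:            []string{filepath.Join("..", "config", "webhook")},
	},
}
```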
My question around this is: what node image would kind use for this? If that were embedded with envtest, it seems like it would cause a lot of grief for users with older clusters who want to use the goodness of what envtest offers. I am with @sbueringer and @alvaroaleman on this part:
and I also agree that using an existing cluster may be sufficient to fulfill these needs, given the k8s components kind already offers.
I think we wouldn't embed the kind node image in envtest. If we were to add kube-controller-manager to envtest, we would just include the kube-controller-manager binary like we include the kube-apiserver binary today (see the release attachments here: https://github.com/kubernetes-sigs/controller-tools/releases/tag/envtest-v1.32.0).
Hi all,
The fact that kube-controller-manager is missing from envtest means that some of the most core behaviors in Kubernetes are basically impossible to test, garbage collection being a prime example. There are several previously opened issues related to the missing kube-controller-manager in envtest.
Personally, I'm really keen on having garbage collection working. Without it, one would need to add some not-nice hacks to envtest to pretend things are working.
This forces users to perform E2E tests on a "real cluster" (something like kind), but then the main issues become the substantially longer turnaround time and the inability to easily debug tests with breakpoints (still possible, but a pain to set up).
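To make the "not-nice hacks" concrete, a hedged sketch (an illustration, not from the thread) of the manual cleanup tests need when no garbage collector is running:

```go
import (
	"context"

	"sigs.k8s.io/controller-runtime/pkg/client"
)

// In envtest there is no garbage-collector-controller, so deleting an owner
// object does not cascade to objects pointing at it via ownerReferences.
// Tests have to emulate the cascade themselves.
func deleteWithManualCascade(ctx context.Context, c client.Client, owner client.Object, dependents ...client.Object) error {
	if err := c.Delete(ctx, owner); err != nil {
		return err
	}
	for _, dep := range dependents {
		if err := client.IgnoreNotFound(c.Delete(ctx, dep)); err != nil {
			return err
		}
	}
	return nil
}
```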
If this capability is intentionally disabled and is not planned to be enabled, I'd wish for this to be documented, so that users won't keep coming to ask for it on similar issues. Other than that, if this could be worked out, it would be AMAZING.
Thanks in advance!