feat: support for externally managed control plane #106
Conversation
Quality Gate failed: Failed conditions.
Before providing more test coverage, may I ask for a simple review of the proposed changes, from a workflow perspective? It's not clear to me if the maintainers are open to skipping the validation of the `ControlPlaneEndpoint`.
Hi @prometherion, thank you for your contribution. Skipping validation for optional fields is fine, but please make sure your changes don't alter the validation behavior of other fields. Users should see errors as early as possible. Does Kamaji set the port and address separately? I'm asking because my understanding was that the endpoint is always written in full or not at all. This would mean that you could merge the two conditions into one.
Kamaji is doing it in a single transaction, yes; I can unify those checks.
Anything needed here?
Thanks for the heads up @mcbenjemaa, I'm planning to work on this to make the PR ready for review by the end of the week or the following one. Pretty busy days, sorry.
take your time mate
@prometherion
Finally, I'm revamping it; sorry for being late, @mcbenjemaa. Let me know if we're ready to get this tested.
Can you provide a use case to test it with Kamaji?
I think this is close to done. The core is there, but I'm not sold on the details. This could use some test cases, which would've uncovered the inconsistency I pointed out inline.

However, I don't think I like the 'magic' behaviour of an empty `host` and(?) `port` meaning externally managed. Someone could omit host/port by accident without actually intending to use Kamaji or so. This needs very clear documentation. (Technically, this doesn't actually make the fields optional (-:)

Personally, I would prefer an optional bool field in `ProxmoxClusterSpec`, like `ControlPlaneEndpointExternallyManaged`, and to require that either `ControlPlaneEndpointExternallyManaged` or `ControlPlaneEndpoint` is set. This would make the intent clear.

That said, I would be fine with just clearly, explicitly documenting that setting `host=""` and `port=0` means we'll wait for an externally managed endpoint. The check in `proxmoxcluster_controller.go` would need to be exactly the same as in the validation func, i.e. `host == "" && port == 0`.
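For illustration, a minimal sketch of what that optional-bool alternative could look like. The field name `ControlPlaneEndpointExternallyManaged` comes from the comment above; the surrounding types, JSON tags, and error messages are assumptions, not the PR's actual code:

```go
package v1alpha1

import "errors"

// APIEndpoint is assumed to mirror the Cluster API endpoint shape.
type APIEndpoint struct {
	Host string `json:"host"`
	Port int32  `json:"port"`
}

// ProxmoxClusterSpec, trimmed to the fields relevant here (sketch).
type ProxmoxClusterSpec struct {
	// ControlPlaneEndpoint is where the control plane will be reachable.
	// +optional
	ControlPlaneEndpoint APIEndpoint `json:"controlPlaneEndpoint,omitempty"`

	// ControlPlaneEndpointExternallyManaged signals that a control plane
	// provider such as Kamaji will publish the endpoint later.
	// +optional
	ControlPlaneEndpointExternallyManaged bool `json:"controlPlaneEndpointExternallyManaged,omitempty"`
}

// validateControlPlaneEndpoint requires either the explicit flag or a fully
// populated endpoint, making the "wait for an external LB" intent explicit
// instead of inferring it from empty values.
func validateControlPlaneEndpoint(spec ProxmoxClusterSpec) error {
	ep := spec.ControlPlaneEndpoint
	if spec.ControlPlaneEndpointExternallyManaged {
		if ep.Host != "" || ep.Port != 0 {
			return errors.New("controlPlaneEndpoint must be empty when externally managed")
		}
		return nil
	}
	if ep.Host == "" || ep.Port == 0 {
		return errors.New("controlPlaneEndpoint must be fully set unless externally managed")
	}
	return nil
}
```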
```go
// Skipping the validation of the Control Plane endpoint in case of an empty value:
// this is the case of externally managed Control Plane which eventually provides the LB.
if ep.Host == "" && ep.Port == 0 {
	return nil
}
```
What about the case where someone accidentally doesn't provide the control plane endpoint?
Although outdated, we're keeping up with Cluster conditions.
```diff
@@ -94,6 +95,11 @@ func (*ProxmoxCluster) ValidateUpdate(_ context.Context, _ runtime.Object, newOb

 func validateControlPlaneEndpoint(cluster *infrav1.ProxmoxCluster) error {
 	ep := cluster.Spec.ControlPlaneEndpoint
+	// Skipping the validation of the Control Plane endpoint in case of an empty value:
+	// this is the case of externally managed Control Plane which eventually provides the LB.
+	if ep.Host == "" && ep.Port == 0 {
```
This could use some test cases.
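To gesture at what those test cases might look like, here is a table-driven sketch reusing the hypothetical types and helper from the flag-based sketch earlier in this thread (not the repo's actual test suite):

```go
package v1alpha1

import "testing"

// Covers the cases discussed above: a full endpoint, an accidental empty
// endpoint, a half-set endpoint, and the explicit externally-managed flag.
func TestValidateControlPlaneEndpoint(t *testing.T) {
	tests := []struct {
		name    string
		spec    ProxmoxClusterSpec
		wantErr bool
	}{
		{"full endpoint is valid", ProxmoxClusterSpec{ControlPlaneEndpoint: APIEndpoint{Host: "10.0.0.1", Port: 6443}}, false},
		{"empty endpoint without flag is rejected", ProxmoxClusterSpec{}, true},
		{"host without port is rejected", ProxmoxClusterSpec{ControlPlaneEndpoint: APIEndpoint{Host: "10.0.0.1"}}, true},
		{"externally managed with empty endpoint is valid", ProxmoxClusterSpec{ControlPlaneEndpointExternallyManaged: true}, false},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			if err := validateControlPlaneEndpoint(tt.spec); (err != nil) != tt.wantErr {
				t.Errorf("validateControlPlaneEndpoint() error = %v, wantErr %v", err, tt.wantErr)
			}
		})
	}
}
```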
Also, default Port is 6443.
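For context, a CRD-level default like that would typically come from a kubebuilder marker on the API type; a sketch, assuming the endpoint mirrors the CAPI `APIEndpoint` shape:

```go
// Sketch only; not the repo's actual type definition.
type APIEndpoint struct {
	// Host is the hostname or IP address on which the API server is serving.
	Host string `json:"host"`

	// Port is the port on which the API server is serving.
	// +kubebuilder:default=6443
	Port int32 `json:"port"`
}
```

If such a default is applied at admission time, a port of 0 would rarely survive into the stored object, which is another argument against relying on `port == 0` as the externally-managed signal.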
Outdated: with the new flag, this check is no longer needed.
```diff
@@ -168,6 +168,22 @@ func (r *ProxmoxClusterReconciler) reconcileNormal(ctx context.Context, clusterS
 	// If the ProxmoxCluster doesn't have our finalizer, add it.
 	ctrlutil.AddFinalizer(clusterScope.ProxmoxCluster, infrav1alpha1.ClusterFinalizer)

+	cpe := clusterScope.ControlPlaneEndpoint()
+	switch {
+	case cpe.Host == "":
```
This only checks host but not port, yet the validation func checks both.
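One way to keep the two paths in sync is a single shared helper that both the webhook and the reconciler call; a sketch under the same assumed types as above:

```go
package v1alpha1

// isEndpointUnset reports whether both host and port are empty. Sharing one
// helper between the validation webhook and proxmoxcluster_controller.go
// keeps the checks identical and avoids the host-only comparison flagged here.
func isEndpointUnset(ep APIEndpoint) bool {
	return ep.Host == "" && ep.Port == 0
}
```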
Documentation is a great place, of course, but I'm wondering if we could implement this kind of check in a different way. The Cluster API has a contract for an externally managed control plane thanks to the status key `externalManagedControlPlane`. When a user opts for Kamaji, or any other Control Plane provider satisfying that status key contract, we could skip the validation requiring a filled `ControlPlaneEndpoint`.
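A rough sketch of how that contract could be read from the referenced control plane object; the helper name is hypothetical, only the `unstructured` accessor is a real apimachinery API:

```go
package controllers

import (
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// controlPlaneIsExternallyManaged checks the Cluster API control plane
// contract: providers like Kamaji set status.externalManagedControlPlane=true
// on their control plane object once they own the endpoint.
func controlPlaneIsExternallyManaged(controlPlane *unstructured.Unstructured) bool {
	managed, found, err := unstructured.NestedBool(controlPlane.Object, "status", "externalManagedControlPlane")
	return err == nil && found && managed
}
```

In the end the PR settled on an explicit `externalManagedControlPlane` field on the `ProxmoxCluster` spec instead, as the manifest further down shows.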
@mcbenjemaa I just fixed the broken generated files. An e2e would be cool, although it seems a bit flaky according to the latest runs. I can also try to provide a small recorded smoke test showing the integration between Proxmox and Kamaji: unfortunately, providing a proper test is a bit complicated given the dependencies between the moving parts.
Quality Gate passed.
Can you share with me the manifest used to provision a Proxmox cluster?
Quality Gate passed.
@prometherion, we are happy to add support for this.
LGTM
Finally! Thank you for your contribution! :)
@mcbenjemaa absolutely, it's on my todo list!
Sharing also here just a reference to get this working with Kamaji as externally managed Control Plane:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
name: proxmox-quickstart
namespace: default
spec:
clusterNetwork:
pods:
cidrBlocks:
- REDACTED/REDACTED
controlPlaneRef:
apiVersion: controlplane.cluster.x-k8s.io/v1alpha1
kind: KamajiControlPlane
name: proxmox-quickstart
infrastructureRef:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: ProxmoxCluster
name: proxmox-quickstart
---
apiVersion: controlplane.cluster.x-k8s.io/v1alpha1
kind: KamajiControlPlane
metadata:
name: proxmox-quickstart
namespace: default
spec:
dataStoreName: default
addons:
coreDNS: { }
kubeProxy: { }
kubelet:
cgroupfs: systemd
preferredAddressTypes:
- InternalIP
network:
serviceType: LoadBalancer
deployment:
replicas: 2
version: 1.29.7
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: ProxmoxCluster
metadata:
name: proxmox-quickstart
namespace: default
spec:
allowedNodes:
- pve
dnsServers:
- REDACTED
- REDACTED
externalManagedControlPlane: true
ipv4Config:
addresses:
- REDACTED-REDACTED
gateway: REDACTED
prefix: REDACTED
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
name: proxmox-quickstart-workers
namespace: default
spec:
clusterName: proxmox-quickstart
replicas: 2
selector:
matchLabels: null
template:
metadata:
labels:
node-role.kubernetes.io/node: ""
spec:
bootstrap:
configRef:
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
name: proxmox-quickstart-worker
clusterName: proxmox-quickstart
infrastructureRef:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: ProxmoxMachineTemplate
name: proxmox-quickstart-worker
version: v1.29.7
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: ProxmoxMachineTemplate
metadata:
name: proxmox-quickstart-worker
namespace: default
spec:
template:
spec:
disks:
bootVolume:
disk: scsi0
sizeGb: REDACTED
format: qcow2
full: true
memoryMiB: REDACTED
network:
default:
bridge: REDACTED
model: virtio
numCores: REDACTED
numSockets: REDACTED
sourceNode: pve
templateID: REDACTED
---
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
metadata:
name: proxmox-quickstart-worker
namespace: default
spec:
template:
spec:
joinConfiguration:
nodeRegistration:
kubeletExtraArgs:
provider-id: proxmox://'{{ ds.meta_data.instance_id }}'
users:
- name: root
sshAuthorizedKeys:
        - REDACTED
```

I wasn't able to let worker nodes join the Control Plane, mostly because I'm working on a […]. But overall, everything looks good from the […].
Issue #95

Description of changes:

Supporting an empty Control Plane endpoint when `ProxmoxCluster` is used by an externally managed Control Plane. The `ProxmoxCluster` controller will wait for a valid IP before proceeding to mark the infrastructure ready.

Testing performed:

N.A.