
added possibility to specify webhook port #893

Open
ultramaxim wants to merge 4 commits into base: 2022.4

Conversation

ultramaxim
Contributor

Hello, I'd like to add the possibility to specify the webhook port, because AQUA previously used a non-standard webhook port, 8443.
See for example https://github.com/kubernetes-sigs/controller-runtime/blob/main/pkg/webhook/server.go#L39
where the default webhook port is 9443.
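
Roughly, the idea is a values.yaml entry along these lines (a sketch only; the exact section layout and default chosen in the PR may differ):

ports:
  # TLS/webhook port exposed by the KubeEnforcer container.
  # 8443 preserves the previously hard-coded value, but it can now be overridden.
  tlsContainerPort: 8443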

added possibility to specify webhook port, because AQUA used earlier non-standard webhook port - 8443
added possibility to specify webhook port, because AQUA used earlier non-standard webhook port - 8443
get port value from Values.yaml
added possibility to specify webhook port, because AQUA used earlier non-standard webhook port - 8443
set custom port for deployment from Values.yaml. Use named port
added possibility to specify webhook port, because AQUA used earlier non-standard webhook port - 8443
set named port from Deployment in Service
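
The last two commit messages above describe the wiring: the Deployment exposes the configurable port under the name "https", and the Service targets that name, so it follows whatever value is set in values.yaml. A minimal sketch, assuming the Service template targets the named port (the service port number 443 is illustrative):

# Deployment: named container port driven by values.yaml
ports:
  - containerPort: {{ .Values.ports.tlsContainerPort }}
    name: https

# Service: targetPort refers to the named port, so it needs no change
# when the container port value in values.yaml changes
ports:
  - port: 443
    targetPort: https
    protocol: TCP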
kube-enforcer/values.yaml (resolved)
@@ -81,7 +81,8 @@ spec:
   {{- include "extraSecretEnvironmentVars" .Values | nindent 10 }}
   ports:
   {{- if not .Values.kubeEnforcerAdvance.enable }}
-   - containerPort: 8443
+   - containerPort: {{ .Values.ports.tlsContainerPort }}
+     name: https
   {{- else }}
    - containerPort: 8449
Contributor

What about moving these defaults out to values.yaml?

Contributor Author

That's exactly what I suggested: moving these values to values.yaml =)

Contributor

are we talking about ports 8449 and 8442 in the else statement?

Contributor Author

Maybe the same change should also be made in the else statement (when kubeEnforcerAdvance.enable is true); the current PR focuses on the "regular" KE mode.

And I don't know why port 8449 is used instead of something else in KE Advanced mode.
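
For illustration only (not part of this PR), parameterizing the else branch could look like the following, where kubeEnforcerAdvance.tlsContainerPort is a hypothetical value name:

{{- else }}
  # falls back to the current hard-coded 8449 when no value is set
  - containerPort: {{ .Values.kubeEnforcerAdvance.tlsContainerPort | default 8449 }}
    name: https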


Hello @ultramaxim

From the output below, we can see the "kubeEnforcerAdvance.enable" section updated to "containerPort: {{ .Values.ports.tlsContainerPort }}".
So we need to enable the kube enforcer advance section in values.yaml (highlighted below) so that the port values can be changed as required.

Please let us know if you agree with this.

(two screenshots omitted: the template output and the highlighted values.yaml section)

Regards,
Srikanth Reddy

Contributor Author

Hello, but the if statement (line 83) uses the boolean operator NOT (if not .Values.kubeEnforcerAdvance.enable).

That's why my settings are not under the kubeEnforcerAdvance section in values.yaml.
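
For reference, with this change a user could override the webhook port at install time with something like the following (release name and chart path are illustrative):

helm upgrade --install kube-enforcer ./kube-enforcer --set ports.tlsContainerPort=9443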

