enable cilium CNI option #72
@@ -1,2 +1,8 @@
 ---
 cni_plugin: calico
+bgp_peer_address: 192.168.0.1
+bgp_peer_asn: 64512
+cilium_helm_version: 1.8.3
+cilium_image_version: v1.8.3
+k8s_service_host: "{{ hostvars[groups['masters'][0]]['ansible_host'] }}"
Review thread on `k8s_service_host`:

@RobReus -- This really should be the VIP, if one is present. I'm at the edge of my Ansible skills.

when using …

Theoretically it should be outside of Cilium, since it's a host-level config, right? However, this would need to be an either/or, I think: if a VIP is present, use that; otherwise, use the master IP?

Yeah, I think you're right. It should be an either/or situation. I'm not sure if it could be achieved with Jinja logic, but my assumption is that it would be possible by using some Jinja operators in the template, in addition to some sanity checking for each var (a sketch follows this diff).
+k8s_service_port: 6443
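A minimal sketch of the either/or logic the thread asks for. `vip_address` is an assumed variable name, not one from this PR; it stands in for whatever var is only defined when a VIP is configured:

    # In defaults/main.yml (hypothetical): prefer the VIP when defined,
    # otherwise fall back to the first master's address.
    k8s_service_host: "{{ vip_address | default(hostvars[groups['masters'][0]]['ansible_host']) }}"

    # In a tasks file (hypothetical): the "sanity checking" the thread mentions.
    - name: Sanity-check the API endpoint vars
      assert:
        that:
          - k8s_service_host | length > 0
          - k8s_service_port | int > 0

The `default` filter only fires when `vip_address` is undefined, which gives the either/or without any `if`/`else` blocks in the template.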
@@ -0,0 +1,61 @@
---
- name: Install Helm v3
  shell: |
    curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
  args:
    warn: false

- name: Add Cilium Repo
  command:
    cmd: helm repo add cilium https://helm.cilium.io/

- name: Deploy Cilium
  shell: |
    helm upgrade -i cilium cilium/cilium --version {{ cilium_helm_version }} \
      --set global.registry="docker.io/cilium" \
      --set global.tag="{{ cilium_image_version }}" \
      --set global.tunnel="disabled" \
      --set global.externalIPs.enabled="true" \
      --set global.ipam.operator.clusterPoolIPv4PodCIDR="{{ cluster_pod_subnet }}" \
      --set global.ipam.operator.clusterPoolIPv4MaskSize="24" \
      --set global.endpointRoutes.enabled="true" \
      --set global.hostServices.enabled="true" \
      --set global.autoDirectNodeRoutes="true" \
      --set global.nodePort.enabled="true" \
      --set global.nodePort.mode="dsr" \
      --set global.masquerade="false" \
      --set global.hubble.enabled="true" \
      --set global.hubble.ui.enabled="true" \
      --set global.hubble.relay.enabled="true" \
Review thread on the Helm values:

I wonder if there is an easier way to store these values so they're easier for users to configure. For example, a user on arm might not be able to enable Hubble right now. Would it make sense to store these in a values.yaml under files/, copy that over, and just reference it in the helm command?

Yeah, we could template out a cilium-values.yaml file.

Yep, if we can template out a file that users can use to edit helm values, that would be perfect! (A sketch of this approach follows this file's diff.)
      --set global.hubble.metrics.enabled="{dns,drop,tcp,flow,port-distribution,icmp,http}" \
      --set global.kubeProxyReplacement=strict \
      --set global.k8sServiceHost={{ k8s_service_host }} \
      --set global.k8sServicePort={{ k8s_service_port }} \
      --set config.bpfMasquerade="false" \
      --namespace kube-system
- name: Create Manifests Directory
  file:
    path: /root/manifests
    state: directory
    mode: 0700

- name: "Deploy manifests"
  become: true
  template:
    src: "{{ item }}"
    dest: "/root/manifests/{{ item | basename | replace('.j2','') }}"
    mode: 0600
  with_items:
    - "generic-kuberouter-only-advertise-routes.yaml.j2"

- name: Applying manifests
  command:
    cmd: "kubectl apply -f /root/manifests/{{ item }}"
  with_items:
    - "generic-kuberouter-only-advertise-routes.yaml"

- name: Remove Manifests Directory
  file:
    path: /root/manifests
    state: absent
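A rough sketch of the values-file approach discussed in the thread above. The template name `cilium-values.yaml.j2` and the `/root/cilium-values.yaml` destination are assumptions for illustration, not names from this PR:

    # Hypothetical: render user-editable Helm values from a template,
    # then point helm at the rendered file instead of a wall of --set flags.
    - name: Template Cilium Helm values
      template:
        src: cilium-values.yaml.j2          # assumed template under templates/
        dest: /root/cilium-values.yaml
        mode: 0600

    - name: Deploy Cilium from templated values
      command:
        cmd: >
          helm upgrade -i cilium cilium/cilium
          --version {{ cilium_helm_version }}
          --namespace kube-system
          -f /root/cilium-values.yaml

This would let users (for example, on arm) flip Hubble or other options by overriding the vars the template consumes, without touching the task file.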
@@ -0,0 +1,129 @@
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    k8s-app: kube-router
    tier: node
  name: kube-router
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: kube-router
      tier: node
  template:
    metadata:
      labels:
        k8s-app: kube-router
        tier: node
    spec:
      priorityClassName: system-node-critical
      serviceAccountName: kube-router
      containers:
      - name: kube-router
        image: "{{ kube_router_image }}"
        imagePullPolicy: Always
        args:
        - "--run-router=true"
        - "--run-firewall=false"
        - "--run-service-proxy=false"
        - "--bgp-graceful-restart=true"
        - "--enable-cni=false"
        - "--enable-pod-egress=false"
        - "--enable-ibgp=true"
        - "--enable-overlay=false"
        - "--peer-router-ips={{ bgp_peer_address }}"
        - "--peer-router-asns={{ bgp_peer_asn }}"
        - "--cluster-asn={{ bgp_cluster_asn }}"
        - "--advertise-cluster-ip=true"
        - "--advertise-external-ip=true"
        - "--advertise-loadbalancer-ip=true"
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        livenessProbe:
          httpGet:
            path: /healthz
            port: 20244
          initialDelaySeconds: 10
          periodSeconds: 3
        resources:
          requests:
            cpu: 250m
            memory: 250Mi
        securityContext:
          privileged: true
        volumeMounts:
        - name: xtables-lock
          mountPath: /run/xtables.lock
          readOnly: false
      hostNetwork: true
      tolerations:
      - effect: NoSchedule
        operator: Exists
      - key: CriticalAddonsOnly
        operator: Exists
      - effect: NoExecute
        operator: Exists
      volumes:
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-router
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: kube-router
  namespace: kube-system
rules:
  - apiGroups:
      - ""
    resources:
      - namespaces
      - pods
      - services
      - nodes
      - endpoints
    verbs:
      - list
      - get
      - watch
  - apiGroups:
      - "networking.k8s.io"
    resources:
      - networkpolicies
    verbs:
      - list
      - get
      - watch
  - apiGroups:
      - extensions
    resources:
      - networkpolicies
    verbs:
      - get
      - list
      - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: kube-router
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-router
subjects:
- kind: ServiceAccount
  name: kube-router
  namespace: kube-system
@@ -4,3 +4,4 @@ cni_supported_plugins:
   - calico
   - flannel
   - weave
+  - cilium
Review thread:

Would be good to have a comment in here so users roughly know why you'd have this disabled.

I'm unfamiliar with the requirements for Cilium, so I am under the assumption that this is a requirement for it. If that is the case, then this var should be scoped as such; explicitly setting this var for later use could be misleading to some users. Having this var set based on which CNI you're hoping to deploy would be the ideal situation, likely via a conditional import of vars based on the user's CNI selection (as sketched below).

It's not a requirement, but it would modify the installation options for Cilium if you're not using kube-proxy.

I would still prefer that the variable be scoped, or at the very least that some comments be added to detail what you're enabling/disabling.
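A minimal sketch of the conditional vars scoping suggested above. The per-CNI vars files (e.g. vars/cilium.yml holding the kube-proxy-related toggle) are an assumed layout, not something this PR defines:

    # Hypothetical: one vars file per CNI under the role's vars/ directory,
    # so kube-proxy-related settings are only loaded for the CNI that needs them.
    - name: Load CNI-specific variables
      include_vars: "{{ cni_plugin }}.yml"
      when: cni_plugin in cni_supported_plugins

With a relative filename, include_vars resolves against the role's vars/ directory, so selecting `cni_plugin: cilium` would pull in only vars/cilium.yml and keep the variable scoped the way the reviewer asks.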