
Support for multiple managed identities per Kubernetes pod #408

Closed
Xitric opened this issue Apr 1, 2022 · 3 comments
Labels
enhancement New feature or request

Comments


Xitric commented Apr 1, 2022

Is your feature request related to a problem? Please describe.

We are currently using pod managed identities to control access between our workloads in Kubernetes and various Azure resources, such as SQL Server and Key Vault. For the time being, we have settled on a single managed identity per Kubernetes pod, since multiple pod managed identities are not yet supported. It was, however, on the roadmap (Azure/secrets-store-csi-driver-provider-azure#284).

We were just about to attempt a PR ourselves, but then we saw that pod managed identities were being deprecated (Azure/secrets-store-csi-driver-provider-azure#837). We have therefore begun looking into migrating to workload identity federation. We wish to ensure that our use cases are, or can be, supported; otherwise we need to rethink. Looking at your current documentation, it seems that there might be a 1-1 binding between:

Kubernetes pod <-> ServiceAccount <-> AD app / managed identity
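
The apparent 1-1 binding, sketched with placeholder names (the annotation key is from the workload identity documentation; everything else here is hypothetical):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-workload            # hypothetical name
  annotations:
    # One client ID per service account -> one identity per pod
    azure.workload.identity/client-id: "<APPLICATION_CLIENT_ID>"
---
apiVersion: v1
kind: Pod
metadata:
  name: my-workload
spec:
  serviceAccountName: my-workload   # the binding in question
  containers:
    - name: app
      image: my-registry/my-app:latest
```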

Describe the solution you'd like

Once managed identities are supported with workload identity federation, we would like the ability for a pod to use one of multiple managed identities depending on the resource it is trying to access at any given time. For instance:

  • One managed identity provides read access to a single specific database
  • Another managed identity provides read access to a single specific key vault
  • etc.

This enables us to create a limited set of managed identities, each with a very restrictive set of permissions, and dynamically assign them to pods in Kubernetes depending on the needs of each individual pod. The less dependent we are on creating new managed identities for all kinds of permutations, the better in our opinion.

Describe alternatives you've considered

We have considered instead rethinking our model and creating a separate managed identity per pod in Kubernetes (or at least per permutation of permissions). However, we see some issues with this:

  • Every time we introduce a new workload pod in Kubernetes, we need additional logic to ensure that a managed identity has been prepared for it in Azure.
    • By extension, adding additional permissions, or restricting the permissions of a pod, requires actions in Azure and cannot be confined to e.g. a Helm upgrade of our deployed workloads.
  • For every new pod that requires access to a key vault, we need to make a new role assignment against that key vault, as opposed to giving read access to a single managed identity.
  • Knowing what permissions are assigned to a pod in Kubernetes requires inspecting all role assignments on the managed identity in Azure, and is not immediately obvious by looking at our Kubernetes resources.
  • If multiple pods need access to the same database (we have a lot of those use cases), we need to ensure that each corresponding managed identity has been mapped to a SQL user in that database with appropriate role assignments.
    • In our environment, this currently requires spinning up Azure functions through KEDA to create such SQL users, so it becomes unwieldy to do every time a new pod needs access to a database. If, for instance, we had a single managed identity with read/write access for that database, we could give new pods database access through a simple Helm upgrade.

Additional context

We are not against changing our current design, but we would at least be interested in best practices for handling our use cases with workload identity federation in whatever form it is released. We know that managed identities are not currently supported (#325). When they are, however, is there any chance that it might support some of our needs?

Xitric added the enhancement label on Apr 1, 2022

aramase commented Apr 4, 2022

We were just about to make an attempt at a PR ourselves, but then we saw that pod managed identities were being deprecated (Azure/secrets-store-csi-driver-provider-azure#837). Thus, we have now begun looking into migrating to workload identity federation. We wish to ensure that our use cases are supported / can be supported - otherwise we need to rethink. Looking at your current documentation, it seems that there might be a 1-1 binding

Azure/secrets-store-csi-driver-provider-azure#837 issue tracks the deprecation of User-assigned managed identity and System-assigned managed identity access modes only. AAD Pod Identity mode in secrets-store-csi-driver-provider-azure will be deprecated and removed after Workload Identity federation support with managed identity is available: #325.

Workload identity to access keyvault is already supported with the Azure Key Vault Provider for Secrets Store CSI Driver: https://azure.github.io/secrets-store-csi-driver-provider-azure/docs/configurations/identity-access-modes/workload-identity-mode/. That doesn't require the webhook that's provided in this repo. It supports multiple identities and there is no 1:1 mapping.

This enables us to create a limited set of managed identities, each with a very restrictive set of permissions, and dynamically assign them to pods in Kubernetes depending on the needs of each individual pod. The less dependent we are on creating new managed identities for all kinds of permutations, the better in our opinion.

Today, you can use multiple identities with the same pod. The annotation in the service account is used to inject the AZURE_CLIENT_ID env var, which the SDKs use by default. But if you want to use a specific identity based on the resource you are accessing, you can provide the client ID of that managed identity when requesting a token.
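
To make that concrete, per-resource identity selection in an app could be sketched like this; the scope-to-client-ID mapping and the helper are hypothetical, and the GUIDs are placeholders:

```python
import os

# Hypothetical mapping of Azure token scopes to managed identity client IDs
# (placeholder GUIDs).
IDENTITY_FOR_SCOPE = {
    "https://database.windows.net/.default": "11111111-1111-1111-1111-111111111111",
    "https://vault.azure.net/.default": "22222222-2222-2222-2222-222222222222",
}

def client_id_for(scope: str) -> str:
    """Return the managed identity to use for a scope, falling back to the
    AZURE_CLIENT_ID env var injected from the service account annotation."""
    return IDENTITY_FOR_SCOPE.get(scope, os.environ.get("AZURE_CLIENT_ID", ""))
```

The chosen ID would then be passed when constructing the credential, e.g. `WorkloadIdentityCredential(client_id=client_id_for(scope))` from the `azure-identity` package.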

The mutating webhook provided as part of this repo enhances the experience by injecting some of the default env vars used by the SDK. That doesn't mean your pod is limited to the single identity annotated in the service account. Your app can provide a specific client ID as part of the token request, and you can set up a federated identity credential for each managed identity used by your workload. The same service account token can then be exchanged for a valid AAD token for the requested identity.
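
Under the hood, that exchange is a standard AAD client-credentials request in which the projected service account token is presented as a JWT bearer client assertion (RFC 7523). A minimal sketch with placeholder values, showing the same token being used for two different identities (the body is POSTed to `https://login.microsoftonline.com/<tenant>/oauth2/v2.0/token`):

```python
def token_request_body(client_id: str, sa_token: str, scope: str) -> dict:
    """Form body for the AAD token endpoint in a federated token exchange."""
    return {
        "grant_type": "client_credentials",
        "client_id": client_id,
        "scope": scope,
        # The projected Kubernetes service account token is sent as a
        # federated client assertion instead of a client secret.
        "client_assertion_type": "urn:ietf:params:oauth:grant-type:jwt-bearer",
        "client_assertion": sa_token,
    }

sa_token = "<projected-service-account-token>"
sql_req = token_request_body("<sql-identity-client-id>", sa_token,
                             "https://database.windows.net/.default")
kv_req = token_request_body("<kv-identity-client-id>", sa_token,
                            "https://vault.azure.net/.default")
# Same service account token, two different identities.
```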


Xitric commented Apr 4, 2022

Hello @aramase and thank you for the reply. We understand that pod managed identities are not deprecated just yet, but the fact that they are already planned for removal means we do not wish to commit to them at this point if we can avoid it.

So if I understand you correctly, the ServiceAccount annotation we saw in the documentation (azure.workload.identity/client-id: ${APPLICATION_CLIENT_ID}) is a completely optional annotation for ease of use. We are free to skip it, and refer to any other client ID as we wish in our SecretProviderClass or any other token request? As long as a federated identity credential has been set up between our AD and the AKS cluster OIDC issuer, we should be fine with the Azure secret store CSI driver alone?

If this is true, I thank you very much for clarifying the setup for us, and you are free to go ahead and close the issue.


aramase commented Apr 4, 2022

So if I understand you correctly, the ServiceAccount annotation we saw in the documentation (azure.workload.identity/client-id: ${APPLICATION_CLIENT_ID}) is a completely optional annotation for ease of use. We are free to skip it, and refer to any other client ID as we wish in our SecretProviderClass or any other token request? As long as a federated identity credential has been set up between our AD and the AKS cluster OIDC issuer, we should be fine with the Azure secret store CSI driver alone?

That's right! The webhook and annotations in this project aren't for the CSI driver. For the Secrets Store CSI Driver, the client ID configuration is part of the SecretProviderClass, and no annotation is required. If you're using workload identity to access other resources in addition to the Secrets Store CSI Driver, you can leverage the mutating webhook in this repo to set up the projected service account token volume and other env vars that can be useful for your workload.
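
A rough sketch of such a SecretProviderClass, with placeholder names and values; the `clientID` parameter selects the identity, so no service account annotation is needed for the driver:

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: kv-workload-identity      # hypothetical name
spec:
  provider: azure
  parameters:
    clientID: "<MANAGED_IDENTITY_CLIENT_ID>"   # identity used for this class
    keyvaultName: "<KEY_VAULT_NAME>"
    tenantId: "<TENANT_ID>"
    objects: |
      array:
        - |
          objectName: my-secret
          objectType: secret
```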

If this is true, I thank you very much for clarifying the setup for us, and you are free to go ahead and close the issue.

Thank you! I'll close this issue now. If you run into any issues using workload identity with the Azure Key Vault Provider for Secrets Store CSI Driver, feel free to open an issue in that repo. If you have any questions specifically about the webhook or how to use workload identity, you can open an issue in this repo.
