
Add AuthN/AuthZ metrics #59557

Open: wants to merge 6 commits into main
Conversation

@MackinnonBuck (Member) commented Dec 18, 2024

Add AuthN/AuthZ metrics

Adds ASP.NET Core authentication and authorization metrics.

Description

This PR adds the following metrics:

  • Authentication:
    • Authenticated request duration
    • Challenge count
    • Forbid count
    • Sign in count
    • Sign out count
  • Authorization:
    • Count of requests requiring authorization

Ready to be reviewed, but the counter names, descriptions, and tags need to go through API review before this merges.

Fixes #47603
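
For reference, a minimal sketch of what counter-style instruments like these can look like with System.Diagnostics.Metrics and IMeterFactory is below. The meter, instrument, and tag names are placeholders only, since (as noted above) the real names still need to go through API review.

// Illustrative sketch only; not the PR's actual implementation.
using System.Collections.Generic;
using System.Diagnostics.Metrics;

internal sealed class AuthenticationMetricsSketch
{
    private readonly Counter<long> _challengeCount;

    public AuthenticationMetricsSketch(IMeterFactory meterFactory)
    {
        // Hypothetical meter and instrument names.
        var meter = meterFactory.Create("Microsoft.AspNetCore.Authentication");
        _challengeCount = meter.CreateCounter<long>("aspnetcore.authentication.challenges");
    }

    public void ChallengeCompleted(string? scheme)
    {
        // Only pay the tag-allocation cost when a listener is attached.
        if (_challengeCount.Enabled)
        {
            _challengeCount.Add(1, new KeyValuePair<string, object?>("authentication.scheme", scheme));
        }
    }
}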

Copilot AI left a comment

Copilot reviewed 5 out of 16 changed files in this pull request and generated 1 comment.

Files not reviewed (11)
  • src/Http/Authentication.Core/src/Microsoft.AspNetCore.Authentication.Core.csproj: Language not supported
  • src/Http/Authentication.Core/src/PublicAPI.Unshipped.txt: Language not supported
  • src/Security/Authentication/test/Microsoft.AspNetCore.Authentication.Test.csproj: Language not supported
  • src/Security/Authorization/Core/src/Microsoft.AspNetCore.Authorization.csproj: Language not supported
  • src/Security/Authorization/Core/src/PublicAPI/net10.0/PublicAPI.Unshipped.txt: Language not supported
  • src/Security/Authorization/Core/src/PublicAPI/net462/PublicAPI.Unshipped.txt: Language not supported
  • src/Security/Authorization/Core/src/PublicAPI/netstandard2.0/PublicAPI.Unshipped.txt: Language not supported
  • src/Security/Authorization/test/Microsoft.AspNetCore.Authorization.Test.csproj: Language not supported
  • src/Http/Authentication.Core/src/AuthenticationCoreServiceCollectionExtensions.cs: Evaluated as low risk
  • src/Http/Authentication.Core/src/AuthenticationMetrics.cs: Evaluated as low risk
  • src/Http/Authentication.Core/src/AuthenticationService.cs: Evaluated as low risk

The dotnet-policy-service bot added the pending-ci-rerun label Dec 27, 2024
@davidfowl (Member) commented

@JamesNK @lmolkova

@MackinnonBuck MackinnonBuck requested a review from JamesNK January 3, 2025 23:10
@JamesNK (Member) commented Jan 4, 2025

Looking at this PR from a higher level, what are the main scenarios that people will use these? A bad outcome would be us going through the motions of adding metrics so we can say they're there. We should think about scenarios and make the metrics as useful as possible in real world scenarios.

For example, will people use authn/authz metrics to:

  • View instrument counts to verify that authn/authz is running correctly
  • See how long auth takes
  • Debug exceptions
  • All of the above?

I'm not an auth expert, so I'd like to hear from the folks who work with auth most often.

Debugging auth that isn't working, including exceptions, seems like it is the most valuable scenario. It would be useful to collect metrics on an app that runs into some of the most common errors that people have with auth and see what can be done to make the output useful for them.

For example, in Kestrel from a developer's perspective it's hard to know why a connection was closed. The server knows why, so we have a set of known reasons for closing a connection. When Kestrel records that a connection is closed non-gracefully it includes that known reason as the error.type tag. See the error.type details at https://opentelemetry.io/docs/specs/semconv/dotnet/dotnet-kestrel-metrics/#metric-kestrelconnectionduration

Applying that idea to authn/authz: common reasons for errors could be included as the error.type in some of these metrics:

  • Policy name not found
  • Scheme not found
  • Scheme not specified
  • etc

Surfacing known error reasons from auth handlers might also be useful. For example, would it be valuable for the cookie auth handler to record on a metric that it wasn't able to authenticate the HTTP request because it couldn't decrypt the cookie payload?
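
Roughly, applying the Kestrel pattern to a counter could look like the sketch below; the helper, scheme tag, and reason strings are just placeholders to illustrate the shape:

// Sketch only: the reason would come from a small, known set of strings
// (e.g. "scheme_not_found", "scheme_not_specified") so error.type stays low-cardinality.
using System.Collections.Generic;
using System.Diagnostics.Metrics;

internal static class ErrorTypeTagSketch
{
    public static void AuthenticateFailed(Counter<long> authenticateCount, string scheme, string knownReason)
    {
        authenticateCount.Add(1,
            new KeyValuePair<string, object?>("authentication.scheme", scheme),
            new KeyValuePair<string, object?>("error.type", knownReason));
    }
}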

@halter73 (Member) commented Jan 6, 2025

Also, do people care how long authenticating takes? Should this be a histogram rather than a counter? A histogram can be used for both measuring count and duration. This question applies to a lot of the metrics here because they're in async methods.

This is good feedback for AuthenticateAsync in particular. Normally it should be very fast because you're usually doing something like decrypting cookies with already-downloaded data protection keys or validating JWTs with already-downloaded JWKs, but this can be slowed down by things like the initial downloading of keys or user-defined callbacks.

I don't think histograms are as valuable for challenge, forbid, sign-in, and sign-out though. Generally speaking, these are simply setting headers for things like redirects.

For example, in Kestrel from a developer's perspective it's hard to know why a connection was closed.

I think it's worth differentiating expected error cases like connection closed reasons in Kestrel vs. issues caused by misconfiguration.

I consider things like scheme not found and policy name not found to be misconfiguration. If someone were to run into such an error, they should be able to provide repro steps that would allow a developer to try the scenario themselves while cranking up the logging to see what's going on. I don't think we need metrics for these.

On the other hand, it would be interesting to have metrics for errors originating from calls to identity providers via the RemoteAuthenticationOptions.Backchannel HttpClient since these could be transient. Errors reported via the ?error=... query string parameter when identity providers redirect back to the remote authentication handler's CallbackPath could be transient or due to misconfiguration, but I think it'd be worth it to add metrics for these too. The known OAuth error types are defined in https://datatracker.ietf.org/doc/html/rfc6749#section-4.1.2.1

Both of these are specific to remote authentication handlers, so I think they could be addressed in a follow-up PR.
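
If we do that follow-up, one way to keep the tag bounded would be to map the callback's ?error=... value onto the known RFC 6749 error codes and use a catch-all value otherwise. Everything in this sketch (type name, tag names, fallback string) is illustrative:

// Illustrative sketch for a possible follow-up; not part of this PR.
using System.Collections.Generic;
using System.Diagnostics.Metrics;
using Microsoft.AspNetCore.Http;

internal static class RemoteAuthCallbackMetricsSketch
{
    // RFC 6749 §4.1.2.1 defines a small, fixed set of error codes, so the value
    // can be used as a metric tag without a cardinality explosion.
    private static readonly HashSet<string> KnownOAuthErrors = new()
    {
        "invalid_request", "unauthorized_client", "access_denied",
        "unsupported_response_type", "invalid_scope", "server_error",
        "temporarily_unavailable"
    };

    public static void CallbackCompleted(Counter<long> callbackCount, HttpRequest request)
    {
        string? error = request.Query["error"];
        var errorType = error is null ? null
            : KnownOAuthErrors.Contains(error) ? error
            : "_OTHER"; // catch-all fallback, per the OpenTelemetry error.type convention
        callbackCount.Add(1, new KeyValuePair<string, object?>("error.type", errorType));
    }
}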

@lmolkova left a comment

Left a few minor naming suggestions. James shared the key point in the comments: these metrics should probably be histograms. Even if that doesn't make sense for some of them and they should remain counters, the counters should be reported after the operation completes and should include the error.type attribute in case an exception has happened.

{
    if (_authorizeCount.Enabled)
    {
        var resultTagValue = result.Succeeded ? "success" : "failure";

Is it always a boolean flag? E.g., could we add an error.type or some status differentiating failures?

Member Author:

There could be multiple reasons for a given failure, and each reason is an arbitrary string, so unfortunately I'm not sure we can create distinct categories for failures.

private readonly Counter<long> _signInCount;
private readonly Counter<long> _signOutCount;

public AuthenticationMetrics(IMeterFactory meterFactory)

it would be great to have a doc describing the meaning and format of each metric and hopefully add them to https://github.com/open-telemetry/semantic-conventions/blob/main/docs/dotnet/dotnet-aspnetcore-metrics.md. Happy to help if necessary.

@MackinnonBuck (Member Author) commented

Thanks for all the feedback, everyone!

We had a discussion offline about how to address the feedback here, and this is what we've landed on:

  • Change the instrument for AuthenticateAsync() to be a histogram
  • Report exceptions via the error.type tag/attribute
  • Add internal *Impl classes for AuthenticationService and DefaultAuthorizationService to avoid introducing new public API

The latest revisions of this PR include these changes.

Debugging auth that isn't working, including exceptions, seems like it is the most valuable scenario.

The current implementation just adds an error.type attribute if an exception gets thrown, but I'm not sure how useful that will be if most reported values are just, e.g., System.InvalidOperationException. Its primary use might be determining how often exceptions are getting thrown rather than what the details are. As @halter73 mentioned, the developer could change the log level if they want to debug more deeply. Do others agree with this, or do we need to include more exception/failure info in these metrics?
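
For anyone skimming the thread, a rough sketch of the histogram-plus-error.type shape (not the actual implementation; instrument and tag names are placeholders) looks like this:

// Sketch of the approach, not the PR's actual code.
using System;
using System.Diagnostics;
using System.Diagnostics.Metrics;
using System.Threading.Tasks;

internal sealed class AuthenticateDurationSketch
{
    private readonly Histogram<double> _authenticateDuration;

    public AuthenticateDurationSketch(IMeterFactory meterFactory)
    {
        var meter = meterFactory.Create("Microsoft.AspNetCore.Authentication");
        _authenticateDuration = meter.CreateHistogram<double>(
            "aspnetcore.authentication.authenticate.duration", unit: "s");
    }

    public async Task<TResult> MeasureAsync<TResult>(Func<Task<TResult>> authenticate, string scheme)
    {
        var start = Stopwatch.GetTimestamp();
        string? errorType = null;
        try
        {
            return await authenticate();
        }
        catch (Exception ex)
        {
            // Only the exception type is recorded, e.g. "System.InvalidOperationException".
            errorType = ex.GetType().FullName;
            throw;
        }
        finally
        {
            var tags = new TagList { { "authentication.scheme", scheme } };
            if (errorType is not null)
            {
                tags.Add("error.type", errorType);
            }
            _authenticateDuration.Record(Stopwatch.GetElapsedTime(start).TotalSeconds, tags);
        }
    }
}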

return result;
}

public override async Task<AuthorizationResult> AuthorizeAsync(ClaimsPrincipal user, object? resource, string policyName)
@JamesNK (Member) Jan 9, 2025

I mentioned this earlier: passing in a bool for whether there is an authenticated user or not during authorization seems like it would be useful info.

Member Author:

Thanks - I just added a new tag indicating whether the user was authenticated.

I also see there's an experimental user.roles tag defined here. Maybe that's useful information to include as well? cc @halter73
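
For illustration, the new tag could be recorded along the lines of the sketch below; the exact tag names are placeholders pending API review, not the actual ones:

// Sketch only; tag names are placeholders.
using System.Collections.Generic;
using System.Diagnostics.Metrics;
using System.Security.Claims;

internal static class AuthorizationMetricsSketch
{
    public static void AuthorizeCompleted(Counter<long> authorizeCount, ClaimsPrincipal user, bool succeeded)
    {
        if (authorizeCount.Enabled)
        {
            authorizeCount.Add(1,
                new KeyValuePair<string, object?>("user.is_authenticated", user.Identity?.IsAuthenticated ?? false),
                new KeyValuePair<string, object?>("aspnetcore.authorization.result", succeeded ? "success" : "failure"));
        }
    }
}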

Member:

You can't put that in a metric. The cardinality could be too high.

Labels
area-security, pending-ci-rerun
Development

Successfully merging this pull request may close these issues.

Investigate AuthN/AuthZ metrics in ASP.NET Core
8 participants