
Proposed AuthProvider API #219

Closed
wants to merge 9 commits

Conversation

HerbCaudill
Collaborator

@HerbCaudill HerbCaudill commented Nov 1, 2023

Update: This proposal has been superseded by something along the lines of what @alexjg and I hashed out here.

This is a proposal for an AuthProvider API. This fixes #25 but has evolved a bit from that initial proposal.

Why

Automerge Repo currently offers no way to authenticate a peer, and very little in the way of access control.

Our current security model is the "Rumplestiltskin rule": If you know a document's ID, you can read that document, and everyone else who knows that ID will accept your changes.

That model is good enough for a surprising number of situations — the ID serves as an unguessable secret "password" for the document — but it has limitations. Without a way to establish a peer's identity, we can't revoke access for an individual peer — say if someone leaves a team, or if a device is lost. And we can't distinguish between read and write permissions, or limit access to specific documents.

An application might implement authentication and authorization in any number of ways, so this needs to be pluggable — like the existing network and storage adapters.

Initializing a repo with a specific auth provider might look something like this:

import { SuperCoolAuthProvider } from 'supercool-auth-library'

const authOptions = {
  // ...options specific to this type of authentication
}
const auth = new SuperCoolAuthProvider(authOptions)

const repo = new Repo({ network, storage, auth })

Using the base AuthProvider directly

Most of these examples are from the tests.

Adding an AuthProvider to a repo

const permissive = new AuthProvider() // AuthProvider is maximally permissive by default

const aliceRepo = new Repo({
  network: [new MessageChannelNetworkAdapter(aliceToBob)],
  peerId: "alice" as PeerId,
  authProvider: permissive
})

Using the base (maximally permissive) AuthProvider without any overrides is the same as instantiating a repo with no AuthProvider.

Overriding the base adapter's methods

You can instantiate an AuthProvider with a config object containing any of the following:

export interface AuthProviderConfig {
  authenticate?: AuthenticateFn
  transform?: Transform
  okToAdvertise?: SharePolicy
  okToSync?: SharePolicy
}
okToAdvertise and okToSync

These are both SharePolicy functions. They have the same signature as the existing sharePolicy config option -- they take a peer ID and optionally a document ID, and return a promise of true or false; but they explicitly separate two questions that the current sharePolicy muddles:

  • okToAdvertise Should we tell this peer about the existence of this document?
  • okToSync Should we provide this document & changes to it if requested?
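
Taken together, the descriptions above suggest a signature along these lines. This is a sketch only; the actual type names and branded string types in automerge-repo may differ:

```typescript
// Sketch of the SharePolicy shape described above. PeerId and DocumentId
// are stand-ins for the branded string types automerge-repo uses.
type PeerId = string
type DocumentId = string

type SharePolicy = (
  peerId: PeerId,
  documentId?: DocumentId
) => Promise<boolean>

// Example: only advertise/sync with a designated sync server
const onlySyncServer: SharePolicy = async (peerId, _documentId) =>
  peerId === "syncserver.foo.com"
```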
authenticate

This is a function that takes a PeerId and a channel with which to (optionally) communicate with that peer. It returns a promise of an AuthenticationResult object indicating whether authentication succeeded, and, if not, why.
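
As a sketch, assuming the channel exposes a minimal send/on interface (the result type name is taken from the snippets later in this proposal, but the exact shapes here are guesses):

```typescript
// Sketch of the authenticate hook; AuthChannel is a guessed minimal interface.
type PeerId = string

interface AuthChannel {
  send(msg: Uint8Array): void
  on(event: "message", handler: (msg: Uint8Array) => void): void
}

type AuthenticationResult =
  | { isValid: true }
  | { isValid: false; error: Error }

type AuthenticateFn = (
  peerId: PeerId,
  channel: AuthChannel
) => Promise<AuthenticationResult>

// Trivial example: accept every peer without using the channel
const acceptAll: AuthenticateFn = async (_peerId, _channel) => ({ isValid: true })
```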

transform

A Transform consists of two functions, for transforming inbound and outbound messages, respectively. This might be used, for example, to encrypt messages using a shared secret.
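
In type terms, a Transform might be shaped roughly like this (the payload shape here is an assumption based on the encryption example later in this proposal):

```typescript
// Sketch of the Transform shape; the payload type here is an assumption.
interface MessagePayload {
  message: Uint8Array
}

interface Transform {
  inbound: (payload: MessagePayload) => MessagePayload
  outbound: (payload: MessagePayload) => MessagePayload
}

// Identity transform: passes messages through unchanged in both directions
const identityTransform: Transform = {
  inbound: payload => payload,
  outbound: payload => payload,
}
```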

Example: Maximally restrictive provider

const restrictive = new AuthProvider({
  authenticate: async () => {
    return {
      isValid: false,
      error: new Error("nope"),
    }
  },
  okToAdvertise: async () => false,
  okToSync: async () => false,
})

Example: Restricting access by peer

// only advertise new docs to the sync server
const authProvider = new AuthProvider({
  okToAdvertise: async (peerId, _documentId) => peerId === "syncserver.foo.com",
})

Example: Restricting access by document

const handle = collection.create()
const authProvider = new AuthProvider({
  okToAdvertise: async (_peerId, documentId) => documentId !== handle.documentId,
})

Example: Restricting access by peer and document

class NotForCharlieAuthProvider extends AuthProvider {
  excludedDocs: DocumentId[] = []

  // make sure that charlie never learns about excluded documents
  notForCharlie: SharePolicy = async (peerId, documentId) => {
    if (this.excludedDocs.includes(documentId!) && peerId === "charlie")
      return false
    return true
  }

  okToAdvertise = this.notForCharlie
  okToSync = this.notForCharlie
}

const authProvider = new NotForCharlieAuthProvider()

const notForCharlieHandle = aliceRepo.create<TestDoc>()
const notForCharlie = notForCharlieHandle.documentId
authProvider.excludedDocs.push(notForCharlie)

Extending AuthProvider

In the above examples, the provider is customized by passing a config object to the base class's constructor. Alternatively, a custom provider might extend the base class. Here's the maximally restrictive provider we saw earlier:

class RestrictiveAuthProvider extends AuthProvider {
  authenticate = async () => {
    return {
      isValid: false,
      error: new Error("nope"),
    }
  }
  okToAdvertise = async () => false
  okToSync = async () => false
}
const restrictive = new RestrictiveAuthProvider()

Example: Using network communication

We'll make a (very insecure) password auth provider that sends a password challenge, and compares the password returned to a hard-coded password list.

class PasswordAuthProvider extends AuthProvider {
  // The auth provider is initialized with a password response, which it will provide when
  // challenged.
  constructor(private passwordResponse: string) {
    super()
  }

  #challenge = "what is the password?"

  #passwords: Record<string, string> = {
    alice: "abracadabra",
    bob: "bucaramanga",
  }

  authenticate: AuthenticateFn = async (peerId, channel) => {
    return new Promise<AuthenticationResult>(resolve => {
      // send challenge
      channel.send(new TextEncoder().encode(this.#challenge))

      channel.on("message", msg => {
        const msgText = new TextDecoder().decode(msg)
        switch (msgText) {
          case this.#challenge:
            // received challenge, send password
            channel.send(new TextEncoder().encode(this.passwordResponse))
            break

          case this.#passwords[peerId]:
            // received correct password
            resolve(AUTHENTICATION_VALID)
            break

          default:
            // received incorrect password
            resolve(authenticationError("that is not the password"))
            break
        }
      })
    })
  }
}

Example: Transforming network traffic

The idea here is that rather than authenticate peers, the auth provider encrypts and decrypts messages using a secret key that each peer knows. No keys are revealed, but the peers can only communicate if they know the secret key.

function encryptingAuthProvider(secretKey: string) {
  return new AuthProvider({
    transform: {
      inbound: payload => {
        const decrypted = decrypt(payload.message, secretKey)
        return { ...payload, message: decrypted }
      },
      outbound: payload => {
        const encrypted = encrypt(payload.message, secretKey)
        return { ...payload, message: encrypted }
      },
    },
  })
}

@HerbCaudill HerbCaudill changed the title add error event to network adapter Auth provider interface Nov 1, 2023
@HerbCaudill HerbCaudill changed the title Auth provider interface Proposed Auth provider API Nov 1, 2023
@HerbCaudill HerbCaudill changed the title Proposed Auth provider API Proposed AuthProvider API Nov 1, 2023
@HerbCaudill HerbCaudill force-pushed the auth-provider-interface branch from 6ec3ab4 to f7f8e19 Compare November 1, 2023 17:11
@alexjg
Contributor

alexjg commented Nov 1, 2023

Thanks for writing this up in such detail! I think it makes sense to introduce a richer API than sharePolicy but I'm worried about mixing concerns with things that feel more NetworkAdapter-ey to me. Specifically:

  • The API for authentication seems focused on protocols which run some kind of challenge/response over an existing channel. There are many situations where we already have auth information available by the time we receive a message on the NetworkAdapter (TLS certificates, SSH connections, HTTP Authorization headers in the upgrade request, Noise sockets, etc.). It seems to me like it would be hard to support such channels in this API without forcing the AuthChannel to be aware of how all the network adapters are implemented.
  • I think it's likely that we would also want to make authorization decisions on a per-network-adapter basis. One example would be a shared worker where we want to run reads and writes from the sync server through authorization, but we don't care whether connections from other tabs (i.e. on MessageChannels) are authorized.
  • The transform functionality feels quite separate from authentication/authorization and more like a feature of the network transport.

Authorization specifically is complicated (as always) and whilst there are authorization decisions which sometimes involve the transport a message arrived on, I think it's generally easier to understand a system if the authorization rules are expressed in a central place.

My instinct is that we should put the responsibility for authentication in the NetworkAdapter and assume that any message which has been received has been authenticated to the satisfaction of the network adapter it came from; authorization however would live in AuthProvider which would now just consist of okToAdvertise, okToSync. In order to make network adapter specific authorization decisions we would pass the network adapter a message came on to these functions as well as the peer ID and document ID. This has the nice property that composing NetworkAdapters in different ways doesn't affect the way auth operates (e.g. you can use a NetworkAdapter which doesn't support the kind of challenge/response protocol that AuthChannel would require).

I wonder how this design (which I present as a straw man, having put much less thought into it than you have put into this PR) sounds to you, and whether it conflicts with any design goals you have in mind (e.g. around implementing local-first auth)?

Another question I have is whether auth/authzn applies to ephemeral messages. Currently we have a little homegrown gossip protocol for forwarding ephemeral messages from e.g. a tab to a shared worker and then on to a relay sync server. Messages can be forwarded an arbitrary number of hops so it's quite likely that a peer would receive ephemeral messages for which they have no direct connection and no authentication information. Should such messages be forwarded?

@HerbCaudill HerbCaudill force-pushed the auth-provider-interface branch from f7f8e19 to ab521f5 Compare November 3, 2023 14:37
@HerbCaudill HerbCaudill changed the base branch from storage-fix-rapidfire-changes to refactor-repo November 3, 2023 15:59
@HerbCaudill
Collaborator Author

Thanks @alexjg . Let's talk next week. In the meantime, I've written this overview of how the LocalFirstAuthProvider currently works, for what it's worth:

Local-first auth provider for Automerge Repo

@HerbCaudill
Collaborator Author

HerbCaudill commented Nov 8, 2023

OK @alexjg and I discussed this offline and I think we've arrived at a simpler and more flexible solution.

It'd be great to have a solution that automerge-repo doesn't even have to know about, so that we can introduce authentication without actually changing the automerge-repo API.

Current design

In the design currently proposed by this PR, the repo does three things with the auth provider:

1. Wrap network adapters

Repo uses the provider to wrap the network adapters, turning them into authenticated network adapters that guarantee that the peer event gives you an authenticated peer:

const authenticatedAdapters = networkAdapters.map(a => this.auth.wrapNetworkAdapter(a))
this.networkSubsystem = new NetworkSubsystem(authenticatedAdapters, peerId)

2. Provide storage

Repo injects the storage subsystem into the auth provider, so that it can persist its own state:

this.auth.useStorage(this.storageSubsystem)

3. Define sharing policies

CollectionSynchronizer uses the provider to provide authorization in the form of the okToSync and okToAdvertise functions:

const okToAdvertise = await this.repo.auth.okToAdvertise(peerId, documentId)
// etc.

Alternative design

Here's how these could be achieved without any changes to automerge-repo itself:

1. Wrap network adapters

The application can wrap the network adapters however it wants before it gives them to the Repo constructor:
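
As a sketch with stand-in types (the real adapter classes and the provider's wrap method are assumed from the rest of this comment):

```typescript
// Sketch only: stand-in types for the adapters and provider described above.
interface NetworkAdapter {
  send(msg: Uint8Array): void
}

class SketchAuthProvider {
  // wrap() would return an adapter that only passes traffic for
  // authenticated peers; here we just tag and forward to keep it short.
  wrap(adapter: NetworkAdapter): NetworkAdapter & { authenticated: boolean } {
    return { authenticated: true, send: msg => adapter.send(msg) }
  }
}

const websocketAdapter: NetworkAdapter = { send: () => {} }
const broadcastAdapter: NetworkAdapter = { send: () => {} }

const provider = new SketchAuthProvider()
const network = [
  provider.wrap(websocketAdapter), // authenticate the internet-facing adapter...
  broadcastAdapter, // ...but not the local one
]
```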

A side benefit to this would be that the application could choose to authenticate some network adapters and not others: For example, a MessageChannel or BroadcastChannel used for communication between browser tabs and/or service workers wouldn't need to be authenticated.

2. Provide storage

The application can just take the same storage adapter that it gives to the Repo and give it to the auth provider.

In this case a provider would be using the storage adapter's slightly-lower-level API, and would be responsible for namespacing its keys.
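
For example, a provider might prefix all of its keys (the key-array shape and the save/load signatures here are assumptions about the storage adapter API):

```typescript
// Sketch: namespacing provider state within a shared storage adapter.
// The key-array shape and method signatures are assumptions.
type StorageKey = string[]

interface StorageAdapter {
  save(key: StorageKey, data: Uint8Array): Promise<void>
  load(key: StorageKey): Promise<Uint8Array | undefined>
}

class AuthProviderStorage {
  constructor(private storage: StorageAdapter) {}

  // Prefix every key so provider state can't collide with document storage
  #key = (key: string): StorageKey => ["AuthProvider", key]

  save = (key: string, data: Uint8Array) => this.storage.save(this.#key(key), data)
  load = (key: string) => this.storage.load(this.#key(key))
}
```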

3. Define sharing policies

For now we can just use the existing sharePolicy API. I do think we'll want to separate that into two distinct sharing policies, okToSync and okToAdvertise, but that can be a separate conversation. I don't need document-level permissions at this point, so I'm fine with kicking that can down the road a bit.

So instantiating automerge-repo with auth might look something like this:

const storage = new SomeStorageAdapter()

const authProvider = new LocalFirstAuthProvider({
  storage, // <- use the same storage adapter
  //...
})

const websocketAdapter = new BrowserWebSocketClientAdapter()
const broadcastAdapter = new BroadcastChannelNetworkAdapter()
const network = [
  authProvider.wrap(websocketAdapter), // <- wrap one but not the other
  broadcastAdapter,
]

const sharePolicy = authProvider.getSharePolicy() // <- use the provider's state to make sharing decisions

const repo = new Repo({
  network,
  storage,
  sharePolicy,
})

@acurrieclark
Collaborator

Does the share policy okToSync include a way to prevent a message from a peer from updating a local handle? Essentially an "ok to receive"?

@alexjg
Contributor

alexjg commented Nov 8, 2023

@acurrieclark I think we would need to add support for that to automerge first. Currently there's no way of calling receiveSyncMessage without accepting remote changes.

@acurrieclark
Collaborator

So effectively if you are not ok to receive from a peer, then you wouldn't be able to sync your changes with that peer anyway?

@HerbCaudill
Collaborator Author

HerbCaudill commented Nov 9, 2023

Yeah, the only way to really enforce write permissions would be to have the individual automerge changes signed or something. You can't do it on the basis of who you're syncing with, because they might be relaying someone else's changes.

@HerbCaudill
Collaborator Author

Closing as this will be superseded by work in progress

@davetayls

Hey @HerbCaudill, I'm looking for the result of what this was superseded by. Are you able to link to any more recent project? Thanks.

4 participants