
Added configure method to greenlight proto file, regenerated grpc files, and added python bindings #275

Merged · 1 commit into Blockstream:main on Nov 1, 2023

Conversation

Randy808 (Collaborator)

Added configure method to greenlight proto file, regenerated grpc files, and added python bindings.

Review threads (outdated, resolved):
  libs/gl-plugin/src/lib.rs
  libs/gl-plugin/src/node/mod.rs
cdecker enabled auto-merge (rebase) on September 29, 2023 12:57
cdecker (Collaborator) commented Oct 3, 2023

Somehow the tests seem to be hanging indefinitely. Could this be related to the configure method? @Randy808 can you take a look at this?

Randy808 (Collaborator, Author) commented Oct 3, 2023

When running pytest -s -vvv /repo/libs/gl-testing/tests/test_gl_node.py::test_configure_close_to_addr, I'm getting this locally now:

[2023-10-03T14:19:52Z ERROR lightning_signer::util::status] BACKTRACE:
       0: lightning_signer::policy::error::policy_error
                 at /home/randy/.cargo/git/checkouts/vls-238adeeb7f2df4b9/b8d42d6/vls-core/src/policy/error.rs:135:13
       1: lightning_signer::policy::policy_error_with_filter
                 at /home/randy/.cargo/git/checkouts/vls-238adeeb7f2df4b9/b8d42d6/vls-core/src/policy/mod.rs:81:13
       2: <lightning_signer::policy::simple_validator::SimplePolicy as lightning_signer::policy::Policy>::policy_error
                 at /home/randy/.cargo/git/checkouts/vls-238adeeb7f2df4b9/b8d42d6/vls-core/src/policy/simple_validator.rs:131:9
       3: <lightning_signer::policy::simple_validator::SimpleValidator as lightning_signer::policy::validator::Validator>::validate_ready_channel
                 at /home/randy/.cargo/git/checkouts/vls-238adeeb7f2df4b9/b8d42d6/vls-core/src/policy/simple_validator.rs:430:17
       4: lightning_signer::node::Node::ready_channel
                 at /home/randy/.cargo/git/checkouts/vls-238adeeb7f2df4b9/b8d42d6/vls-core/src/node.rs:1737:9
       5: <vls_protocol_signer::handler::ChannelHandler as vls_protocol_signer::handler::Handler>::do_handle
                 at /home/randy/.cargo/git/checkouts/vls-238adeeb7f2df4b9/b8d42d6/vls-protocol-signer/src/handler.rs:956:17
       6: vls_protocol_signer::handler::Handler::handle
                 at /home/randy/.cargo/git/checkouts/vls-238adeeb7f2df4b9/b8d42d6/vls-protocol-signer/src/handler.rs:121:22
       7: gl_client::signer::Signer::process_request::{{closure}}
                 at libs/gl-client/src/signer/mod.rs:396:17
       8: gl_client::signer::Signer::run_once::{{closure}}
                 at libs/gl-client/src/signer/mod.rs:281:44
       9: gl_client::signer::Signer::run_forever_with_uri::{{closure}}
                 at libs/gl-client/src/signer/mod.rs:580:29
      10: gl_client::signer::Signer::run_forever::{{closure}}
                 at libs/gl-client/src/signer/mod.rs:505:67
      11: glclient::signer::Signer::run_in_thread::{{closure}}::{{closure}}
                 at libs/gl-client-py/src/signer.rs:49:87
      12: <core::pin::Pin<P> as core::future::future::Future>::poll
                 at /rustc/90c541806f23a127002de5b4038be731ba1458ca/library/core/src/future/future.rs:125:9
      13: tokio::runtime::scheduler::current_thread::CoreGuard::block_on::{{closure}}::{{closure}}::{{closure}}
                 at /home/randy/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.32.0/src/runtime/scheduler/current_thread/mod.rs:665:57
      14: tokio::runtime::coop::with_budget
                 at /home/randy/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.32.0/src/runtime/coop.rs:107:5
          tokio::runtime::coop::budget
                 at /home/randy/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.32.0/src/runtime/coop.rs:73:5
          tokio::runtime::scheduler::current_thread::CoreGuard::block_on::{{closure}}::{{closure}}
                 at /home/randy/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.32.0/src/runtime/scheduler/current_thread/mod.rs:665:25
      15: tokio::runtime::scheduler::current_thread::Context::enter
                 at /home/randy/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.32.0/src/runtime/scheduler/current_thread/mod.rs:410:19
      16: tokio::runtime::scheduler::current_thread::CoreGuard::block_on::{{closure}}
                 at /home/randy/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.32.0/src/runtime/scheduler/current_thread/mod.rs:664:36
      17: tokio::runtime::scheduler::current_thread::CoreGuard::enter::{{closure}}
                 at /home/randy/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.32.0/src/runtime/scheduler/current_thread/mod.rs:743:68
      18: tokio::runtime::context::scoped::Scoped<T>::set
                 at /home/randy/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.32.0/src/runtime/context/scoped.rs:40:9
      19: tokio::runtime::context::set_scheduler::{{closure}}
                 at /home/randy/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.32.0/src/runtime/context.rs:176:26
      20: std::thread::local::LocalKey<T>::try_with
                 at /rustc/90c541806f23a127002de5b4038be731ba1458ca/library/std/src/thread/local.rs:252:16
      21: std::thread::local::LocalKey<T>::with
                 at /rustc/90c541806f23a127002de5b4038be731ba1458ca/library/std/src/thread/local.rs:228:9
      22: tokio::runtime::context::set_scheduler
                 at /home/randy/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.32.0/src/runtime/context.rs:176:9
      23: tokio::runtime::scheduler::current_thread::CoreGuard::enter
                 at /home/randy/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.32.0/src/runtime/scheduler/current_thread/mod.rs:743:27
      24: tokio::runtime::scheduler::current_thread::CoreGuard::block_on
                 at /home/randy/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.32.0/src/runtime/scheduler/current_thread/mod.rs:652:19
      25: tokio::runtime::scheduler::current_thread::CurrentThread::block_on::{{closure}}
                 at /home/randy/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.32.0/src/runtime/scheduler/current_thread/mod.rs:175:28
      26: tokio::runtime::context::runtime::enter_runtime
                 at /home/randy/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.32.0/src/runtime/context/runtime.rs:65:16
      27: tokio::runtime::scheduler::current_thread::CurrentThread::block_on
                 at /home/randy/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.32.0/src/runtime/scheduler/current_thread/mod.rs:167:9
      28: tokio::runtime::runtime::Runtime::block_on
                 at /home/randy/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.32.0/src/runtime/runtime.rs:347:47
      29: glclient::signer::Signer::run_in_thread::{{closure}}
                 at libs/gl-client-py/src/signer.rs:49:36
      30: std::sys_common::backtrace::__rust_begin_short_backtrace
                 at /rustc/90c541806f23a127002de5b4038be731ba1458ca/library/std/src/sys_common/backtrace.rs:134:18
      31: std::thread::Builder::spawn_unchecked_::{{closure}}::{{closure}}
                 at /rustc/90c541806f23a127002de5b4038be731ba1458ca/library/std/src/thread/mod.rs:526:17
      32: <core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
                 at /rustc/90c541806f23a127002de5b4038be731ba1458ca/library/core/src/panic/unwind_safe.rs:271:9
      33: std::panicking::try::do_call
                 at /rustc/90c541806f23a127002de5b4038be731ba1458ca/library/std/src/panicking.rs:485:40
      34: __rust_try
      35: std::panicking::try
                 at /rustc/90c541806f23a127002de5b4038be731ba1458ca/library/std/src/panicking.rs:449:19
      36: std::panic::catch_unwind
                 at /rustc/90c541806f23a127002de5b4038be731ba1458ca/library/std/src/panic.rs:140:14
      37: std::thread::Builder::spawn_unchecked_::{{closure}}
                 at /rustc/90c541806f23a127002de5b4038be731ba1458ca/library/std/src/thread/mod.rs:525:30
      38: core::ops::function::FnOnce::call_once{{vtable.shim}}
                 at /rustc/90c541806f23a127002de5b4038be731ba1458ca/library/core/src/ops/function.rs:250:5
      39: <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once
                 at /rustc/90c541806f23a127002de5b4038be731ba1458ca/library/alloc/src/boxed.rs:1973:9
          <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once
                 at /rustc/90c541806f23a127002de5b4038be731ba1458ca/library/alloc/src/boxed.rs:1973:9
          std::sys::unix::thread::Thread::new::thread_start
                 at /rustc/90c541806f23a127002de5b4038be731ba1458ca/library/std/src/sys/unix/thread.rs:108:17
      40: start_thread
      41: clone
    
[2023-10-03T14:19:52Z ERROR vls_protocol_signer::handler] Signing(Status { code: FailedPrecondition, message: "policy failure: validate_ready_channel: holder_shutdown_script is not in wallet or allowlist" })
lightningd-1 2023-10-03T14:19:53.895Z DEBUG   0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-hsmd: Got WIRE_HSMD_CUPDATE_SIG_REQ
lightningd-1 2023-10-03T14:19:53.897Z DEBUG   hsmd: Client: Received message 3 from client
lightningd-2 2023-10-03T14:19:53.907Z DEBUG   022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-hsmd: Got WIRE_HSMD_CUPDATE_SIG_REQ
lightningd-2 2023-10-03T14:19:53.908Z DEBUG   hsmd: Client: Received message 3 from client
stdout b'DEBUG   lightningd: Printing\n'
stdout b'DEBUG   lightningd: Plugin gl-plugin returned from openchannel hook call\n'
stdout b'DEBUG   plugin-gl-plugin: ListdatastoreRequest response: Ok(ListdatastoreResponse { datastore: [ListdatastoreDatastore { key: [\\"glconf\\", \\"channel\\", \\"close_to_addr\\"], generation: Some(0), hex: Some(\\"62637274317166726378306b687537306d32653636397a656c616e77347461356e616b76743265656e737832\\"), string: Some(\\"bcrt1qfrcx0khu70m2e669zelanw4ta5nakvt2eensx2\\") }] })\n'
stdout b'DEBUG   022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-openingd-chan#1: peer_out WIRE_ACCEPT_CHANNEL\n'
stdout b'DEBUG   plugin-gl-plugin: Received request from hsmproxy: HsmRequest { request_id: 9, context: Some(HsmRequestContext { node_id: [2, 45, 34, 54, 32, 163, 89, 164, 127, 247, 247, 172, 68, 124, 133, 196, 108, 146, 61, 165, 51, 137, 34, 26, 0, 84, 193, 28, 30, 60, 163, 29, 89], dbid: 1, capabilities: 24 }), raw: [0, 31, 0, 0, 0, 0, 0, 0, 30, 118, 75, 0, 0, 0, 0, 0, 0, 0, 0, 32, 6, 25, 112, 15, 206, 63, 124, 156, 160, 92, 1, 161, 143, 112, 101, 104, 219, 136, 65, 68, 51, 51, 226, 253, 237, 50, 27, 18, 249, 58, 71, 0, 0, 0, 6, 0, 22, 0, 20, 72, 240, 103, 218, 252, 243, 246, 172, 235, 69, 22, 127, 217, 186, 171, 237, 39, 219, 49, 106, 0, 3, 82, 55, 104, 112, 40, 100, 96, 207, 123, 81, 155, 2, 225, 52, 45, 227, 20, 166, 60, 199, 128, 254, 119, 6, 206, 208, 80, 169, 160, 27, 122, 135, 3, 59, 250, 31, 182, 50, 60, 177, 155, 91, 200, 122, 253, 132, 205, 111, 192, 110, 210, 72, 237, 188, 195, 152, 205, 93, 32, 19, 48, 63, 64, 120, 125, 2, 199, 90, 146, 78, 33, 254, 95, 218, 15, 83, 176, 70, 217, 97, 223, 131, 246, 174, 240, 4, 16, 255, 67, 1, 143, 73, 106, 195, 42, 122, 189, 178, 3, 14, 113, 174, 193, 37, 64, 180, 133, 34, 69, 253, 247, 180, 28, 114, 231, 206, 58, 111, 48, 244, 66, 100, 243, 7, 245, 42, 241, 225, 85, 165, 173, 2, 199, 235, 228, 251, 255, 179, 155, 250, 191, 56, 103, 99, 122, 194, 175, 183, 248, 217, 221, 18, 243, 94, 70, 202, 28, 80, 74, 168, 58, 9, 192, 147, 0, 5, 0, 0, 0, 2, 16, 0], signer_state: [], requests: [] }\n'
stdout b'DEBUG   plugin-gl-plugin: Sending signer requests with 0 requests and 5 state entries\n'
stdout b'DEBUG   022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-openingd-chan#1: billboard: Incoming channel: accepted, now waiting for them to create funding tx\n'
stdout b'DEBUG   022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-openingd-chan#1: peer_in WIRE_FUNDING_CREATED\n'
stdout b'DEBUG   022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-gossipd: seeker: startup peer finished\n'

I'll try rebasing.

Randy808 (Collaborator, Author) commented Oct 3, 2023

The removal of the permissive policy filter broke it (8277142). I think I can fix the "policy failure: validate_ready_channel: holder_shutdown_script is not in wallet or allowlist" error by adding the holder_shutdown_script (added by the configure method) to the signer's policy.

Using https://gitlab.com/lightning-signer/validating-lightning-signer/-/blob/main/vls-core/src/setup_channel_tests.rs#L228 for reference.
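
For context, a minimal sketch of what that could look like against vls-core, assuming the Node::add_allowlist helper those tests use (the exact signature should be checked against the vls-core revision pinned here):

    // Hedged sketch: teach the signer about the configured close-to address so
    // that validate_ready_channel's allowlist check passes. `node` is the
    // lightning_signer::node::Node instance held by the signer; the address is
    // the close_to_addr from the ListdatastoreResponse in the log above.
    let close_to_addr = String::from("bcrt1qfrcx0khu70m2e669zelanw4ta5nakvt2eensx2");
    node.add_allowlist(&vec![close_to_addr])
        .expect("adding configured close_to address to the allowlist");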

cdecker (Collaborator) commented Oct 4, 2023

Yeah, that's a bit of a tricky one, but I'm glad we are actually triggering a policy violation by using an external scriptpubkey. We probably want to have the client sign off on the configure step and pass that down to the signer at startup too. That way we can include the configured variables in the signer's processing (by adding the scriptpubkey to the allowlist at signer startup, making the node accept it).

In the meantime, can you try adding an exemption to the FilterRule slice for now (https://github.com/Blockstream/greenlight/pull/283/files#diff-8d5abc718d7a05d6432918246fcf5f9919681ca385b057f72a2117e9878e1932R106-R109)? We can strengthen that validation once the configure method is authenticated by a user signature.
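
A hedged sketch of such an exemption, assuming the PolicyFilter/FilterRule types from lightning_signer::policy::filter and their new_warn constructor (names should be verified against the vls-core revision gl-client pins):

    // Demote the violated policy from a hard error to a warning until the
    // configure flow is authenticated. new_warn is assumed to exist in the
    // pinned vls-core; check the actual API before relying on this.
    use lightning_signer::policy::filter::{FilterRule, PolicyFilter};

    let filter = PolicyFilter {
        rules: vec![FilterRule::new_warn("policy-mutual-destination-allowlisted")],
    };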

Randy808 (Collaborator, Author)

I added policy-mutual-destination-allowlisted to the policy filter, but the test test_vls_crash_repro is now failing for me. It also fails when I switch the branch to main, rebuild, and re-run the tests. Is there something I have to do locally to get it to work?

cdecker (Collaborator) commented Oct 16, 2023

I think we probably need to take a bit of a step back and see how we can convince the signer that it's allowed to close to a given address. Generally speaking the configuration must be settable only by authorized clients. This is very similar to how access control is already done for the RPC, so we could just reuse that technique:

  • The client has a client identity (and associated keypair) that is used to sign off on RPC payloads.
  • The plugin reads and caches the RPC payloads and their signatures in case it needs to forward them to the signer.
  • Instead, the configure RPC can take the serialized payload, and store it in the datastore where it'll be persisted across sessions.
  • We then load the signed configure call from the datastore at startup into memory.
  • Whenever there is a close to be processed, we inject the configure payload into the request context that is passed to the signer.
  • When processing the request context on the signer, we use the injected configure call to configure the signer on the fly, so it knows it should, for example, accept the configured close_to parameter.

As a further simplification, we can also just always prepend the configure payload to the signer context and make it dynamic later, which could save us a couple of bytes per signer request.
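
A rough sketch of that simplification, with a hypothetical helper name (the real plugin code would draw the stashed payload from its datastore handling):

    // Hypothetical sketch: always prepend the stashed, signed configure
    // request (loaded from the datastore at startup) to the signer context.
    fn build_signer_context(
        stashed_configure: Option<crate::context::Request>,
        ctx_requests: Vec<crate::context::Request>,
    ) -> Vec<crate::context::Request> {
        let mut requests = Vec::new();
        if let Some(conf) = stashed_configure {
            // The signer sees the configuration before the triggering request.
            requests.push(conf);
        }
        requests.extend(ctx_requests);
        requests
    }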

Links

Here are the locations where we already handle RPC payloads and their signatures, to show where these customizations are needed:

RPC payload serialization and attestation via client identity signature:

fn call(&mut self, request: Request<BoxBody>) -> Self::Future {
    use base64::Engine;
    let engine = base64::engine::general_purpose::STANDARD_NO_PAD;
    // This is necessary because tonic internally uses `tower::buffer::Buffer`.
    // See https://github.com/tower-rs/tower/issues/547#issuecomment-767629149
    // for details on why this is necessary
    let clone = self.inner.clone();
    let mut inner = std::mem::replace(&mut self.inner, clone);
    let keypair = EcdsaKeyPair::from_pkcs8(
        &signature::ECDSA_P256_SHA256_FIXED_SIGNING,
        self.key.as_ref(),
    )
    .unwrap();
    Box::pin(async move {
        use bytes::BufMut;
        use std::convert::TryInto;
        use tonic::codegen::Body;
        // Returns UTC on all platforms, no need to handle
        // timezones.
        let time = std::time::SystemTime::now()
            .duration_since(std::time::SystemTime::UNIX_EPOCH)?
            .as_millis();
        let (mut parts, mut body) = request.into_parts();
        let data = body.data().await.unwrap().unwrap();
        // Copy used to create the signature (payload + associated data)
        let mut buf = data.to_vec();
        // Associated data that is covered by the signature
        let mut ts = vec![];
        ts.put_u64(time.try_into()?);
        buf.put_u64(time.try_into()?);
        let rng = rand::SystemRandom::new();
        let pubkey = keypair.public_key().as_ref();
        let sig = keypair.sign(&rng, &buf).unwrap();
        // We use base64 encoding simply because it is
        // slightly more compact and we already have it as
        // a dependency from rustls. Sizes are as follows:
        //
        // - Pubkey: raw=65, hex=130, base64=88
        // - Signature: raw=64, hex=128, base64=88
        //
        // For an overall saving of 82 bytes per request,
        // and a total overhead of 199 bytes per request.
        parts
            .headers
            .insert("glauthpubkey", engine.encode(&pubkey).parse().unwrap());
        parts
            .headers
            .insert("glauthsig", engine.encode(sig).parse().unwrap());
        parts
            .headers
            .insert("glts", engine.encode(ts).parse().unwrap());
        trace!("Payload size: {} (timestamp {})", data.len(), time);
        let body = crate::node::stasher::StashBody::new(data).into();
        let request = Request::from_parts(parts, body);
        debug!("Sending request {:?}", request);
        let response = inner.call(request).await?;
        Ok(response)
    })
}
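
For reference, the verifying side can check such headers with ring's fixed ECDSA verifier; a minimal sketch, assuming the same payload-plus-big-endian-timestamp construction that is signed above:

    // Sketch: verify glauthsig over payload || timestamp, mirroring the
    // signing side. Inputs are the already base64-decoded header values.
    use ring::signature::{UnparsedPublicKey, ECDSA_P256_SHA256_FIXED};

    fn verify_request(pubkey: &[u8], payload: &[u8], ts_millis: u64, sig: &[u8]) -> bool {
        use bytes::BufMut;
        let mut buf = payload.to_vec();
        buf.put_u64(ts_millis); // associated data covered by the signature
        UnparsedPublicKey::new(&ECDSA_P256_SHA256_FIXED, pubkey)
            .verify(&buf, sig)
            .is_ok()
    }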

Middleware layer collecting the signature context:

pub struct SignatureContextLayer {
    ctx: crate::context::Context,
}

impl SignatureContextLayer {
    pub fn new(context: crate::context::Context) -> Self {
        SignatureContextLayer { ctx: context }
    }
}

impl<S> Layer<S> for SignatureContextLayer {
    type Service = SignatureContextService<S>;

    fn layer(&self, service: S) -> Self::Service {
        SignatureContextService {
            inner: service,
            ctx: self.ctx.clone(),
        }
    }
}
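
For illustration, a layer like this is typically attached with tower's ServiceBuilder (a sketch; grpc_service stands in for however gl-plugin actually builds its stack):

    // Hedged sketch: stack the signature-collecting layer onto the service.
    use tower::ServiceBuilder;

    let service = ServiceBuilder::new()
        .layer(SignatureContextLayer::new(ctx.clone()))
        .service(grpc_service);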

Signature context tracking:

fn call(&mut self, request: hyper::Request<hyper::Body>) -> Self::Future {
    // This is necessary because tonic internally uses `tower::buffer::Buffer`.
    // See https://github.com/tower-rs/tower/issues/547#issuecomment-767629149
    // for details on why this is necessary
    let clone = self.inner.clone();
    let mut inner = std::mem::replace(&mut self.inner, clone);
    let reqctx = self.ctx.clone();
    Box::pin(async move {
        use tonic::codegen::Body;
        let _ = RPC_BCAST.clone().send(super::Event::RpcCall);
        let (parts, mut body) = request.into_parts();
        let uri = parts.uri.path_and_query().unwrap();
        let pubkey = parts
            .headers
            .get("glauthpubkey")
            .map(|k| general_purpose::STANDARD_NO_PAD.decode(k).ok())
            .flatten();
        let sig = parts
            .headers
            .get("glauthsig")
            .map(|s| general_purpose::STANDARD_NO_PAD.decode(s).ok())
            .flatten();
        use bytes::Buf;
        let timestamp: Option<u64> = parts
            .headers
            .get("glts")
            .map(|s| general_purpose::STANDARD_NO_PAD.decode(s).ok())
            .flatten()
            .map(|s| s.as_slice().get_u64());
        if let (Some(pk), Some(sig)) = (pubkey, sig) {
            // Now that we know we'll be adding this to the
            // context we can start buffering the request.
            let mut buf = Vec::new();
            while let Some(chunk) = body.data().await {
                let chunk = chunk.unwrap();
                // We check on the MAX_MESSAGE_SIZE to avoid an unlimited sized
                // message buffer
                if buf.len() + chunk.len() > MAX_MESSAGE_SIZE {
                    debug!("Message {} exceeds size limit", uri.path().to_string());
                    return Ok(tonic::Status::new(
                        tonic::Code::InvalidArgument,
                        format!("payload too large"),
                    )
                    .to_http());
                }
                buf.put(chunk);
            }
            trace!(
                "Got a request for {} with pubkey={}, sig={} and body size={:?}",
                uri,
                hex::encode(&pk),
                hex::encode(&sig),
                &buf.len(),
            );
            let req = crate::context::Request::new(
                uri.to_string(),
                <bytes::Bytes>::from(buf.clone()),
                pk,
                sig,
                timestamp,
            );
            reqctx.add_request(req.clone()).await;
            let body: hyper::Body = buf.into();
            let request = hyper::Request::from_parts(parts, body);
            let res = inner.call(request).await;
            // Defer cleanup into a separate task, otherwise we'd
            // need `res` to be `Send` which we can't
            // guarantee. This is needed since adding an await
            // point splits the state machine at that point.
            tokio::spawn(async move {
                reqctx.remove_request(req).await;
            });
            res.map_err(Into::into)
        } else {
            // No point in buffering the request, we're not going
            // to add it to the `HsmRequestContext`
            let request = hyper::Request::from_parts(parts, body);
            inner.call(request).await.map_err(Into::into)
        }
    })
}

Signature context being added to the signature request:

req.request.requests = ctx.snapshot().await.into_iter().map(|r| r.into()).collect();

Where the signer resolves changes to the command that caused it:

Resolver::try_resolve(msg, &reqs)?;

We likely want to do the following:

  • On receiving the configure call, retrieve the payload and signature from the request context (do not re-serialize the payload; it may not match byte-for-byte, which would break the signature).
  • Store it in the datastore and in the plugin state.
  • Add it as a virtual RPC call to the signer context (on the fly, since the configuration may change without restarting).
  • Add a loop over the signer context in the signer, before the try_resolve call in the last snippet, that configures the signer based on the stashed configure call (sketched after the next paragraph).

Since the payload is already verified before we reach the try_resolve call, we don't have to implement that verification again, and configuring before acting on the request means that by the time VLS looks at the allowlist, it will already have been extended with the close_to address.
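
Concretely, that pre-resolve loop might look something like this (hypothetical apply_configure helper and URI string; only the ordering relative to Resolver::try_resolve is the point):

    // Hypothetical sketch: replay any stashed configure calls from the signer
    // context before resolving, so the allowlist is updated before validation.
    for req in &reqs {
        if req.uri == "/greenlight.Node/Configure" {
            apply_configure(&node, req)?; // hypothetical: adds close_to to allowlist
        }
    }
    Resolver::try_resolve(msg, &reqs)?;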

It's quite a few moving pieces, but it gives a very nice insight into how E2E verification works from start to finish :-)

Randy808 force-pushed the add-configure-rpc branch 3 times, most recently from 795467f to dfcf1a1 on October 31, 2023 21:17
Added configure method to greenlight proto file, regenerated grpc files, and added python bindings.

Added functionality to allow the last configure request to be sent to the signer on every signing request.
cdecker merged commit 5a670a8 into Blockstream:main on Nov 1, 2023
10 checks passed