
trie: reduce allocations in stacktrie #30743

Merged: 11 commits merged into ethereum:master on Jan 23, 2025

Conversation

@holiman (Contributor) commented Nov 11, 2024

This PR uses various tweaks and tricks to make the stacktrie nearly alloc-free.

[user@work go-ethereum]$ benchstat stacktrie.1 stacktrie.7
goos: linux
goarch: amd64
pkg: github.com/ethereum/go-ethereum/trie
cpu: 12th Gen Intel(R) Core(TM) i7-1270P
             │ stacktrie.1  │             stacktrie.7              │
             │    sec/op    │    sec/op     vs base                │
Insert100K-8   106.97m ± 8%   88.21m ± 34%  -17.54% (p=0.000 n=10)

             │   stacktrie.1    │             stacktrie.7              │
             │       B/op       │     B/op      vs base                │
Insert100K-8   13199.608Ki ± 0%   3.424Ki ± 3%  -99.97% (p=0.000 n=10)

             │  stacktrie.1   │             stacktrie.7             │
             │   allocs/op    │ allocs/op   vs base                 │
Insert100K-8   553428.50 ± 0%   22.00 ± 5%  -100.00% (p=0.000 n=10)

Also improves derivesha:

goos: linux
goarch: amd64
pkg: github.com/ethereum/go-ethereum/core/types
cpu: 12th Gen Intel(R) Core(TM) i7-1270P
                          │ derivesha.1 │             derivesha.2              │
                          │   sec/op    │    sec/op     vs base                │
DeriveSha200/stack_trie-8   477.8µ ± 2%   430.0µ ± 12%  -10.00% (p=0.000 n=10)

                          │ derivesha.1  │             derivesha.2              │
                          │     B/op     │     B/op      vs base                │
DeriveSha200/stack_trie-8   45.17Ki ± 0%   25.65Ki ± 0%  -43.21% (p=0.000 n=10)

                          │ derivesha.1 │            derivesha.2             │
                          │  allocs/op  │ allocs/op   vs base                │
DeriveSha200/stack_trie-8   1259.0 ± 0%   232.0 ± 0%  -81.57% (p=0.000 n=10)

@holiman (Contributor, Author) commented Nov 11, 2024

Update

goos: linux
goarch: amd64
pkg: github.com/ethereum/go-ethereum/trie
cpu: 12th Gen Intel(R) Core(TM) i7-1270P
             │ stacktrie.2  │              stacktrie.3              │
             │    sec/op    │    sec/op     vs base                 │
Insert100K-8   78.51m ± ∞ ¹   69.50m ± 12%  -11.47% (p=0.010 n=5+7)
¹ need >= 6 samples for confidence interval at level 0.95

             │  stacktrie.2  │              stacktrie.3              │
             │     B/op      │     B/op      vs base                 │
Insert100K-8   6.931Mi ± ∞ ¹   4.640Mi ± 0%  -33.06% (p=0.003 n=5+7)
¹ need >= 6 samples for confidence interval at level 0.95

             │ stacktrie.2  │         stacktrie.3          │
             │  allocs/op   │  allocs/op   vs base         │
Insert100K-8   326.7k ± ∞ ¹   226.7k ± 0%  -30.61% (n=5+7)
¹ need >= 6 samples for confidence interval at level 0.95

@holiman (Contributor, Author) commented Nov 11, 2024

New progress

goos: linux
goarch: amd64
pkg: github.com/ethereum/go-ethereum/trie
cpu: 12th Gen Intel(R) Core(TM) i7-1270P
             │ stacktrie.3  │          stacktrie.4          │
             │    sec/op    │    sec/op     vs base         │
Insert100K-8   69.50m ± 12%   74.59m ± 14%  ~ (p=0.128 n=7)

             │ stacktrie.3  │             stacktrie.4             │
             │     B/op     │     B/op      vs base               │
Insert100K-8   4.640Mi ± 0%   3.112Mi ± 0%  -32.93% (p=0.001 n=7)

             │ stacktrie.3 │            stacktrie.4             │
             │  allocs/op  │  allocs/op   vs base               │
Insert100K-8   226.7k ± 0%   126.7k ± 0%  -44.11% (p=0.001 n=7)

@holiman (Contributor, Author) commented Nov 11, 2024

Argh, only this little thing left: returning a slice to a pool somehow leaks something. I guess I somehow expose an underlying array, or at least make the compiler think so, hence it reallocates the slice when I return it to the pool.

Screenshot 2024-11-11 at 22-17-15 trie test alloc_space
Screenshot 2024-11-11 at 22-18-40 trie test alloc_space

@namiloh commented Nov 11, 2024

Ah wait, 24 B is the size of a slice header: a pointer and two ints. It is the slice being passed by value and escaping to the heap (correctly so). But how to avoid it??
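For context, a minimal stand-alone illustration (standard library only, unrelated to the PR code) of where those 24 bytes go: storing a []byte in a sync.Pool boxes the slice header into an interface{}, which allocates it on the heap.

import "sync"

var pool = sync.Pool{New: func() any { return []byte(nil) }}

func roundTrip(b []byte) []byte {
	// The []byte -> interface{} conversion in Put copies the 24-byte slice
	// header (pointer, length, capacity) to the heap: one small allocation
	// per call, even though the backing array itself is reused.
	pool.Put(b)
	return pool.Get().([]byte)
}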

@holiman (Contributor, Author) commented Nov 11, 2024

Solved!

[user@work go-ethereum]$ benchstat stacktrie.1 stacktrie.7
goos: linux
goarch: amd64
pkg: github.com/ethereum/go-ethereum/trie
cpu: 12th Gen Intel(R) Core(TM) i7-1270P
             │ stacktrie.1  │             stacktrie.7              │
             │    sec/op    │    sec/op     vs base                │
Insert100K-8   106.97m ± 8%   88.21m ± 34%  -17.54% (p=0.000 n=10)

             │   stacktrie.1    │             stacktrie.7              │
             │       B/op       │     B/op      vs base                │
Insert100K-8   13199.608Ki ± 0%   3.424Ki ± 3%  -99.97% (p=0.000 n=10)

             │  stacktrie.1   │             stacktrie.7             │
             │   allocs/op    │ allocs/op   vs base                 │
Insert100K-8   553428.50 ± 0%   22.00 ± 5%  -100.00% (p=0.000 n=10)

@holiman mentioned this pull request Nov 12, 2024
@holiman changed the title trie: [wip] reduce allocs in stacktrie trie: reduce allocactions in stacktrie Nov 12, 2024
@holiman changed the title trie: reduce allocactions in stacktrie trie: reduce allocations in stacktrie Nov 12, 2024
trie/stacktrie.go: two outdated review threads (resolved, not shown)
@holiman (Contributor, Author) commented Nov 20, 2024

Did a sync on bench06 (leaving ancients intact). It synced up in ~2h15m, so absolutely no problem there!

@rjl493456442 (Member) commented Nov 21, 2024

@holiman I think we should compare the relevant metrics (memory allocation, etc.) during snap sync to see the overall impact of this change.

@@ -40,6 +40,20 @@ func (n *fullNode) encode(w rlp.EncoderBuffer) {
w.ListEnd(offset)
}

func (n *fullnodeEncoder) encode(w rlp.EncoderBuffer) {
A reviewer (Member) commented:

Can you estimate the performance and allocation differences between using this encoder and the standard full node encoder?

The primary distinction seems to be that this encoder inlines the children encoding directly, rather than invoking the children encoder recursively. I’m curious about the performance implications of this approach.

@holiman (Contributor, Author) replied:

Sure, so if I change it back (adding back rawNode, and then switching back the case branchNode: so that it looks like it did earlier), then the difference is:

[user@work go-ethereum]$ benchstat pr.bench pr.bench..2
goos: linux
goarch: amd64
pkg: github.com/ethereum/go-ethereum/trie
cpu: 12th Gen Intel(R) Core(TM) i7-1270P
             │   pr.bench   │          pr.bench..2           │
             │    sec/op    │    sec/op     vs base          │
Insert100K-8   88.67m ± 33%   88.20m ± 11%  ~ (p=0.579 n=10)

             │   pr.bench   │                pr.bench..2                 │
             │     B/op     │      B/op        vs base                   │
Insert100K-8   3.451Ki ± 3%   2977.631Ki ± 0%  +86178.83% (p=0.000 n=10)

             │  pr.bench  │                pr.bench..2                 │
             │ allocs/op  │   allocs/op     vs base                    │
Insert100K-8   22.00 ± 5%   126706.50 ± 0%  +575838.64% (p=0.000 n=10)

When we build the children struct, all the values are copied, as opposed to just using the encoder type, which uses the same child without triggering a copy.
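To make the distinction concrete, here is a rough sketch of the encoder-only type (the [17][]byte layout and the empty-child handling are assumptions for illustration, not necessarily the exact code in this PR): it borrows the already-encoded child bytes by reference, so building it copies nothing.

type fullnodeEncoder struct {
	Children [17][]byte // raw references to the children's encodings/hashes
}

func (n *fullnodeEncoder) encode(w rlp.EncoderBuffer) {
	offset := w.List()
	for _, c := range n.Children {
		if len(c) == 0 {
			w.Write(rlp.EmptyString) // empty child slot
		} else {
			w.WriteBytes(c) // written as-is, no copy into a node value
		}
	}
	w.ListEnd(offset)
}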

n := shortNode{Key: hexToCompactInPlace(st.key), Val: valueNode(st.val)}

n.encode(t.h.encbuf)
{
@rjl493456442 (Member) commented Nov 21, 2024:

We can also use shortNodeEncoder here?
I would recommend not doing the inline encoding directly.

@holiman (Contributor, Author) replied:

We could, but here we know we have a valueNode. And the valueNode just does:

func (n valueNode) encode(w rlp.EncoderBuffer) {
	w.WriteBytes(n)
}

If we change that to a method which checks the size and theoretically does different things, then the semantics become slightly changed.

trie/node.go (outdated diff):
// shortNodeEncoder is a type used exclusively for encoding. Briefly instantiating
// a shortNodeEncoder and initializing it with existing slices is less memory
// intensive than using the shortNode type.
shortNodeEncoder struct {
A reviewer (Member) commented:

qq, does shortNodeEncoder need to implement the node interface?

It's just an encoder which converts the node object to byte format?

A reviewer (Member) replied:

just have a try, it's technically possible

@holiman (Contributor, Author) replied:

Good thinking there!

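For reference, a sketch of what the encoder-only short node can look like (field names mirror shortNode; the exact merged code may differ): it only needs an encode method, borrowing the key and value slices without copying them.

type shortNodeEncoder struct {
	Key []byte // compact-encoded key, borrowed
	Val []byte // value or child reference, borrowed
}

func (n *shortNodeEncoder) encode(w rlp.EncoderBuffer) {
	offset := w.List()
	w.WriteBytes(n.Key)
	w.WriteBytes(n.Val)
	w.ListEnd(offset)
}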

@@ -27,6 +27,7 @@ import (

var (
stPool = sync.Pool{New: func() any { return new(stNode) }}
bPool = newBytesPool(32, 100)
A reviewer (Member) commented:

any particular reason to not use sync.Pool?

@holiman (Contributor, Author) replied:

Yes! You'd be surprised, but this was the whole problem I had with the extra alloc:

Screenshot 2024-11-11 at 22-18-40 trie test alloc_space

I solved it by not using a sync.Pool. I suspect it's due to the interface conversion, but I don't know the deeper details of why.

@rjl493456442 (Member) commented Jan 21, 2025:

We could try using a pool of slice pointers,

e.g.

// slicePool is a shared pool of hash slice, for reducing the GC pressure.
var slicePool = sync.Pool{
	New: func() interface{} {
		slice := make([]byte, 0, 32) // Pre-allocate a slice with a reasonable capacity.
		return &slice
	},
}

// getSlice obtains the hash slice from the shared pool.
func getSlice(n int) []byte {
	slice := *slicePool.Get().(*[]byte)
	if cap(slice) < n {
		slice = make([]byte, 0, n)
	}
	slice = slice[:n]
	return slice
}

// returnSlice returns the hash slice back to the shared pool for following usage.
func returnSlice(slice []byte) {
	slicePool.Put(&slice)
}

@rjl493456442 (Member) commented Jan 21, 2025:

Okay, it doesn't work

(base) ➜  go-ethereum git:(stacktrie_allocs_1) ✗ benchstat bench1.txt bench2.txt
goos: darwin
goarch: arm64
pkg: github.com/ethereum/go-ethereum/trie
             │  bench1.txt  │           bench2.txt            │
             │    sec/op    │    sec/op     vs base           │
Insert100K-8   28.41m ± ∞ ¹   29.94m ± ∞ ¹  ~ (p=1.000 n=1) ²
¹ need >= 6 samples for confidence interval at level 0.95
² need >= 4 samples to detect a difference at alpha level 0.05

             │  bench1.txt   │             bench2.txt              │
             │     B/op      │       B/op        vs base           │
Insert100K-8   3.276Ki ± ∞ ¹   2972.266Ki ± ∞ ¹  ~ (p=1.000 n=1) ²
¹ need >= 6 samples for confidence interval at level 0.95
² need >= 4 samples to detect a difference at alpha level 0.05

             │ bench1.txt  │             bench2.txt             │
             │  allocs/op  │    allocs/op     vs base           │
Insert100K-8   21.00 ± ∞ ¹   126692.00 ± ∞ ¹  ~ (p=1.000 n=1) ²

bench1: channel
bench2: slice pointer pool

@rjl493456442 (Member) commented Jan 22, 2025:

The issue with using the native sync.Pool is:

  • we do reuse the underlying byte slice, but
  • the slice descriptor (metadata, 24 bytes) must be passed around whenever pool.Get is called.

The key difference is: in the channel approach, the slice is passed as a reference, with no descriptor construction involved; in the sync.Pool approach, the slice is passed as a value, so a descriptor must be re-created for every Get.
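For readers following along, here is a minimal sketch of a channel-backed pool along these lines, consistent with the newBytesPool(32, 100) call and the Get excerpt quoted later on this page; the Put behaviour and exact field names are assumptions rather than the merged code.

type bytesPool struct {
	c chan []byte // idle slices wait here
	w int         // capacity of the slices handed out
}

func newBytesPool(sliceCap, nitems int) *bytesPool {
	return &bytesPool{
		c: make(chan []byte, nitems),
		w: sliceCap,
	}
}

// Get returns a pooled slice if one is available, otherwise allocates a new one.
func (bp *bytesPool) Get() []byte {
	select {
	case b := <-bp.c:
		return b
	default:
		return make([]byte, 0, bp.w)
	}
}

// Put hands a slice back to the pool; if the pool is already full, the slice
// is simply dropped and left to the garbage collector.
func (bp *bytesPool) Put(b []byte) {
	select {
	case bp.c <- b[:0]: // reset length so Get always hands out empty slices
	default:
	}
}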

@@ -129,6 +143,12 @@ const (
)

func (n *stNode) reset() *stNode {
if n.typ == hashedNode {
A reviewer (Member) commented:

What's the difference between hashedNode and an embedded tiny node?

For an embedded node, we also allocate the buffer from the bPool, and the buffer is owned by the node itself, right?

@holiman (Contributor, Author) replied:

Well, the thing is that there's only one place in the stacktrie where we convert a node into a hashedNode.

	st.typ = hashedNode
	st.key = st.key[:0]

	st.val = nil // Release reference to potentially externally held slice.

This is the only place. So we know that once something has been turned into a hashedNode, the val is never externally held and thus it can be returned to the pool. The big problem we had earlier is that we need to overwrite the val with a hash, but we are not allowed to mutate val. So this was a big cause of runaway allocs.

But now we reclaim those values-which-are-hashes since we know that they're "ours".
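Putting the two excerpts together, a hedged sketch of the reclamation path in reset(); the emptyNode constant, the bPool.Put call, and the omitted field resets are assumptions, not the exact merged code.

func (n *stNode) reset() *stNode {
	if n.typ == hashedNode {
		// st.val was overwritten with a hash written into a pool-allocated
		// buffer by the stacktrie itself (see the hash() excerpt above), so
		// nobody else can hold a reference to it and it is safe to recycle.
		bPool.Put(n.val)
	}
	n.key = n.key[:0]
	n.val = nil
	n.typ = emptyNode
	// ... the remaining stNode fields (children, etc.) are cleared as before.
	return n
}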

@holiman (Contributor, Author) replied:

For embedded node, we also allocate the buffer from the bPool and the buffer is owned by the node itself right?

To answer your question: yes. So from the perspective of slice-reuse, there is zero difference.

@holiman (Contributor, Author) added:

Btw, in my follow-up PR to reduce allocs in derivesha (#30747), I remove this trick again.

In that PR, I always copy the value as it enters the stacktrie. So we always own the val slice, and are free to reuse it via pooling.

Doing so is less hacky: we get rid of the special case "val is externally held unless it's a hashedNode, because then we own it".

That makes it possible for the derivesha method to reuse the input buffer.
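A purely hypothetical sketch of that copy-on-ingress idea (this is not the #30747 diff; bPool.Get is assumed to behave like the pool sketched earlier in this thread):

func (t *StackTrie) Update(key, value []byte) error {
	// Copy the caller's value into a pool-owned buffer on entry, so the
	// stacktrie always owns its val slices: the caller may reuse its slice
	// immediately, and every val can go back to the pool after hashing,
	// with no externally-held special case to reason about.
	value = append(bPool.Get(), value...)

	// ... the rest of Update proceeds unchanged, operating on the owned copy.
	return nil
}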

@MariusVanDerWijden (Member) commented:
lint is red after the latest commit

@holiman (Contributor, Author) commented Nov 25, 2024

Running snapsync benchmarks now, using partialwipe (ancients intact):

  • master on bench06
  • This PR on bench05

@holiman (Contributor, Author) commented Nov 26, 2024

So, here are some charts. The first third of the charts covers the snapsync; the latter two thirds are snap-gen + block-by-block sync. The yellow (this PR) finished slightly faster (13 minutes or so).

Screenshot 2024-11-26 at 12-18-25 Dual Geth - Grafana

This is despite the fact that 05 (this PR) struggled with higher iowait. Perhaps it was bottlenecked by the disk?

Screenshot 2024-11-26 at 12-19-55 Dual Geth - Grafana

The 05 had some block downloading to do in the beginning, but after that the ingress rates were fairly equal:

Screenshot 2024-11-26 at 12-21-30 Dual Geth - Grafana

The memory charts don't show much, except that the lead 05 had after sync increased further during generation, to about 25 minutes.

Screenshot 2024-11-26 at 12-22-45 Dual Geth - Grafana

trie/stacktrie.go: outdated review thread (resolved, not shown)
@gballet (Member) left a comment:

I'd be curious to find out how that performs over a full snapshot regeneration? 100K is a big number for a benchmark, but the scale of what the stacktrie is being used for is $10^4$ times that.

trie/encoding.go: review thread (resolved, not shown)
case b := <-bp.c:
return b
default:
return make([]byte, 0, bp.w)
A reviewer (Member) commented:

I believe that there is an opportunity for reducing the amount of allocations even further:

  1. Pre-allocate a ~page-sized buffer of slices:

var nitems = syscall.Getpagesize()/sliceCap - 1 // reserve 1 for the slice header
var buf = make([]byte, nitems*sliceCap)

  2. Use subslices of it; you can still feed all these subslices into a channel if that's your favorite synchronization primitive.

func addBuf(bp *bytesPool, buf []byte) {
	for i := 0; i < nitems; i++ {
		bp.c <- buf[i*bp.w : (i+1)*bp.w]
	}
}

  3. If the channel is empty, allocate another buffer:

func (bp *bytesPool) Get() []byte {
	// add a new buffer if all buffers are allocated
	if len(bp.c) == 0 {
		newbuf := make([]byte, bp.w*nitems)
		addBuf(bp, newbuf)
	}
	return <-bp.c
}

Advantages and issues of that approach:

  • it's possible to hold references to several almost-empty buffers, which, if they are kept over a long time, could cause an OOM. This is unlikely here, since a) the stacktrie is a transient structure, b) it only "expands" one branch at a time, and c) the average depth at this time is ~13, which is enough not to need an extra buffer
  • fewer allocations. Note that the benchmarks are of only 100K, so it might not make a dent in the reported allocations... but if you benchmark snapshot allocation (over 1 billion keys now), it should.
  • It's easy to change the target size of the buffer allocation, depending on where it's used and how often it needs to allocate a new buf.

@holiman (Contributor, Author) replied:

The stacktrie is already nearly alloc-free. I think you can hit it with a billion keys and not have any allocations after the first 100K elements or so. I might be wrong, but I don't think there's any room for meaningful further improvement (by which I mean: other than changing a constant alloc-count to a different constant).

Comment on lines 62 to 63
kBuf: make([]byte, 0, 64),
pBuf: make([]byte, 0, 32),
A reviewer (Member) commented:

would it not make sense to reuse the bytes allocator here as well?

@holiman (Contributor, Author) replied:

These are two small slices that are used for the duration of the stacktrie runtime. Also, during 'Update', they are continuously in use. Returning them to the pool after use is pointless: at the next call to Update, the first thing we will do is borrow them back again, and hold them for the duration of the op.

So no, I don't see any benefit in having them be pooled.

A reviewer (Member) replied:

at the next call to Update, the first thing we will do is borrow them back again, and hold them for the duration of the op.

that's the point I'm making: you can save allocations between calls to Update.

trie/node.go: outdated review thread (resolved, not shown)
trie/stacktrie.go: two outdated review threads (resolved, not shown)
@@ -368,15 +392,23 @@ func (t *StackTrie) hash(st *stNode, path []byte) {
st.typ = hashedNode
st.key = st.key[:0]

st.val = nil // Release reference to potentially externally held slice.
A reviewer (Member) commented:

but if it's created e.g. on L400, it should be returned to the pool. I'm not worried that it will leak, since there is a garbage collector, but it feels like we could avoid extra allocations by maybe ensuring that the passed values are always allocated with the bytes pool?

@holiman (Contributor, Author) replied:

Note line 392. That is the only place we ever set the type to hashedNode. Only for hashedNode types do we "own" the val. And if the type was already hashedNode, it will exit early (at line 334).

For hashedNodes, the return-to-pool happens during reset.

@gballet (Member) commented Jan 21, 2025:

you mean leafnode on line 382? nvm I see what you mean.

A reviewer (Member) replied:

The point I'm making only stands if you're also using the pool in between calls to Update. And it seems that this is already what you're saying here: #30743 (comment)

@holiman (Contributor, Author) commented Jan 14, 2025

Also worth noting: #30747. Whereas this PR takes ownership of st.val IFF the node is converted into a hashedNode, the other PR takes full ownership of st.val immediately as it enters the stacktrie.

Which means that it's less elaborate: the stacktrie always owns st.val, so we don't need thorough analysis to realise it is safe. OTOH, that approach will always copy on ingress. For DeriveSha, that approach is objectively better: since the copy-on-ingress copies into a pool-managed slice, the outer caller can safely reuse its own slice.

@holiman (Contributor, Author) commented Jan 14, 2025

100K is a big number for a benchmark, but the scale of what the stacktrie is being used for is 10^4 times that.

Actually, for the most part, we use the stacktrie to validate sequences of accounts / slots during snap sync. These are typically on the order of 10K elements, IIRC.

We don't typically feed the entire trie into a stacktrie: you'd only do that if you want to do some of the deep verifications.

When we generate snapshot data from the trie data, I'm not sure the stacktrie is involved, really.

@rjl493456442 self-assigned this Jan 21, 2025
@rjl493456442 (Member) commented:

I will redeploy it for a quick snap sync

@gballet previously approved these changes Jan 21, 2025
@gballet (Member) left a comment:

K, now that I see the context with the derivesha PR, I guess we don't need to further optimize.

@rjl493456442 (Member) commented:

Snap sync benchmark results:

Sync time: no big difference

  • PR: 2h36m
  • Master: 2h40m

Allocation: this PR has significantly reduced the memory allocation in the StackTrie.
PR:

 4455.26MB 0.018% 91.41% 1021494.30MB  4.04%  github.com/ethereum/go-ethereum/trie.(*StackTrie).Update
 3392.55MB 0.013% 91.44% 1029848.11MB  4.08%  github.com/ethereum/go-ethereum/trie.(*StackTrie).hash
 313.02MB 0.0012% 91.51% 1017039.04MB  4.03%  github.com/ethereum/go-ethereum/trie.(*StackTrie).insert

Master:

367818.89MB  1.41% 82.06% 1644831.36MB  6.30%  github.com/ethereum/go-ethereum/trie.(*StackTrie).hash
56036.36MB  0.21% 91.05% 1664093.44MB  6.37%  github.com/ethereum/go-ethereum/trie.(*StackTrie).insert
4513.26MB 0.017% 92.01% 1860183.27MB  7.13%  github.com/ethereum/go-ethereum/trie.(*StackTrie).Update
Screenshot 2025-01-22 10 38 25
  • the overall memory allocation shows no big difference
  • the PR has slightly higher GC pauses

Conclusion: I think it's a good change to optimize the memory allocation in the stackTrie, although it won't bring a significant change to the overall system performance.

@gballet merged commit d3cc618 into ethereum:master on Jan 23, 2025
3 checks passed
6 participants