feat(CodecV7): add CodecV7 to support upgrade 5.2 Euclid phase2 #33
base: main
Conversation
encoding/codecv6.go (Outdated)

// TODO: add DecodeBlob to interface to decode the blob and transactions or reuse DecodeTxsFromBlob but only have a single "chunk" for all transactions in the batch?

// TODO: which of the Estimate* functions are needed?
It's used to give proof witnesses (from coordinator to provers when distributing tasks), namely here: https://github.com/scroll-tech/scroll/blob/v4.4.83/coordinator/internal/logic/provertask/batch_prover_task.go#L255
Previously it was used here: https://github.com/scroll-tech/scroll/blob/v4.4.83/common/types/message/message.go#L56.
Now it is no longer used and may be removed.
encoding/codecv6.go (Outdated)

// MaxNumChunksPerBatch returns the maximum number of chunks per batch.
func (d *DACodecV6) MaxNumChunksPerBatch() int {
	return 1
Why 1? Just a placeholder?
Since Chunks are not exposed to DA, there should be no details about chunks here. However, to keep compatibility with the way we build chunks in the relayer, we could still keep using it.
In that case, rollup-relayer should not use MaxNumChunksPerBatch, right? Otherwise we'll keep producing batches with a single chunk.
"github.com/scroll-tech/go-ethereum/core/types" | ||
"github.com/scroll-tech/go-ethereum/crypto" | ||
"github.com/scroll-tech/go-ethereum/crypto/kzg4844" | ||
) |
Suggested change:
)

// Below is the encoding for `BatchHeader` V7, total 73 bytes.
// * Field               Bytes   Type      Index   Comments
// * version             1       uint8     0       The batch version
// * batchIndex          8       uint64    1       The index of the batch
// * blobVersionedHash   32      bytes32   9       The versioned hash of the blob with this batch’s data
// * parentBatchHash     32      bytes32   41      The parent batch hash
I suggest adding this at the top for easy reference.
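For context, a minimal sketch of how such a 73-byte header could be assembled; the helper name encodeBatchHeaderV7 and the big-endian encoding of batchIndex are assumptions for illustration, not the PR's actual API:

package encoding

import (
	"encoding/binary"

	"github.com/scroll-tech/go-ethereum/common"
)

// encodeBatchHeaderV7 is an illustrative helper (not part of this PR) that lays out
// the 73-byte BatchHeader V7 described in the table above.
func encodeBatchHeaderV7(version uint8, batchIndex uint64, blobVersionedHash, parentBatchHash common.Hash) []byte {
	b := make([]byte, 73)
	b[0] = version                                 // version: 1 byte at index 0
	binary.BigEndian.PutUint64(b[1:9], batchIndex) // batchIndex: 8 bytes at index 1 (big-endian assumed)
	copy(b[9:41], blobVersionedHash[:])            // blobVersionedHash: 32 bytes at index 9
	copy(b[41:73], parentBatchHash[:])             // parentBatchHash: 32 bytes at index 41
	return b
}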
)

const (
	blobPayloadV7EncodedLength = 8 + 2*common.HashLength + 8 + 2
Suggested change:
-	blobPayloadV7EncodedLength = 8 + 2*common.HashLength + 8 + 2
+	blobPayloadV7MinimumLength = 8 + 2*common.HashLength + 8 + 2
const (
	daBatchV7EncodedLength           = 73
	daBatchV7OffsetBlobVersionedHash = 9
Suggested change:
-	daBatchV7OffsetBlobVersionedHash = 9
+	daBatchV7BlobVersionedHashOffset = 9

Suggest making this consistent with the others (e.g. blobEnvelopeV7VersionOffset).
const (
	blobPayloadV7EncodedLength               = 8 + 2*common.HashLength + 8 + 2
	blobPayloadV7OffsetInitialL1MessageIndex = 0
Suggested change:
-	blobPayloadV7OffsetInitialL1MessageIndex = 0
+	blobPayloadV7InitialL1MessageIndexOffset = 0
const (
	blobPayloadV7EncodedLength               = 8 + 2*common.HashLength + 8 + 2
	blobPayloadV7OffsetInitialL1MessageIndex = 0
	blobPayloadV7OffsetInitialL1MessageQueue = blobPayloadV7OffsetInitialL1MessageIndex + 8
Above you use absolute indices, and here relative indices. Again, I think consistency is desirable, but it's not a big deal; you can also leave it as it is.
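As a small illustration of the two styles (constant names invented for the example):

// Absolute offsets: each constant is a literal byte index into the encoded payload.
const (
	exampleFieldAOffset = 0
	exampleFieldBOffset = 8
)

// Relative offsets: each constant is derived from the previous offset plus that field's size,
// so resizing or inserting a field only requires changing one line.
const (
	exampleFieldCOffset = 0
	exampleFieldDOffset = exampleFieldCOffset + 8 // field C is 8 bytes
)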
}

func encodeSize3Bytes(data uint32) []byte {
	return []byte{byte(data), byte(data >> 8), byte(data >> 16)}
Isn't this little-endian? We use big-endian everywhere else
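For comparison, a big-endian counterpart (illustrative only, mirroring the existing function's shape) would emit the most significant byte first:

// encodeSize3BytesBE is a hypothetical big-endian variant of encodeSize3Bytes:
// the most significant byte of the 24-bit value comes first.
func encodeSize3BytesBE(data uint32) []byte {
	return []byte{byte(data >> 16), byte(data >> 8), byte(data)}
}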
@@ -131,6 +137,19 @@ func (b *Block) NumL1Messages(totalL1MessagePoppedBefore uint64) uint64 {
	return *lastQueueIndex - totalL1MessagePoppedBefore + 1
}

// NumL1MessagesNoSkipping returns the number of L1 messages in this block.
// This method assumes that L1 messages can't be skipped.
func (b *Block) NumL1MessagesNoSkipping() uint16 {
Could add a QueueIndex continuity check here, just to be defensive.
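For example, a sketch of such a check; it assumes the encoding package's Block.Transactions field and types.L1MessageTxType, it would require the method to return an error as well, and it is an illustration rather than the PR's implementation:

// NumL1MessagesNoSkippingChecked is an illustrative variant (not the PR's code) that also
// verifies queue-index continuity. It assumes Block.Transactions and types.L1MessageTxType
// from the existing encoding package, plus an imported "fmt"; double-check the exact names.
func (b *Block) NumL1MessagesNoSkippingChecked() (uint16, error) {
	var count uint16
	var lastQueueIndex *uint64
	for _, txData := range b.Transactions {
		if txData.Type != types.L1MessageTxType {
			continue
		}
		// Queue indices must be strictly consecutive when skipping is not allowed.
		if lastQueueIndex != nil && txData.Nonce != *lastQueueIndex+1 {
			return 0, fmt.Errorf("L1 message queue index is not continuous: %d -> %d", *lastQueueIndex, txData.Nonce)
		}
		nonce := txData.Nonce
		lastQueueIndex = &nonce
		count++
	}
	return count, nil
}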
		return nil, errors.New("block number is not uint64")
	}

	// note: numL1Messages includes skipped messages
There are no skipped messages in v7, so this note should be updated.
		return nil, errors.New("number of L1 messages exceeds max uint16")
	}

	// note: numTransactions includes skipped messages
Same as above.
// NewDAChunk creates a new DAChunk from the given Chunk and the total number of L1 messages popped before.
// Note: For DACodecV7, this function is not implemented since there is no notion of DAChunk in this version. Blobs
// contain the entire batch data, and it is up to a prover to decide the chunk sizes.
it is up to a prover to decide the chunk sizes.

This is not accurate; this will still be decided by rollup-relayer.
}

// JSONFromBytes converts the bytes to a DABatch and then marshals it to JSON.
func (d *DACodecV7) JSONFromBytes(data []byte) ([]byte, error) {
Is this still used anywhere? It was previously used in the coordinator, which currently only uses NewDABatchFromBytes; worth double-checking.
	return d.checkCompressedDataCompatibility(b)
}

// TODO: which of the Estimate* functions are needed?
Based on a previous discussion, the gas limit (irrelevant to the number of L1 messages) and the calldata length (constant length) will not be exceeded in v7, while the (uncompressed) batch size and blob size estimates are still needed. Batch size has a limit of 5 * blob size; could ask @noel2004 or @roynalnaruto to check whether v7 still has this limit.
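If that limit still applies, the check would be roughly the following sketch, where maxEffectiveBlobBytes is a placeholder for whatever per-blob capacity constant the codec defines (assumes "fmt" is imported):

// checkUncompressedBatchSize is an illustrative helper only: it enforces the
// "batch size <= 5 * blob size" rule mentioned above. maxEffectiveBlobBytes is a
// placeholder, not an existing constant in this codebase.
func checkUncompressedBatchSize(uncompressedBatchBytes, maxEffectiveBlobBytes uint64) error {
	if uncompressedBatchBytes > 5*maxEffectiveBlobBytes {
		return fmt.Errorf("uncompressed batch size %d exceeds limit %d (5x blob capacity)", uncompressedBatchBytes, 5*maxEffectiveBlobBytes)
	}
	return nil
}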
	return b.blobBytes
}

// MarshalJSON implements the custom JSON serialization for daBatchV3.
Suggested change:
-// MarshalJSON implements the custom JSON serialization for daBatchV3.
+// MarshalJSON implements the custom JSON serialization for daBatchV7.
Purpose or design rationale of this PR
PR title
Your PR title must follow conventional commits (as we are doing squash merge for each PR), so it must start with one of the following types:
Breaking change label
Does this PR have the breaking-change label?