Error verifying proof when two missing values lead to the same empty slot. #50
So 5 stems are declared in the proof, but the proof only contains 4 extension statuses, since one of them is de-duplicated. To reproduce the issue, I used my verkle block testing utility with the following arguments:
This command verifies most blocks (all of them up to block 190, in fact). Output:
Note that "storage root" corresponds to the post state; the pre state is:

Finally managed to attach the block file:

Added:
And the corresponding Rust version is found in this PR. It fails because the number of extension statuses is 2 instead of 1.
I have modified the PR to include the optimisation, so that the tests pass. From my understanding, we reduce the amount of space needed but increase the compute time and code complexity for the prover and verifier.

I wonder if we can use a simpler compression algorithm, one that does not rely on the keys being sorted in the verkle trie algorithm. For example, given a `VerificationHint` with depths `1, 1, 1, 1` and extension statuses `Present, Present, None, None`, we could compress it as:

`04 01 02 Present 02 None`

The `04` states that a depth of `01` has been deduplicated four times. This is just a suggestion; I don't think it is the best algorithm, though I found it beneficial to compare the two (see the sketch below).

The benefit of this algorithm is that for large duplications it gives a lot of savings, and it does not care which stem/key it is compressing, so it can be done in a modular way: the keys can still be sorted, but that is a protocol-level decision that can be made somewhere else up the protocol stack, instead of inside the verkle trie protocol itself. This algorithm does not, however, give as much savings as the one implemented in the PR, because of the extra "marker/length" byte. The worst case for the suggested algorithm is also not great in terms of space, though I'm not sure it can come up in practice; we could mitigate it with a byte that turns the optimisation off, but this increases code complexity.

I think one difference between the two algorithms is that the one I outlined compresses based on the structure of the data, while the one in the PR compresses based on verkle-trie-specific features. The former could be added to the serialisation layer, while the latter would not be suited to go there. In general, compression algorithms for sending bytes over the wire that work on the structure of the bytes are much nicer to implement and seem less ad hoc, imo.

That said, I'd like to check whether we need optimisations like these in the first place, i.e. how much they really save in bytes versus the complexity being added. As an example, if it's 10 bytes per block, then with roughly 2 million blocks a year, this PR would save about 20 MB a year.
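As a concrete illustration of the run-length scheme above, here is a minimal Go sketch. The `hint` struct, the status byte values (`Present = 01`, `None = 00`), and the column-by-column layout are assumptions made for this example, not the repository's actual types or wire format.

```go
package main

import "fmt"

// Hypothetical flattened verification hint: one (depth, extension status)
// pair per stem, in proof order. Byte values are assumed for illustration:
// None = 0x00, Present = 0x01.
type hint struct {
	depth  byte
	status byte
}

// compress run-length encodes each column independently as (count, value)
// pairs. It does not require the stems to be sorted; sorted input simply
// yields longer runs. Runs are capped at 255 by the one-byte count.
func compress(hints []hint) []byte {
	var out []byte
	out = appendRuns(out, hints, func(h hint) byte { return h.depth })
	out = appendRuns(out, hints, func(h hint) byte { return h.status })
	return out
}

// appendRuns walks the hints and emits a (count, value) pair for every
// maximal run of equal values of the selected field.
func appendRuns(out []byte, hints []hint, field func(hint) byte) []byte {
	for i := 0; i < len(hints); {
		j := i
		for j < len(hints) && field(hints[j]) == field(hints[i]) {
			j++
		}
		out = append(out, byte(j-i), field(hints[i]))
		i = j
	}
	return out
}

func main() {
	// The example from above: depths 1,1,1,1 and statuses
	// Present, Present, None, None.
	hints := []hint{{1, 1}, {1, 1}, {1, 0}, {1, 0}}
	fmt.Printf("%x\n", compress(hints)) // prints: 040102010200
}
```

With the assumed byte values this prints `040102010200`, i.e. `04 01 02 Present 02 None`, matching the example above.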
Update to #49 and #48.
Proof:
Keys:
The root has no `0xee` child, and this block contains two stems whose absence is proven by a `None` extension status: `ee390d5f79a75e40f22942a639ffa7b982e3de2b3e8927919210cf252355df` and `ee9707ef2846473a6fe4e10781a3b8b225ea950a7c89fb88f01c3544a41dfb`.
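For illustration, here is a minimal Go sketch of the de-duplication that trips up the verifier: both absent stems diverge from the trie at depth 1, under the missing `0xee` child, so they resolve to the same empty slot and only one `None` extension status is emitted. The grouping logic is a hypothetical stand-in, not this repository's actual implementation.

```go
package main

import (
	"encoding/hex"
	"fmt"
)

func main() {
	// The two absent stems from this block; both start with 0xee.
	a, _ := hex.DecodeString("ee390d5f79a75e40f22942a639ffa7b982e3de2b3e8927919210cf252355df")
	b, _ := hex.DecodeString("ee9707ef2846473a6fe4e10781a3b8b225ea950a7c89fb88f01c3544a41dfb")

	// The root has no 0xee child, so both stems' absence is proven at
	// depth 1. Group stems by their path prefix at that depth: stems
	// that land in the same empty slot share one extension status.
	seen := map[string]bool{}
	statuses := 0
	for _, stem := range [][]byte{a, b} {
		slot := hex.EncodeToString(stem[:1]) // prefix "ee" for both
		if !seen[slot] {
			seen[slot] = true
			statuses++
		}
	}
	// A verifier that counts one status per stem would expect 2 and fail.
	fmt.Println("extension statuses:", statuses) // prints: extension statuses: 1
}
```

This mirrors the failure described above: the proof carries one de-duplicated `None` status for the shared `0xee` slot, while a verifier that expects one status per stem counts two.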