
[1.0.4] Finality violation tests 1.0.0 #610

Open · wants to merge 83 commits into base: release/1.0

Conversation

@systemzax (Member) commented Aug 21, 2024

This PR adds tests to cover violations of finality rules #1, #2, and #3, and addresses issue #91.

systemzax added 30 commits July 29, 2024 06:18
…_policy_generation in savanna smart contracts and tests for clarity
generation used in finality violation tests
…scenarios where the fake block's timestamp is lower vs higher than block with conflicting range
@ericpassmore (Contributor) commented Sep 3, 2024

Note:start
category: Tests
component: Internal
summary: Tests to cover violations of finality rules.
Note:end

@heifner heifner changed the title Finality violation tests 1.0.0 [1.0.1] Finality violation tests 1.0.0 Sep 5, 2024
@arhag arhag changed the title [1.0.1] Finality violation tests 1.0.0 [1.0.2] Finality violation tests 1.0.0 Sep 12, 2024
@arhag arhag changed the title [1.0.2] Finality violation tests 1.0.0 [1.0.3] Finality violation tests 1.0.0 Oct 2, 2024
@spoonincode spoonincode changed the title [1.0.3] Finality violation tests 1.0.0 [1.0.4] Finality violation tests 1.0.0 Oct 31, 2024
//verify that the quorum certificate over the finality digest is valid
void _check_qc(const quorum_certificate_input& qc, const checksum256& finality_digest, const finalizer_policy_input& finalizer_policy){

uint64_t aggregate_keys(const savanna::bitset& votes, const finalizer_policy_input& finalizer_policy, bls_g1& agg_pub_key){
Member:
It would be better to return a pair of the aggregate key and the total weight.
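For illustration, a minimal sketch of that suggested signature change (assuming the bls_g1 and std::pair types are available here; the caller-side names are taken from the surrounding diff):

// Sketch only: return the aggregate key together with the accumulated weight
// instead of passing the key out through a reference parameter.
std::pair<bls_g1, uint64_t> aggregate_keys(const savanna::bitset& votes,
                                           const finalizer_policy_input& finalizer_policy);

// caller side:
// auto [agg_pub_key, weight] = aggregate_keys(strong_b, finalizer_policy);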

if (qc.is_weak.has_value() && qc.is_weak.value() == true) message = create_weak_digest(finality_digest);
for (size_t i = 0 ; i < messages.size(); i++){
//verify signature validity
check(bls_signature_verify(agg_pub_keys[i], decode_bls_signature_to_g2(agg_sig), messages[i]), "signature verification failed");
Member:
This doesn't make sense as a way to check the aggregate signature. See how Spring does it.

// validate aggregated signature
EOS_ASSERT( bls12_381::aggregate_verify(pubkeys, digests, sig.jacobian_montgomery_le()),
invalid_qc_signature, "qc signature validation failed" );

check(qc.strong_votes.has_value(), "required strong votes are missing");
savanna::bitset strong_b(finalizer_count, qc.strong_votes.value());
weight = aggregate_keys(strong_b, finalizer_policy, agg_pub_key);
_verify({create_strong_digest(finality_digest)}, {agg_pub_key}, qc.signature);
Member:
This only works if the weak bitset is empty. If it is not empty, then the QC signature would be an aggregate signature formed from the aggregate public key of the strong votes (calculated here) corresponding to the strong digest AND the aggregate public key of the weak votes corresponding to the weak digest.

I think Spring will never bother generating a QC with mixed strong and weak votes when the strong votes are sufficient to meet the threshold. But as far as the protocol is concerned, if weak votes are present, the QC is treated as a weak QC and the aggregate signature must incorporate the effect of those weak votes.

I think these contracts should not necessarily assume that, and should be able to validate the signature with count_strong_only == true even if weak votes are present in the QC.

weight = aggregate_keys(strong_b, finalizer_policy, agg_pub_key);
_verify({create_strong_digest(finality_digest)}, {agg_pub_key}, qc.signature);
}

Member:
Basically you just need the three cases within the largest else branch here, although there is a cleaner way of structuring this code. See:

void qc_sig_t::verify_signatures(const finalizer_policy_ptr& fin_policy,
                                 const digest_type& strong_digest,
                                 const weak_digest_t& weak_digest) const {
   const auto& finalizers = fin_policy->finalizers;
   auto num_finalizers = finalizers.size();

   // no reason to use bls_public_key wrapper
   std::vector<bls12_381::g1> pubkeys;
   pubkeys.reserve(2);
   std::vector<std::vector<uint8_t>> digests;
   digests.reserve(2);

   // utility to aggregate public keys for verification
   auto aggregate_pubkeys = [&](const auto& votes_bitset) -> bls12_381::g1 {
      const auto n = std::min(num_finalizers, votes_bitset.size());
      std::vector<bls12_381::g1> pubkeys_to_aggregate;
      pubkeys_to_aggregate.reserve(n);
      for(auto i = 0u; i < n; ++i) {
         if (votes_bitset[i]) { // ith finalizer voted
            pubkeys_to_aggregate.emplace_back(finalizers[i].public_key.jacobian_montgomery_le());
         }
      }
      return bls12_381::aggregate_public_keys(pubkeys_to_aggregate);
   };

   // aggregate public keys and digests for strong and weak votes
   if( strong_votes ) {
      pubkeys.emplace_back(aggregate_pubkeys(*strong_votes));
      digests.emplace_back(std::vector<uint8_t>{strong_digest.data(), strong_digest.data() + strong_digest.data_size()});
   }
   if( weak_votes ) {
      pubkeys.emplace_back(aggregate_pubkeys(*weak_votes));
      digests.emplace_back(std::vector<uint8_t>{weak_digest.begin(), weak_digest.end()});
   }

   // validate aggregated signature
   EOS_ASSERT( bls12_381::aggregate_verify(pubkeys, digests, sig.jacobian_montgomery_le()),
               invalid_qc_signature, "qc signature validation failed" );
}

I don't think there is even a reason to have the count_strong_only boolean. You may just want to add a bool is_strong() const member function to quorum_certificate_input so that the IBC contract can do a separate check to see if the submitted QC is strong.
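For example, a hedged sketch of such a member function, assuming quorum_certificate_input holds optional strong_votes and weak_votes bitsets as suggested by the code above:

// Sketch only: per the earlier comment, the presence of any weak votes makes the QC weak,
// so a strong QC is one that carries strong votes and no weak votes.
bool is_strong() const {
   return strong_votes.has_value() && !weak_votes.has_value();
}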

@@ -666,43 +666,43 @@ BOOST_AUTO_TEST_SUITE(svnn_ibc)
);

chain.push_action("ibc"_n, "testbitset"_n, "ibc"_n, mvo()
("bitset_string", "30") //bitset bytes are reversed, so we do the same to test
("bitset_string", "03")
Member:
This still does not look correct to me.

If I want to check whether the third finalizer (i == 2) is present, I would expect to look at the i/4 == 0 nibble (using 0-based indexing and counting from left to right, i.e. most significant nibble to least significant nibble). Then converting that nibble into bits, I would expect the i%4 == 2 bit in that nibble (using 0-based indexing and counting from MSB to LSB) to represent the presence of that finalizer.

So, given a bit string of 011:
I would expect the 3rd most significant bit of the first nibble to be 1. Similarly for the 2nd most significant bit of the first nibble. But, I would expect the 1st most significant bit of the first nibble to be 0. Furthermore, I would expect all other bits that are needed for padding purposes to be 0.

So the first nibble should be the hex digit 6 (which has binary representation 0110). And if we need an even number of nibbles in the hex string, then we can pad with a 0 nibble at the end to get the hex representation 60 in this case.
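To make the expected encoding concrete, here is a small illustrative helper (hypothetical, not part of the contract or tests) that maps a vote list to a hex string using that nibble and bit ordering; for the bit string 011 it produces "60":

#include <cstdint>
#include <string>
#include <vector>

// Illustrative only: finalizer i lands in nibble i/4 (most significant nibble first),
// and within that nibble in bit i%4 counted from MSB to LSB. Padding bits stay 0.
std::string votes_to_hex(const std::vector<bool>& votes) {
   size_t num_nibbles = (votes.size() + 3) / 4;
   if (num_nibbles % 2 == 1) ++num_nibbles; // pad to an even number of nibbles (whole bytes)
   std::string hex(num_nibbles, '0');
   for (size_t i = 0; i < votes.size(); ++i) {
      if (!votes[i]) continue;
      char& c = hex[i / 4];
      uint8_t value = (c <= '9') ? uint8_t(c - '0') : uint8_t(c - 'a' + 10);
      value |= uint8_t(1u << (3 - (i % 4))); // i%4 == 0 is the most significant bit of the nibble
      c = (value < 10) ? char('0' + value) : char('a' + value - 10);
   }
   return hex;
}

// votes_to_hex({false, true, true}) == "60", matching the example above.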


check(time_range_conflict, "proofs must demonstrate a conflicting time range");
Member:
You still need to keep this time range conflict check: the one that verifies that the low_proof_timestamp is between the high_proof_last_claim_timestamp and the high_proof_timestamp. That is independent of the other time checks that you do below.
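A hedged sketch of that check, reusing the timestamp variable names from the surrounding diff (the exact strictness of each bound is my assumption, mirroring the original check below):

// Sketch only: the low proof block must fall after the high proof's last QC claim
// and no later than the high proof block itself.
bool time_range_conflict =
      (high_proof_last_claim_timestamp < low_proof_timestamp) &&
      (low_proof_timestamp <= high_proof_timestamp);
check(time_range_conflict, "proofs must demonstrate a conflicting time range");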

//Verify that the computed digest for the low proof doesn't appear in the list of reversible block digests committed to by the high proof
auto f_itr = std::find(reversible_blocks_digests.begin(), reversible_blocks_digests.end(), computed_digest);
//A time range conflict has occurred if the high proof timestamp is greater than or equal to the low proof timestamp and the high proof parent timestamp is less than the low proof timestamp
bool time_range_conflict = high_proof_parent_timestamp < low_proof_timestamp && high_proof_timestamp >= low_proof_timestamp;
Member:
I think the time check here is trying to validate the assumptions of a valid rule 2 proof that I outline in:

First, the user must submit the contents of the leaf node in the reversible block Merkle tree (along with the proof of inclusion up to the reversible blocks root) for the block that has a timestamp greater than or equal to low_proof_timestamp and a parent_timestamp strictly less than low_proof_timestamp. Both of those inequalities must be validated otherwise the submitted proof is not correct.

But the check on this line does not do that.

You want to make sure that (proof_of_inclusion.target.parent_timestamp < low_proof_timestamp) && (low_proof_timestamp <= proof_of_inclusion.target.timestamp) is true. If this is not true, then the proof is invalid and the action should abort.
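Expressed as a contract check, using exactly that condition (the error message text is hypothetical):

// Sketch only: abort unless the submitted reversible blocks leaf node brackets low_proof_timestamp.
bool valid_target_reversible_block =
      (proof_of_inclusion.target.parent_timestamp < low_proof_timestamp) &&
      (low_proof_timestamp <= proof_of_inclusion.target.timestamp);
check(valid_target_reversible_block, "submitted reversible blocks leaf node must cover low_proof_timestamp");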

//If the timestamp for the submitted reversible blocks leaf node is strictly greater than low_proof_timestamp, we know that the low proof block is not an ancestor of the high proof block
//and therefore, a rule 2 violation has occurred.
if(proof_of_inclusion.target.timestamp > low_proof_timestamp) finality_violation = true;
else if(proof_of_inclusion.target.timestamp == low_proof_timestamp) {
Member:
You don't need to have a finality_violation boolean.

Given that you will check that low_proof_timestamp is in the interval (proof_of_inclusion.target.parent_timestamp, proof_of_inclusion.target.timestamp] earlier above, you can simplify this if/else to just the following if statement:

if (proof_of_inclusion.target.timestamp == low_proof_timestamp) {
   // If the timestamp for the submitted reversible blocks leaf node is exactly equal to low_proof_timestamp, 
   // we need to compare the finality digest of the low proof block to the finality digest of the 
   // submitted reversible blocks leaf node to check that they are not the same. 
   // If they are the same, the submitted proof is not correct. 
   // But if they are different, then we know that the low proof block is not an ancestor of the high proof block.
   check(finalizer_digests.second != proof_of_inclusion.target.finality_digest, "finality digest of low proof must be different from the finality digest of the submitted reversible blocks leaf node");
}

block_timestamp low_proof_last_claim_timestamp = low_proof.qc_block.level_3_commitments.value().latest_qc_claim_timestamp;
block_timestamp high_proof_last_claim_timestamp = high_proof.qc_block.level_3_commitments.value().latest_qc_claim_timestamp;
Member:
All of these timestamps are still needed.

We still need to verify one of the timestamp assumptions of rule 3 violations:

bool lock_violation = 
       (high_proof_last_claim_timestamp <= low_proof_last_claim_timestamp) &&
       (low_proof_timestamp < high_proof_timestamp);
check(lock_violation, "proofs must demonstrate a lock violation");

This is independent of verifying that the target reversible block also satisfies the timestamp requirements:

bool appropriate_target_reversible_block =
       (proof_of_inclusion.target.parent_timestamp < low_proof_last_claim_timestamp) &&
       (low_proof_last_claim_timestamp <= proof_of_inclusion.target.timestamp);
check(appropriate_target_reversible_block, "proofs must include appropriate reversible target block");

finality_violation = true;
}

check(finality_violation, "proofs must demonstrate a finality violation");
Member:
Again, there is no need for the finality_violation boolean.

If the appropriate time checks have been done earlier, then all that is needed is to handle the case where proof_of_inclusion.target.timestamp exactly equals low_proof_last_claim_timestamp:

if (proof_of_inclusion.target.timestamp == low_proof_last_claim_timestamp) {
   // If the timestamp for the submitted reversible blocks leaf node is exactly equal to low_proof_last_claim_timestamp, 
   // we need to compare the finality digest of the block claimed by the low proof block to the finality digest of the 
   // submitted reversible blocks leaf node to check that they are not the same. 
   // If they are the same, the submitted proof is not correct. 
   // But if they are different, then we know that the block claimed by the low proof block is not an ancestor of the high proof block.
   check(low_proof.qc_block.level_3_commitments.value().latest_qc_claim_finality_digest != proof_of_inclusion.target.finality_digest, "finality digest of block claimed by low proof block must be different from the finality digest of the submitted reversible blocks leaf node");
}

Successfully merging this pull request may close these issues:

Finality violation test coverage