Description of Test Cases

LDBC Social Network Benchmark

The project de.mdelab.ldbc.snb.incremental.rete.tests contains test cases for validating our implementation of the host-graph-sensitive RETE net construction technique using queries and data from the LDBC Social Network Benchmark.

The class SimpleQueryTest.java contains test cases that, for a number of primitive queries constituting basic building blocks, directly check the structure of the RETE nets constructed by the STATIC strategy from the article against the expected structure. In addition, the results for these queries over a small network generated by the LDBC data generator are checked for equality against the results produced by a reference pattern matcher (https://www.hpi.uni-potsdam.de/giese/public/mdelab/mdelab-projects/story-diagram-tools/), which uses the same query language but has a completely different implementation. We consider two scenarios: batch and incremental. In the batch scenario, the queries are simply executed over the full network, using only the STATIC strategy. In the incremental scenario, where both the STATIC and the DYNAMIC strategy are considered, the queries are first executed over the network after replaying half of the corresponding change log. After checking the correctness of the results produced by the different strategies, the remaining half of the log is replayed (and the query results are updated incrementally) before the results are checked again.
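
The following is a minimal sketch of the incremental scenario, assuming hypothetical placeholder types and helpers (Network, ChangeLog, QueryEngine, loadSmallGeneratedNetwork, loadChangeLog, engineWithStrategy, referenceMatch) that stand in for the project's actual test infrastructure; it does not reflect the real SimpleQueryTest code.

```java
// Hypothetical sketch of the incremental scenario; all types and helper methods
// below are placeholders, not the real de.mdelab APIs.
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.Set;
import org.junit.jupiter.api.Test;

class IncrementalScenarioSketch {

    // Placeholder types standing in for the project's model and engine classes.
    interface Network { }
    interface ChangeLog {
        void replayFirstHalf(Network network);
        void replaySecondHalf(Network network);
    }
    interface QueryEngine {
        // Builds the RETE net on first execution and updates it incrementally afterwards.
        Set<Object> execute(String query, Network network);
    }

    Network loadSmallGeneratedNetwork() { throw new UnsupportedOperationException("stub"); }
    ChangeLog loadChangeLog() { throw new UnsupportedOperationException("stub"); }
    QueryEngine engineWithStrategy(String strategy) { throw new UnsupportedOperationException("stub"); }
    Set<Object> referenceMatch(String query, Network network) { throw new UnsupportedOperationException("stub"); }

    @Test
    void incrementalScenario() {
        Network network = loadSmallGeneratedNetwork();
        ChangeLog log = loadChangeLog();
        String query = "somePrimitiveQuery"; // hypothetical query identifier

        // Replay the first half of the change log, then check both strategies
        // against the reference pattern matcher.
        log.replayFirstHalf(network);
        QueryEngine staticEngine = engineWithStrategy("STATIC");
        QueryEngine dynamicEngine = engineWithStrategy("DYNAMIC");
        assertEquals(referenceMatch(query, network), staticEngine.execute(query, network));
        assertEquals(referenceMatch(query, network), dynamicEngine.execute(query, network));

        // Replay the remaining half; the engines update their results incrementally,
        // and the results are checked again.
        log.replaySecondHalf(network);
        assertEquals(referenceMatch(query, network), staticEngine.execute(query, network));
        assertEquals(referenceMatch(query, network), dynamicEngine.execute(query, network));
    }
}
```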

The class BenchmarkQueryTest.java contains test cases that check the correctness of query results for the LDBC benchmark queries used in our evaluation (excluding the query INTERACTIVE_3, whose execution takes exceedingly long). To this end, the queries are executed over a small generated social network using the strategies DYNAMIC, STATIC, and EMULATE from our article. The results are then checked for equality against those produced by the reference pattern matcher. We again consider two scenarios: batch and incremental. In the batch scenario, the queries are simply executed over the full network. In the incremental scenario, the queries are first executed over the network after replaying half of the corresponding change log. After checking the correctness of the results produced by the different strategies, the remaining half of the log is replayed (and the query results are updated incrementally) before the results are checked again.
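
A corresponding sketch of the batch scenario for the benchmark queries, again with hypothetical placeholder types and methods rather than the project's real API, could look as follows; the query identifiers listed are illustrative only.

```java
// Hypothetical sketch of the batch scenario: each strategy's result set for a
// benchmark query is compared against the reference matcher's result.
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.List;
import java.util.Set;
import org.junit.jupiter.api.Test;

class BatchScenarioSketch {

    // Placeholder types; not the project's real interfaces.
    interface Network { }
    interface QueryEngine {
        Set<Object> execute(String query, Network network);
    }

    Network loadFullNetwork() { throw new UnsupportedOperationException("stub"); }
    QueryEngine engineWithStrategy(String strategy) { throw new UnsupportedOperationException("stub"); }
    Set<Object> referenceMatch(String query, Network network) { throw new UnsupportedOperationException("stub"); }

    @Test
    void batchScenario() {
        Network network = loadFullNetwork();
        // Hypothetical query identifiers; the real tests enumerate the benchmark
        // queries from the evaluation, excluding INTERACTIVE_3.
        List<String> queries = List.of("INTERACTIVE_1", "INTERACTIVE_2");

        for (String query : queries) {
            Set<Object> expected = referenceMatch(query, network);
            for (String strategy : List.of("DYNAMIC", "STATIC", "EMULATE")) {
                assertEquals(expected, engineWithStrategy(strategy).execute(query, network),
                        "Strategy " + strategy + " disagrees with the reference matcher on " + query);
            }
        }
    }
}
```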

Train Benchmark

We provide similar test cases for the Train Benchmark queries used in our evaluation in the project de.mdelab.trainbenchmark.rete.tests. In these test cases, the queries are executed over a small dataset generated with the benchmark's data generator, using the DYNAMIC, STATIC, and EMULATE strategies from the article. The results are checked for equality against those produced by the reference pattern matcher.
