
Add document to p4c repo explaining to developers a little bit of what they need to know to add new tests #32

Open
jafingerhut opened this issue Dec 20, 2024 · 5 comments
Labels
smalltask A task that appears to require a small amount of work

Comments

@jafingerhut
Collaborator

At the very least, it should explain:

  • for "positive" tests, i.e. ones where there is no error from the P4 compiler, what directory should you put the source file in, and what expected output files should be added (and in what directory for those)? Also how to update those if a change to p4c changes the expected outputs.

  • for "negative" tests, i.e. the ones where you expect the P4 compiler to catch a syntax or semantic error, what directory should you put the source file in, and what expected output files should be added?

  • How to mark a test as "expected to fail" (Xfail), and what does that mean? Note: At least for some existing CI tests, marking a positive test as Xfail means that it must fail, or else the overall CI run will fail. Marking a negative test as Xfail means that it must not give an error, or else the overall CI run will fail. However, because this is a confusing and subtle point, maybe not all p4c CI tests treat negative tests this way. It would be nice to make them all consistent, but at least we should document how they operate now, even if there is a TODO indicating the desire to change it in the future.

@fruffy added the smalltask label on Jan 9, 2025
@Vineet1101

@jafingerhut I would like to work on this issue. Could you assign it to me? Here is what I was thinking for the positive test part:

Adding New Tests to the P4C Repository

This document provides a guide for developers who want to add new tests to the P4C repository. It explains where to place test files, how to manage expected output files, and how to handle tests that are expected to fail (Xfail).

  1. Adding "Positive" Tests

Positive tests are those where the P4 compiler should process the input without any errors.

Steps for Adding Positive Tests

Source File Location:

Place the source file in the p4c/testdata/p4_16_samples directory.

Ensure the file is named descriptively to reflect its purpose (e.g., example_positive_test.p4).

Expected Output Files:

Run the P4 compiler on the source file using the command:

./build/p4test path/to/source_file.p4

This generates expected output files.

Place the generated expected output files in the p4c/testdata/p4_16_samples_outputs directory.

Updating Expected Outputs:

If a change to the P4 compiler modifies the expected outputs, you can regenerate them by rerunning the tests with:

./build/p4test path/to/source_file.p4 --update-outputs

Replace the old expected output files with the newly generated ones.

Verifying the Test

Use the CI system or run the test locally to verify that it passes as expected.
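
As an illustration of the update step, git can be used to review exactly which reference outputs changed before committing (generic git commands; the directory is the one mentioned above):

# Show which expected-output files were modified or added.
git status testdata/p4_16_samples_outputs
# Inspect the actual differences before committing them.
git diff testdata/p4_16_samples_outputs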

@jafingerhut
Collaborator Author

That is a good start.

There is no need to have all of the details before starting such a document, but here are some things that would be nice to add to that start:

  • there are other positive tests in other directories besides testdata/p4_16_samples. It would be good to describe the different directories, and their differences.
  • there are different back ends being tested, and different tools like p4testgen with ptf and stf options, some P4 programs with .stf files, some without. It can be a bit bewildering at first, and I'm hoping the document eventually gives a fairly complete picture of what is there.
  • There is a way to run tests using a ctest command, vs. directly executing the Bash scripts with names ending in .test. I am not sure if there are differences between ctest vs. directly running the Bash scripts, e.g. in options available, or behavior. It would be great if it could be described, and what the differences are, e.g. why would one want to run the tests one way vs. the other?

@Vineet1101

Adding New Tests to the P4C Repository

This document provides a guide for developers who want to add new tests to the P4C repository. It explains where to place test files, how to manage expected output files, and how to handle tests that are expected to fail (Xfail).


1. Adding "Positive" Tests

Positive tests are those where the P4 compiler should process the input without any errors.

Steps for Adding Positive Tests

  1. Source File Location:

    • Positive tests are located in multiple directories based on the purpose and the backend/tool being tested:

      • p4c/testdata/p4_16_samples: General positive tests for the P4_16 language.
      • p4c/testdata/p4_16_samples_outputs: Expected outputs for these tests.
      • Other directories may contain tests for specific backends or scenarios (e.g., BMv2 backend or PSA architecture).
      • Review the structure of the testdata directory to understand the organization.
    • Ensure the file is named descriptively to reflect its purpose (e.g., example_positive_test.p4).

  2. Expected Output Files:

    • Run the P4 compiler on the source file using the command:
      ./build/p4test path/to/source_file.p4
    • This generates expected output files.
    • Place the generated expected output files in the appropriate directory, such as p4c/testdata/p4_16_samples_outputs.
  3. Updating Expected Outputs:

    • If a change to the P4 compiler modifies the expected outputs, you can regenerate them by rerunning the tests with:
      ./build/p4test path/to/source_file.p4 --update-outputs
    • Replace the old expected output files with the newly generated ones.

Verifying the Test

  • Use the CI system or run the test locally to verify that it passes as expected.
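
Putting the steps above together, an end-to-end workflow might look like the following sketch. The file name and program contents are illustrative; first, a minimal P4_16 program such as the one below could be saved as testdata/p4_16_samples/example_positive_test.p4:

    #include <core.p4>

    control C();
    package top(C c);

    control MyC() {
        apply { }
    }

    top(MyC()) main;

Then check that the compiler accepts it without errors (assuming p4test was built in ./build):

    ./build/p4test testdata/p4_16_samples/example_positive_test.p4

and add the matching reference output files under testdata/p4_16_samples_outputs.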

2. Adding "Negative" Tests

Negative tests are those where the P4 compiler is expected to catch a syntax or semantic error in the input source file.

Steps for Adding Negative Tests

  1. Source File Location:

    • Place the source file in the p4c/testdata/p4_16_errors directory.
    • Name the file appropriately to describe the expected error (e.g., example_negative_test.p4).
  2. Expected Output Files:

    • Run the P4 compiler on the source file to generate the error output:
      ./build/p4test path/to/source_file.p4
    • Save the generated error output in the p4c/testdata/p4_16_errors_outputs directory with a matching file name.
  3. Verifying the Test:

    • Run the test locally or through CI to confirm that the compiler generates the expected error.
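
For instance, a minimal negative test could contain a deliberate semantic error (file name and contents are illustrative):

    #include <core.p4>

    control C();
    package top(C c);

    control MyC() {
        apply {
            undeclared_var = 1;  // error: uses an undeclared identifier
        }
    }

    top(MyC()) main;

Saved as testdata/p4_16_errors/example_negative_test.p4, the compiler would then be expected to report an error when run on it:

    ./build/p4test testdata/p4_16_errors/example_negative_test.p4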

3. Handling Different Backends and Tools

  • Different Backends: The P4C repository supports multiple backends like BMv2, PSA, etc. Each backend may require tests to be placed in specific directories and may generate backend-specific outputs.
  • Tools: Tools like p4testgen may introduce additional test workflows and configurations.
  • Tests with STF Files: Some P4 programs include .stf (Switch Test Framework) files, which define expected packet inputs and outputs. Ensure these are placed correctly alongside the P4 source files.

Refer to the repository documentation and existing test examples to understand these variations better.
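
For reference, .stf files are plain-text scripts of packet injections and expectations. A sketch of what one might contain is shown below; the exact command syntax should be verified against existing .stf files under testdata/p4_16_samples:

    # Inject a packet (hex bytes) on port 0 and expect the same bytes back on port 0.
    packet 0 00000000 00000000 0800
    expect 0 00000000 00000000 0800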


4. Marking Tests as "Expected to Fail" (Xfail)

An Xfail test is one that is expected to fail due to a known issue or limitation in the P4 compiler. The CI system handles these tests differently based on their type (positive or negative).

Marking a Test as Xfail

  1. Adding the Xfail Mark:

    • Modify the test metadata to indicate that it is expected to fail.
    • For example, you might add an XFAIL tag in the test description or a specific configuration file.
  2. Behavior of Xfail Tests:

    • For positive tests:
      • If the test passes (i.e., the compiler does not fail), the overall CI run will fail.
    • For negative tests:
      • If the compiler reports an error (i.e., the test behaves as a normal negative test would), the overall CI run will fail.
  3. Maintaining Consistency:

    • Note that not all CI systems may treat Xfail tests the same way. For now, document the current behavior in the repository's CI documentation.
    • Add a TODO comment or issue to make the handling of Xfail tests consistent across all CI workflows in the future.
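
Because the exact mechanism differs between test suites, a practical starting point is to search the tree for existing expected-failure annotations and copy their form (illustrative search command; in p4c these annotations generally live in the CMake test configuration):

    # Find existing Xfail annotations to use as a template.
    grep -rni "xfail" --include='CMakeLists.txt' --include='*.cmake' .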

5. Running Tests

Tests in the P4C repository can be executed in two main ways:

  1. Using Bash Scripts:

    • Each test script typically ends with .test.
    • Run them directly from the command line, e.g.,
      ./test_script_name.test
  2. Using CTest:

    • CTest provides an alternative way to execute tests, often integrated with the CI pipeline.
    • Example command:
      ctest -R test_name
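
For example, standard CTest options can be combined to select, parallelize, and debug tests (these are generic ctest flags, run from the build directory):

    ctest -R bmv2                   # run only tests whose names match "bmv2"
    ctest -j4 --output-on-failure   # run in parallel; print output of failing tests
    ctest -V -R some_test           # verbose output for a single test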

Differences Between Bash Scripts and CTest

  • Options Available: CTest may have additional or different options compared to Bash scripts. Check the documentation for details.
  • Behavior: Some CI environments may prefer one method over the other for consistency. Use CTest if you want to leverage its filtering and summary capabilities.
  • Recommendation: Document the preferred approach for specific use cases in the repository README or this guide.

Conclusion

This document serves as a starting point for adding new tests to the P4C repository. It is recommended to follow these guidelines to ensure consistency and reliability in the testing process. If you encounter issues or ambiguities, consult the repository maintainers or create a GitHub issue for clarification.

@Vineet1101

Hey @jafingerhut, have a look at this. If anything further needs to be added, I can add it, and then I will open the PR.

@jafingerhut
Collaborator Author

jafingerhut commented Jan 25, 2025

I think it is a good start from which to create a PR in the https://github.com/p4lang/p4c repository, but please note that by saying that, I fully expect that others (or perhaps even I) will still have more suggestions for changes before we approve it.

For example, if you want to write a test on which the BMv2 back end is run, with packets, I believe that the file name must match the pattern *-bmv2.p4. I do not recall which of the existing checked-in files controls those patterns, but it would be good to find out and mention it in documentation like this, for future test writers.
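
For instance, under that convention a new data-plane test would consist of a pair of files like the following (hypothetical names; the .stf file sits next to the .p4 file):

    testdata/p4_16_samples/my_feature-bmv2.p4
    testdata/p4_16_samples/my_feature-bmv2.stf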
