Support literal translations (map type !natural_language) #570
Comments
@niknetniko Does this make sense to you? We want to use explicit YAML types to make a distinction between different types of maps. For example, as a value of the
We would also like to introduce a new type of map for natural language translations (proposed type: `!natural_language`).
To my knowledge, TESTed actually has no heuristic at the moment. I think, if a YAML map is encountered with
I don't think there is currently support for specifying a return value in multiple languages. If added, I think an explicit tag is probably the way to go. I experimented a bit; for example, the following:

```yaml
return:
  "python": "{1, 2, 3}"
  "javascript": "new Set([1, 2, 3])"
```

is interpreted as a testcase where the return value must be a map with those two keys. I do think the explicit requirement makes sense if adding other map types. However, adding support specifically for language literals also requires adding support for actually comparing these values. This would probably involve implementing a new oracle that supports comparisons in a target programming language, similar to how the programming-language-specific oracles work (or at least this is how I would implement it).
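To make concrete why a key-based heuristic would be fragile here, the following sketch classifies a `return` mapping purely by its keys. The language sets and the `classify` function are hypothetical illustrations, not TESTed code: a genuine map return value whose keys happen to be programming-language names is indistinguishable from per-language literals.

```python
# Hypothetical sketch of a key-based heuristic and why it misfires.
PROGRAMMING_LANGUAGES = {"python", "javascript", "java", "kotlin"}  # assumed set
NATURAL_LANGUAGES = {"en", "nl", "fr", "de"}                        # assumed set

def classify(mapping: dict) -> str:
    """Guess the intent of a YAML map purely from its keys."""
    keys = set(mapping)
    if keys and keys <= PROGRAMMING_LANGUAGES:
        return "programming-language literals"
    if keys and keys <= NATURAL_LANGUAGES:
        return "natural-language translations"
    return "plain map value"

# An exercise whose expected return value really is this dictionary:
literal_return = {"python": "{1, 2, 3}", "javascript": "new Set([1, 2, 3])"}

# The heuristic cannot tell it apart from per-language literals,
# which is exactly why the thread argues for explicit tags.
print(classify(literal_return))                  # programming-language literals
print(classify({"en": "hello", "nl": "hallo"}))  # natural-language translations
```

An explicit tag removes this ambiguity entirely, since intent is declared rather than guessed from key names.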
Probably the only place where the TESTed DSL currently allows specifying programming-language-specific values is in

I would not introduce programming-language differentiation for

Good to hear that we don't use heuristics for map resolution in TESTed yet. This is good design and we have to make sure that we keep it this way.
@BrentBlanckaert: Is it clear to you that we don't want the
So
I'm a bit confused by seeing natural language pop up here. We had a meeting where we decided that we would keep natural languages outside of TESTed itself and first solve this in preprocessing.
As I understood it, the plan was to build support into TESTed for translations using multiple files. In a next step we would write a preprocessing step so that these two files could be written as a single file.
@bmesuere We learned from our analysis of 916 existing Python exercises with translations that only 53 out of these 916 exercises could take advantage of having separate test suites for each individual language (and for some of those this is not even the case, as the separation is only done for a few but not all units). Generating separate test suites requires only minor changes to TESTed. In the

We indeed need to extend this such that we can have separate test suites for each language supported by the exercise, and then select the appropriate test suite based on the natural language setting passed to TESTed.

This is a separate issue from this one, and indeed we need to take it up, so it's good that you remind us about it. However, I'm not supportive of converting all Python exercises (or generating new ones) to use separate test suites for each language. That would not favor the maintainability and ergonomics of supporting automated assessment. I made a separate issue for this: #571
This is not about ergonomics. The end-user experience should be the same. This is about separation of concerns. TESTed is already a rather complex project. Adding language support as a preprocessing step makes it a separately maintainable package, without increasing the complexity of the current TESTed project.
We could actually keep everything out of the hands of TESTed. Since Dodona is calling the judge, we could also have a separate

As an author of multilingual programming exercises, I'm definitely concerned about my experience as a user. Having to maintain separate test suites for each individual language is not a good user experience for me, as in my experience 99% of these test suites is shared content. The part that is translated is only a thin layer, so why not keep it that way? So yes, for me this is about keeping things simple. If there was a meeting where we decided that we would keep natural languages outside of TESTed itself, I wasn't in that meeting.
I would not be against this idea, actually, as it is probably better to call the preprocessor in Dodona on exercise change instead of on every judge run.
This was brought up by Bart and me in the meeting where you showed us the detailed analysis of your 916 exercises to convert. But since you have a different recollection of that meeting, I'll try to reiterate my takeaways from that meeting. I think the solution I took away would also resolve your needs. From your presentation:
Bart and I agreed that such a templating system could be relevant for advanced users. Other university courses could also benefit from auto translations in exercises. But we also had some hesitations:
This is when the preprocessing step was suggested as a solution. We didn't discuss implementation details, leaving this to you. But for clarity this is how I imagined the next steps:
I think this solution solves all of our requirements.
You were in that meeting Peter, but you are misinterpreting the implications. We're explicitly saying you should not maintain two separate files. The conclusion was that we don't want to complicate TESTed itself with natural language support, since that would distract from what TESTed does well and would add a maintenance burden to already complicated code.

The proposed solution was indeed to feed TESTed a specific test suite file based on the natural language. This would allow novice users to easily benefit from translated output without having to learn new syntax, and would require minimal changes to our code. For advanced users who don't want to maintain separate suite files, we proposed to use a template system that runs as a preprocessor. You basically write a single test suite file

Since you would be the primary user, we suggested that you propose a format and see what works and what doesn't from the experience of converting the old Python exercises. Initially, you could run the preprocessor yourself when committing, and if it becomes a more mature format, we could run it on Dodona when processing exercise updates or as part of test execution.
This issue specifically focuses on defining how mappings in a `return` statement in the DSL should be handled for different programming languages. Currently, TESTed relies on heuristics to determine whether a mapping is intended for translations or for programming languages. A potential solution is to explicitly specify this in the `return` statement within a test suite, as illustrated below:
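The explicit-tag approach can be sketched with PyYAML's custom-constructor mechanism. This is a hedged illustration, not TESTed's actual implementation: the `!natural_language` tag name comes from the issue title, while the `NaturalLanguageMap` wrapper and `DslLoader` are hypothetical names introduced here. An untagged map stays a plain dict (a literal map return value); a tagged map is constructed as a distinct type, so no heuristic is needed.

```python
import yaml  # PyYAML

class NaturalLanguageMap(dict):
    """Hypothetical wrapper: a mapping whose keys are language codes (en, nl, ...)."""

class DslLoader(yaml.SafeLoader):
    """Loader with the proposed !natural_language tag registered."""

def _construct_nl_map(loader, node):
    # Build the underlying mapping, then mark it with the wrapper type.
    return NaturalLanguageMap(loader.construct_mapping(node))

DslLoader.add_constructor("!natural_language", _construct_nl_map)

# Untagged: an ordinary map, interpreted as a literal return value.
plain = yaml.load('return:\n  python: "{1, 2, 3}"\n', Loader=DslLoader)

# Tagged: explicitly declared as natural-language translations.
tagged = yaml.load(
    'return: !natural_language\n  en: hello\n  nl: hallo\n',
    Loader=DslLoader,
)

print(type(plain["return"]).__name__)   # dict
print(type(tagged["return"]).__name__)  # NaturalLanguageMap
```

Downstream code can then dispatch on the value's type (e.g. pick the translation for the active language, or fall through to the normal map oracle) instead of guessing from key names.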