
Add --skip-xfail command line argument to prevent xfail tests from being run
when in compare mode

cf #37
yohanboniface committed Feb 28, 2018
1 parent 7d68f79 commit 1643c64
Showing 2 changed files with 10 additions and 2 deletions.
4 changes: 3 additions & 1 deletion README.md
@@ -56,7 +56,9 @@ Then compare when running a new version

Note: in compare mode, only new failures will appear as "FAILED" and their
traceback will be rendered; already known failures will appear as "xfail" and
in yellow instead of red.
in yellow instead of red. If you do not want those known-to-fail tests to run at
all (in which case you will not know how many of them now pass), use the
`--skip-xfail` command line argument.


## Adding search cases
8 changes: 7 additions & 1 deletion conftest.py
@@ -26,7 +26,8 @@ def pytest_itemcollected(item):
if d != ".":
item.add_marker(d)
if item.nodeid in CONFIG.get('COMPARE_WITH', []):
item.add_marker('xfail')
item.add_marker(
pytest.mark.xfail(run=not CONFIG['SKIP_XFAIL']))


def pytest_addoption(parser):
@@ -63,13 +64,18 @@ def pytest_addoption(parser):
dest="compare_report",
help="Path where to load the report to compare with."
)
parser.addoption(
'--skip-xfail', action="store_true", dest="skip_xfail",
help="Do not run the tests known to fail when in compare mode."
)


def pytest_configure(config):
CONFIG['API_URL'] = config.getoption('--api-url')
CONFIG['MAX_RUN'] = config.getoption('--max-run')
CONFIG['LOOSE_COMPARE'] = config.getoption('--loose-compare')
CONFIG['GEOJSON'] = config.getoption('--geojson')
CONFIG['SKIP_XFAIL'] = config.getoption('--skip-xfail')
if config.getoption('--compare-report'):
with open(config.getoption('--compare-report')) as f:
CONFIG['COMPARE_WITH'] = []
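The conftest hunk above is the heart of the change: when a test's nodeid appears in the compare report, it gets an `xfail` marker whose `run` parameter is the negation of `SKIP_XFAIL` (pytest executes an xfail-marked test only when `run=True`). A minimal plain-Python model of that decision, with the marker represented as a dict rather than a real `pytest.mark` object, and a hypothetical nodeid for illustration:

```python
def xfail_marker(nodeid, compare_with, skip_xfail):
    """Model of the pytest_itemcollected logic in the diff above.

    Known failures (nodeids present in the compare report) get an
    xfail marker; run=False means pytest reports the test as xfail
    without executing its body at all.
    """
    if nodeid in compare_with:
        return {'marker': 'xfail', 'run': not skip_xfail}
    return None  # not a known failure: no marker, test runs normally

known = ['world/france/test_search[paris]']  # hypothetical nodeid
print(xfail_marker('world/france/test_search[paris]', known, True))
# {'marker': 'xfail', 'run': False}  -> reported as xfail, never executed
print(xfail_marker('world/france/test_search[paris]', known, False))
# {'marker': 'xfail', 'run': True}   -> executed, failure reported as xfail
print(xfail_marker('world/germany/test_search[berlin]', known, True))
# None
```

With the real marker, `pytest.mark.xfail(run=not CONFIG['SKIP_XFAIL'])` carries the same boolean in its keyword arguments.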

4 comments on commit 1643c64

@karussell
Contributor


I cannot see that comparisons with this option are faster. It seems that either the queries are still executed, or building them takes most of the time.

@yohanboniface
Member Author


Pytest is a two-step process: collection, then run.
Collection time cannot be reduced by tagging tests, since filters are applied during collection itself. AFAIK, the only way to reduce collection is to reduce the dataset, for example by pointing to a single directory: `py.test world/germany/xxxx`; in that case only that target is loaded.
As far as I can tell, with this new behaviour, xfail tests are not run at all (I see nothing in my search engine server log), so I'm not sure what more can be done in this direction.
What I usually do is use a small subset (`world/france/nordpasdecalais` for example) when I'm actively coding and testing, and run a bigger set when I feel I'm in good shape.
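A toy cost model (purely illustrative, invented numbers) of why `--skip-xfail` may not shrink wall-clock time: collection visits every item regardless of markers, so when collection dominates, dropping the run phase of the known failures barely moves the total:

```python
def total_cost(n_items, n_xfail, collect_cost, run_cost, skip_xfail):
    """Toy model: collection always visits all items; skip_xfail
    removes only the run cost of the known failures."""
    collected = n_items * collect_cost
    executed = (n_items - (n_xfail if skip_xfail else 0)) * run_cost
    return collected + executed

# Slow collection (big dataset), fast server (small run cost):
# skipping xfail barely changes the total, matching the discussion above.
print(total_cost(1000, 400, 5, 1, False))  # 6000
print(total_cost(1000, 400, 5, 1, True))   # 5600
```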

@karussell
Contributor

@karussell karussell commented on 1643c64 Mar 10, 2018


Thanks!

As far as I can tell, with this new behaviour, xfail tests are not run at all (I see nothing in my search engine server log)

This is strange; I will have to log the searches and investigate, because the time it takes is not really reduced.

@yohanboniface
Member Author


This is because your server is sooooo fast! :)
