
Fix CI and test related to deprecated warning #194

Merged
merged 8 commits into from
Jan 19, 2022

Conversation

eggplants
Contributor

@eggplants eggplants commented Jan 13, 2022

After #191 is merged, I will mark this as ready to merge.


@eggplants
Contributor Author

eggplants commented Jan 13, 2022

@eggplants eggplants marked this pull request as draft January 14, 2022 00:38
@eggplants eggplants mentioned this pull request Jan 14, 2022
Member

@nicholascar nicholascar left a comment


Fine by me so approving now so this can be merged when taken out of Draft.

python -m unittest -v |& tee log.txt
grep '^FAIL:' log.txt > fails.txt
# if any failure is raised outside `test.test_wrapper`:
if [ "$(grep 'test.test_wrapper' fails.txt | wc -l)" -ne "$(wc -l < fails.txt)" ]; then
Member


If we don't expect some tests to pass would it not be better to just remove them or skip them?

If the endpoints are unreliable, it may be better to investigate something like testcontainers (running the services in containers for the tests) or pytest-recording.
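The skip option from the first point could be done with the stdlib alone, since the suite currently runs under plain unittest. A minimal sketch (the class, test name, and environment variable here are hypothetical, not from this PR):

```python
import os
import unittest

# Hypothetical opt-in switch: endpoint-dependent tests run only when
# RUN_UNRELIABLE_TESTS=1 is set in the environment.
RUN_UNRELIABLE = os.environ.get("RUN_UNRELIABLE_TESTS") == "1"

class WrapperEndpointTest(unittest.TestCase):
    @unittest.skipUnless(RUN_UNRELIABLE, "remote SPARQL endpoint is unreliable")
    def test_remote_query(self):
        # The real test would contact the remote endpoint here.
        self.assertTrue(True)
```

Run under `python -m unittest`, the test is reported as skipped by default; CI could set the variable on a scheduled job that is allowed to fail.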

Member


Another option is to use pytest markers (e.g. @pytest.mark.unreliable): mark the tests that are unreliable, and then run:

pytest -m 'not unreliable'
# The trailing `|| :` ignores the exit code
pytest -m 'unreliable' || :
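Marking a test would look roughly like the sketch below (the test name is hypothetical; the marker would also need to be registered in pytest.ini or setup.cfg to avoid PytestUnknownMarkWarning):

```python
import pytest

# Hypothetical flaky test hitting a public SPARQL endpoint.
@pytest.mark.unreliable
def test_remote_endpoint_roundtrip():
    # The real test would query the remote endpoint here.
    assert True
```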

But regardless, I think we should have a decent answer as to why we run tests that we don't expect to pass, and see whether pytest has an appropriate pattern for dealing with them.

Contributor Author


OK, I understood.

Member


It's fine for now, we can consider it as part of a move to pytest later.

@@ -870,31 +871,31 @@ def _mime_vs_type(mime, requested_type):
         self.assertEqual(0, _mime_vs_type("text/n3", N3))
         self.assertEqual(0, _mime_vs_type("text/turtle", TURTLE))
         self.assertEqual(0, _mime_vs_type("application/turtle", TURTLE))
-        self.assertEqual(0, _mime_vs_type("application/json", JSON))  # Warning
+        self.assertEqual(0, _mime_vs_type("application/json", JSON))
Member


I think we should reconsider these asserts: the count seems very arbitrary, and I'm not sure it is what matters. Maybe we should instead check that the result is actually JSON, or something similar.
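A stricter check along those lines could be a small helper that verifies the payload parses as JSON, rather than counting warnings (a sketch; the helper name is hypothetical):

```python
import json

def is_valid_json(payload):
    """Return True if the payload parses as JSON."""
    try:
        json.loads(payload)
    except ValueError:  # json.JSONDecodeError subclasses ValueError
        return False
    return True

# e.g. a well-formed SPARQL JSON results header passes:
is_valid_json('{"head": {"vars": ["s"]}}')
```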

Member

@aucampia aucampia left a comment


I'm okay with this, but we should re-evaluate the tests we run if we don't expect them to pass.

@eggplants eggplants marked this pull request as ready for review January 16, 2022 20:11
@eggplants
Contributor Author

Please squash

@aucampia aucampia merged commit 3716009 into RDFLib:master Jan 19, 2022
@aucampia
Member

Merged, thanks again for all your contributions @eggplants
