Hello Hackers,
We have good news: we just finished porting all the tests to pytest/grappa. Please see the attached diff with the multiple commits for this. Bear with us on this HUGE patch. We split it as follows:
0001 - Convert all the Unit Tests to pytest/grappa (patch previously sent, including some tests that were not running before)
0002 - Update the README, Makefile and package.json to use the new syntax
0003 - Convert all the Feature Tests to pytest/grappa
From now on we can launch a single test at a time if we want to, using pytest (this gist has some examples of how to use it: https://gist.github.com/kwmiebach/3fd49612ef7a52b5ce3a). Because we are using the pytest CLI, all of its options are available to us, and the community also has a ton of plugins if we ever need something else.
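For example, a single module, class or test can be selected with pytest's node-id syntax, or with -k keyword matching (the paths and names below are made up, just to show the syntax):

web $ python -m pytest pgadmin/utils/tests/test_something.py
web $ python -m pytest pgadmin/utils/tests/test_something.py::TestSomething::test_one_case
web $ python -m pytest -k "reset_password" pgadmin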
We updated all the READMEs with the following information as well.
As we discussed previously, in order to use the pytest binary directly you need to point PYTHONPATH to $PGADMIN_SRC/web. If you do not want to do that, you can run it like this instead:

web $ python -m pytest -q -s pgadmin
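For reference, the PYTHONPATH variant would look roughly like this (assuming $PGADMIN_SRC points at your pgAdmin checkout):

$ export PYTHONPATH=$PGADMIN_SRC/web
$ cd $PGADMIN_SRC/web
web $ pytest -q -s pgadmin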
We also changed the feature tests; they now live under regression/feature_tests, as they no longer need to be in the application path in order to run.
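That also means the feature tests can be selected on their own by pointing pytest at that directory, something like (same assumptions as above about PYTHONPATH or python -m pytest):

web $ python -m pytest regression/feature_tests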
To address the password issue we added the following to the Makefile:

python -m pytest --tb=short -q --json=regression/test_result.json --log-file-level=DEBUG --log-file=regression/regression.log pgadmin

The --tb=short option is the one responsible for trimming the traceback, making it smaller and hiding the variable printing.
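If you need more (or less) detail when running locally, --tb accepts other values too, for example:

web $ python -m pytest --tb=long pgadmin   (full tracebacks)
web $ python -m pytest --tb=line pgadmin   (one line per failure)
web $ python -m pytest --tb=no pgadmin     (no traceback output)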
Known issues:
- Python 2.7: the library we are using for assertions (Grappa) fails when trying to assert on strings. We created a PR against the library (https://github.com/grappa-py/grappa/pull/43); as soon as it gets merged all the tests should pass.
- We found out that the tests in pgadmin/browser/tests were not running because they needed SERVER_MODE to be True. We ported these tests, but 2 of them are still failing: TestResetPassword.test_success and TestChangePassword.test_success. They are currently failing for 2 different reasons, but when we went back to the master branch they were also failing there, so we kept them marked with xfail. That mark lets us run the tests without reporting the failure; you can read more on the topic at https://docs.pytest.org/en/3.6.0/skipping.html, and there is a small sketch of the marker just after this list. Unfortunately we were not able to correct the issues, so if someone could look into these tests it would be great.
- The Jenkins server needs a change. Because we now run tests for a single database at a time, the Jenkins flow needs to change. Our proposal is to isolate each database in its own task, something similar to the pipeline that we currently use internally:
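For anyone who has not used the xfail marker before, it looks roughly like this (a simplified sketch with a placeholder body, not the real pgAdmin test code):

import pytest


class TestChangePassword:
    # xfail: the test still runs, but a failure is reported as an
    # "expected failure" instead of failing the whole run.
    @pytest.mark.xfail(reason="also failing on master, needs investigation")
    def test_success(self):
        assert False  # placeholder, the real test lives in pgadmin/browser/tests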