Thread: [pgadmin-hackers] pgAdmin4: Test result enhancement patch
Hi
On Wed, Mar 22, 2017 at 6:03 AM, Navnath Gadakh
<navnath.gadakh@enterprisedb.com> wrote:
> Hi Dave,
>
> Please find the patch for test result enhancement.
> What's in the patch:
> 1. The test result summary will be stored in a JSON file.
> 2. Removed some redundant code from regression/test_utils.py
> 3. To print test scenario names in failed and skipped test cases, I overrode
> the apply_scenario() function in regression/test_utils.py
When running with the patch:
1) The browser isn't closed, and the script never exits - it just sits
indefinitely at:
=====
Please check output in file:
/Users/dpage/git/pgadmin4/web/regression/regression.log
make: *** [check] Error 1
=====
without returning to a shell prompt. The browser exits when I hit Ctrl+C.
2) I get the following failures consistently:
IndexConstraintGetTestCase (Fetch primary Key constraint of table,
Fetch unique Key constraint of table)
IndexConstraintDeleteTestCase (Delete primary Key constraint of table,
Delete unique Key constraint of table)
IndexConstraintUpdateTestCase (Update primary Key constraint of table,
Update unique Key constraint of table)
runTest (pgadmin.browser.server_groups.servers.databases.schemas.tables.constraints.index_constraint.tests.test_index_constraint_delete.IndexConstraintDeleteTestCase)
Delete primary Key constraint of table ... Traceback (most recent call last):
  File "/Users/dpage/git/pgadmin4/web/pgadmin/browser/server_groups/servers/databases/schemas/tables/constraints/index_constraint/tests/utils.py", line 47, in create_index_constraint
    pg_cursor.execute(query)
ProgrammingError: syntax error at or near "constraint"
LINE 1: ...onstraint_a7d98 ADD CONSTRAINT Delete primary Key constraint...
                                                                       ^
FAIL
runTest (pgadmin.browser.server_groups.servers.databases.schemas.tables.constraints.index_constraint.tests.test_index_constraint_delete.IndexConstraintDeleteTestCase)
Delete unique Key constraint of table ... Traceback (most recent call last):
  File "/Users/dpage/git/pgadmin4/web/pgadmin/browser/server_groups/servers/databases/schemas/tables/constraints/index_constraint/tests/utils.py", line 47, in create_index_constraint
    pg_cursor.execute(query)
ProgrammingError: syntax error at or near "Key"
LINE 1: ...ndexconstraint_a7d98 ADD CONSTRAINT Delete unique Key constr...
                                                             ^
FAIL
runTest (pgadmin.browser.server_groups.servers.databases.schemas.tables.constraints.index_constraint.tests.test_index_constraint_get.IndexConstraintGetTestCase)
Fetch primary Key constraint of table ... Traceback (most recent call last):
  File "/Users/dpage/git/pgadmin4/web/pgadmin/browser/server_groups/servers/databases/schemas/tables/constraints/index_constraint/tests/utils.py", line 47, in create_index_constraint
    pg_cursor.execute(query)
ProgrammingError: syntax error at or near "Fetch"
LINE 1: ..._e7902.table_indexconstraint_569ed ADD CONSTRAINT Fetch prim...
                                                             ^
FAIL
runTest (pgadmin.browser.server_groups.servers.databases.schemas.tables.constraints.index_constraint.tests.test_index_constraint_get.IndexConstraintGetTestCase)
Fetch unique Key constraint of table ... Traceback (most recent call last):
  File "/Users/dpage/git/pgadmin4/web/pgadmin/browser/server_groups/servers/databases/schemas/tables/constraints/index_constraint/tests/utils.py", line 47, in create_index_constraint
    pg_cursor.execute(query)
ProgrammingError: syntax error at or near "Fetch"
LINE 1: ..._e7902.table_indexconstraint_569ed ADD CONSTRAINT Fetch uniq...
                                                             ^
FAIL
runTest (pgadmin.browser.server_groups.servers.databases.schemas.tables.constraints.index_constraint.tests.test_index_constraint_put.IndexConstraintUpdateTestCase)
Update primary Key constraint of table ... Traceback (most recent call last):
  File "/Users/dpage/git/pgadmin4/web/pgadmin/browser/server_groups/servers/databases/schemas/tables/constraints/index_constraint/tests/utils.py", line 47, in create_index_constraint
    pg_cursor.execute(query)
ProgrammingError: syntax error at or near "constraint"
LINE 1: ...onstraint_788bf ADD CONSTRAINT Update primary Key constraint...
                                                                       ^
FAIL
runTest (pgadmin.browser.server_groups.servers.databases.schemas.tables.constraints.index_constraint.tests.test_index_constraint_put.IndexConstraintUpdateTestCase)
Update unique Key constraint of table ... Traceback (most recent call last):
  File "/Users/dpage/git/pgadmin4/web/pgadmin/browser/server_groups/servers/databases/schemas/tables/constraints/index_constraint/tests/utils.py", line 47, in create_index_constraint
    pg_cursor.execute(query)
ProgrammingError: syntax error at or near "Key"
LINE 1: ...ndexconstraint_788bf ADD CONSTRAINT Update unique Key constr...
                                                             ^
FAIL
> I have also attached the sample JSON file with the test result. Tell me any
> modification if any.
I would suggest the following changes:
- Use "tests_failed", "tests_passed" and "tests_skipped" for the names.
- Add the error message/exception info etc. to the failed tests.
"IndexConstraintGetTestCase": [
{"Fetch primary Key constraint of table": "Fetch primary Key
constraint of table ... Traceback (most recent call last):
File \"/Users/dpage/git/pgadmin4/web/pgadmin/browser/server_groups/servers/databases/schemas/tables/constraints/index_constraint/tests/utils.py\",
line 47, in create_index_constraint
pg_cursor.execute(query)
ProgrammingError: syntax error at or near \"Fetch\"
LINE 1: ..._e7902.table_indexconstraint_569ed ADD CONSTRAINT Fetch prim..."}
]
- Add the reason tests were skipped to the skipped tests
e.g.
"SynonymGetTestCase": [
{"Fetch synonym Node URL": "Synonyms not supported on PostgreSQL"}
]
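Put together, the layout suggested above might serialize like this. This is only a sketch: the key names follow the suggestions in this mail, but the sample entries and the test_result.json filename are illustrative, not the patch's actual output.

```python
import json

# Illustrative summary using the suggested key names. In the real
# report, the failure text would be the captured traceback string.
summary = {
    "Regression - PG 9.5": {
        "tests_passed": [
            "TypesUpdateTestCase (Update type under schema node)"
        ],
        "tests_failed": {
            "IndexConstraintGetTestCase": {
                "Fetch primary Key constraint of table":
                    "ProgrammingError: syntax error at or near \"Fetch\""
            }
        },
        "tests_skipped": {
            "SynonymGetTestCase": {
                "Fetch synonym Node URL":
                    "Synonyms not supported on PostgreSQL"
            }
        },
    }
}

# Write the summary alongside regression.log.
with open("test_result.json", "w") as f:
    json.dump(summary, f, indent=4)
```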
--
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake
EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Hi
On Fri, Mar 24, 2017 at 3:13 PM, Navnath Gadakh
<navnath.gadakh@enterprisedb.com> wrote:
>
>> When running with the patch:
>>
>> 1) The browser isn't closed, and the script never exits - it just sits
>> indefinitely at:
>>
>> =====
>> Please check output in file:
>> /Users/dpage/git/pgadmin4/web/regression/regression.log
>>
>> make: *** [check] Error 1
>> =====
>>
>> without returning to a shell prompt. The browser exits when I hit Ctrl+C.
The above is still a problem. In fact, not only do I have to hit
Ctrl+C, but then the browser prompts me to check I really do want to
exit.
There's also another problem that just showed up. I got the following
failure on PG 9.4 (due to a known intermittent bug that Ashesh and
Tira@Pivotal are working on). Note how it's not reported in the
summary (or the JSON output):
runTest (pgadmin.feature_tests.connect_to_server_feature_test.ConnectsToServerFeatureTest) ... ERROR
runTest (pgadmin.feature_tests.table_ddl_feature_test.TableDdlFeatureTest) ... ok
runTest (pgadmin.utils.tests.test_versioned_template_loader.TestVersionedTemplateLoader) ... ok
======================================================================
ERROR: runTest (pgadmin.feature_tests.connect_to_server_feature_test.ConnectsToServerFeatureTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/dpage/git/pgadmin4/web/pgadmin/feature_tests/connect_to_server_feature_test.py", line 37, in runTest
    self._tables_node_expandable()
  File "/Users/dpage/git/pgadmin4/web/pgadmin/feature_tests/connect_to_server_feature_test.py", line 73, in _tables_node_expandable
    self.page.toggle_open_tree_item('test_table')
  File "/Users/dpage/git/pgadmin4/web/regression/feature_utils/pgadmin_page.py", line 68, in toggle_open_tree_item
    self.find_by_xpath("//*[@id='tree']//*[.='" + tree_item_text + "']/../*[@class='aciTreeButton']").click()
  File "/Users/dpage/git/pgadmin4/web/regression/feature_utils/pgadmin_page.py", line 71, in find_by_xpath
    return self.wait_for_element(lambda driver: driver.find_element_by_xpath(xpath))
  File "/Users/dpage/git/pgadmin4/web/regression/feature_utils/pgadmin_page.py", line 128, in wait_for_element
    return self._wait_for("element to exist", element_if_it_exists)
  File "/Users/dpage/git/pgadmin4/web/regression/feature_utils/pgadmin_page.py", line 162, in _wait_for
    "Timed out waiting for " + waiting_for_message)
  File "/Users/dpage/.virtualenvs/pgadmin4/lib/python2.7/site-packages/selenium/webdriver/support/wait.py", line 80, in until
    raise TimeoutException(message, screen, stacktrace)
TimeoutException: Message: Timed out waiting for element to exist
----------------------------------------------------------------------
Ran 153 tests in 60.698s
FAILED (errors=1, skipped=12)
...
...
...
======================================================================
Test Result Summary
======================================================================
Regression - EPAS 9.5:
153 tests passed
0 tests failed
0 tests skipped
Regression - PG 9.5:
141 tests passed
0 tests failed
12 tests skipped:
SynonymGetTestCase (Fetch synonym Node URL)
PackageDeleteTestCase (Fetch Package Node URL)
ResourceGroupsGetTestCase (Get resource groups)
SynonymDeleteTestCase (Fetch synonym Node URL)
ResourceGroupsAddTestCase (Add resource groups)
PackagePutTestCase (Fetch Package Node URL)
SynonymPutTestCase (Fetch synonym Node URL)
ResourceGroupsPutTestCase (Put resource groups)
ResourceGroupsDeleteTestCase (Delete resource groups)
SynonymAddTestCase (Default Node URL)
PackageAddTestCase (Fetch Package Node URL)
PackageGetTestCase (Fetch Package Node URL)
Regression - PG 9.4:
141 tests passed
0 tests failed
12 tests skipped:
SynonymGetTestCase (Fetch synonym Node URL)
PackageDeleteTestCase (Fetch Package Node URL)
ResourceGroupsGetTestCase (Get resource groups)
SynonymDeleteTestCase (Fetch synonym Node URL)
ResourceGroupsAddTestCase (Add resource groups)
PackagePutTestCase (Fetch Package Node URL)
SynonymPutTestCase (Fetch synonym Node URL)
ResourceGroupsPutTestCase (Put resource groups)
ResourceGroupsDeleteTestCase (Delete resource groups)
SynonymAddTestCase (Default Node URL)
PackageAddTestCase (Fetch Package Node URL)
PackageGetTestCase (Fetch Package Node URL)
======================================================================
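As an aside on the reporting gap above: a collector that reads unittest's own failures/errors/skipped lists, and falls back to the test id when no scenario_name was attached, cannot silently drop a feature-test ERROR from the summary. A hypothetical sketch of that idea (build_summary and the _Demo case are illustrative, not the patch's code):

```python
import unittest


def build_summary(result):
    """Build a JSON-ready summary from a finished unittest result.

    Falls back to the test id when no scenario_name attribute was
    attached (e.g. feature tests), so every failure/error/skip is
    reported even without scenario bookkeeping.
    """
    def label(test):
        return getattr(test, "scenario_name", test.id())

    summary = {"tests_failed": {}, "tests_skipped": {}}
    # result.failures and result.errors are lists of (test, traceback str).
    for test, tb_text in result.failures + result.errors:
        summary["tests_failed"].setdefault(
            type(test).__name__, {})[label(test)] = tb_text
    # result.skipped is a list of (test, reason).
    for test, reason in result.skipped:
        summary["tests_skipped"].setdefault(
            type(test).__name__, {})[label(test)] = reason
    return summary


class _Demo(unittest.TestCase):
    def test_boom(self):
        self.fail("boom")

    def test_skip(self):
        self.skipTest("not supported here")


result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(_Demo).run(result)
summary = build_summary(result)
```

With this fallback, the ConnectsToServerFeatureTest ERROR above would show up under tests_failed keyed by its test id rather than vanishing.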
--
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake
EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Hi
On Mon, Mar 27, 2017 at 12:15 AM, Navnath Gadakh
<navnath.gadakh@enterprisedb.com> wrote:
> Hello Dave,
>
> On Fri, Mar 24, 2017 at 9:10 PM, Dave Page <dpage@pgadmin.org> wrote:
>>
>> Hi
>>
>> On Fri, Mar 24, 2017 at 3:13 PM, Navnath Gadakh
>> <navnath.gadakh@enterprisedb.com> wrote:
>> >
>> >> When running with the patch:
>> >>
>> >> 1) The browser isn't closed, and the script never exits - it just sits
>> >> indefinitely at:
>> >>
>> >> =====
>> >> Please check output in file:
>> >> /Users/dpage/git/pgadmin4/web/regression/regression.log
>> >>
>> >> make: *** [check] Error 1
>> >> =====
>> >>
>> >> without returning to a shell prompt. The browser exits when I hit
>> >> Ctrl+C.
>>
>> The above is still a problem. In fact, not only do I have to hit
>> Ctrl+C, but then the browser prompts me to check I really do want to
>> exit.
>>
>> There's also another problem that just showed up. I got the following
>> failure on PG 9.4 (due to a known intermittent bug that Ashesh and
>> Tira@Pivotal are working on). Note how it's not reported in the
>> summary (or the JSON output):
>
>
> I found the issue, In the feature tests we need to add a scenario name for
> each test case. the purpose of this patch is to print the failed/skipped
> test class name with the scenario name like:
>
> 152 tests passed
>
> 1 test failed:
>
> LoginRoleGetTestCase (Check Role Node)
>
> 16 tests skipped:
>
> SynonymGetTestCase (Fetch synonym Node URL)
>
> But our in-built test framework does not provide that scenario name with
> failed/skipped test case that's why I override apply_scenario() function.
>
> def apply_scenario(scenario, test):
>
> name, parameters = scenario
>
> parameters["scenario_name"] = name
>
> While printing the result, I have checked the if 'scenario_name' in test as
> we need to print scenario name in test summary as well as in JSON file.
>
> I can do it without scenario name but for better understanding which test
> scenario is failed it's good to add a scenario name with each test case.
OK.
> See this is how test cases looks like while printing on console
>
> API:
>
> runTest
> (pgadmin.browser.server_groups.servers.databases.schemas.types.tests.test_types_put.TypesUpdateTestCase)
>
> Update type under schema node ... ok
>
> Feature tests:
>
> runTest
> (pgadmin.utils.tests.test_versioned_template_loader.TestVersionedTemplateLoader)
> ... ok
>
> No scenario name in feature tests.
>
OK, is that easy to fix while you're at it?
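For reference, the override quoted above amounts to recording the scenario name before the scenario's parameters are applied to a copy of the test. A minimal self-contained sketch (the clone-and-apply step here stands in for the testscenarios library's own apply_scenario behaviour; DummyTestCase and the sample scenario are illustrative):

```python
import copy
import unittest


def apply_scenario(scenario, test):
    """Return a copy of `test` with the scenario's parameters applied,
    recording the scenario name so result printers can report it."""
    name, parameters = scenario
    # The extra bookkeeping from the patch: stash the scenario name as
    # a parameter so it lands on the test instance with the rest.
    parameters["scenario_name"] = name
    new_test = copy.copy(test)
    for key, value in parameters.items():
        setattr(new_test, key, value)
    return new_test


class DummyTestCase(unittest.TestCase):
    def runTest(self):
        pass


test = apply_scenario(
    ("Fetch synonym Node URL", {"url": "/browser/synonym/"}),
    DummyTestCase())
# Result printers can now check: if hasattr(test, 'scenario_name'): ...
```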
--
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake
EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Wed, Mar 29, 2017 at 4:12 AM, Navnath Gadakh
<navnath.gadakh@enterprisedb.com> wrote:
> Hi,
>
> On Mon, Mar 27, 2017 at 5:37 PM, Dave Page <dpage@pgadmin.org> wrote:
>>
>> Hi
>>
>> On Mon, Mar 27, 2017 at 12:15 AM, Navnath Gadakh
>> <navnath.gadakh@enterprisedb.com> wrote:
>> > Hello Dave,
>> >
>> > On Fri, Mar 24, 2017 at 9:10 PM, Dave Page <dpage@pgadmin.org> wrote:
>> >>
>> >> Hi
>> >>
>> >> On Fri, Mar 24, 2017 at 3:13 PM, Navnath Gadakh
>> >> <navnath.gadakh@enterprisedb.com> wrote:
>> >> >
>> >> >> When running with the patch:
>> >> >>
>> >> >> 1) The browser isn't closed, and the script never exits - it just
>> >> >> sits
>> >> >> indefinitely at:
>> >> >>
>> >> >> =====
>> >> >> Please check output in file:
>> >> >> /Users/dpage/git/pgadmin4/web/regression/regression.log
>> >> >>
>> >> >> make: *** [check] Error 1
>> >> >> =====
>> >> >>
>> >> >> without returning to a shell prompt. The browser exits when I hit
>> >> >> Ctrl+C.
>> >>
>> >> The above is still a problem. In fact, not only do I have to hit
>> >> Ctrl+C, but then the browser prompts me to check I really do want to
>> >> exit.
>> >>
>> >> There's also another problem that just showed up. I got the following
>> >> failure on PG 9.4 (due to a known intermittent bug that Ashesh and
>> >> Tira@Pivotal are working on). Note how it's not reported in the
>> >> summary (or the JSON output):
>> >
>> >
>> > I found the issue, In the feature tests we need to add a scenario name
>> > for
>> > each test case. the purpose of this patch is to print the failed/skipped
>> > test class name with the scenario name like:
>> >
>> > 152 tests passed
>> >
>> > 1 test failed:
>> >
>> > LoginRoleGetTestCase (Check Role Node)
>> >
>> > 16 tests skipped:
>> >
>> > SynonymGetTestCase (Fetch synonym Node URL)
>> >
>> > But our in-built test framework does not provide that scenario name with
>> > failed/skipped test case that's why I override apply_scenario()
>> > function.
>> >
>> > def apply_scenario(scenario, test):
>> >
>> > name, parameters = scenario
>> >
>> > parameters["scenario_name"] = name
>> >
>> > While printing the result, I have checked the if 'scenario_name' in test
>> > as
>> > we need to print scenario name in test summary as well as in JSON file.
>> >
>> > I can do it without scenario name but for better understanding which
>> > test
>> > scenario is failed it's good to add a scenario name with each test case.
>>
>> OK.
>>
>> > See this is how test cases looks like while printing on console
>> >
>> > API:
>> >
>> > runTest
>> >
>> > (pgadmin.browser.server_groups.servers.databases.schemas.types.tests.test_types_put.TypesUpdateTestCase)
>> >
>> > Update type under schema node ... ok
>> >
>> > Feature tests:
>> >
>> > runTest
>> >
>> > (pgadmin.utils.tests.test_versioned_template_loader.TestVersionedTemplateLoader)
>> > ... ok
>> >
>> > No scenario name in feature tests.
>> >
>>
>> OK, is that easy to fix while you're at it?
>
>
> I have two solutions-
>
> 1. Need a little hack to skip scenario/test name if that does not exist, but
> that's not the best idea.
>
> 2. Owner of feature tests should add scenario/test name to each feature
> test. In the summary also we will know for which scenario test is failing or
> skipping.
> This is ideal and long term solution and I prefer it.
Agreed - and as there are only 2 feature tests, you should be able to fix them up pretty quickly :-p
Once code is in the repo, it's "ours" - meaning the entire community's.
I wouldn't expect us to ping all issues back to Pivotal - we're one
team on this.
Thanks!
--
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake
EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
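[Editor's note: the overridden apply_scenario() discussed above is only shown as a fragment in the thread. The following is a minimal, hedged sketch of the idea - tagging each generated test with its scenario name so the result summary can print it. The DemoTest class and the url parameter are illustrative assumptions, not code from the actual patch.]

```python
from copy import deepcopy
import unittest


def apply_scenario(scenario, test):
    # scenario is a (name, parameters) tuple, as in the testscenarios library.
    name, parameters = scenario
    parameters = dict(parameters)
    # Record the scenario name on the test so summaries can print it
    # alongside the test class name (e.g. "SynonymGetTestCase (Fetch ...)").
    parameters["scenario_name"] = name
    newtest = deepcopy(test)
    for key, value in parameters.items():
        setattr(newtest, key, value)
    return newtest


class DemoTest(unittest.TestCase):  # hypothetical stand-in test case
    def runTest(self):
        pass


t = apply_scenario(("Fetch synonym Node URL", {"url": "/browser/synonym/"}),
                   DemoTest())
print(t.scenario_name)  # available to the result reporter
```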
Attachment
======================================================================
ERROR: runTest (pgadmin.feature_tests.connect_to_server_feature_test.ConnectsToServerFeatureTest)
Test database connection which can be created from the UI
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/akshay/Development/pgadmin4/web/regression/feature_utils/base_feature_test.py", line 33, in setUp
self.page.reset_layout()
File "/Users/akshay/Development/pgadmin4/web/regression/feature_utils/pgadmin_page.py", line 33, in reset_layout
self.click_modal_ok()
File "/Users/akshay/Development/pgadmin4/web/regression/feature_utils/pgadmin_page.py", line 38, in click_modal_ok
self.click_element(self.find_by_xpath("//button[contains(.,'OK')]"))
File "/Users/akshay/Development/pgadmin4/web/regression/feature_utils/pgadmin_page.py", line 71, in find_by_xpath
return self.wait_for_element(lambda driver: driver.find_element_by_xpath(xpath))
File "/Users/akshay/Development/pgadmin4/web/regression/feature_utils/pgadmin_page.py", line 128, in wait_for_element
return self._wait_for("element to exist", element_if_it_exists)
File "/Users/akshay/Development/pgadmin4/web/regression/feature_utils/pgadmin_page.py", line 162, in _wait_for
"Timed out waiting for " + waiting_for_message)
File "/Users/akshay/Development/Workspace/lib/python3.5/site-packages/selenium/webdriver/support/wait.py", line 80, in until
raise TimeoutException(message, screen, stacktrace)
selenium.common.exceptions.TimeoutException: Message: Timed out waiting for element to exist
======================================================================
ERROR: runTest (pgadmin.feature_tests.table_ddl_feature_test.TableDdlFeatureTest)
Test scenarios for acceptance tests
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/akshay/Development/pgadmin4/web/regression/feature_utils/base_feature_test.py", line 33, in setUp
self.page.reset_layout()
File "/Users/akshay/Development/pgadmin4/web/regression/feature_utils/pgadmin_page.py", line 31, in reset_layout
self.click_element(self.find_by_partial_link_text("File"))
File "/Users/akshay/Development/pgadmin4/web/regression/feature_utils/pgadmin_page.py", line 90, in click_element
return self._wait_for("clicking the element not to throw an exception", click_succeeded)
File "/Users/akshay/Development/pgadmin4/web/regression/feature_utils/pgadmin_page.py", line 162, in _wait_for
"Timed out waiting for " + waiting_for_message)
File "/Users/akshay/Development/Workspace/lib/python3.5/site-packages/selenium/webdriver/support/wait.py", line 80, in until
raise TimeoutException(message, screen, stacktrace)
selenium.common.exceptions.TimeoutException: Message: Timed out waiting for clicking the element not to throw an exception
----------------------------------------------------------------------
Ran 153 tests in 45.493s
FAILED (errors=2, skipped=16)
======================================================================
Test Result Summary
======================================================================
Traceback (most recent call last):
File "runtests.py", line 354, in <module>
skipped_cases)
File "/Users/akshay/Development/pgadmin4/web/regression/python_test_utils/test_utils.py", line 442, in get_scenario_name
key, value = case_name_dict.items()[0]
TypeError: 'dict_items' object does not support indexing
Hi Dave,

Please find the revised patch for test result enhancement. What's in the patch:

1. The test result summary will be stored in a JSON file.
2. Removed some redundant code from regression/test_utils.py
3. Added the scenario names for feature tests.
4. To print test scenario names in failed and skipped test cases, I override the apply_scenario() function in regression/test_utils.py

I have also attached the sample JSON file with the test result as per your suggestions.

Thanks!

On Wed, Mar 29, 2017 at 6:03 PM, Dave Page <dpage@pgadmin.org> wrote:
On Wed, Mar 29, 2017 at 4:12 AM, Navnath Gadakh <navnath.gadakh@enterprisedb.com> wrote:
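[Editor's note: the sample JSON file mentioned above is attached to the original mail and not reproduced in the archive. As a rough illustration of item 1 only - the file name, key names, and layout here are assumptions, not the patch's actual schema - the end-of-run summary might be written like this:]

```python
import json

# Hypothetical shape of the per-run summary, mirroring the console output
# quoted in the thread (passed count, failed/skipped cases with scenario names).
test_result_summary = {
    "regression_test_run": {
        "tests_passed": 152,
        "tests_failed": [{"LoginRoleGetTestCase": "Check Role Node"}],
        "tests_skipped": [{"SynonymGetTestCase": "Fetch synonym Node URL"}],
    }
}

# Dump the summary so CI or other tooling can consume it after the run.
with open("test_result.json", "w") as f:
    json.dump(test_result_summary, f, indent=4)
```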
> Hi,
>
> On Mon, Mar 27, 2017 at 5:37 PM, Dave Page <dpage@pgadmin.org> wrote:
>>
>> Hi
>>
>> On Mon, Mar 27, 2017 at 12:15 AM, Navnath Gadakh
>> <navnath.gadakh@enterprisedb.com> wrote:
>> > Hello Dave,
>> >
>> > On Fri, Mar 24, 2017 at 9:10 PM, Dave Page <dpage@pgadmin.org> wrote:
>> >>
>> >> Hi
>> >>
>> >> On Fri, Mar 24, 2017 at 3:13 PM, Navnath Gadakh
>> >> <navnath.gadakh@enterprisedb.com> wrote:
>> >> >
>> >> >> When running with the patch:
>> >> >>
>> >> >> 1) The browser isn't closed, and the script never exits - it just
>> >> >> sits
>> >> >> indefinitely at:
>> >> >>
>> >> >> =====
>> >> >> Please check output in file:
>> >> >> /Users/dpage/git/pgadmin4/web/regression/regression.log
>> >> >>
>> >> >> make: *** [check] Error 1
>> >> >> =====
>> >> >>
>> >> >> without returning to a shell prompt. The browser exits when I hit
>> >> >> Ctrl+C.
>> >>
>> >> The above is still a problem. In fact, not only do I have to hit
>> >> Ctrl+C, but then the browser prompts me to check I really do want to
>> >> exit.
>> >>
>> >> There's also another problem that just showed up. I got the following
>> >> failure on PG 9.4 (due to a known intermittent bug that Ashesh and
>> >> Tira@Pivotal are working on). Note how it's not reported in the
>> >> summary (or the JSON output):
>> >
>> >
>> > I found the issue, In the feature tests we need to add a scenario name
>> > for
>> > each test case. the purpose of this patch is to print the failed/skipped
>> > test class name with the scenario name like:
>> >
>> > 152 tests passed
>> >
>> > 1 test failed:
>> >
>> > LoginRoleGetTestCase (Check Role Node)
>> >
>> > 16 tests skipped:
>> >
>> > SynonymGetTestCase (Fetch synonym Node URL)
>> >
>> > But our in-built test framework does not provide that scenario name with
>> > failed/skipped test case that's why I override apply_scenario()
>> > function.
>> >
>> > def apply_scenario(scenario, test):
>> >
>> > name, parameters = scenario
>> >
>> > parameters["scenario_name"] = name
>> >
>> > While printing the result, I have checked the if 'scenario_name' in test
>> > as
>> > we need to print scenario name in test summary as well as in JSON file.
>> >
>> > I can do it without scenario name but for better understanding which
>> > test
>> > scenario is failed it's good to add a scenario name with each test case.
>>
>> OK.
>>
>> > See this is how test cases looks like while printing on console
>> >
>> > API:
>> >
>> > runTest
>> >
>> > (pgadmin.browser.server_groups.servers.databases.schemas.types.tests.test_types_put.TypesUpdateTestCase)
>> >
>> > Update type under schema node ... ok
>> >
>> > Feature tests:
>> >
>> > runTest
>> >
>> > (pgadmin.utils.tests.test_versioned_template_loader.TestVersionedTemplateLoader)
>> > ... ok
>> >
>> > No scenario name in feature tests.
>> >
>>
>> OK, is that easy to fix while you're at it?
>
>
> I have two solutions-
>
> 1. Need a little hack to skip scenario/test name if that does not exist, but
> that's not the best idea.
>
> 2. Owner of feature tests should add scenario/test name to each feature
> test. In the summary also we will know for which scenario test is failing or
> skipping.
> This is ideal and long term solution and I prefer it.
Agreed - and as there are only 2 feature tests, you should be able to fix them up pretty quickly :-p
Once code is in the repo, it's "ours", meaning the entire communities.
I wouldn't expect us to ping all issues back to Pivotal - we're one
team on this.
Thanks!
--
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake
EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

--
Regards,
Navnath Gadakh
EnterpriseDB Corporation
The Enterprise PostgreSQL Company
--
Sent via pgadmin-hackers mailing list (pgadmin-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgadmin-hackers
Hi Navnath,

I have run the updated patch. It is working fine with Python 2.7, but I am getting the following error with Python 3.5; can you please look into it?

======================================================================
ERROR: runTest (pgadmin.feature_tests.connect_to_server_feature_test.ConnectsToServerFeatureTest)
Test database connection which can be created from the UI
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/akshay/Development/pgadmin4/web/regression/feature_utils/base_feature_test.py", line 33, in setUp
    self.page.reset_layout()
  File "/Users/akshay/Development/pgadmin4/web/regression/feature_utils/pgadmin_page.py", line 33, in reset_layout
    self.click_modal_ok()
  File "/Users/akshay/Development/pgadmin4/web/regression/feature_utils/pgadmin_page.py", line 38, in click_modal_ok
    self.click_element(self.find_by_xpath("//button[contains(.,'OK')]"))
  File "/Users/akshay/Development/pgadmin4/web/regression/feature_utils/pgadmin_page.py", line 71, in find_by_xpath
    return self.wait_for_element(lambda driver: driver.find_element_by_xpath(xpath))
  File "/Users/akshay/Development/pgadmin4/web/regression/feature_utils/pgadmin_page.py", line 128, in wait_for_element
    return self._wait_for("element to exist", element_if_it_exists)
  File "/Users/akshay/Development/pgadmin4/web/regression/feature_utils/pgadmin_page.py", line 162, in _wait_for
    "Timed out waiting for " + waiting_for_message)
  File "/Users/akshay/Development/Workspace/lib/python3.5/site-packages/selenium/webdriver/support/wait.py", line 80, in until
    raise TimeoutException(message, screen, stacktrace)
selenium.common.exceptions.TimeoutException: Message: Timed out waiting for element to exist

======================================================================
ERROR: runTest (pgadmin.feature_tests.table_ddl_feature_test.TableDdlFeatureTest)
Test scenarios for acceptance tests
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/akshay/Development/pgadmin4/web/regression/feature_utils/base_feature_test.py", line 33, in setUp
    self.page.reset_layout()
  File "/Users/akshay/Development/pgadmin4/web/regression/feature_utils/pgadmin_page.py", line 31, in reset_layout
    self.click_element(self.find_by_partial_link_text("File"))
  File "/Users/akshay/Development/pgadmin4/web/regression/feature_utils/pgadmin_page.py", line 90, in click_element
    return self._wait_for("clicking the element not to throw an exception", click_succeeded)
  File "/Users/akshay/Development/pgadmin4/web/regression/feature_utils/pgadmin_page.py", line 162, in _wait_for
    "Timed out waiting for " + waiting_for_message)
  File "/Users/akshay/Development/Workspace/lib/python3.5/site-packages/selenium/webdriver/support/wait.py", line 80, in until
    raise TimeoutException(message, screen, stacktrace)
selenium.common.exceptions.TimeoutException: Message: Timed out waiting for clicking the element not to throw an exception

----------------------------------------------------------------------
Ran 153 tests in 45.493s

FAILED (errors=2, skipped=16)

======================================================================
Test Result Summary
======================================================================
Traceback (most recent call last):
  File "runtests.py", line 354, in <module>
    skipped_cases)
  File "/Users/akshay/Development/pgadmin4/web/regression/python_test_utils/test_utils.py", line 442, in get_scenario_name
    key, value = case_name_dict.items()[0]
TypeError: 'dict_items' object does not support indexing
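[Editor's note: the TypeError above is the standard Python 2/3 difference - in Python 3, dict.items() returns a view object that does not support indexing. A version-neutral fix for the failing line in get_scenario_name() would be something like the following sketch (the contents of case_name_dict are assumed for illustration):]

```python
# In Python 3, dict.items() returns a view, so items()[0] raises TypeError.
case_name_dict = {"SynonymGetTestCase": "Fetch synonym Node URL"}

# key, value = case_name_dict.items()[0]   # Python 2 only; TypeError on Python 3
key, value = next(iter(case_name_dict.items()))  # works on both 2 and 3
print(key, value)
```

An equivalent alternative is `list(case_name_dict.items())[0]`, at the cost of materialising the whole list.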
On Thu, Mar 30, 2017 at 8:04 PM, Navnath Gadakh <navnath.gadakh@enterprisedb.com> wrote:

Hi Dave,

Please find the revised patch for test result enhancement. What's in the patch:

1. The test result summary will be stored in a JSON file.
2. Removed some redundant code from regression/test_utils.py
3. Added the scenario names for feature tests.
4. To print test scenario names in failed and skipped test cases, I override the apply_scenario() function in regression/test_utils.py

I have also attached the sample JSON file with the test result as per your suggestions.

Thanks!

On Wed, Mar 29, 2017 at 6:03 PM, Dave Page <dpage@pgadmin.org> wrote:
On Wed, Mar 29, 2017 at 4:12 AM, Navnath Gadakh <navnath.gadakh@enterprisedb.com> wrote:
> Hi,
>
> On Mon, Mar 27, 2017 at 5:37 PM, Dave Page <dpage@pgadmin.org> wrote:
>>
>> Hi
>>
>> On Mon, Mar 27, 2017 at 12:15 AM, Navnath Gadakh
>> <navnath.gadakh@enterprisedb.com> wrote:
>> > Hello Dave,
>> >
>> > On Fri, Mar 24, 2017 at 9:10 PM, Dave Page <dpage@pgadmin.org> wrote:
>> >>
>> >> Hi
>> >>
>> >> On Fri, Mar 24, 2017 at 3:13 PM, Navnath Gadakh
>> >> <navnath.gadakh@enterprisedb.com> wrote:
>> >> >
>> >> >> When running with the patch:
>> >> >>
>> >> >> 1) The browser isn't closed, and the script never exits - it just
>> >> >> sits
>> >> >> indefinitely at:
>> >> >>
>> >> >> =====
>> >> >> Please check output in file:
>> >> >> /Users/dpage/git/pgadmin4/web/regression/regression.log
>> >> >>
>> >> >> make: *** [check] Error 1
>> >> >> =====
>> >> >>
>> >> >> without returning to a shell prompt. The browser exits when I hit
>> >> >> Ctrl+C.
>> >>
>> >> The above is still a problem. In fact, not only do I have to hit
>> >> Ctrl+C, but then the browser prompts me to check I really do want to
>> >> exit.
>> >>
>> >> There's also another problem that just showed up. I got the following
>> >> failure on PG 9.4 (due to a known intermittent bug that Ashesh and
>> >> Tira@Pivotal are working on). Note how it's not reported in the
>> >> summary (or the JSON output):
>> >
>> >
>> > I found the issue, In the feature tests we need to add a scenario name
>> > for
>> > each test case. the purpose of this patch is to print the failed/skipped
>> > test class name with the scenario name like:
>> >
>> > 152 tests passed
>> >
>> > 1 test failed:
>> >
>> > LoginRoleGetTestCase (Check Role Node)
>> >
>> > 16 tests skipped:
>> >
>> > SynonymGetTestCase (Fetch synonym Node URL)
>> >
>> > But our in-built test framework does not provide that scenario name with
>> > failed/skipped test case that's why I override apply_scenario()
>> > function.
>> >
>> > def apply_scenario(scenario, test):
>> >
>> > name, parameters = scenario
>> >
>> > parameters["scenario_name"] = name
>> >
>> > While printing the result, I have checked the if 'scenario_name' in test
>> > as
>> > we need to print scenario name in test summary as well as in JSON file.
>> >
>> > I can do it without scenario name but for better understanding which
>> > test
>> > scenario is failed it's good to add a scenario name with each test case.
>>
>> OK.
>>
>> > See this is how test cases looks like while printing on console
>> >
>> > API:
>> >
>> > runTest
>> >
>> > (pgadmin.browser.server_groups.servers.databases.schemas.types.tests.test_types_put.TypesUpdateTestCase)
>> >
>> > Update type under schema node ... ok
>> >
>> > Feature tests:
>> >
>> > runTest
>> >
>> > (pgadmin.utils.tests.test_versioned_template_loader.TestVersionedTemplateLoader)
>> > ... ok
>> >
>> > No scenario name in feature tests.
>> >
>>
>> OK, is that easy to fix while you're at it?
>
>
> I have two solutions-
>
> 1. Need a little hack to skip scenario/test name if that does not exist, but
> that's not the best idea.
>
> 2. Owner of feature tests should add scenario/test name to each feature
> test. In the summary also we will know for which scenario test is failing or
> skipping.
> This is ideal and long term solution and I prefer it.
Agreed - and as there are only 2 feature tests, you should be able to fix them up pretty quickly :-p
Once code is in the repo, it's "ours", meaning the entire communities.
I wouldn't expect us to ping all issues back to Pivotal - we're one
team on this.
Thanks!
--
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake
EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

--
Regards,
Navnath Gadakh
EnterpriseDB Corporation
The Enterprise PostgreSQL Company
Sent via pgadmin-hackers mailing list (pgadmin-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgadmin-hackers

--
Akshay Joshi
Principal Software Engineer
Phone: +91 20-3058-9517
Mobile: +91 976-788-8246
The Enterprise PostgreSQL Company
Attachment
Hi Akshay,

Please find the revised patch for test result enhancement. What's in the patch:

1. The test result summary will be stored in a JSON file.
2. Removed some redundant code from regression/test_utils.py
3. Added the scenario names for feature tests.
4. To print test scenario names in failed and skipped test cases, I override the apply_scenario() function in regression/test_utils.py

On Fri, Mar 31, 2017 at 6:16 PM, Akshay Joshi <akshay.joshi@enterprisedb.com> wrote:

Hi Navnath,

I have run the updated patch. It is working fine with Python 2.7, but I am getting the following error with Python 3.5; can you please look into it?

======================================================================
ERROR: runTest (pgadmin.feature_tests.connect_to_server_feature_test.ConnectsToServerFeatureTest)
Test database connection which can be created from the UI
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/akshay/Development/pgadmin4/web/regression/feature_utils/base_feature_test.py", line 33, in setUp
    self.page.reset_layout()
  File "/Users/akshay/Development/pgadmin4/web/regression/feature_utils/pgadmin_page.py", line 33, in reset_layout
    self.click_modal_ok()
  File "/Users/akshay/Development/pgadmin4/web/regression/feature_utils/pgadmin_page.py", line 38, in click_modal_ok
    self.click_element(self.find_by_xpath("//button[contains(.,'OK')]"))
  File "/Users/akshay/Development/pgadmin4/web/regression/feature_utils/pgadmin_page.py", line 71, in find_by_xpath
    return self.wait_for_element(lambda driver: driver.find_element_by_xpath(xpath))
  File "/Users/akshay/Development/pgadmin4/web/regression/feature_utils/pgadmin_page.py", line 128, in wait_for_element
    return self._wait_for("element to exist", element_if_it_exists)
  File "/Users/akshay/Development/pgadmin4/web/regression/feature_utils/pgadmin_page.py", line 162, in _wait_for
    "Timed out waiting for " + waiting_for_message)
  File "/Users/akshay/Development/Workspace/lib/python3.5/site-packages/selenium/webdriver/support/wait.py", line 80, in until
    raise TimeoutException(message, screen, stacktrace)
selenium.common.exceptions.TimeoutException: Message: Timed out waiting for element to exist

======================================================================
ERROR: runTest (pgadmin.feature_tests.table_ddl_feature_test.TableDdlFeatureTest)
Test scenarios for acceptance tests
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/akshay/Development/pgadmin4/web/regression/feature_utils/base_feature_test.py", line 33, in setUp
    self.page.reset_layout()
  File "/Users/akshay/Development/pgadmin4/web/regression/feature_utils/pgadmin_page.py", line 31, in reset_layout
    self.click_element(self.find_by_partial_link_text("File"))
  File "/Users/akshay/Development/pgadmin4/web/regression/feature_utils/pgadmin_page.py", line 90, in click_element
    return self._wait_for("clicking the element not to throw an exception", click_succeeded)
  File "/Users/akshay/Development/pgadmin4/web/regression/feature_utils/pgadmin_page.py", line 162, in _wait_for
    "Timed out waiting for " + waiting_for_message)
  File "/Users/akshay/Development/Workspace/lib/python3.5/site-packages/selenium/webdriver/support/wait.py", line 80, in until
    raise TimeoutException(message, screen, stacktrace)
selenium.common.exceptions.TimeoutException: Message: Timed out waiting for clicking the element not to throw an exception

----------------------------------------------------------------------
Ran 153 tests in 45.493s

FAILED (errors=2, skipped=16)

======================================================================
Test Result Summary
======================================================================
Traceback (most recent call last):
  File "runtests.py", line 354, in <module>
    skipped_cases)
  File "/Users/akshay/Development/pgadmin4/web/regression/python_test_utils/test_utils.py", line 442, in get_scenario_name
    key, value = case_name_dict.items()[0]
TypeError: 'dict_items' object does not support indexing
Resolved. Thanks!

On Thu, Mar 30, 2017 at 8:04 PM, Navnath Gadakh <navnath.gadakh@enterprisedb.com> wrote:

Hi Dave,

Please find the revised patch for test result enhancement. What's in the patch:

1. The test result summary will be stored in a JSON file.
2. Removed some redundant code from regression/test_utils.py
3. Added the scenario names for feature tests.
4. To print test scenario names in failed and skipped test cases, I override the apply_scenario() function in regression/test_utils.py

I have also attached the sample JSON file with the test result as per your suggestions.

Thanks!

On Wed, Mar 29, 2017 at 6:03 PM, Dave Page <dpage@pgadmin.org> wrote:
On Wed, Mar 29, 2017 at 4:12 AM, Navnath Gadakh <navnath.gadakh@enterprisedb.com> wrote:
> Hi,
>
> On Mon, Mar 27, 2017 at 5:37 PM, Dave Page <dpage@pgadmin.org> wrote:
>>
>> Hi
>>
>> On Mon, Mar 27, 2017 at 12:15 AM, Navnath Gadakh
>> <navnath.gadakh@enterprisedb.com> wrote:
>> > Hello Dave,
>> >
>> > On Fri, Mar 24, 2017 at 9:10 PM, Dave Page <dpage@pgadmin.org> wrote:
>> >>
>> >> Hi
>> >>
>> >> On Fri, Mar 24, 2017 at 3:13 PM, Navnath Gadakh
>> >> <navnath.gadakh@enterprisedb.com> wrote:
>> >> >
>> >> >> When running with the patch:
>> >> >>
>> >> >> 1) The browser isn't closed, and the script never exits - it just
>> >> >> sits
>> >> >> indefinitely at:
>> >> >>
>> >> >> =====
>> >> >> Please check output in file:
>> >> >> /Users/dpage/git/pgadmin4/web/regression/regression.log
>> >> >>
>> >> >> make: *** [check] Error 1
>> >> >> =====
>> >> >>
>> >> >> without returning to a shell prompt. The browser exits when I hit
>> >> >> Ctrl+C.
>> >>
>> >> The above is still a problem. In fact, not only do I have to hit
>> >> Ctrl+C, but then the browser prompts me to check I really do want to
>> >> exit.
>> >>
>> >> There's also another problem that just showed up. I got the following
>> >> failure on PG 9.4 (due to a known intermittent bug that Ashesh and
>> >> Tira@Pivotal are working on). Note how it's not reported in the
>> >> summary (or the JSON output):
>> >
>> >
>> > I found the issue, In the feature tests we need to add a scenario name
>> > for
>> > each test case. the purpose of this patch is to print the failed/skipped
>> > test class name with the scenario name like:
>> >
>> > 152 tests passed
>> >
>> > 1 test failed:
>> >
>> > LoginRoleGetTestCase (Check Role Node)
>> >
>> > 16 tests skipped:
>> >
>> > SynonymGetTestCase (Fetch synonym Node URL)
>> >
>> > But our in-built test framework does not provide that scenario name with
>> > failed/skipped test case that's why I override apply_scenario()
>> > function.
>> >
>> > def apply_scenario(scenario, test):
>> >
>> > name, parameters = scenario
>> >
>> > parameters["scenario_name"] = name
>> >
>> > While printing the result, I have checked the if 'scenario_name' in test
>> > as
>> > we need to print scenario name in test summary as well as in JSON file.
>> >
>> > I can do it without scenario name but for better understanding which
>> > test
>> > scenario is failed it's good to add a scenario name with each test case.
>>
>> OK.
>>
>> > See this is how test cases looks like while printing on console
>> >
>> > API:
>> >
>> > runTest
>> >
>> > (pgadmin.browser.server_groups.servers.databases.schemas.types.tests.test_types_put.TypesUpdateTestCase)
>> >
>> > Update type under schema node ... ok
>> >
>> > Feature tests:
>> >
>> > runTest
>> >
>> > (pgadmin.utils.tests.test_versioned_template_loader.TestVersionedTemplateLoader)
>> > ... ok
>> >
>> > No scenario name in feature tests.
>> >
>>
>> OK, is that easy to fix while you're at it?
>
>
> I have two solutions-
>
> 1. Need a little hack to skip scenario/test name if that does not exist, but
> that's not the best idea.
>
> 2. Owner of feature tests should add scenario/test name to each feature
> test. In the summary also we will know for which scenario test is failing or
> skipping.
> This is ideal and long term solution and I prefer it.
Agreed - and as there are only 2 feature tests, you should be able to fix them up pretty quickly :-p
Once code is in the repo, it's "ours", meaning the entire communities.
I wouldn't expect us to ping all issues back to Pivotal - we're one
team on this.
Thanks!
--
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake
EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

--
Regards,
Navnath Gadakh
EnterpriseDB Corporation
The Enterprise PostgreSQL Company
Sent via pgadmin-hackers mailing list (pgadmin-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgadmin-hackers

--
Akshay Joshi
Principal Software Engineer
Phone: +91 20-3058-9517
Mobile: +91 976-788-8246

--
Regards,
Navnath Gadakh
EnterpriseDB Corporation
The Enterprise PostgreSQL Company
Attachment
Hi Dave,Please find the revised patch for test result enhancement.What's in the patch:1. The test result summary will store in JSON file.2. Removed some redundant code from regression/test_utils.py3. Added the scenario names for feature tests.4. To print test scenario names in failed and skipped test cases, I override apply_scenario() function in regression/test_utils.pyOn Mon, Apr 3, 2017 at 12:32 PM, Navnath Gadakh <navnath.gadakh@enterprisedb.com> wrote: Hi Akshay,Please find the revised patch for test result enhancement.What's in the patch:1. The test result summary will store in JSON file.2. Removed some redundant code from regression/test_utils.py3. Added the scenario names for feature tests.4. To print test scenario names in failed and skipped test cases, I override apply_scenario() function in regression/test_utils.py On Fri, Mar 31, 2017 at 6:16 PM, Akshay Joshi <akshay.joshi@enterprisedb.com> wrote: Hi NavnathI have run the updated patch. It is working fine with Python 2.7 but I am facing following error with Python 3.5, can you please look into it:==============================
============================== ========== ERROR: runTest (pgadmin.feature_tests.connect
_to_server_feature_test.Connec tsToServerFeatureTest) Test database connection which can be created from the UI
------------------------------
------------------------------ ---------- Traceback (most recent call last):
File "/Users/akshay/Development/pga
dmin4/web/regression/feature_u tils/base_feature_test.py", line 33, in setUp self.page.reset_layout()
File "/Users/akshay/Development/pga
dmin4/web/regression/feature_u tils/pgadmin_page.py", line 33, in reset_layout self.click_modal_ok()
File "/Users/akshay/Development/pga
dmin4/web/regression/feature_u tils/pgadmin_page.py", line 38, in click_modal_ok self.click_element(self.find_b
y_xpath("//button[contains(.,' OK')]")) File "/Users/akshay/Development/pga
dmin4/web/regression/feature_u tils/pgadmin_page.py", line 71, in find_by_xpath return self.wait_for_element(lambda driver: driver.find_element_by_xpath(x
path)) File "/Users/akshay/Development/pga
dmin4/web/regression/feature_u tils/pgadmin_page.py", line 128, in wait_for_element return self._wait_for("element to exist", element_if_it_exists)
File "/Users/akshay/Development/pga
dmin4/web/regression/feature_u tils/pgadmin_page.py", line 162, in _wait_for "Timed out waiting for " + waiting_for_message)
File "/Users/akshay/Development/Wor
kspace/lib/python3.5/site-packages/selenium/webdriver/support/wait.py", line 80, in until
    raise TimeoutException(message, screen, stacktrace)
selenium.common.exceptions.TimeoutException: Message: Timed out waiting for element to exist

======================================================================
ERROR: runTest (pgadmin.feature_tests.table_ddl_feature_test.TableDdlFeatureTest)
Test scenarios for acceptance tests
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/akshay/Development/pgadmin4/web/regression/feature_utils/base_feature_test.py", line 33, in setUp
    self.page.reset_layout()
  File "/Users/akshay/Development/pgadmin4/web/regression/feature_utils/pgadmin_page.py", line 31, in reset_layout
    self.click_element(self.find_by_partial_link_text("File"))
  File "/Users/akshay/Development/pgadmin4/web/regression/feature_utils/pgadmin_page.py", line 90, in click_element
    return self._wait_for("clicking the element not to throw an exception", click_succeeded)
  File "/Users/akshay/Development/pgadmin4/web/regression/feature_utils/pgadmin_page.py", line 162, in _wait_for
    "Timed out waiting for " + waiting_for_message)
  File "/Users/akshay/Development/Workspace/lib/python3.5/site-packages/selenium/webdriver/support/wait.py", line 80, in until
    raise TimeoutException(message, screen, stacktrace)
selenium.common.exceptions.TimeoutException: Message: Timed out waiting for clicking the element not to throw an exception

----------------------------------------------------------------------
Ran 153 tests in 45.493s

FAILED (errors=2, skipped=16)

======================================================================
Test Result Summary
======================================================================

Traceback (most recent call last):
  File "runtests.py", line 354, in <module>
    skipped_cases)
  File "/Users/akshay/Development/pgadmin4/web/regression/python_test_utils/test_utils.py", line 442, in get_scenario_name
    key, value = case_name_dict.items()[0]
TypeError: 'dict_items' object does not support indexing
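The TypeError above is a Python 2/3 incompatibility: dict.items() returns an indexable list in Python 2 but a non-indexable view object in Python 3. A minimal, version-agnostic sketch of the fix (the get_scenario_name name and one-entry dict come from the traceback; the function body here is an assumption, not the actual test_utils.py code):

```python
def get_scenario_name(case_name_dict):
    """Return the (test class name, scenario name) pair from a one-entry dict.

    In Python 2, dict.items() returns a list, so items()[0] works; in
    Python 3 it returns a view that does not support indexing. Wrapping it
    in list() (or using next(iter(...))) works on both versions.
    """
    key, value = list(case_name_dict.items())[0]
    return key, value


print(get_scenario_name({"LoginRoleGetTestCase": "Check Role Node"}))
```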
Resolved. Thanks!

On Thu, Mar 30, 2017 at 8:04 PM, Navnath Gadakh <navnath.gadakh@enterprisedb.com> wrote:

Hi Dave,

Please find the revised patch for test result enhancement. What's in the patch:

1. The test result summary will be stored in a JSON file.
2. Removed some redundant code from regression/test_utils.py
3. Added the scenario names for feature tests.
4. To print test scenario names in failed and skipped test cases, I override the apply_scenario() function in regression/test_utils.py

I have also attached a sample JSON file with the test results, as per your suggestions.

Thanks!

On Wed, Mar 29, 2017 at 6:03 PM, Dave Page <dpage@pgadmin.org> wrote:

On Wed, Mar 29, 2017 at 4:12 AM, Navnath Gadakh <navnath.gadakh@enterprisedb.com> wrote:
> Hi,
>
> On Mon, Mar 27, 2017 at 5:37 PM, Dave Page <dpage@pgadmin.org> wrote:
>>
>> Hi
>>
>> On Mon, Mar 27, 2017 at 12:15 AM, Navnath Gadakh
>> <navnath.gadakh@enterprisedb.com> wrote:
>> > Hello Dave,
>> >
>> > On Fri, Mar 24, 2017 at 9:10 PM, Dave Page <dpage@pgadmin.org> wrote:
>> >>
>> >> Hi
>> >>
>> >> On Fri, Mar 24, 2017 at 3:13 PM, Navnath Gadakh
>> >> <navnath.gadakh@enterprisedb.com> wrote:
>> >> >
>> >> >> When running with the patch:
>> >> >>
>> >> >> 1) The browser isn't closed, and the script never exits - it just
>> >> >> sits
>> >> >> indefinitely at:
>> >> >>
>> >> >> =====
>> >> >> Please check output in file:
>> >> >> /Users/dpage/git/pgadmin4/web/regression/regression.log
>> >> >>
>> >> >> make: *** [check] Error 1
>> >> >> =====
>> >> >>
>> >> >> without returning to a shell prompt. The browser exits when I hit
>> >> >> Ctrl+C.
>> >>
>> >> The above is still a problem. In fact, not only do I have to hit
>> >> Ctrl+C, but then the browser prompts me to check I really do want to
>> >> exit.
>> >>
>> >> There's also another problem that just showed up. I got the following
>> >> failure on PG 9.4 (due to a known intermittent bug that Ashesh and
>> >> Tira@Pivotal are working on). Note how it's not reported in the
>> >> summary (or the JSON output):
>> >
>> >
>> > I found the issue. In the feature tests we need to add a scenario name
>> > for each test case. The purpose of this patch is to print the failed/skipped
>> > test class name with the scenario name, like:
>> >
>> > 152 tests passed
>> >
>> > 1 test failed:
>> >
>> > LoginRoleGetTestCase (Check Role Node)
>> >
>> > 16 tests skipped:
>> >
>> > SynonymGetTestCase (Fetch synonym Node URL)
>> >
>> > But our built-in test framework does not provide that scenario name with
>> > failed/skipped test cases, which is why I override the apply_scenario()
>> > function.
>> >
>> > def apply_scenario(scenario, test):
>> >
>> > name, parameters = scenario
>> >
>> > parameters["scenario_name"] = name
>> >
>> > While printing the result, I check if 'scenario_name' is in the test,
>> > as we need to print the scenario name in the test summary as well as in
>> > the JSON file.
>> >
>> > I can do it without the scenario name, but for a better understanding of
>> > which test scenario failed, it's good to add a scenario name to each test
>> > case.
>>
>> OK.
>>
>> > See, this is how test cases look when printed on the console:
>> >
>> > API:
>> >
>> > runTest
>> >
>> > (pgadmin.browser.server_groups.servers.databases.schemas.types.tests.test_types_put.TypesUpdateTestCase)
>> >
>> > Update type under schema node ... ok
>> >
>> > Feature tests:
>> >
>> > runTest
>> >
>> > (pgadmin.utils.tests.test_versioned_template_loader.TestVersionedTemplateLoader)
>> > ... ok
>> >
>> > No scenario name in feature tests.
>> >
>>
>> OK, is that easy to fix while you're at it?
>
>
> I have two solutions:
>
> 1. A little hack to skip the scenario/test name if it does not exist, but
> that's not the best idea.
>
> 2. The owner of the feature tests should add a scenario/test name to each
> feature test. That way the summary will also show which scenario is failing
> or being skipped.
> This is the ideal, long-term solution and I prefer it.
Agreed - and as there are only 2 feature tests, you should be able to fix them up pretty quickly :-p
Once code is in the repo, it's "ours", meaning the entire community's.
I wouldn't expect us to ping all issues back to Pivotal - we're one
team on this.
Thanks!
--
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake
EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

--
Regards,
Navnath Gadakh
EnterpriseDB Corporation
The Enterprise PostgreSQL Company
Sent via pgadmin-hackers mailing list (pgadmin-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgadmin-hackers

--
Akshay Joshi
Principal Software Engineer
Phone: +91 20-3058-9517
Mobile: +91 976-788-8246

--
Regards,
Navnath Gadakh
EnterpriseDB Corporation
The Enterprise PostgreSQL Company
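The apply_scenario() override discussed above can be sketched as follows. This is a simplified illustration, not the actual regression/test_utils.py code: the real testscenarios-style helper clones the test case before applying parameters, which is skipped here, and the test class is a stand-in.

```python
import unittest


def apply_scenario(scenario, test):
    """Apply a (name, parameters) scenario to a test case, keeping the name.

    Storing the scenario name on the test instance lets the result summary
    report failures as e.g. "LoginRoleGetTestCase (Check Role Node)" rather
    than just the class name.
    """
    name, parameters = scenario
    parameters["scenario_name"] = name
    for key, value in parameters.items():
        setattr(test, key, value)
    return test


class LoginRoleGetTestCase(unittest.TestCase):
    """Stand-in for a pgAdmin API test case."""
    def runTest(self):
        pass


case = apply_scenario(("Check Role Node", {}), LoginRoleGetTestCase())
print("%s (%s)" % (case.__class__.__name__, case.scenario_name))
```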
Attachment
Re: [pgadmin-hackers] pgAdmin4: Test result enhancement patch
Hi Dave,

<Ignore previous email> Please find the revised patch for test result enhancement, which includes code to write passed test cases into the JSON file along with the skipped and failed ones.
Hello Hackers,

We just looked into this patch and have some questions about it.

- Where is the JSON file created? Do we need to call it with some special argument in order for it to be created?
- There is a print statement at line 275 of runtests.py; is it supposed to be there?
- The function name addSuccess does not match the styling of the code; it should be add_success.
Suggestions:

- The definition of the class_name variable (runtests.py, line 229) looks the same as in the if statements below, and could be extracted into a function to avoid repeating the same code.
- In the same function, when we are updating error/failure/skip test results, the code looks pretty similar and could also be extracted into a function.
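The refactoring suggested in the two points above might look roughly like this. All names here (get_test_info, add_result, DummyCase) are illustrative assumptions, not the actual runtests.py code:

```python
def get_test_info(test_case):
    """Return the class name and (optional) scenario name of a test case."""
    class_name = test_case.__class__.__name__
    scenario_name = getattr(test_case, "scenario_name", None)
    return class_name, scenario_name


def add_result(results, outcome, test_case):
    """Record a test case under one outcome ('failed', 'skipped', ...).

    Extracting this replaces the near-identical error/failure/skip blocks
    with a single helper.
    """
    class_name, scenario_name = get_test_info(test_case)
    if scenario_name is None:
        entry = class_name
    else:
        entry = "%s (%s)" % (class_name, scenario_name)
    results.setdefault(outcome, []).append(entry)


class DummyCase(object):
    """Stand-in test case with a scenario name attached."""
    scenario_name = "Check Role Node"


results = {}
add_result(results, "failed", DummyCase())
print(results)
```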
Thanks,
Joao & Sarah
Attachment
Hello Joao,

Thanks for the review and suggestions.

On Fri, Apr 7, 2017 at 9:08 PM, Joao Pedro De Almeida Pereira <jdealmeidapereira@pivotal.io> wrote:

> - Where is the JSON file created? Do we need to call it with some special argument in order for it to be created?

This file is created under the /regression/ directory (runtests.py, line 401). No need to pass any special argument.

> - There is a print statement at line 275 of runtests.py; is it supposed to be there?

Removed.

> - The function name addSuccess does not match the styling of the code; it should be add_success.

Done.

> - The definition of the class_name variable (runtests.py, line 229) could be extracted into a function to avoid repeating the same code.

Done.

> - In the same function, when we are updating error/failure/skip test results, the code looks pretty similar and could also be extracted into a function.

Done.

@Dave, please find the attached patch with the necessary code changes. I have also added missing scenario names to some test cases.
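For context, writing a result summary to a JSON file under regression/ (as described above) can be sketched like this. The schema, keys, and file name are illustrative assumptions, not the patch's exact format:

```python
import json
import os
import tempfile

# Illustrative summary structure, using examples quoted in the thread;
# the real patch defines its own schema.
test_result = {
    "total_tests": 153,
    "passed": {"TypesUpdateTestCase": "Update type under schema node"},
    "failed": {"LoginRoleGetTestCase": "Check Role Node"},
    "skipped": {"SynonymGetTestCase": "Fetch synonym Node URL"},
}

# A temp dir stands in for the real regression/ directory here.
out_path = os.path.join(tempfile.gettempdir(), "test_result.json")
with open(out_path, "w") as fp:
    json.dump(test_result, fp, indent=4)

# Read it back to confirm the round trip.
with open(out_path) as fp:
    print(json.load(fp)["failed"])
```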
Hi,

On Fri, Apr 7, 2017 at 8:01 PM, Navnath Gadakh <navnath.gadakh@enterprisedb.com> wrote:

> @Dave, please find the attached patch with the necessary code changes. I
> have also added missing scenario names to some test cases.

The passed test results are shown as null. They should either be removed (because they'll all be "Passed" or similar anyway), or set to "Passed" or "Pass".
Thanks.

--
Dave Page
VP, Chief Architect, Tools & Installers
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake
Attachment
Thanks, patch applied.

On Mon, Apr 10, 2017 at 2:22 PM, Navnath Gadakh <navnath.gadakh@enterprisedb.com> wrote:
> Hi Dave,
>
> Please find the revised patch.
>
> On Mon, Apr 10, 2017 at 1:43 PM, Dave Page <dave.page@enterprisedb.com> wrote:
>>
>> The passed test results are shown as null. They should either be removed
>> (because they'll all be "Passed" or similar anyway), or set to "Passed" or
>> "Pass".
>
> Ok. For consistency with the other test case types I have set it to
> "Passed".
>
> Also, did some code cleanup in the attached patch.
>
> Thanks.

--
Dave Page
VP, Chief Architect, Tools & Installers
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake