Re: [GSoC][Patch] Automatic Mode Detection V1 - Mailing list pgadmin-hackers

From Dave Page
Subject Re: [GSoC][Patch] Automatic Mode Detection V1
Msg-id CA+OCxox1FEypF6y46DkSy3gZ77j7CBRa90kacMBF+qULNCcfcw@mail.gmail.com
In response to Re: [GSoC][Patch] Automatic Mode Detection V1  (Yosry Muhammad <yosrym93@gmail.com>)
List pgadmin-hackers
Hi

On Sun, Jun 16, 2019 at 3:10 PM Yosry Muhammad <yosrym93@gmail.com> wrote:
This is a patch fixing a problem with the above patch that happened when:
- primary key columns are renamed.
- other columns are renamed to match the names of primary key columns.

This problem happened mainly because primary key columns are identified in the frontend by their names. This can be handled better in a future update, in which the backend identifies which columns are primary keys and sends that information to the frontend instead.
Renamed columns could also be handled better by making them read-only in a future update (currently they are editable, but updates fail because no column with the new name exists, and the user is shown an error message).

Seems like a fairly low-impact problem. Most people probably don't rename columns whilst they're editing data in the same table. That said, if it's not overly invasive or complex, I see no reason not to fix it.
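
If you do move the primary key identification into the backend as you suggest, keying it on attnum rather than the column name means a rename can't break the mapping. A rough sketch of the idea - purely illustrative, the function name and query are mine rather than anything from your patch, and it assumes a psycopg2-style connection:

# Illustrative sketch only - not code from the patch.
def primary_key_columns(conn, table_oid):
    """Return {attnum: attname} for the table's primary key columns.
    attnum survives a column rename; the name does not."""
    with conn.cursor() as cur:
        cur.execute("""
            SELECT at.attnum, at.attname
            FROM pg_attribute at
            JOIN pg_constraint con ON con.conrelid = at.attrelid
                                  AND at.attnum = ANY (con.conkey)
            WHERE con.contype = 'p' AND at.attrelid = %s
        """, (table_oid,))
        return dict(cur.fetchall())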
 

On Sat, Jun 15, 2019 at 8:48 AM Yosry Muhammad <yosrym93@gmail.com> wrote:
Dear all,

This is the first patch for my GSoC project, query tool automatic mode detection.

In this patch, the initial (basic) version of the project is implemented. In this version, query resultsets are updatable if and only if:
- All the columns belong to a single table
- No duplicate columns are present
- All the primary key columns of the table are present

Inserts, updates and deletes work automatically when the resultset is updatable.

Hmm, I wonder if I under-estimated the complexity of this task! There is more work to do of course, but it almost looks like you've done the hard part. Still, there are plenty of other related things that can be improved along the way.
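
Just to confirm we mean the same three rules, here's how I'd picture the check - purely illustrative, the function name is mine, and it assumes psycopg2 >= 2.8, where the cursor.description items expose table_oid and table_column:

# Illustrative sketch only - not code from the patch.
def resultset_is_updatable(description, pk_column_numbers):
    """description: cursor.description after executing the query.
    pk_column_numbers: attnums of the table's primary key columns."""
    # Every column must come from the same, single table (table_oid is 0
    # for expressions, function results, aggregates and so on).
    table_oids = {col.table_oid for col in description}
    if len(table_oids) != 1 or 0 in table_oids:
        return False
    # No column may appear more than once in the resultset.
    column_numbers = [col.table_column for col in description]
    if len(set(column_numbers)) != len(column_numbers):
        return False
    # All of the table's primary key columns must be present.
    return set(pk_column_numbers).issubset(column_numbers)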
 

The 'save' button in the query tool works automatically: it saves the changes in the resultset if the query is updatable, and saves the query to a file otherwise. The 'save as' button stays as is.

Yeah, I think we'll have to have a second Save button. The current one would save the query text, whilst the new one would save changes to the data.

Do you want me to ask our design guy for an icon?
 

I will work on improving and adding features to this version throughout the summer, according to what has the highest priority (supporting duplicate columns, or columns produced by functions or aggregations, as read-only columns in the results seems like a good next move).

I think handling read-only columns is most important, then duplicates.
 
Please give me your feedback on the changes I made, and any hints or comments that would improve my code in any respect.

Well, the first problem is that it doesn't actually work for me. This is what I get when running a simple "select * from pem.probe" (where pem.probe is a table with a primary key and a few columns - see below) on a PG11 system, with both of your patches applied:

2019-06-17 10:56:44,610: SQL flask.app: Execute (void) for server #5 - CONN:3976967 (Query-id: 2391511):
BEGIN;
2019-06-17 10:56:44,610: SQL flask.app: Execute (async) for server #5 - CONN:3976967 (Query-id: 5707996):
select * from pem.probe
2019-06-17 10:56:44,614: INFO werkzeug: 127.0.0.1 - - [17/Jun/2019 10:56:44] "POST /sqleditor/query_tool/start/3781524 HTTP/1.1" 200 -
2019-06-17 10:56:44,631: SQL flask.app: Polling result for (Query-id: 5707996)
2019-06-17 10:56:44,635: SQL flask.app: Polling result for (Query-id: 5707996)
2019-06-17 10:56:44,639: SQL flask.app: Execute (dict) for server #5 - CONN:3976967 (Query-id: 5976248):
SELECT at.attname, ty.typname
FROM pg_attribute at LEFT JOIN pg_type ty ON (ty.oid = at.atttypid)
WHERE attrelid=17491::oid AND attnum = ANY (
    (SELECT con.conkey FROM pg_class rel LEFT OUTER JOIN pg_constraint con ON con.conrelid=rel.oid
    AND con.contype='p' WHERE rel.relkind IN ('r','s','t', 'p') AND rel.oid = 17491::oid)::oid[])

2019-06-17 10:56:44,641: ERROR flask.app: 'attnum'
Traceback (most recent call last):
  File "/Users/dpage/.virtualenvs/pgadmin4/lib/python3.7/site-packages/flask/app.py", line 1813, in full_dispatch_request
    rv = self.dispatch_request()
  File "/Users/dpage/.virtualenvs/pgadmin4/lib/python3.7/site-packages/flask/app.py", line 1799, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "/Users/dpage/.virtualenvs/pgadmin4/lib/python3.7/site-packages/flask_login/utils.py", line 261, in decorated_view
    return func(*args, **kwargs)
  File "/Users/dpage/git/pgadmin4/web/pgadmin/tools/sqleditor/__init__.py", line 462, in poll
    trans_obj.check_for_updatable_resultset_and_primary_keys()
  File "/Users/dpage/git/pgadmin4/web/pgadmin/tools/sqleditor/command.py", line 904, in check_for_updatable_resultset_and_primary_keys
    is_query_resultset_updatable(conn, sql_path)
  File "/Users/dpage/git/pgadmin4/web/pgadmin/tools/sqleditor/utils/is_query_resultset_updatable.py", line 75, in is_query_resultset_updatable
    'column_number': row['attnum']
KeyError: 'attnum'
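
At a guess - based only on the logged query and traceback above, not verified against your code - the query returns attname/typname rows, but is_query_resultset_updatable.py line 75 then reads row['attnum'], hence the KeyError. If that's right, primary_keys.sql probably just needs to return attnum as well, i.e. something like this (the constant name is mine, and 17491 is the table OID from my log):

# Hypothetical fix, untested - the logged query with at.attnum added.
PRIMARY_KEYS_SQL = """
SELECT at.attnum, at.attname, ty.typname
FROM pg_attribute at LEFT JOIN pg_type ty ON (ty.oid = at.atttypid)
WHERE attrelid=17491::oid AND attnum = ANY (
    (SELECT con.conkey FROM pg_class rel LEFT OUTER JOIN pg_constraint con ON con.conrelid=rel.oid
    AND con.contype='p' WHERE rel.relkind IN ('r','s','t', 'p') AND rel.oid = 17491::oid)::oid[])
"""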


This is the table definition:

========
-- Table: pem.probe

-- DROP TABLE pem.probe;

CREATE TABLE pem.probe
(
    id integer NOT NULL DEFAULT nextval('pem.probe_id_seq'::regclass),
    display_name text COLLATE pg_catalog."default" NOT NULL,
    internal_name text COLLATE pg_catalog."default" NOT NULL,
    collection_method character(1) COLLATE pg_catalog."default" NOT NULL,
    target_type_id integer NOT NULL,
    applies_to_id integer NOT NULL,
    agent_capability text COLLATE pg_catalog."default",
    probe_code text COLLATE pg_catalog."default" NOT NULL,
    enabled_by_default boolean NOT NULL,
    default_execution_frequency integer NOT NULL,
    default_lifetime integer NOT NULL,
    any_server_version boolean NOT NULL,
    force_enabled boolean NOT NULL DEFAULT false,
    probe_key_list character varying[] COLLATE pg_catalog."default" NOT NULL DEFAULT '{}'::character varying[],
    discard_history boolean NOT NULL DEFAULT false,
    is_system_probe boolean NOT NULL DEFAULT true,
    deleted boolean NOT NULL DEFAULT false,
    deleted_time timestamp with time zone,
    platform text COLLATE pg_catalog."default",
    is_chartable boolean NOT NULL DEFAULT true,
    jstid integer,
    CONSTRAINT probe_pkey PRIMARY KEY (id),
    CONSTRAINT probe_applies_to_id_fkey FOREIGN KEY (applies_to_id)
        REFERENCES pem.probe_target_type (id) MATCH SIMPLE
        ON UPDATE NO ACTION
        ON DELETE NO ACTION,
    CONSTRAINT probe_purge_jobstep_id_fkey FOREIGN KEY (jstid)
        REFERENCES pem.jobstep (jstid) MATCH SIMPLE
        ON UPDATE NO ACTION
        ON DELETE NO ACTION,
    CONSTRAINT probe_target_type_id_fkey FOREIGN KEY (target_type_id)
        REFERENCES pem.probe_target_type (id) MATCH SIMPLE
        ON UPDATE NO ACTION
        ON DELETE NO ACTION,
    CONSTRAINT probe_collection_method CHECK (collection_method = ANY (ARRAY['b'::bpchar, 's'::bpchar, 'i'::bpchar, 'w'::bpchar])),
    CONSTRAINT probe_target_type_coherence CHECK (collection_method <> 's'::bpchar OR target_type_id <> 100)
)
WITH (
    OIDS = FALSE
)
TABLESPACE pg_default;

ALTER TABLE pem.probe
    OWNER to postgres;

COMMENT ON COLUMN pem.probe.default_lifetime
    IS 'Default lifetime value in days';

-- Trigger: create_delete_purge_probe_jobstep_trigger

-- DROP TRIGGER create_delete_purge_probe_jobstep_trigger ON pem.probe;

CREATE TRIGGER create_delete_purge_probe_jobstep_trigger
    AFTER INSERT OR DELETE
    ON pem.probe
    FOR EACH ROW
    EXECUTE PROCEDURE pem.create_delete_probe_purge_jobstep();

-- Trigger: pem_custom_probe_deleted

-- DROP TRIGGER pem_custom_probe_deleted ON pem.probe;

CREATE TRIGGER pem_custom_probe_deleted
    BEFORE UPDATE OF deleted
    ON pem.probe
    FOR EACH ROW
    EXECUTE PROCEDURE pem.custom_probe_deleted();

-- Trigger: probe_preupdate

-- DROP TRIGGER probe_preupdate ON pem.probe;

CREATE TRIGGER probe_preupdate
    BEFORE INSERT OR UPDATE
    ON pem.probe
    FOR EACH ROW
    EXECUTE PROCEDURE pem.probe_preupdate();

-- Trigger: update_purge_jobs_on_insert_probe

-- DROP TRIGGER update_purge_jobs_on_insert_probe ON pem.probe;

CREATE TRIGGER update_purge_jobs_on_insert_probe
    AFTER INSERT
    ON pem.probe
    FOR EACH STATEMENT
    EXECUTE PROCEDURE pem.run_job_to_update_probe_objects_combo();

-- Trigger: update_purge_jobs_on_update_probe

-- DROP TRIGGER update_purge_jobs_on_update_probe ON pem.probe;

CREATE TRIGGER update_purge_jobs_on_update_probe
    AFTER UPDATE OF default_lifetime
    ON pem.probe
    FOR EACH ROW
    EXECUTE PROCEDURE pem.run_job_to_update_probe_objects_combo();
========
 

I also have a couple of questions:
- Should the save button in the query tool work the way I am using it now, or should there be a new dedicated button for saving the query to a file?

See above :-)
 

- What documentation or unit tests should I write? Any guidelines here would be appreciated.

We're aiming to add unit tests to give as much coverage as possible, focussing on Jasmine, Python/API and then feature tests in that order (fast -> slow execution, which is important). So we probably want:

- One feature test to do basic end-to-end validation.
- Python/API tests to exercise is_query_resultset_updatable, save_changed_data and anything else that seems relevant.
- Jasmine tests to ensure buttons are enabled/disabled as they should be, and that primary key and updatability data is tracked properly (this may not be feasible, but I'd still like it to be investigated and justified if not).

We're also a day or two away from committing a new test suite for exercising CRUD operations and the resulting reverse engineered SQL; if we can utilise that to test primary_keys.sql, that'd be good.
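
To give a flavour of the cases the Python tests should cover, here's a purely illustrative sketch. The real is_query_resultset_updatable takes (conn, sql_path) per the traceback above and needs a live connection, so in practice these would probably be driven through the query tool API; the stand-in function below exists only to keep the example self-contained, and simply mirrors the three rules from your mail:

# Illustrative test cases only - the stand-in is not the real function.
import unittest
from collections import namedtuple

# Fake cursor.description entries: table OID and column number only.
Col = namedtuple('Col', 'table_oid table_column')

def stand_in_is_updatable(description, pk_column_numbers):
    oids = {c.table_oid for c in description}
    cols = [c.table_column for c in description]
    return (len(oids) == 1 and 0 not in oids
            and len(set(cols)) == len(cols)
            and set(pk_column_numbers).issubset(cols))

class TestUpdatableResultset(unittest.TestCase):
    def test_plain_select_is_updatable(self):
        desc = [Col(17491, 1), Col(17491, 2), Col(17491, 3)]
        self.assertTrue(stand_in_is_updatable(desc, {1}))

    def test_missing_primary_key_is_not(self):
        desc = [Col(17491, 2), Col(17491, 3)]
        self.assertFalse(stand_in_is_updatable(desc, {1}))

    def test_duplicate_column_is_not(self):
        desc = [Col(17491, 1), Col(17491, 1), Col(17491, 2)]
        self.assertFalse(stand_in_is_updatable(desc, {1}))

    def test_join_across_tables_is_not(self):
        desc = [Col(17491, 1), Col(17492, 1)]
        self.assertFalse(stand_in_is_updatable(desc, {1}))

if __name__ == '__main__':
    unittest.main()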

Once the in-place editing works, we'll need to rip out all the code related to the View/Edit data mode of the query tool. For example, there will be no need to have the Filter/Sort options any more as the user can edit the SQL directly (that one may be controversial - it's probably worth polling the users first). Of course, if they don't want it to be removed, we'll need to re-think how it works as then we'd have a dialogue that tries to edit arbitrary SQL strings.

When all that's done, the docs will need an overhaul to make them match the new design. That'll require new screenshots, and some non-trivial changes I suspect. You'll need to review what's there at the moment, and figure out what needs to be updated. It's possible we'll need to talk about structural changes as well, but we can do that nearer the time.
 
--
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
