Thread: Table modification

Table modification

From
Jean-Michel POURE
Date:
>Ahh, yes, I see what you mean. I'm not sure about the best way to fix that -
>it won't be fixed in pgAdmin anyway as that version is not being updated
>anymore - The new one doesn't use any views or functions.
>
>I'll write a notice about the problem and how to get round it and post it to
>wherever I can.
>
>Regards, Dave.

OK, I am going to take a close look at pgAdmin II to port the development
features of pgAdmin I.

Dave, could we please discuss the table modification feature again?
I understand we have three alternatives:

1) Implement table rebuilding in frmTable, at the pgAdmin II level.
The easy way, but pgSchema users will not benefit from it.
Can be done easily and quickly.

2) Implement partial table rebuilding at the pgSchema level.
frmTable would just have to call
svr.Databases(ctx.CurrentDB).Tables(objTable.Identifier).Columns.Remove($COLNAME)
and pgSchema would handle the rest. The problem is that we might need a
CREATE TABLE AS for each item we remove from a table (column, foreign
key, etc.). This works for small tables, not the large ones I have (see
the sketch after this list).

3) Implement full table rebuilding at the pgSchema level.
Rebuild the table within one transaction for all elements (columns,
foreign keys, etc.). frmTable would have to call
svr.Databases(ctx.CurrentDB).Tables(objTable.Identifier).Modify (.......).
Any ideas about how to implement it?
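
To make the cost concrete: since PostgreSQL has no DROP COLUMN yet,
removing a single item means rebuilding the whole table, roughly like
this (a sketch with made-up names; indexes, constraints and triggers
must be recreated as well):

CREATE TABLE mytable_tmp AS
    SELECT col_a, col_b FROM mytable;  -- every column except the dropped one
DROP TABLE mytable;
ALTER TABLE mytable_tmp RENAME TO mytable;

On my large tables, each such copy takes a very long time.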

Do I miss something? Dave, I need your help to understand things fully.

Best regards,
Jean-Michel POURE


Re: Table modification

From
Dave Page
Date:

> -----Original Message-----
> From: Jean-Michel POURE [mailto:jm.poure@freesurf.fr]
> Sent: 02 October 2001 09:41
> To: pgadmin-hackers@postgresql.org
> Subject: [pgadmin-hackers] Table modification
>
> Dave, could we please again discuss about the table
> modification feature? I understand we have three alternatives:
>
> 1) Implement table rebuilding in frmTable at pgAdmin II level
> Easy way, but pgSchema users will not benefit from it. Can be
> done easily and quickly.

No. Not acceptable. pgAdmin does not know about SQL. That's pgSchema's job.
If pgAdmin starts creating or modifying objects, then the pgSchema object
model will become out of sync and the revision control auto commit will fail
- yes, Revision Control is now in CVS :-) though only the logging/status
tracking at present. History Viewing/Rollback should be there soon.

> 2) Implement partial table rebuilding at pgSchema level
> frmTable would just have to
> svr.Databases(ctx.CurrentDB).Tables(objTable.Identifier).Columns.Remove($COLNAME).
> pgSchema would handle the rest. The problem is that we
> might need a
> CREATE TABLE AS for each item we
> remove in a table (column, foreign key, etc..). This works for small
> tables, not the large ones I have.

This is the correct way to do it, though I appreciate your problem with
large tables. Perhaps (for Table objects only) we should have a
Tables.DeferRebuild property. When set true, all rebuilds triggered by mods
of individual properties or collections will get queued up until an update
method is fired. That way, the update method can reduce all the required
rebuilds that are queued up into as small an operation as possible.
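
Roughly the idea, as a sketch with made-up column names: two separate
Columns.Remove calls would each trigger a full table copy, but with
DeferRebuild set they queue up and collapse into one rebuild:

-- Two queued removals (col_a and col_b), one physical rebuild:
CREATE TABLE mytable_tmp AS
    SELECT col_c, col_d FROM mytable;  -- all columns except the two removed
DROP TABLE mytable;
ALTER TABLE mytable_tmp RENAME TO mytable;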

What do you think?

> Do I miss something?

I don't think so. I would suggest that we both sleep on how to achieve the
above for a while - in the meantime look at the simpler mods like updating
functions/views/triggers.

BTW, any object modifications in pgSchema should be followed by a call to
Commit (see the existing comment properties), e.g.:

If Not SystemObject Then Commit rcUpdate, "Updated object comment."

Regards, Dave.

Re: Table modification

From
Jean-Michel POURE
Date:
>This is the correct way to do it, though I appreciate your problem with
>large tables. Perhaps (for Table objects only) we should have a
>Tables.DeferRebuild property. When set true, all rebuilds triggered by mods
>of individual properties or collections will get queued up until an update
>method is fired. That way, the update method can reduce all the required
>rebuilds that are queued up into as small an operation as possible.
>
>What do you think?
Agreed. Another question: how do we enable table reorganization (i.e.
changing the order of columns)?

> > Do I miss something?
>I don't think so. I would suggest that we both sleep on how to achieve the
>above for a while - in the meantime look at the more simple mods like
>updating functions/views/triggers.
Agreed. What are your plans as regards functions/views/triggers?
Can you confirm that creating objects in their OID order should work for
solving dependencies?

Cheers,
Jean-Michel




Re: Table modification

From
Dave Page
Date:

> -----Original Message-----
> From: Jean-Michel POURE [mailto:jm.poure@freesurf.fr]
> Sent: 02 October 2001 10:41
> To: pgadmin-hackers@postgresql.org
> Subject: Re: [pgadmin-hackers] Table modification
>
>
>
> >This is the correct way to do it, though I appreciate your
> problem with
> >large tables. Perhaps (for Table objects only) we should have a
> >Tables.DeferRebuild property. When set true, all rebuilds
> triggered by
> >mods of individual properties or collections will get queued
> up until
> >an update method is fired. That way, the update method can
> reduce all
> >the required rebuilds that are queued up into as small an
> operation as
> >possible.
> >
> >What do you think?
> Agreed. Another question: how do we enable table reorganization (i.e.
> change the order of columns)?

Ug. Don't know. I suppose that's a special case (like table/column rename)
where you can't do it any way other than with a special method.
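
I suppose such a method would again boil down to a rebuild that selects
the columns in their new order - a sketch, with made-up names:

CREATE TABLE mytable_tmp AS
    SELECT col_b, col_a FROM mytable;  -- columns in the new order
DROP TABLE mytable;
ALTER TABLE mytable_tmp RENAME TO mytable;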

> > > Do I miss something?
> >I don't think so. I would suggest that we both sleep on how
> to achieve
> >the above for a while - in the meantime look at the more simple mods
> >like updating functions/views/triggers.
> Agreed. What are your plans as regards
> functions/views/triggers?
>
Currently I'm working on Revision Control/PostgreSQL 7.2 related updates. I
think it would be fairly easy to implement function/view/trigger
modification *without* project rebuilding. This should be the first step I
think (if someone manually edits such an object they'll still have the same
dependency problems anyway, so they're no worse off!).

> Do you confirm creating objects in
> their OID order should work for solving
> dependencies?

As I said, that's how pg_dump does it afaict. The only case where it (and
pg_dump) fails that I've found so far is illustrated with:

1) Create table with a text column.
2) Create a function that returns a string with no arguments required.
3) Alter the column default to use that function.

The table reload will fail because the function has a higher OID and
therefore has not yet been created. pg_dump isn't clever enough to create
the table without the default and add it with an ALTER later.
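
Concretely (made-up names):

CREATE TABLE customers (code text);             -- created first: lower OID
CREATE FUNCTION next_code() RETURNS text
    AS 'SELECT ''C000''::text' LANGUAGE 'sql';  -- created later: higher OID
ALTER TABLE customers
    ALTER COLUMN code SET DEFAULT next_code();

-- A dump in OID order emits customers (with its default) before
-- next_code(), so the CREATE TABLE fails on reload because the function
-- doesn't exist yet.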

Interestingly, if the function is dependent on the table you can create a
circular dependency.... I don't know how we'd fix that, though I suspect it
might involve Revision Control.
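
For example (again with made-up names):

CREATE TABLE t (code text);
CREATE FUNCTION next_code() RETURNS text
    AS 'SELECT max(code) FROM t' LANGUAGE 'sql';  -- needs t to exist
ALTER TABLE t
    ALTER COLUMN code SET DEFAULT next_code();    -- t now needs next_code()

-- Dumped as self-contained CREATE statements, neither object can be
-- created before the other.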

Regards, Dave.

Re: Table modification

From
Jean-Michel POURE
Date:
>As I said, that's how pg_dump does it afaict. The only case where it (and
>pg_dump) fails that I've found so far is illustrated with

Again, I am still asking myself whether PostgreSQL schema objects should be
considered 'compiled' or 'interpreted' code.
- If it is 'compiled' code, then source code should be stored in separate
tables and compiled on the fly (as in any compiled language).
- Conversely, if it is 'interpreted' code, source code shall be read
directly from the database and must be 'safe'.

'safe' means dependencies shall not be based on OIDs (if an object is
dropped, something is broken).

The problem with pg_dump is that it is a backup tool. What about:
1) multi-user access -> we have to reload all dependent objects before
compiling to be sure we have the latest state.
2) if one function does not compile, all dependent objects are broken.

There have been many discussions about this on pg-hackers.
There is no answer to date unless there is 1) a real schema and 2) a real
dependency table.

I think we should rather go for storing all schema objects in development
tables.

PostgreSQL and pgAdmin II are professional tools. We cannot take the risk
of losing code when editing server-side objects. My company framework is
entirely based on PostgreSQL and PL/pgSQL. I think it is too risky to go
for 'interpreted' solutions. What if I lose code and something is broken?

The way most PL/pgSQL developers handle this problem is that they write
functions, views and triggers in a single text file full of DROP/CREATE
statements. This works for fewer than 10 objects. Everyone stores source
code in a separate place. The pgAdmin I solution is interesting because it
stores source code in the database itself. This is not a new concept. If we
try to mix source and compiled code, we are going in the wrong direction.

As usual, if you find the answer I will be glad about it.
Cheers,
Jean-Michel





Re: Table modification

From
Dave Page
Date:

> -----Original Message-----
> From: Jean-Michel POURE [mailto:jm.poure@freesurf.fr]
> Sent: 02 October 2001 11:35
> To: pgadmin-hackers@postgresql.org
> Subject: Re: [pgadmin-hackers] Table modification
>
>
>
> >As I said, that's how pg_dump does it afaict. The only case where it
> >(and
> >pg_dump) fails that I've found so far is illustrated with
>
> Again, I am still asking myself whether PostgreSQL schema
> objects should be
> considered as 'compiled' or 'interpreted' code.
> - If it is 'compiled' code, then source code should be stored
> in separate
> tables and compiled on the fly (as in any compiled language).
> - Conversely, if it is 'interpreted' code, source code
> shall be read
> directly from the database and be 'safe'.

The second option sounds correct. Many people won't want the advanced
features, and we need to be able to pick up changes from other tools like
psql.

That said, if you look at the new revision control code, that may provide
some of the answers (note that it's not all in CVS yet as I'm working on it
at the moment).

> 'safe' means dependencies shall not be based on OIDs (if an object is
> dropped, something is broken).
>
> The problem with pg_dump is that it is a backup tool. What about:
> 1) multi-user access -> we have to reload all dependent
> objects before
> compiling to be sure we have the latest state.
> 2) if one function does not compile, all dependent objects are broken.

Yes, but in the example I gave, there really is no way to dump/reload (or
recompile) the objects involved, and it's not an unreasonable scenario to
get into. I use pg_dump as a reference, because if anything can reverse
engineer the database correctly it will be that, and essentially that's what
we are also trying to do.

> There have been many discussions about this on pg-hackers.
> There is no answer to date unless there is 1) a real schema and
> 2) a real
> dependency table.
> I think we should rather go for storing all schema objects in
> development
> tables.
>
> PostgreSQL and pgAdmin II are professional tools. We cannot
> take the risk
> of losing code when editing server-side objects. My company
> framework is
> entirely based on PostgreSQL and PL/pgSQL. I think it is too
> risky to go
> for 'interpreted' solutions. What if I lose code and
> something is broken?

Then you'll need to completely rewrite pgSchema to work that way (which I'm
not about to do). For months pgSchema has been developed in an interpretive
way (which you were well aware of) - I'm not about to throw all those months
of work down the drain now. I think it is a little late to be saying that
this is not the way it should work.

> The way that most PL/pgSQL developers handle this problem is
> that they
> write functions, views and triggers in a single text file
> with DROP/CREATE
> stuff. This works for less than 10 objects. Everyone stores
> source code in
> a separate place. pgAdmin I solution is interesting because it stores
> source code in the database itself. This is not a new
> concept. If we try to
> mix source/compiled code, we are going in a wrong direction.

If pgAdmin II stored objects as source code then it would still need to
interpret object design by parsing SQL queries back into values and options.
This is significantly more difficult than interpreting the data from the
system catalogs.

The way it works presently is the most straightforward in my opinion. The
SQL reverse engineering can be tested using the Publishing Wizard to ensure
it's working correctly. An audit trail can be generated from the Revision
Control log.

Regards, Dave.

Re: Table modification

From
Jean-Michel POURE
Date:
>That said, if you look at the new revision control code, that may provide
>some of the answers (note that it's not all in CVS yet as I'm working on it
>at the moment).

Yes, I noticed you were working on the revision control code. Revision
control is a great tool for performing upgrades. I doubt it is a reliable
tool for rebuilding.

> > 'safe' means dependencies shall not be based on OIDs (if an object is
> > dropped, something is broken).
> > The problem with pg_dump is that it is a backup tool. What about:
> > 1) multi-user access -> we have to reload all dependent
> > objects before
> > compiling to be sure we have the latest state.
> > 2) if one function does not compile, all dependent objects are broken.
>
>Yes, but in the example I gave, there really is no way to dump/reload (or
>recompile) the objects involved, and it's not an unreasonable scenario to
>get into. I use pg_dump as a reference, because if anything can reverse
>engineer the database correctly it will be that, and essentially that's what
>we are also trying to do.

I do not agree. The one and only way to solve all circular/dependency
questions in PostgreSQL is to read/write information in development tables.

I would swear that even Oracle does not modify code directly in the
database schema.
Oracle has the same dependency problems, i.e. code is stored in separate
tables.

> > There have been many discussions about this on pg-hackers.
> > There is no answer to date unless there is 1) a real schema and
> > 2) a real
> > dependency table.
> > I think we should rather go for storing all schema objects in
> > development
> > tables.
> >
> > PostgreSQL and pgAdmin II are professional tools. We cannot
> > take the risk
> > of losing code when editing server-side objects. My company
> > framework is
> > entirely based on PostgreSQL and PL/pgSQL. I think it is too
> > risky to go
> > for 'interpreted' solutions. What if I lose code and
> > something is broken?
>
>Then you'll need to completely rewrite pgSchema to work that way (which I'm
>not about to do). For months pgSchema has been developed in an interpretive
>way (which you were well aware of) - I'm not about to throw all those months
>of work down the drain now. I think it is a little late to be saying that
>this is not the way it should work.

We have been doing it for a long time in pgAdmin I.
This is the way developers work when writing queries in text documents.

-> Object.CodeRepository = 'production' or 'development'.
-> Object.Move (fromRepository, TargetRepository) will move code and/or
compile it (like in pgAdmin I).

Objects would be Table, View, Function, Trigger, etc.
pgSchema does not need much modification to handle 'production'.
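
To make the idea concrete, the development table could be as simple as
this (a sketch only, all names hypothetical):

CREATE TABLE pgadmin_dev_objects (
    obj_name   text,   -- qualified object name
    obj_type   text,   -- 'TABLE', 'VIEW', 'FUNCTION', 'TRIGGER', ...
    repository text,   -- 'development' or 'production'
    source     text    -- the full CREATE statement, kept as plain text
);

-- Object.Move('development', 'production') would copy the row and
-- execute the stored source against the live schema.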

> > The way that most PL/pgSQL developers handle this problem is
> > that they
> > write functions, views and triggers in a single text file
> > with DROP/CREATE
> > stuff. This works for less than 10 objects. Everyone stores
> > source code in
> > a separate place. pgAdmin I solution is interesting because it stores
> > source code in the database itself. This is not a new
> > concept. If we try to
> > mix source/compiled code, we are going in a wrong direction.
>
>If pgAdmin II stored objects as source code then it would still need to
>interpret object design by parsing SQL queries back into values and options.
>This is significantly more difficult than interpreting the data from the
>system catalogs.

Not necessarily. If people want to access the development tables directly,
they can do it.
For example, if phpPgAdmin writes into the development tables, code will
still compile.

There is no need to parse SQL queries when using Object.Move (from -> to)
like in pgAdmin I.

>The way it works presently is the most straightforward in my opinion. The
>SQL reverse engineering can be tested using the Publishing Wizard to ensure
>it's working correctly. An audit trail can be generated from the Revision
>Control log.

I agree this is the most straightforward, but does it suit professional
needs?

As for me, I still stick to the pgAdmin I CVS version because my whole
business process is based on PL/pgSQL.
I only trust dependency queries because they are real, whereas a revision
log is only based on user input.

/Later, Jean-Michel




Re: Table modification

From
Dave Page
Date:

> -----Original Message-----
> From: Jean-Michel POURE [mailto:jm.poure@freesurf.fr]
> Sent: 02 October 2001 13:26
> To: pgadmin-hackers@postgresql.org
> Subject: Re: [pgadmin-hackers] Table modification
>
>
>
> >That said, if you look at the new revision control code, that may
> >provide some of the answers (note that it's not all in CVS
> yet as I'm
> >working on it at the moment).
>
> Yes, I noticed you were working on revision control code.
> Revision control is a great tool to perform upgrades. I doubt
> this is a reliable tool for
> rebuilding.

No, I didn't say it was. But it does provide some indication of when the
current object in the rcs doesn't match the real version of that object.

> I do not agree. The one and only way to solve all
> circular/dependency questions in PostgreSQL is to read/write
> information in development tables.

That will not solve a circular dependency.

> I would swear that even Oracle is not modifying code directly in the
> database schema.
> Oracle has got the same dependency problems. i.e. Code is stored in
> separate tables.
>
> >Then you'll need to completely rewrite pgSchema to work that
> way (which
> >I'm not about to do). For months pgSchema has been developed in an
> >interpretive way (which you were well aware of) - I'm not about to
> >throw all those months of work down the drain now. I think it is a
> >little late to be saying that this is not the way it should work.
>
> We have been doing it for a long time in pgAdmin I.
> This is the way developers work when writing queries in text
> documents.
>
> -> Object.CodeRepository = 'production' or 'development'.
> -> Object.Move (fromRepository, TargetRepository) will move code and/or
> compile it (like in pgAdmin I).
>
> Objects shall be Table, View, Function, Trigger, etc...
> pgSchema does not need much modification to handle 'production'.

Fine if you think it can be done. Propose a way of doing it. Bear in mind
that most people (myself included) don't work with such large databases and
prefer to work as pgAdmin currently does, so that should not be prevented
(in fact should probably be the default).

> > > The way that most PL/pgSQL developers handle this problem is that
> > > they write functions, views and triggers in a single text file
> > > with DROP/CREATE
> > > stuff. This works for less than 10 objects. Everyone stores
> > > source code in
> > > a separate place. pgAdmin I solution is interesting
> because it stores
> > > source code in the database itself. This is not a new
> > > concept. If we try to
> > > mix source/compiled code, we are going in a wrong direction.
> >
> >If pgAdmin II stored objects as source code then it would
> still need to
> >interpret object design by parsing SQL queries back into values and
> >options. This is significantly more difficult than interpreting the
> >data from the system catalogs.
>
> Not necessarily. If people want to access development tables
> directly, they
> can do it.
> For example, if phpPgAdmin writes in development tables, code
> will still
> compile.
>
> No need to parse SQL queries when using Object.Move (from->
> to) like in
> pgAdmin I.

OK, like I said above - what would you want to change/add to
pgSchema/pgAdmin? As a starting point, I would want pgAdmin to display as it
does now, but if the database is in dev mode, each object has an extra
property (like the Revision Control one that's now there) that indicates the
object's 'live' status. Other than that it should look pretty much the same.

> >The way it works presently is the most straightforward in
> my opinion.
> >The SQL reverse engineering can be tested using the
> Publishing Wizard
> >to ensure it's working correctly. An audit trail can be
> generated from
> >the Revision Control log.
>
> I agree this is the most straightforward, but does it suit
> professional needs?
>
> As for me, I still stick to pgAdmin I CVS version because my
> whole business
> process is based on PL/pgSQL.

pgAdmin I *will not* work with PostgreSQL 7.2. Bear that in mind if you look
to upgrade.

> I only trust dependency queries because they are real,
> whereas a revision
> log is only based on user input.

Where else would the input come from if not the user?

The logs are more reliable than trying to figure out what's dependent on
what. This is true because the logfiles represent the actual chronological
sequence of object development. Therefore, following this *must* create
dependencies in the correct order.


Another idea to consider: The Publishing Wizard. I believe this provides
roughly the same facilities but in a much safer environment:

When intending to use the Publishing Wizard, you build your PL/pgSQL code
in a development database. This is safer than building in development
tables because it uses the actual database to store items, so you will
instantly become aware of any problems building objects and, more
importantly, you have no developers with hair-trigger delete keys anywhere
near your production systems.

Once you are happy that your 'staging' database is correct, you create
another test database, into which you publish the staging database. Further
checking should prove that the test database is correct (if not then a
pgAdmin bug report would be appropriate).

Now that you've successfully completed a test publication of your new code,
you can publish to your production database(s).

By using a process like this, you:

A) develop on non-production systems without risking the live or production
systems (and therefore your or your shareholders' profits).
B) prove your database before going live.
C) can publish to many production systems.

This system has already saved one of my staff from having to re-key a
200-row table just this week (and it's only Tuesday lunchtime now)!

This is a different working methodology, but it does give the same results
with low risk. Thoughts or comments welcomed of course...

Regards, Dave.

Re: Table modification

From
Jean-Michel POURE
Date:
>pgAdmin I *will not* work with PostgreSQL 7.2. Bear that in mind if you look
>to upgrade.

I will not upgrade to PostgreSQL 7.2 before pgAdmin I development features
are ported, one way or another, to pgAdmin II.

>Where else would the input come from if not the user?
>
>The logs are more reliable than trying to figure out what's dependent on
>what. This is true because the logfiles represent the actual chronological
>sequence of object development. Therefore, following this *must* create
>dependencies in the correct order.

I don't agree. Logfiles do not always represent the actual chronological
sequence of object development.
Again, any application, i.e. phpPgAdmin, should be able to write into the
pgAdmin development tables.

Looking for dependencies in the source code is much more relevant and
consistent.
Then you compile the code and review the compilation errors.

>Another idea to consider: The Publishing Wizard. I believe this provides
>roughly the same facilities but in a much safer environment:
>
>When intending to use the publishing Wizard, you will build your PL/pgSQL
>code in a development database. This is safer than building in development
>tables because it uses the actual database to store items so you will
>instantly become aware of any problems building objects, and more
>importantly, you have no developers with hair-delete keys anywhere near your
>production systems.

No, it is not safer. If you run a DROP FUNCTION query in psql, you might
not notice it before publishing.
I will never believe that storing source code in two different places
(PostgreSQL and revision logs) is better than in one place: development
tables. This is a problem of consistency.

Using development tables, anything can be organized within one transaction.
In the long run, we could also write PL/pgSQL triggers to compile code
interactively!!!

It will never be the case with revision logs.

>Once you are happy that your 'staging' database is correct, you create
>another test database, into which you publish the staging database. Further
>checking should prove that the test database is correct (if not then a
>pgAdmin bug report would be appropriate).
>
>Now that you've successfully completed a test publication of your new code,
>you can publish to your production database(s).

I agree that a publishing wizard is needed.

>By using a process like this, you:
>
>A) develop on non-production systems without risking the live or production
>systems (and therefore your or your shareholders profit).
>B) prove your database before going live.
>C) can publish to many production systems.
>
>This system has already saved one of my staff from having to re-key a 200
>row table just this week (and it's only Tuesday lunchtime now)!
>
>This is a different working methodology, but it does give the same results
>with low risk. Thoughts or comments welcomed of course...

Well, we are discussing two different issues:
1) whether it is better to keep source code in development tables or to
retrieve source code from the logs in case of problems.
2) compilation in dependency order (= publishing, I think it is close).

We agree on 2) and disagree on 1).

Later/JMP






Re: Table modification

From
Dave Page
Date:

> -----Original Message-----
> From: Jean-Michel POURE [mailto:jm.poure@freesurf.fr]
> Sent: 02 October 2001 14:47
> To: pgadmin-hackers@postgresql.org
> Subject: Re: [pgadmin-hackers] Table modification
>
> >Another idea to consider: The Publishing Wizard. I believe this
> >provides roughly the same facilities but in a much safer environment:
> >
> >When intending to use the publishing Wizard, you will build your
> >PL/pgSQL code in a development database. This is safer than
> building in
> >development tables because it uses the actual database to
> store items
> >so you will instantly become aware of any problems building objects,
> >and more importantly, you have no developers with hair-trigger delete keys
> >anywhere near your production systems.
>
> No, it is not safer. If you run a DROP FUNCTION query in psql,
> you might not
> notice it before publishing.

??? Of course the system will notice! If the function has been dropped,
pgAdmin can't generate the SQL to create it on the target server!

Obviously it is possible to drop the function in psql between pgSchema
reading the database and the Wizard actually running, but you just need to
make sure your development procedures will prevent that from happening -
that's common sense though, you don't want anyone continuing development
whilst you are publishing.

> I will never believe that storing source code in two different places
> (PostgreSQL and revision logs)
> is better than in one place development tables. This is a problem of
> consistency.

Precisely my argument (which is why I'm now confused). PostgreSQL itself
should be the primary repository, as only it will truly validate any object,
i.e. if an object is invalid, PostgreSQL will throw an error.

The Revision logs are not used for any purpose other than an audit trail and
providing a mechanism for rolling back changes. I'm not proposing that we
use them for anything else; I was merely pointing out that the code that
tracks the status of an object's rcs entry (i.e. whether it is up-to-date,
not in the rcs, or modified since the last commit (by psql for example)) may
be of use.

> Using development tables, anything can be organized within
> one transaction. In the long run, we could also write
> PL/pgSQL triggers to compile code
> interactively!!!
>
> It will never be the case with revision logs.

Like I say, this is not what they are intended for. I think we are arguing
at cross purposes.

> >Once you are happy that your 'staging' database is correct,
> you create
> >another test database, into which you publish the staging database.
> >Further checking should prove that the test database is
> correct (if not
> >then a pgAdmin bug report would be appropriate).
> >
> >Now that you've successfully completed a test publication of
> your new
> >code, you can publish to your production database(s).
>
> I agree that a publishing wizard is needed.
>
> >By using a process like this, you:
> >
> >A) develop on non-production systems without risking the live or
> >production systems (and therefore your or your shareholders profit).
> >B) prove your database before going live.
> >C) can publish to many production systems.
> >
> >This system has already saved one of my staff from having to
> re-key a
> >200 row table just this week (and it's only Tuesday lunchtime now)!
> >
> >This is a different working methodology, but it does give the same
> >results with low risk. Thoughts or comments welcomed of course...
>
> Well, we are discussing two different issues:
> 1) keeping source code in development tables better <->
> retrieving source
> code from logs in case of problems.
> 2) compilation in dependency order (=publishing, I think it is close).
>
> We agree on 2) and disagree on 1).

I'll disagree with that ;-)

1) I don't want to retrieve source from logs. I want to analyse the database
and generate the source from what is known to work and compile correctly.
There is no better test for an object than to build it in PostgreSQL.

2) Yes, compilation must be in dependency order (which Publishing is). I
don't know how we could ever get over my circular dependency problem though.
That *would* require logs to figure out the sequence of queries required to
create the circular dependency in the first place. And yes, I do agree that
that is nasty...


A good heated debate does brighten up the day don't you think?!

Later, Dave.

Re: Table modification

From
Jean-Michel POURE
Date:
>1) I don't want to retrieve source from logs. I want to analyse the database
>and generate the source from what is known to work and compile correctly.
>There is no better test for an object than to build it in PostgreSQL.

When an object is broken, its source code disappears or cannot be retrieved
because PostgreSQL internals are based on OIDs.
Example: if you create a view based on a function and drop the function,
you will not be able to read the view's source code.
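
For instance (made-up names):

CREATE FUNCTION f() RETURNS int4 AS 'SELECT 1' LANGUAGE 'sql';
CREATE VIEW v AS SELECT f() AS x;
DROP FUNCTION f();

-- The rule behind v still references f by its OID, so the view can
-- neither be executed nor deparsed back into readable source code.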

When several developers are working on the same database at the same time,
you cannot guarantee consistency at the PostgreSQL level.
Developer1 will be working on function1, while developer2 will be working
on function2, which is based on function1. Developer circular dependency. Boom!

When an object is broken, with your system, you have to retrieve the
source code from the revision logs.
I don't know if we can speak of ACID rules, but ... this is not a
consistent and atomic choice.

>2) Yes, compilation must be in dependancy order (which Publishing is). I
>don't know how we could ever get over my circular dependency problem though.
>That *would* require logs to figure out the sequence of queries required to
>create the circular dependency in the first place. And yes, I do agree that
>that is nasty...
>A good heated debate does brighten up the day don't you think?!

Yes my friend. Indeed, we are discussing the (future) schema features of
PostgreSQL.
We need more input on the PostgreSQL core team's schema and dependency
projects ... or start our own project.

/Later, Jean-Michel

Re: Table modification

From
Dave Page
Date:

> -----Original Message-----
> From: Jean-Michel POURE [mailto:jm.poure@freesurf.fr]
> Sent: 02 October 2001 15:51
> To: pgadmin-hackers@postgresql.org
> Subject: Re: [pgadmin-hackers] Table modification
>
>
>
> >1) I don't want to retrieve source from logs. I want to analyse the
> >database and generate the source from what is known to work
> and compile
> >correctly. There is no better test for an object than to build it in
> >PostgreSQL.
>
> When an object is broken, its source code disappears or
> cannot be retrieved
> because PostgreSQL internals are based on OIDs.
> Example: if you create a view based on a function and drop
> the function,
> you will not be able to read the view source code.

Yes. So you use Revision Control's Rollback option (planned, but not yet
implemented) to undelete the object if it wasn't supposed to be dropped;
otherwise it doesn't matter because you obviously don't want to publish it
anyway.

> When several developers are working on the same database at
> the same time,
> you cannot guarantee consistency at PostgreSQL level.
> Developer1 will be working on function1, while developer2
> will be working
> on function2 based on function1. Developer circular dependency. Boom!

That is indeed the case, especially when working on a development table. If
working on a real (but staging, not production) database, then the PostgreSQL
backend will prevent the circular dependency occurring in most cases.

> When an object is broken, with your system, you have to
> retrieve source
> code from revision logs.
> I don't know if we can speak of ACID rules, but ... this is not a
> consistent and atomic choice.

If you or I break code in CVS (or M$ Visual SourceSafe), then we roll back
using a log. The fact that my system stores the SQL required to re-create
the object is neither here nor there; it's just the easiest way of
representing *any* object in a standard way.

I did consider storing copies of each attribute of each object of each type,
however the work involved in keeping that upgradeable through versions of
PostgreSQL would be huge. That's why the PostgreSQL upgrade procedure is
Dump/Reload rather than pg_upgrade or similar (I believe this was tried
once).

There are 2 key points here for my system to work:

1) The SQL generation must be correct. In reality (once past testing), most
errors already appear to occur in an obvious way - the SQL syntax is
normally screwed because of a double quote _in_ an object name, or there's
a missing space before a little-used qualifier (an example of the quoting
case follows these points)...

2) You _must_ have suitable procedures in place for publishing. i.e. all
developers commit and logout and you refresh pgSchema before publishing
(actually, I think I'll make pgSchema do that).
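
For example, the quoting case (hypothetical table name):

CREATE TABLE "customer ""special"" list" (id int4);

-- The reverse-engineered SQL must double every embedded quote; emitting
-- CREATE TABLE "customer "special" list" ... instead is a syntax error.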


> >2) Yes, compilation must be in dependency order (which
> Publishing is).
> >I don't know how we could ever get over my circular
> dependency problem
> >though. That *would* require logs to figure out the sequence
> of queries
> >required to create the circular dependency in the first
> place. And yes,
> >I do agree that that is nasty... A good heated debate does
> brighten up
> >the day don't you think?!
>
> Yes my friend. Indeed, we are discussing about the (future)
> schema features
> of PostgreSQL.
> We need more input on PostgreSQL core team schema and
> dependency projects
> ... or start our own project.

Yes, it would be good to know where they are going. If there will be a
pg_dependencies in 7.3 for example, it might be worth holding off for that
on some tasks...

Later, Dave.

Re: Table modification

From
Jean-Michel POURE
Date:
>Yes. So you use Revision Control's Rollback option (planned, but not yet
>implemented) to undelete the object if it wasn't supposed to be dropped,
>otherwise it doesn't matter because you obviously don't want to publish it
>anyway.

No, source code should never be 'rolled back'.
We have to distinguish the source (source code) from the target (compiled
code).

In case of a problem, we roll back the target, not the source.

> > When several developers are working on the same database at
> > the same time,
> > you cannot guarantee consistency at PostgreSQL level.
> > Developer1 will be working on function1, while developer2
> > will be working
> > on function2 based on function1. Developer circular dependency. Boom!
>
>That is indeed the case, especially when working on a development table. If
>working on a real (but staging not production) database, then the PostgreSQL
>backend will prevent the circular dependency occuring in most cases.

No, it will not prevent it, because in many cases unresolved dependencies
are necessary, especially when starting a project. You write that functionA
is based on functionB, and functionB does not exist yet.
That is normal; this is the development process. Someone else might work on
functionB the day after.

> > When an object is broken, with your system, you have to
> > retrieve source
> > code from revision logs.
> > I don't know if we can speak of ACID rules, but ... this is not a
> > consistent and atomic choice.
>
>If you or I break code in CVS (or M$ Visual SourceSafe), then we roll back
>using a log. The fact that my system stores the SQL required to re-create
>the object is neither here nor there, it's just the easiest way or
>representing *any* object in a standard way.

CVS cannot be compared to a programming language but to a backup system.

What we call compilation is a process that starts from source (even if the
source code is 'ljkgksdjgsdhglsgh') and finishes with an executable
(workable or not). The most important thing is to preserve source code,
even in case of power failure, and therefore:
- store source code in separate tables (because PostgreSQL screws up source
code with OIDs and custom modifications),
- use transactions to preserve ACID rules.

>I did consider storing copies of each attribute of each object of each type,
>however the work involved in keeping that upgradeable through versions of
>PostgreSQL would be huge. That's why the PostgreSQL upgrade procedure is
>Dump/Reload rather than pg_upgrade or similar (I believe this was tried
>once).
>
>There are 2 key points here for my system to work:
>
>1) The SQL generation must be correct. In reality (once past testing), most
>errors already appear to occur in an obvious way - the SQL syntax is
>normally screwed because of a a double quote _in_ an object name, or there's
>a missing space before a little used qualifier...

This is a huge problem.

>2) You _must_ have suitable procedures in place for publishing. i.e. all
>developers commit and logout and you refresh pgSchema before publishing
>(actually, I think I'll make pgSchema do that).

This is workable in a small team, not in a mixed pgAdmin / phpPgAdmin world.

> > >2) Yes, compilation must be in dependency order (which
> > Publishing is).
> > >I don't know how we could ever get over my circular
> > dependency problem
> > >though. That *would* require logs to figure out the sequence
> > of queries
> > >required to create the circular dependency in the first
> > place. And yes,
> > >I do agree that that is nasty... A good heated debate does
> > brighten up
> > >the day don't you think?!
> >
> > Yes my friend. Indeed, we are discussing about the (future)
> > schema features
> > of PostgreSQL.
> > We need more input on PostgreSQL core team schema and
> > dependency projects
> > ... or start our own project.
>
>Yes, it would be good to know where they are going. If there will be a
>pg_dependencies in 7.3 for example, it might be worth holding off for that
>on some tasks...

Me too.

I will be helping you this time. As you know, my business depends on
PostgreSQL, which has better UNICODE support in 7.2.
Our problem comes from the PostgreSQL code style, somewhere between the
'interpreted' and 'compiled' types.

Later/JMP


Re: Table modification

From
Dave Page
Date:

> -----Original Message-----
> From: Jean-Michel POURE [mailto:jm.poure@freesurf.fr]
> Sent: 02 October 2001 18:38
> To: pgadmin-hackers@postgresql.org
> Subject: Re: [pgadmin-hackers] Table modification
>
>
>
> >Yes. So you use Revision Control's Rollback option (planned,
> but not yet
> >implemented) to undelete the object if it wasn't supposed to be
> >dropped, otherwise it doesn't matter because you obviously
> don't want
> >to publish it anyway.
>
> No, source code should never be 'rolled back'.
> We have to distinguish source (source code) from target
> (compiled code).

If you screw up CVS, then I will roll back to the last good version of the
code. Your broken code may well remain as a previous version in CVS, but the
current version will once again be the working code. This may then be
re-compiled.

Switching this around...

If I break a PL/pgSQL function then you will roll back to the last good
version. My broken code may remain in Revision Control, but the current
version will be the working code. This may then be recompiled (in fact would
be because the current version is built when created).

> In case of problem, we roll back target, not source.

This is terminology. The net effect is that the old source becomes the
current version again, as does the old target, e.g.

V1 Is good
V2 Is bad
V3 Is a copy of V1

This way, we have the full audit trail of what has been done, and the good
code is the current. This is rebuilt and in theory the target becomes
identical to how it was with V1.


> > > When several developers are working on the same database
> at the same
> > > time, you cannot guarantee consistency at PostgreSQL level.
> > > Developer1 will be working on function1, while developer2
> > > will be working
> > > on function2 based on function1. Developer circular
> dependency. Boom!
> >
> >That is indeed the case, especially when working on a development
> >table. If working on a real (but staging not production)
> database, then
> >the PostgreSQL backend will prevent the circular dependency
> occurring in
> >most cases.
>
> No, it will not prevent it, because in many cases unresolved
> dependencies
> are necessary and needed.
> Especially when starting a project. You write that functionA
> is based on
> functionB, and functionB does not exist.
> It is normal, this is the development process. Someone else
> might work on
> functionB the day after.

OK, following some tests I find that PostgreSQL doesn't validate a PL/pgSQL
function when it builds it as I thought. I tend to use plain SQL functions
which are validated so the situation you describe above cannot occur. That
is why I figure that the best validation is to actually build each object.

>
> > > When an object is broken, with your system, you have to retrieve
> > > source code from revision logs.
> > > I don't know if we can speak of ACID rules, but ... this is not a
> > > consistent and atomic choice.
> >
> >If you or I break code in CVS (or M$ Visual Sourcesafe), then we
> >rollback using a log. The fact that my system stores the SQL
> required
> >to re-create the object is neither here nor there, it's just the
> >easiest way of representing *any* object in a standard way.
>
> CVS cannot be compared to a programming language but to a
> backup system.

Which does exactly what I designed Revision Control for. Where's the
problem? I don't understand.

> What we call compilation is a process that starts from source
> (even if
> source code is 'ljkgksdjgsdhglsgh')
> and finishes with an executable (workable or not). The most
> important thing
> is to preserve source code,
> even in case of power failure and therefore:
> - store source code in separate tables (because PostgreSQL
> screws up source
> code with OIDs and custom modifications),

Herein lies the problem. Whilst functions can be coded in this way, other
objects (which are equally important) are not coded in one go. To store the
source of a table, you might have to store the original CREATE TABLE query,
then numerous ALTER TABLEs, SELECT INTOs etc. The table evolves over time,
and whilst publishing, the last thing you want to do is repeat all those
edits (which you yourself pointed out recently). Instead, we analyse the
finished result and generate one CREATE TABLE query that builds the table
correctly from the outset (see the sketch below).
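
Something like this (a made-up table):

-- How the table actually evolved during development:
CREATE TABLE orders (id int4);
ALTER TABLE orders ADD COLUMN placed_on date;
ALTER TABLE orders ADD COLUMN notes text;

-- What the Publishing Wizard generates instead, from the system catalogs:
CREATE TABLE orders (
    id        int4,
    placed_on date,
    notes     text
);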

I understand your reasoning wrt functions (following my discovery that they
aren't validated on build), but what do we do about it? The system that is
in place (and being developed) will work well for my and my staff's
purposes, and would also work well in the software houses I've previously
worked in (developing Financials/Payroll/Personnel/Project Management
software), but obviously we need to address your requirements as well.

How do we achieve both?

> - use transactions to preserve ACID rules.

ACID? Do you mean ensuring that either *everything* is built or *nothing*
is built?

> >I did consider storing copies of each attribute of each
> object of each
> >type, however the work involved in keeping that upgradeable through
> >versions of PostgreSQL would be huge. That's why the
> PostgreSQL upgrade
> >procedure is Dump/Reload rather than pg_upgrade or similar
> (I believe
> >this was tried once).
> >
> >There are 2 key points here for my system to work:
> >
> >1) The SQL generation must be correct. In reality (once past
> testing),
> >most errors already appear to occur in an obvious way - the
> SQL syntax
> >is normally screwed because of a double quote _in_ an
> object name, or
> >there's a missing space before a little-used qualifier...
>
> This is a huge problem.

It isn't a huge problem because before publishing to the live system, you do
a test run to highlight such problems. Also, most of these problems will
hopefully show up in beta releases (which is their purpose).

>
> >2) You _must_ have suitable procedures in place for publishing. i.e.
> >all developers commit and logout and you refresh pgSchema before
> >publishing (actually, I think I'll make pgSchema do that).
>
> This is workable in a small team, not in a mixed pgAdmin /
> phpPgAdmin world.

Why? When your QA people agree that all is well, you go into code freeze and
begin testing the publish process. If you're small enough that you do your
own QA, then the chances are that you can just shout to all your developers!

>
> I will be helping you this time. As you know, my business depends on
> PostgreSQL which has better UNICODE support in 7.2.
> Our problem comes from PostgreSQL code style, between
> 'interpreted' and
> 'compiled' type.

Yes, you've used these words previously, but I think I misunderstood your
meaning. I guess you are referring to the difference I mentioned above
between an SQL function and a PL/pgSQL function?

/D

Re: Table modification

From
Jean-Michel POURE
Date:
>OK, following some tests I find that PostgreSQL doesn't validate a PL/pgSQL
>function when it builds it as I thought. I tend to use plain SQL functions
>which are validated so the situation you describe above cannot occur. That
>is why I figure that the best validation is to actually build each object.

I did this test: create functionA based on functionB, drop functionB,
create functionB.
functionA is not broken. Things are much easier than I thought !!!!!
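
For the record, the test was roughly this (assuming plpgsql is installed;
names made up):

CREATE FUNCTION functionB() RETURNS int4 AS 'SELECT 1' LANGUAGE 'sql';
CREATE FUNCTION functionA() RETURNS int4 AS '
BEGIN
    RETURN functionB();
END;' LANGUAGE 'plpgsql';

DROP FUNCTION functionB();
CREATE FUNCTION functionB() RETURNS int4 AS 'SELECT 2' LANGUAGE 'sql';

SELECT functionA();  -- still works: PL/pgSQL resolves functionB by name at call time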

No need to use development tables for functions !!!!!
OK, I admit there are serious **holes** in my point of view.

Later, /JMP

Re: Table modification

From
Dave Page
Date:

> -----Original Message-----
> From: Jean-Michel POURE [mailto:jm.poure@freesurf.fr]
> Sent: 02 October 2001 21:30
> To: pgadmin-hackers@postgresql.org
> Subject: Re: [pgadmin-hackers] Table modification
>
>
>
> >OK, following some tests I find that PostgreSQL doesn't validate a
> >PL/pgSQL function when it builds it as I thought. I tend to
> use plain
> >SQL functions which are validated so the situation you
> describe above
> >cannot occur. That is why I figure that the best validation is to
> >actually build each object.
>
> I did this test: create functionA based on functionB, drop functionB,
> create functionB.
> functionA is not broken. Things are much easier than I
> thought !!!!!
>
> No need to use development tables for functions !!!!!
> OK, I admit there are serious **holes** in my point of view.

Ahh, so we're both a bit wrong! I thought it was just me...

Right, my suggestion is this:

Let me finish Revision Control, then take a look at what it does. It *will*
be a useful system to some people, however, it might not be what you
require. It might be easier then to figure out how to add what you require.

I have now added (in CVS) a Commit option to allow committing of changes
made outside pgAdmin, and a History viewer. This History Viewer will include
the Rollback option, but that's not there just yet. Currently these options
are on the right click menu.

I have tomorrow off work and the house to myself so I should be able to
finish the rcs then.

Good night,

Dave.

Re: Table modification

From
Jean-Michel POURE
Date:
>Good night,
Good night!

Re: Table modification

From
Jean-Michel POURE
Date:
Hi Dave,

I had a look at triggers and views where functions are referenced using OIDs.

Cheers/ JMP