Álvaro Herrera <alvherre@kurilemu.de> writes:
> I think it would be more helpful to have a test module that
> 1. installs an event trigger on ddl_command_end for CREATE, for
> each object being created
> 2. runs all the tests in parallel_schedule
> 3. does [... something ...] with the event trigger to generate the DDL
> using the new functions, and compares with the object created
> originally. (There's a lot of handwaving here. Maybe pg_dump both
> and compare?)
While I agree that automating this might be helpful, please please
please do not create yet another execution of the core regression
tests. There is far too much stuff in there that is not DDL and
would only waste cycles for this purpose.
I wonder if it'd be practical to extract just the DDL commands from
the core scripts, and then run just those through a process like
you suggest?
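A minimal sketch of that extraction, assuming statements in the core
scripts are semicolon-terminated (dollar-quoted function bodies and psql
backslash commands would need more care; the extract_ddl helper and the
demo file name here are hypothetical):

```shell
# Hypothetical sketch: keep only the DDL statements from a regression
# script, so just those would be replayed under the event trigger.
# Splits the file on semicolons, rejoins multi-line statements, and
# keeps statements starting with CREATE/ALTER/DROP.
extract_ddl() {
    awk 'BEGIN { RS = ";" }
         /^[[:space:]]*(CREATE|ALTER|DROP)/ {
             gsub(/\n/, " ")           # rejoin multi-line statements
             sub(/^[[:space:]]+/, "")  # trim leading whitespace
             print $0 ";"
         }' "$1"
}

# Demo against a tiny stand-in for a core regression script.
cat > /tmp/demo_regress.sql <<'EOF'
CREATE TABLE t (a int);
INSERT INTO t VALUES (1);
ALTER TABLE t
    ADD COLUMN b text;
SELECT * FROM t;
EOF
extract_ddl /tmp/demo_regress.sql
```

Whatever survives such a filter could then be replayed in a scratch
database with the event trigger installed, without paying for the
non-DDL bulk of the core tests.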
I agree that the "handwaving" part is trickier than it looks.
If memory serves, we've had bugs-of-omission where somebody
forgot to update pg_dump for some new feature, and it wasn't
obvious because comparing pg_dump output against pg_dump
output didn't show that the relevant object property wasn't
copied correctly. In this context, forgetting to update both
pg_dump and the DDL-dumping function would mask both omissions.
Maybe that's unlikely, but ...
> Another possibility is to use the pg_dump/t/002_pg_dump.pl database
> instead of the stock regression one, which is perhaps richer in object
> type diversity.
I think that test script also suffers from the out-of-sight,
out-of-mind problem. Not to mention that you need a lot of study
to figure out how to modify it at all. I certainly avoid doing so.
regards, tom lane