Re: [HACKERS] pgbench tap tests & minor fixes - Mailing list pgsql-hackers

From Fabien COELHO
Subject Re: [HACKERS] pgbench tap tests & minor fixes
Msg-id alpine.DEB.2.20.1704231957020.14381@lancre
In response to Re: [HACKERS] pgbench tap tests & minor fixes  (Nikolay Shaplov <dhyan@nataraj.su>)
Responses Re: [HACKERS] pgbench tap tests & minor fixes
Re: [HACKERS] pgbench tap tests & minor fixes
List pgsql-hackers
Hello Nikolay,

>> Hmmm. The pre-existing TAP test in pgbench is about concurrent commits; it
>> is not there to test pgbench itself. Pgbench makes it very easy to run some
>> programmable clients in parallel, which cannot be done simply otherwise. I
>> think that having it there would encourage such uses.

> Since none of us is Tom Lane, I think we have no moral right to encourage
> somebody doing somebody in postgres.

I do not get it.

> People do what they think better to do.

Probably.

>> [...] abstraction. For me, all pg standard executables could have their 
>> methods in PostgresNode.pm.

> you are speaking about
> local $ENV{PGPORT} = $self->port;
> ?

Yes. My point is that this is internal stuff of PostgresNode and that it
should stay inside that class. The point of the class is to organize
postgres instances for testing, including client-server connections. From
an object-oriented point of view, the way the server is identified could
vary, so the testing part should not need to know about it, unless these
connection variations are precisely what is being tested; hence it makes
sense to have it there.
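
Concretely, what I have in mind is that PostgresNode could provide a thin
wrapper, so that the test script never touches PGPORT itself. A rough,
untested sketch (the method name is only for illustration, not what the
patch currently does):

  # in PostgresNode.pm -- illustrative sketch only
  sub pgbench
  {
      my ($self, @args) = @_;
      # how the server is reached stays inside the class
      local $ENV{PGPORT} = $self->port;
      my ($stdout, $stderr);
      my $ok = IPC::Run::run [ 'pgbench', @args ],
          '>', \$stdout, '2>', \$stderr;
      return ($ok, $stdout, $stderr);
  }

The test script then just calls $node->pgbench(...), and if the way to
reach the server changes (port, socket directory, whatever), only the
class needs updating.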

> Why do you need it here after all? Lots of TAP tests for bin utilities run
> them using the command_like function from TestLib.pm and need no
> $ENV{PGPORT} setting.

The tests I've seen that do not need it do not connect to the server.
In order to connect to the server, tests go through variants from
PostgresNode which set the variables before calling the TestLib function.

> Is pgbench so special? If it is, maybe that is a good reason to fix the
> utilities in TestLib.pm so they can take the port from PostgresNode.

Nope. TestLib does not know about PostgresNode; the idea is rather that
PostgresNode knows about TestLib and wraps around it when needed.

> If in these tests we could use command_like instead of pgbench_likes, it
> would improve the maintainability of the code.

"command_like" variants do not look precisely at all results (status, 
stdout, stderr) and are limited to what they check (eg one regex at a 
time). Another thing I do not like is that they expect commands as a list 
of pieces, which is not very readable.

Now I can add a command_likes variant which allows checking status, stdout
and stderr, and add that to TestLib & PostgresNode.
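
Roughly what I have in mind, as an untested sketch next to the existing
functions in TestLib.pm (name and argument order open to discussion):

  sub command_likes
  {
      my ($cmd, $expected_status, $out_res, $err_res, $test_name) = @_;
      my ($stdout, $stderr);
      IPC::Run::run $cmd, '>', \$stdout, '2>', \$stderr;
      # check the exit status, then every regex against stdout and stderr
      is($? >> 8, $expected_status, "$test_name: status");
      like($stdout, $_, "$test_name: stdout") for @{$out_res};
      like($stderr, $_, "$test_name: stderr") for @{$err_res};
  }

Plus a small wrapper in PostgresNode, along the lines of the sketch above,
which sets the connection environment and passes the command through.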

> If we need closer integration with PostgresNode than command_like
> provides, then maybe we should improve command_like or add another function
> that provides what is needed.

Yep, this is possible.

> psql is a core utility for a postgres node. pgbench is optional.

AFAICS pgbench is a core utility as well. I do not know of a way to build
pg without pgbench. It was optional when it was in contrib; it is not anymore.

>>> I think all the data from these files should be somehow imported into
>>> 001_pgbench.pl and these files should be created only when the tests are
>>> running, and then removed when testing is done.
>>
>> Hmmm. I thought of that but was very unconvinced by the approach: pgbench
>> basically always requires a file, so what the stuff was doing was writing
>> the script into a file, checking for possible errors when doing so, then
>> unlinking the file and checking for errors again after the run.

> I think I was wrong about deleting the files after the tests are finished.
> Better to keep them.

Hmmm... Then what is the point of not having them as files to begin with?
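
Just so we are talking about the same thing, generating the scripts on the
fly means roughly this dance in the perl test (sketch only; file name and
path are illustrative, using the pgbench wrapper sketched earlier):

  # embedded script: backslash commands and SQL inside a perl string
  my $script = "\\set aid random(1, 100000)\nSELECT :aid;\n";
  append_to_file("$testdir/bench.sql", $script);   # write it out
  $node->pgbench('-n', '-t', '1', '-f', "$testdir/bench.sql");
  unlink "$testdir/bench.sql";                     # ... then clean up again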

>> Then you have to do some escaping in the pgbench scripts themselves,
>> and the perl script becomes pretty horrible and unreadable with plenty
>> of perl, SQL, backslash commands in strings...

> Perl provides a lot of tools for escaping.

Sure. It does not mean that Perl is the best place to store dozens of 
pgbench scripts.

> If one chooses the right one, there would be no need for additional
> backslashes.

This does not fix the issue of having a mixture of languages in the test
script, which I think is basically a bad idea for readability.

>> Finally, if the script is inside the perl script it makes it hard to 
>> run the test outside of it when a problem is found, so it is a pain.

> I take back my words about deleting. After a first run one will have these
> files "in the flesh", so they will be available for further experiments.

I'm lost. So where is the problem with having them as files in the first
place, if you want to keep them anyway?

> I would speak again about maintainability. Having 36 files, most of them
> under 100 bytes in size, is a very bad idea for maintainability.

I do not understand why. Running from files is a pgbench requirement. Each
test file is quite well defined, could be commented more to explain what it
is testing, and it is easy to see which pgbench run uses it.
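
For instance, keeping the same tiny script as above as a standalone file
(name, path and contents illustrative), say 001_bench_aid.sql:

  \set aid random(1, 100000)
  SELECT :aid;

the run in 001_pgbench.pl that uses it is a single, self-explanatory line:

  $node->pgbench('-n', '-t', '1', '-f', "$script_dir/001_bench_aid.sql");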

> If each commit of new functionality added 36 small files, we would drown
> in these files quite soon. These files should be generated on the fly. I
> am 100% sure of it.

Good for you:-)

I think that it would lead to a horrible test script. I can write horrible 
test scripts, but I wish to avoid it.

> PS. I've read through the perl code much more carefully. All other things
> look quite good to me.

Ok, something is positive:-)

To sum up:
 - I agree to add a generic command check to TestLib & a wrapper in
   PostgresNode, instead of having pgbench-specific things in the latter,
   and then call them from the pgbench test script.

 - I still think that moving the pgbench scripts inside the test script
   is a bad idea (tm).

-- 
Fabien.


