Thread: Confused by 'timing' results

Confused by 'timing' results

From
A J
Date:
time echo '\timing \\select * from  table1 where id = 123;' | psql

I am trying to time a simple select statement from different clients located in different places. The database is on the US east coast.
In the above command, 'timing' will time the database time, and the 'time' command at the very start will time the complete time for the query, including network time.

When I run the query on the database, I get:
Time: 2.357 ms
real    0m0.010s

From a separate client on the east coast:
Time: 29.555 ms
real    0m0.164s

From a separate client on the west coast:
Time: 82.236 ms
real    0m0.408s

From a separate cloud client in Asia:
Time: 262.715 ms
real    0m1.311s

While I did expect the 'real' time to be different and increase (from server to east coast to west coast to Asia), I did not expect the database time to increase appreciably. Can anyone explain why the database time for a simple select (reading from buffers) would increase so much (from 2.357 to 29.555 to 82.236 to 262.715 ms) because of the client location?

Because I ran the select several times before the above test, I am assuming all the selects just read from the shared buffers and did not hit the actual disks on the database server.


Thank you. - AJ

Re: Confused by 'timing' results

From
"Kevin Grittner"
Date:
A J <s5aly@yahoo.com> wrote:

> time echo '\timing \\select * from  table1 where id = 123;' | psql

> In the above query. the 'timing' will time the database time and
> the 'time' command at the very start will time the complete time
> for the query including network time.

No, the 'timing' will say how long it took to send the query from
psql to the server and get the complete response back from the
server.  The 'time' command will also include the time to start
psql, establish the connection to the database, read from the pipe,
and close the connection to the database.

-Kevin

Re: Confused by 'timing' results

From
A J
Date:
OK, thanks Kevin. So to measure just the time taken by the database server, I guess I need to set the log_min_duration_statement and log_statement parameters in postgresql.conf.
log_min_duration_statement output should stay constant for all the different clients across different geographic locations.
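A minimal sketch of the relevant postgresql.conf settings (8.4 parameter names; adding %r to log_line_prefix is my own suggestion, so each logged query can be tied back to the client's host:port and hence its location):

```
logging_collector = on
log_destination = 'stderr'
log_min_duration_statement = 0    # 0 = log every statement with its duration
log_line_prefix = '%t %r '        # %t = timestamp, %r = remote host and port
```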



--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin

Re: Confused by 'timing' results

From
"Kevin Grittner"
Date:
A J <s5aly@yahoo.com> wrote:

> log_min_duration_statement output should stay constant for all the
> different clients across different geographic locations.

I'm not sure timings there will be totally immune to network speed.
The whole execution engine is designed around the top level pulling
rows from other levels.  If it is sending those rows over the wire
as it pulls them, it could block at times, causing a delay before
completion of the query.

I haven't looked at the PostgreSQL wire protocol in detail, but
other database products where I became familiar with the wire
protocol had many round trips per query run -- making run time
sensitive to the network latency.  I've generally gone with a thin
tier on the database server to field requests and provide responses
using stream-oriented protocols with moving windows so I can keep
the database protocol to the fastest, lowest latency connection I
can arrange.  I like to use queues in that middle tier for incoming
requests and outgoing responses, to minimize the impact of network
throughput and latency.

-Kevin

Changing locale to 1250

From
Łukasz Brodziak
Date:
Hello,

The question may be stupid, but it is troubling me. I would like to perform some tasks on my Linux (Ubuntu) machine using a DB from work. Unfortunately the DB has WIN1250 encoding and PostgreSQL on Linux has UTF-8. Is it possible to somehow change the server's locale, as I may not change the encoding of the DB? Please help.

Re: Changing locale to 1250

From
"Kevin Grittner"
Date:
Łukasz Brodziak <lukasz.brodziak@hotmail.com> wrote:

> I would like to perform some tasks on my linux (ubuntu) using db
> from work unfortunatly the DB has WIN1250 encoding and the
> PostgreSQL in linux has UTF-8 is it possible to somehow change the
> server's locale as I may not change the encoding of the DB.

Are you trying to connect to a PostgreSQL server which runs on
Windows using a client on Linux, or are you trying to copy the
database and run your own server on Linux?  What database
version(s), exactly?

-Kevin

Re: Changing locale to 1250

From
Łukasz Brodziak
Date:
The version is 8.4. I'm trying to copy the database from Windows to Linux. If it is of any importance, the version of "Windows Postgres" is 8.2.


Re: Confused by 'timing' results

From
Scott Marlowe
Date:
On Tue, Aug 31, 2010 at 1:01 PM, A J <s5aly@yahoo.com> wrote:
> OK, thanks Kevin. So to measure just the time take by database server, I
> guess I need to set the log_min_duration_statement and log_statement
> parameters in postgresql.conf
> log_min_duration_statement output should stay constant for all the different
> clients across different geographic locations.

Also, if you want to test turn-around time without the disk drives
being an issue, you can do "select 1" instead of "select * from table
(yada)".
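Putting that suggestion into the same form as the original command might look like this (the hostname is a placeholder):

```shell
# Trivial query: no table or buffer access, so what remains is
# essentially parse/plan overhead plus the network round trip.
time echo '\timing \\select 1;' | psql -h dbhost
```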

Re: Changing locale to 1250

From
"Kevin Grittner"
Date:
Łukasz Brodziak <lukasz.brodziak@hotmail.com> wrote:

> The version in 8.4. I'm trying to copy the database from Windows
> to Linux.  If it is of any importance the version of "Windows
> Postgres" is 8.2

Use pg_dump from your 8.4 machine.  I would use the --encoding
switch when I ran pg_dump to get UTF-8 encoding in the dump file.
Make sure you initialize your new database with a character set
which supports all the characters from the old database.
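Roughly, those steps might look like this (hostnames and the database name are placeholders; `-E` is pg_dump's encoding switch):

```shell
# Dump from the 8.2 Windows server using the newer 8.4 pg_dump,
# forcing UTF-8 encoding in the dump file.
pg_dump -h windows-host -E UTF8 mydb > mydb_utf8.sql

# Create the target database with an encoding that covers all the
# characters from the old database, then restore.
createdb -E UTF8 mydb
psql -d mydb -f mydb_utf8.sql
```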

-Kevin

Re: Confused by 'timing' results

From
Willy-Bas Loos
Date:
>> time echo '\timing \\select * from  table1 where id = 123;' | psql
>> In the above query. the 'timing' will time the database time and
>> the 'time' command at the very start will time the complete time
>> for the query including network time.
>
> No, the 'timing' will say how long it took to send the query from
> psql to the server and get the complete response back from the
> server.  The 'time' command will also include the time to start
> psql, establish the connection to the database, read from the pipe,
> and close the connection to the database.

If you want to get the real execution time for the query from a
faraway client (without network latency), you could open an ssh
session and start psql on the database server.


--
"Patriotism is the conviction that your country is superior to all
others because you were born in it." -- George Bernard Shaw

Re: Confused by 'timing' results

From
A J
Date:
I am conducting the test with several concurrent clients.
The problem I am now facing in using log_min_duration_statement is that all the clients have to write to a single log file in the pg_log directory, so they have to wait for the other writes to happen before completing their own. This seems to be the reason why the measured duration in the log file (for several concurrent clients) is way more, in fact much more, than what was measured by psql timing from the client side.

(The problem with an ssh tunnel and then psql on the database server is that the database will think the connections are local. I want to mimic real life, where the TCP connections are opened directly by clients from different IPs.)

related settings:
log_destination = 'stderr'              # Valid values are combinations of
#log_directory = 'pg_log'               # directory where log files are written,
#log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'        # log file name pattern,




Re: Confused by 'timing' results

From
"Kevin Grittner"
Date:
A J <s5aly@yahoo.com> wrote:
> I am conducting the test with several concurrent clients.

I didn't see a question in your latest email.  Do you now understand
why the network affects timings?  Do you have any other questions?
Is there some particular problem you're trying to solve, for which
you don't yet have a solution?  (If so, please describe the problem
you're trying to solve; someone might be able to suggest a solution
you won't get to by asking narrower questions.)

-Kevin

Re: Confused by 'timing' results

From
Tom Lane
Date:
A J <s5aly@yahoo.com> writes:
> The problem I am now facing in using log_min_duration_statement is that all the
> clients have to write to a single log file in the pg_log directory. So they have
> to wait for the other writes to happen before completing their write. This seems
> to be reason why the measured duration in the log file (for several concurrent
> clients) is way more, infact much more than what was measured by psql timing
> from the client side.

Do you have any actual evidence for that being the problem, as opposed
to some other effect?  Because I don't recall anybody else complaining
about this.

Personally I'd wonder for example about whether the server has a really
slow gettimeofday()...

            regards, tom lane

Re: Confused by 'timing' results

From
A J
Date:
Kevin
The problem I am trying to solve is:
measure accurately both the database server time + network time when several clients connect to the database from different geographic locations.
All the clients hit the database simultaneously with a long script each of insert/update/select queries.

I don't need aggregate numbers but the two components of the time taken for each and every query, to drive the analysis. So I need, in some fashion:
Query1, DB Time, Total Time (or Network Time)
....
Query<n>, DB Time, Total Time (or Network Time)

From the query, I can tell which client in which geographic location fired it, and so have the full picture.

I initially thought the combination of time + timing would give me this. Now I know 'timing' includes part of the network time, so it is not just the database server time.
On the second try, logging to log_directory/log_filename by setting log_min_duration_statement=0 seems to be doing something weird. The durations in the file are very high and cannot be true. My theory is that a single file is being written by several concurrent queries, so they might all be queuing up, causing the times registered in this log file to be way too high. (I might be wrong.)
I could not believe the times being registered in the log file (several hundred ms as opposed to the expected few tens of ms) and ran the test several times over a couple of days, still getting the same high numbers.
On setting log_min_duration_statement=-1, performance comes back to normal acceptable levels (but then I cannot measure the DB time).

So really what I want to measure is the database time for several queries by several concurrent users. Because each query takes only a few ms, any measurement overhead has to be introduced carefully so as not to skew the result.

Looking for suggestions to solve this.

Thank you, AJ





Re: Confused by 'timing' results

From
Scott Marlowe
Date:
On Thu, Sep 2, 2010 at 11:34 AM, A J <s5aly@yahoo.com> wrote:
> Kevin
> The problem I am trying to solve is:
> measure accurately both the database server time + network time when several
> clients connect to the database from different geographic location.
> All the clients hit the database simultaneously with a long script each of
> insert/update/select queries.

Then that's what you should test: create long scripts, run them from
different locations, and measure the overall time differences, if any,
of the same file from different locations.

Re: Confused by 'timing' results

From
Tom Lane
Date:
A J <s5aly@yahoo.com> writes:
> On second try, by trying to log to log_directory/log_filename by
> setting log_min_duration_statement=0, seems to be doing something weird. The
> durations are very very high in the file and cannot be true.

You're not being very clear here.  Did the logged durations not
correspond to reality?  Or did the performance as seen from the clients
drop substantially when you turned on extra logging?  Also, exactly
how are you doing logging (ie, what are your settings for
log_destination and related parameters)?

            regards, tom lane

Re: Confused by 'timing' results

From
A J
Date:
The performance as seen from the clients dropped substantially after turning on the extra logging. The logged durations were real, but overall performance dropped significantly.

All the log related settings in postgresql.conf are below:
log_destination = 'stderr'              # Valid values are combinations of
#log_directory = 'pg_log'               # directory where log files are written,
#log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'        # log file name pattern,
#log_truncate_on_rotation = off         # If on, an existing log file of the
#log_rotation_age = 1d                  # Automatic rotation of logfiles will
#log_rotation_size = 10MB               # Automatic rotation of logfiles will
#syslog_facility = 'LOCAL0'
#syslog_ident = 'postgres'
#log_min_messages = warning             # values in order of decreasing detail:
#log_error_verbosity = default          # terse, default, or verbose messages
#log_min_error_statement = error        # values in order of decreasing detail:
log_min_duration_statement = 0 # -1 is disabled, 0 logs all statements
log_checkpoints = on
#log_connections = off
#log_disconnections = off
#log_duration = off
#log_hostname = off
log_line_prefix = '%t '                 # special values:
#log_lock_waits = off                   # log lock waits >= deadlock_timeout
log_statement = 'none'                  # none, ddl, mod, all
#log_temp_files = -1                    # log temporary files equal or larger
#log_timezone = unknown                 # actually, defaults to TZ environment
#log_parser_stats = off
#log_planner_stats = off
#log_executor_stats = off
#log_statement_stats = off
#log_autovacuum_min_duration = -1       # -1 disables, 0 logs all actions and




Re: Confused by 'timing' results

From
"Kevin Grittner"
Date:
Scott Marlowe <scott.marlowe@gmail.com> wrote:
> On Thu, Sep 2, 2010 at 11:34 AM, A J <s5aly@yahoo.com> wrote:

>> The problem I am trying to solve is:
>> measure accurately both the database server time + network time
>> when several clients connect to the database from different
>> geographic location.  All the clients hit the database
>> simultaneously with a long script each of insert/update/select
>> queries.
>
> Then that's what you should test.  create long scripts, run them
> from different locales, and measure the overall time differences,
> if any, of the same file from different locales.

I'm inclined to agree with Scott.  The effects of the network come
into play in several different ways, and I can't think of a better
way to isolate those effects from the query run time itself than to
run exactly the same queries on the server itself and from the
various remote locations.  Subtract the server-based time from each
location's time to find the impact of the network.  Doesn't that
address your problem fairly directly and accurately?

-Kevin

Re: Confused by 'timing' results

From
A J
Date:
With this approach, I will be assuming that the query time does not change due to client location, which, though reasonable, is still an assumption. If I could have tested without making this assumption (or any), it would have been better.
But it looks like there is no choice, as measuring query time for queries fired by remote clients is not possible.

I would still be firing concurrent clients across the different locations but measuring the 'psql timing' only for the queries fired on the database server. I will extrapolate the outlier percentage of the queries on the database server (say, queries that take more than 200 ms due to checkpoint flushing, etc.) to get to the total outliers.

This is good enough for the time being and I will try it. If you can think of alternatives where I don't have to assume/extrapolate, please let me know.

Do you think changing log_destination to syslog may make a difference? (Kevin mentioned even this timing is not totally immune from network effects, but if it can be measured it should be very close to the query time.)




Re: Confused by 'timing' results

From
Tom Lane
Date:
A J <s5aly@yahoo.com> writes:
> The performance as seen from the clients dropped substantially after turning on
> the extra logging. The numbers were real but the performance dropped
> significantly.

> All the log related settings in postgresql.conf are below:

Hmm, what about logging_collector?  (Or it might be called
redirect_stderr, depending on which PG version this is.)
If it's currently off, see whether turning it on improves matters.

            regards, tom lane

Re: Confused by 'timing' results

From
A J
Date:
Sorry, forgot to mention that:
logging_collector = on          # Enable capturing of stderr and csvlog

In fact I was thinking the other way: to switch it off and somehow display the stderr (or syslog) output directly on the console (rather than writing to a file) to see if that helps.




Re: Confused by 'timing' results

From
"Kevin Grittner"
Date:
A J <s5aly@yahoo.com> wrote:

> With this approach, I will be assuming that the query time does
> not change due to client location, which though reasonable, is
> still an assumption.

As I explained in an earlier post, the query can block on the server
due to network bandwidth or latency.  So the "wall time" for query
execution can indeed be different based on location, especially if
you are returning a large result set.  But why would you want to
separate out this type of network delay from the others?

If you want to eliminate it as a factor, you need some tier to
receive requests and queue them, pull them from the queue and run
them with another queue for results, and send the results from the
queue back to the requester.  This is what we do, BTW, and it does
give us the ability to totally isolate run time from network
influences.

-Kevin

Re: Confused by 'timing' results

From
Scott Marlowe
Date:
On Thu, Sep 2, 2010 at 1:02 PM, A J <s5aly@yahoo.com> wrote:
> Do you think changing log_destination to syslog may make a difference (Kevin
> mentioned even this timing is not totally immune from network effects but if
> possible to measure should be very close to the query time) ?

At least try putting it on a different drive if you can, or remote
logging if you can't plug another drive into the machine to put the
logs on.
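One way to do that, sketched with illustrative paths (in a real setup, stop the server before moving anything):

```shell
# Move the log directory onto a separate drive and leave a symlink
# behind, so the postmaster keeps writing to the same configured path.
mv /var/lib/pgsql/data/pg_log /mnt/logdisk/pg_log
ln -s /mnt/logdisk/pg_log /var/lib/pgsql/data/pg_log
```

Alternatively, log_directory in postgresql.conf also accepts an absolute path, which avoids the symlink.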


--
To understand recursion, one must first understand recursion.

Re: Confused by 'timing' results

From
Tom Lane
Date:
A J <s5aly@yahoo.com> writes:
> Do you think changing log_destination to syslog may make a difference

It's worth trying alternatives anyway.  It is odd that you are seeing
such a slowdown when using the collector --- many people push very high
log volumes through the collector without problems.  What PG version is
this exactly, and on what platform?

            regards, tom lane

Re: Confused by 'timing' results

From
A J
Date:
PostgreSQL 8.4
CentOS 5.5
I have WCE=0 (write cache disabled) on the drive that holds the data directory with all its subdirectories (including pg_log).

Maybe I should try mounting pg_log on a different drive and enabling the write cache on that one.




Re: Confused by 'timing' results

From
Scott Marlowe
Date:
On Thu, Sep 2, 2010 at 1:17 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> A J <s5aly@yahoo.com> writes:
>> Do you think changing log_destination to syslog may make a difference
>
> It's worth trying alternatives anyway.  It is odd that you are seeing
> such a slowdown when using the collector --- many people push very high
> log volumes through the collector without problems.  What PG version is
> this exactly, and on what platform?

We push a ton through the collector (up to 10 megs a second at peak)
with no problems, but it doesn't share a drive with the main store or
pg_xlog.

Re: Confused by 'timing' results

From
A J
Date:
Just to give an update: moving pg_log to a different drive that is write-cache enabled (and, to make it even faster, mounted with data=writeback) helped quite a bit.

The average time for several clients hitting concurrently was 15 ms each for east-coast as well as west-coast clients. Some network impact is still at play, as the time taken directly on the database server was much less, at 10 ms.

Not logging at all is still better (the time on the database with log_min_duration_statement = -1 is 5.4 ms), but putting the log on a different drive adds only minimal overhead and gives the ability to measure query time reasonably by discarding most of the network variance.



