Thread: Implementing Frontend/Backend Protocol TCP/IP

Implementing Frontend/Backend Protocol TCP/IP

From
Raimon Fernandez
Date:

Hello,


I'm trying to implement the front-end protocol with TCP from REALbasic
to PostgreSQL.

I understand most of the PostgreSQL docs, but there are some points
that could use more clarification.

Does anyone have experience with this, so I can start asking questions?

:-)


The front-end tool is REALbasic, but it could be any tool that has
TCP/IP communication, so that part is irrelevant here.

Right now I can connect to the PostgreSQL server, get and parse some
parameters, and send some SELECTs, but I don't like how I'm doing it,
so any guidance, wiki, blog, or how-to where I can get more
information would be perfect...

thanks for your time,

regards,

r.







Re: Implementing Frontend/Backend Protocol TCP/IP

From
Alvaro Herrera
Date:
Raimon Fernandez wrote:
>
>
> Hello,
>
>
> I'm trying to implement the front-end protocol with TCP from
> REALbasic to PostgreSQL.

That sounds like the most difficult way to do it.  Can't you just embed
libpq?

--
Alvaro Herrera                                http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

Re: Implementing Frontend/Backend Protocol TCP/IP

From
John R Pierce
Date:
Alvaro Herrera wrote:
>> I'm trying to implement the front-end protocol with TCP from
>> REALbasic to PostgreSQL.
>>
>
> That sounds like the most difficult way to do it.  Can't you just embed
> libpq?
>

yah, seriously.   the binary protocol is not considered stable, it can
change in subtle ways in each version.  libpq handles the current
version and all previous versions, and exposes all methods.



Re: Implementing Frontend/Backend Protocol TCP/IP

From
Craig Ringer
Date:
On Mon, 2009-10-26 at 20:15 -0300, Alvaro Herrera wrote:
> Raimon Fernandez wrote:
> >
> > Hello,
> >
> > I'm trying to implement the front-end protocol with TCP from
> > REALbasic to PostgreSQL.
>
> That sounds like the most difficult way to do it.  Can't you just embed
> libpq?

+1

Almost all languages support some kind of C bindings or provide a dlopen-like mechanism to dynamically call C functions from shared libraries.

RealBasic appears to have fairly dynamic, dlopen-style bindings. I'm sure you can find more information in the manual, but here's an example of some syntax:

http://forums.realsoftware.com/viewtopic.php?t=5050

You'll have to do a bit more work to produce bindings for libpq, though, especially if you have to produce bindings for any data types (C structures). If all you have to bind is function calls, and you can handle any libpq-specific structures as opaque void pointers then it shouldn't be too hard to just bind the function calls you need.
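The "opaque pointer" approach Craig describes can be sketched with Python's ctypes, a dlopen-style FFI much like RealBasic's Declare. This is only an analogy, not libpq itself: libc's FILE* stands in for PGconn* so the sketch runs anywhere.

```python
import ctypes
import ctypes.util

# Load a shared library by name, just as you would for libpq
# (ctypes.util.find_library("pq")); libc stands in here.
libc = ctypes.CDLL(ctypes.util.find_library("c") or None)

# Declare only the functions you need, treating any library-private
# structure as an opaque void pointer: FILE* plays the role of PGconn*.
libc.fopen.restype = ctypes.c_void_p
libc.fopen.argtypes = [ctypes.c_char_p, ctypes.c_char_p]
libc.fclose.argtypes = [ctypes.c_void_p]
libc.fclose.restype = ctypes.c_int

handle = libc.fopen(b"/dev/null", b"rb")  # stands in for "connect"
assert handle is not None                 # opaque handle, never dereferenced
libc.fclose(handle)                       # stands in for "disconnect"
```

The same pattern would cover PQconnectdb/PQexec/PQfinish: declare argument and return types, pass the opaque handle around, and never look inside it.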

--
Craig Ringer

Re: Implementing Frontend/Backend Protocol TCP/IP

From
Raimon Fernandez
Date:
On 27/10/2009, at 0:17, John R Pierce wrote:

> Alvaro Herrera wrote:
>>> I'm trying to implement the front-end protocol with TCP from
>>> REALbasic to PostgreSQL.
>>>
>>
>> That sounds like the most difficult way to do it.  Can't you just embed
>> libpq?
>>
>
> yah, seriously.   the binary protocol is not considered stable, it
> can change in subtle ways in each version.  libpq handles the
> current version and all previous versions, and exposes all methods.

Well, if I specify that I'm using protocol version 3.0, it should work
and be stable, no?

REALbasic has a plugin for PostgreSQL, but it is synchronous and
freezes the GUI when interacting with PG. This is not a problem
normally, as the SELECTs/UPDATEs/... are fast enough, but sometimes
we need to fetch 1000, 5000 or more rows and the application stops
responding. I can't show a progress bar because everything is frozen
until all the data has come from PG, so we need a better way.

I found someone who created what I'm trying to do, with the same
language, the same protocol, and the same version, but it's a
commercial app, and we need the source code. Its communication is done
over TCP/IP, really fast, and always asynchronously, so the
application is always responsive.

I know almost nothing about C, and implementing it would be too much
work; maybe we would have the same problem anyway.

Anyway, I'll try to go further with the binary implementation, at
least as a learning exercise ...

:-)

thanks,


regards,


r.


Re: Implementing Frontend/Backend Protocol TCP/IP

From
John R Pierce
Date:
Raimon Fernandez wrote:
> REALbasic has a plugin for PostgreSQL, but it is synchronous and
> freezes the GUI when interacting with PG. This is not a problem
> normally, as the SELECTs/UPDATEs/... are fast enough, but sometimes
> we need to fetch 1000, 5000 or more rows and the application stops
> responding. I can't show a progress bar because everything is frozen
> until all the data has come from PG, so we need a better way.

I would think the better solution would be to get the vendor to fix its
native plugin to support an asynchronous mode.  Or, does this RealBasic
support any sort of client-server or multi-tasking programming?  If so,
have a separate task or thread do the SQL operations that your
interactive program interfaces with...






Re: Implementing Frontend/Backend Protocol TCP/IP

From
Raimon Fernandez
Date:
On 27/10/2009, at 8:29, John R Pierce wrote:

> Raimon Fernandez wrote:
>> REALbasic has a plugin for PostgreSQL, but it is synchronous and
>> freezes the GUI when interacting with PG. This is not a problem
>> normally, as the SELECTs/UPDATEs/... are fast enough, but sometimes
>> we need to fetch 1000, 5000 or more rows and the application stops
>> responding. I can't show a progress bar because everything is frozen
>> until all the data has come from PG, so we need a better way.
>
> I would think the better solution would be to get the vendor to fix
> its native plugin to support an asynchronous mode.   Or, does this
> RealBasic support any sort of client-server or multi-task type
> programming?  if so, have a separate task or thread that does the
> SQL operations which your interactive program interfaces with...

The plugin is from the same company as REALbasic, and it's free.

They don't have any plans to support an asynchronous mode, except
maybe in the plugin for their own database, REALserver, which serves a
SQLite database.

REALbasic supports threads (multitasking), but they also freeze when
using the plugin, waiting for a complete answer from the plugin
call ...

Thanks,

regards,


raimon


Re: Implementing Frontend/Backend Protocol TCP/IP

From
"Albe Laurenz"
Date:
John R Pierce wrote:
> yah, seriously.   the binary protocol is not considered stable, it can
> change in subtle ways in each version.  libpq handles the current
> version and all previous versions, and exposes all methods.

That's probably not the problem in the original message, but there
are things you can do with the frontend/backend protocol that libpq
does not expose: for example, with the extended query protocol you can
send a "Bind" call that requests that some of the results should be
returned in text format, others in binary.

Yours,
Laurenz Albe

Re: Implementing Frontend/Backend Protocol TCP/IP

From
Craig Ringer
Date:
On 27/10/2009 3:20 PM, Raimon Fernandez wrote:

> REALbasic has a plugin for PostgreSQL, but it is synchronous and
> freezes the GUI when interacting with PG. This is not a problem
> normally, as the SELECTs/UPDATEs/... are fast enough, but sometimes
> we need to fetch 1000, 5000 or more rows and the application stops
> responding. I can't show a progress bar because everything is frozen
> until all the data has come from PG, so we need a better way.

You're tackling a pretty big project given the problem you're trying to
solve. The ongoing maintenance burden is likely to be significant. I'd
be really, REALLY surprised if it was worth it in the long run.



Can you not do the Pg operations in another thread? libpq is safe to use
in a multi-threaded program so long as you never try to share a
connection, result set, etc between threads. In most cases, you never
want to use any of libpq outside one "database worker" thread, in which
case it's dead safe. You can have your worker thread raise flags / post
events / whatever to notify the main thread when it's done some work.



If that approach isn't palatable to you or isn't suitable in your
environment, another option is to just use a cursor. If you have a big
fetch to do, instead of:

  SELECT * FROM customer;

issue:

  BEGIN;
  DECLARE customer_curs CURSOR FOR SELECT * FROM customer;

... then progressively FETCH blocks of results from the cursor:

  FETCH 100 FROM customer_curs;

... until there's nothing left and you can close the transaction or, if
you want to keep using the transaction, just close the cursor.

See:

  http://www.postgresql.org/docs/8.4/static/sql-declare.html
  http://www.postgresql.org/docs/8.4/static/sql-fetch.html
  http://www.postgresql.org/docs/8.4/static/sql-close.html
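The fetch loop above has a simple shape. Here is a sketch in Python, where execute() is only a stand-in for however your client sends SQL and collects the resulting rows (no real database is involved; the 250-row "table" is made up):

```python
# execute() simulates a 250-row table served in FETCH-sized batches.
table = [("customer", i) for i in range(250)]

def execute(sql):
    if sql.startswith("FETCH"):
        n = int(sql.split()[1])
        batch, table[:] = table[:n], table[n:]
        return batch
    return []  # BEGIN / DECLARE / CLOSE / COMMIT return no rows

execute("BEGIN")
execute("DECLARE customer_curs CURSOR FOR SELECT * FROM customer")

rows, batch_size = [], 100
while True:
    batch = execute(f"FETCH {batch_size} FROM customer_curs")
    rows.extend(batch)
    # Between batches the GUI gets control back: repaint, advance the
    # progress bar, check for a user cancel, and so on.
    if len(batch) < batch_size:
        break  # fewer rows than asked for: the cursor is exhausted

execute("CLOSE customer_curs")
execute("COMMIT")
assert len(rows) == 250
```

The point is that the client regains control after every FETCH, which is exactly what the synchronous whole-result-set plugin call denies you.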


--
Craig Ringer

Re: Implementing Frontend/Backend Protocol TCP/IP

From
John DeSoi
Date:
On Oct 27, 2009, at 4:55 AM, Albe Laurenz wrote:

> That's probably not the problem in the original message, but there
> are things you can do with the frontend/backend protocol that libpq
> does not expose: for example, with the extended query protocol you can
> send a "Bind" call that requests that some of the results should be
> returned in text format, others in binary.


Another protocol feature that I don't think is exposed in libpq is the
ability to limit the maximum number of rows returned by a query. So if
you are executing end user queries, you don't have to worry about
processing a massive result set or somehow parsing the query to add a
limit clause.




John DeSoi, Ph.D.





Re: Implementing Frontend/Backend Protocol TCP/IP

From
Alvaro Herrera
Date:
Raimon Fernandez wrote:

> REALbasic has a plugin for PostgreSQL, but it is synchronous and
> freezes the GUI when interacting with PG. This is not a problem
> normally, as the SELECTs/UPDATEs/... are fast enough, but sometimes
> we need to fetch 1000, 5000 or more rows and the application stops
> responding. I can't show a progress bar because everything is frozen
> until all the data has come from PG, so we need a better way.

If you need to fetch large numbers of rows, perhaps it would be better
to use a cursor and fetch a few at a time, moving the progress bar in
the pauses.  So instead of

SELECT * FROM sometab;

you would do
DECLARE foo CURSOR FOR SELECT * FROM sometab;

and then, repeatedly,
FETCH 50 FROM foo

Until there are no more rows.

This can still freeze your app in certain cases, but it will probably
be a lot better than what you currently have.  And it will be MUCH
easier/cheaper to do than working with the FE/BE protocol yourself.

--
Alvaro Herrera                                http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

Re: Implementing Frontend/Backend Protocol TCP/IP

From
John DeSoi
Date:
On Oct 26, 2009, at 7:17 PM, John R Pierce wrote:

> yah, seriously.   the binary protocol is not considered stable, it
> can change in subtle ways in each version.  libpq handles the
> current version and all previous versions, and exposes all methods.

I don't think the frontend/backend protocol has changed since version
7.4. All data can be in text format; you don't have to deal with binary.

I have implemented an interface in Lisp. I found it much easier and
more flexible than the foreign function interface with C and libpq.




John DeSoi, Ph.D.





Re: Implementing Frontend/Backend Protocol TCP/IP

From
Raimon Fernandez
Date:
On 27/10/2009, at 14:00, Alvaro Herrera wrote:

> Raimon Fernandez wrote:
>
>> REALbasic has a plugin for PostgreSQL, but it is synchronous and
>> freezes the GUI when interacting with PG. This is not a problem
>> normally, as the SELECTs/UPDATEs/... are fast enough, but sometimes
>> we need to fetch 1000, 5000 or more rows and the application stops
>> responding. I can't show a progress bar because everything is frozen
>> until all the data has come from PG, so we need a better way.
>
> If you need to fetch large numbers of rows, perhaps it would be better
> to use a cursor and fetch a few at a time, moving the progress bar in
> the pauses.  So instead of
>
> SELECT * FROM sometab;
>
> you would do
> DECLARE foo CURSOR FOR SELECT * FROM sometab;
>
> and then, repeatedly,
> FETCH 50 FROM foo
>
> Until there are no more rows.
>
> This can still freeze your app in certain cases, but it will probably
> be a lot better than what you currently have.  And it will be MUCH
> easier/cheaper to do than working with the FE/BE protocol yourself.

Yes, I'm aware of this possibility, but it's also a lot of extra work.

The initial idea of TCP/IP still remains in my brain ...

:-)

thanks,


raimon


Re: Implementing Frontend/Backend Protocol TCP/IP

From
Raimon Fernandez
Date:
Hello,


As this thread is alive, I'm going to ask more specific questions:


After sending the startup sequence, I receive the parameter list. I
don't need to send Authentication as I'm using a Trust user, to make
things easier.

I receive string data; I suppose it's text data.

I can parse the data received, searching for a B.

I don't know if it's better to transform the data into hex.

After the S I find three char(0) bytes and then the size of the packet,
then the parameter name + char(0) (the separator between name and
value), the value, and so on.

Why do I find those three char(0) bytes after the S and before the
packet length?

Or is it because the Int32 has 4 bytes?

thanks,

regards,

raimon



Documentation:
-----------------------------------------------------------------------------
ParameterStatus (B)
Byte1('S')
Identifies the message as a run-time parameter status report.
Int32
Length of message contents in bytes, including self.
String
The name of the run-time parameter being reported.
String
The current value of the parameter.
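Putting that layout together, a ParameterStatus message parses like this. A sketch in Python with a made-up client_encoding report; note how the "three char(0)" fall out of the big-endian Int32 length:

```python
import struct

# 'S', then a big-endian Int32 length that counts itself but not the
# type byte, then two NUL-terminated strings: name and value.
payload = b"client_encoding\x00UTF8\x00"
raw = b"S" + struct.pack("!i", 4 + len(payload)) + payload

msg_type = raw[0:1]
(length,) = struct.unpack("!i", raw[1:5])  # here 25: bytes 00 00 00 19
body = raw[5:1 + length]                   # length - 4 bytes of content
name, value, _ = body.split(b"\x00")

assert msg_type == b"S"
assert raw[1:4] == b"\x00\x00\x00"         # the three char(0) in question
assert (name, value) == (b"client_encoding", b"UTF8")
```

The leading zeros are simply the high bytes of a length smaller than 256, not separators.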





Re: Implementing Frontend/Backend Protocol TCP/IP

From
Alvaro Herrera
Date:
Raimon Fernandez wrote:

> After the S I find three char(0) bytes and then the size of the
> packet, then the parameter name + char(0) (the separator between
> name and value), the value, and so on.
>
> Why do I find those three char(0) bytes after the S and before the
> packet length?

Because the length is an int32.  There are 3 zeros because the packet
length is less than 256.

--
Alvaro Herrera                                http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

Re: Implementing Frontend/Backend Protocol TCP/IP

From
Alvaro Herrera
Date:
Raimon Fernandez wrote:

> I receive string data, I suppose it's text data.
>
> I can parse the data received, search for a B.

You don't search for a B.  You search for an S.  The B in the
documentation you quote indicates that this message can be sent by the
backend only.  You'll notice others have an F (sent by frontend only)
and some have F & B.

> Documentation:
> -----------------------------------------------------------------------------
> ParameterStatus (B)
> Byte1('S')
> Identifies the message as a run-time parameter status report.
> Int32
> Length of message contents in bytes, including self.
> String
> The name of the run-time parameter being reported.
> String
> The current value of the parameter.


--
Alvaro Herrera                                http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

Re: Implementing Frontend/Backend Protocol TCP/IP

From
Raimon Fernandez
Date:
On 27/10/2009, at 14:41, Alvaro Herrera wrote:

> Raimon Fernandez wrote:
>
>> After the S I find three char(0) bytes and then the size of the
>> packet, then the parameter name + char(0) (the separator between
>> name and value), the value, and so on.
>>
>> Why do I find those three char(0) bytes after the S and before the
>> packet length?
>
> Because the length is an int32.  There are 3 zeros because the packet
> length is less than 256.

Here is where my parsing goes wrong.

How do I know where the length ends?

I know it starts after the S, and with the parsing I have, the length
is always 4 chars.

Do I have to evaluate them one by one?

thanks,

r.


> --
> Alvaro Herrera                                http://www.CommandPrompt.com/
> PostgreSQL Replication, Consulting, Custom Development, 24x7 support
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



Re: Implementing Frontend/Backend Protocol TCP/IP

From
Alvaro Herrera
Date:
Raimon Fernandez wrote:

> How do I know where the length ends?

You count 4 bytes.


--
Alvaro Herrera                                http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

Re: Implementing Frontend/Backend Protocol TCP/IP

From
Merlin Moncure
Date:
On Mon, Oct 26, 2009 at 7:17 PM, John R Pierce <pierce@hogranch.com> wrote:
> Alvaro Herrera wrote:
>>>
>>> I'm trying to implement the front-end protocol with TCP from
>>> REALbasic to PostgreSQL.
>>>
>>
>> That sounds the most difficult way to do it.  Can't you just embed
>> libpq?
>>
>
> yah, seriously.   the binary protocol is not considered stable, it can
> change in subtle ways in each version.  libpq handles the current version
> and all previous versions, and exposes all methods.

small clarification:

There is only one protocol and it is binary.  For example the length
of datums is never sent as a string.  The protocol is quite
stable...it hasn't changed since 7.4 and there hasn't really been a
big call (some of the quite interesting comments in this thread aside)
for it to change IMO.

The protocol has a binary or text mode, so that user data can be
sent/received in text or binary. Using the binary mode is not stable,
which is what I think you were basically saying.

Now, (self serving pitch here) if you use libpqtypes, you get all the
benefits of binary protocol mode (performance, easy data marshaling)
without having to worry about data format changes between versions
:-D.

merlin

Re: Implementing Frontend/Backend Protocol TCP/IP

From
Raimon Fernandez
Date:
On 27/10/2009, at 15:06, Alvaro Herrera wrote:

> Raimon Fernandez wrote:
>
>> How do I know where the length ends?
>
> You count 4 bytes.

thanks,

I'm now parsing the resulting string as a binary string and it's all
making sense ...

thanks for your help,

raimon


> --
> Alvaro Herrera                                http://www.CommandPrompt.com/
> The PostgreSQL Company - Command Prompt, Inc.
>



Re: Cancelling Requests Frontend/Backend Protocol TCP/IP

From
Raimon Fernandez
Date:
Hello,


From what I've read in the manuals, this operation is only valid before
PostgreSQL has finished processing the SELECT statement?

If for example I send a SELECT * FROM myTable, which has 20000 rows,
and postgres starts sending the rows, how can I cancel this operation?

I thought Cancelling Requests would be perfect for this ... the
workaround is closing and reopening the connection, but I don't like
this approach ...

thanks,

regards,


raimon


Re: DataRow Null values Frontend/Backend Protocol TCP/IP

From
Raimon Fernandez
Date:
Here I'm again ...


I'm parsing DataRow (B), and I'm having problems with NULL values.

In the docs I read that NULLs have a -1 length, and no bytes follow
for the value.

But I'm getting a value of 1020 instead of -1.

Int32  The length of the column value, in bytes (this count does not
include itself). Can be zero.
As a special case, -1 indicates a NULL column value. No value bytes
follow in the NULL case.

byte 1: 255 &HFF
byte 2: 255 &HFF
byte 3: 255 &HFF
byte 4: 255 &HFF
-----------------
1020 decimal or

&HFFFFFFFF 4294967295

but never -1

If I change my code to be aware of the 1020 instead of -1, I can
extract the next field/values without problems.


thanks,


regards,


raimon


Re: Cancelling Requests Frontend/Backend Protocol TCP/IP

From
Craig Ringer
Date:
On 2/11/2009 5:15 PM, Raimon Fernandez wrote:

> From what I've read in the manuals, this operation is only valid
> before PostgreSQL has finished processing the SELECT statement?
>
> If for example I send a SELECT * FROM myTable, which has 20000 rows,
> and postgres starts sending the rows, how can I cancel this operation?

Assuming you're asking "is there any way to cancel a running query using
the postgresql tcp/ip protocol" then, as you've noted, you can cancel
the request until you start getting data.

After that, you can still cancel the query by establishing another
connection to the server and calling pg_cancel_backend( ) at the SQL
level. This does, unfortunately, involve the overhead of setting up and
tearing down a connection and the associated backend.

--
Craig Ringer

Re: DataRow Null values Frontend/Backend Protocol TCP/IP

From
Craig Ringer
Date:
On 2/11/2009 5:21 PM, Raimon Fernandez wrote:
> Here I'm again ...
>
>
> I'm parsing the DataRow(B), and I'm having problems with NULL values.
>
> In the docs I read that NULLs have a -1 length, and no bytes follow
> for the value.
>
> But I'm getting a value of 1020 instead of -1

You're using RealBasic or something, right?

If so, you're probably doing something funky with signed/unsigned
integer handling and the representation of integers.

-1 is 0xffffffff as a _signed_ 32 bit integer, same in little-endian or
big-endian form. The same hex value as an unsigned integer is 4294967295.

Simple example in C++:

#include <iostream>
#include <iomanip>
int main() {
        std::cout << std::dec << (signed int)(-1) << ' ' <<
                     std::hex << (signed int)(-1) << std::endl;
        std::cout << std::dec << (unsigned int)(-1) << ' ' <<
                     std::hex << (unsigned int)(-1) << std::endl;
}

produces:

-1 ffffffff
4294967295 ffffffff


I don't know where you're getting the 1020, but 4294967295 is MAXUINT32
and suggests you're treating the data as an unsigned rather than a
signed 32 bit integer.
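In Python terms the two readings of those four bytes look like this; the struct module makes the signed/unsigned choice explicit (and, as it happens, summing the four byte values is what yields 1020):

```python
import struct

raw = b"\xff\xff\xff\xff"                  # the four 255 bytes off the wire

(as_signed,) = struct.unpack("!i", raw)    # network order, signed Int32
(as_unsigned,) = struct.unpack("!I", raw)  # same bits, read as unsigned

assert as_signed == -1                     # the DataRow NULL marker
assert as_unsigned == 4294967295           # MAXUINT32
assert sum(raw) == 1020                    # summing bytes is not decoding!
```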

--
Craig Ringer

Re: Cancelling Requests Frontend/Backend Protocol TCP/IP

From
Raimon Fernandez
Date:
On 02/11/2009, at 10:29, Craig Ringer wrote:

> On 2/11/2009 5:15 PM, Raimon Fernandez wrote:
>
>> From what I've read in the manuals, this operation is only valid
>> before PostgreSQL has finished processing the SELECT statement?
>>
>> If for example I send a SELECT * FROM myTable, which has 20000 rows,
>> and postgres starts sending the rows, how can I cancel this
>> operation?
>
> Assuming you're asking "is there any way to cancel a running query
> using
> the postgresql tcp/ip protocol" then, as you've noted, you can cancel
> the request until you start getting data.

yes,


> After that, you can still cancel the query by establishing another
> connection to the server and calling pg_cancel_backend( ) at the SQL
> level. This does, unfortunately, involve the overhead of setting up
> and
> tearing down a connection and the associated backend.

I assume the PID is the same as the process_id that I get from
BackendKeyData?


BackendKeyData (B)
Byte1('K')  Identifies the message as cancellation key data. The
frontend must save these values if it wishes to be able to issue
CancelRequest messages later.
Int32(12)  Length of message contents in bytes, including self.
Int32  The process ID of this backend.
Int32  The secret key of this backend.

process_id = 22403

I can send a SELECT, and while it's running, open a new connection
with psql and send SELECT pg_cancel_backend(22403); postgresql
returns t (true), but I'm still receiving rows in the first process ...
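For reference, a CancelRequest is its own tiny 16-byte packet, sent on a fresh connection in place of a startup message. A sketch in Python, using the values from BackendKeyData:

```python
import struct

def cancel_request(process_id, secret_key):
    # No type byte: Int32 length (16), the Int32 cancel request code
    # 80877102 (1234 in the high 16 bits, 5678 in the low), then the
    # PID and secret key saved from BackendKeyData.
    return struct.pack("!iiii", 16, 80877102, process_id, secret_key)

pkt = cancel_request(22403, 12345)  # 12345 is a made-up secret key
assert len(pkt) == 16
assert struct.unpack("!i", pkt[4:8])[0] == (1234 << 16) | 5678
```

After sending it, the client just closes that connection; the server never replies on it.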

thanks,

regards,

r.

Re: DataRow Null values Frontend/Backend Protocol TCP/IP

From
Raimon Fernandez
Date:
On 02/11/2009, at 10:37, Craig Ringer wrote:

> On 2/11/2009 5:21 PM, Raimon Fernandez wrote:
>> Here I'm again ...
>>
>>
>> I'm parsing the DataRow(B), and I'm having problems with NULL values.
>>
>> In the docs I read that NULLs have a -1 length, and no bytes
>> follow for the value.
>>
>> But I'm getting a value of 1020 instead of -1
>
> You're using RealBasic or something, right?
>
> If so, you're probably doing something funky with signed/unsigned
> integer handling and the representation of integers.
>
> -1 is 0xffffffff as a _signed_ 32 bit integer, same in little-endian
> or
> big-endian form. The same hex value as an unsigned integer is
> 4294967295 .
>
> ...
>
> I don't know where you're getting the 1020, but 4294967295 is
> MAXUINT32
> and suggests you're treating the data as an unsigned rather than a
> signed 32 bit integer.

Yes, you're right. I had old legacy code that was incorrectly
processing the received data directly as a string rather than as
binary ...

What I see is that it was working all the time, except for the -1 ...


Now it returns -1 ...

thanks !

regards,


raimon



Re: DataRow Null values Frontend/Backend Protocol TCP/IP

From
Alban Hertroys
Date:
On 2 Nov 2009, at 10:21, Raimon Fernandez wrote:

> byte 1: 255 &HFF
> byte 2: 255 &HFF
> byte 3: 255 &HFF
> byte 4: 255 &HFF
> -----------------
> 1020 decimal or

Thou shalt not sum the byte-values of a 32-bit number!

> &HFFFFFFFF 4294967295
>
> but never -1


That is -1.

Alban Hertroys

--
Screwing up is the best way to attach something to the ceiling.





Re: DataRow Null values Frontend/Backend Protocol TCP/IP

From
Alban Hertroys
Date:
On 2 Nov 2009, at 11:15, Alban Hertroys wrote:
> That is -1.


Pressed Send too soon, that's only true for signed 32-bit integers of
course.

Alban Hertroys

--
Screwing up is the best way to attach something to the ceiling.





Re: Cancelling Requests Frontend/Backend Protocol TCP/IP

From
John DeSoi
Date:
On Nov 2, 2009, at 4:15 AM, Raimon Fernandez wrote:

> If for example I send  a SELECT * from myTable, it has 20000 rows,
> and postgre starts sending the rows, how I can cancel this operation ?
>
> I thought Cancelling Requests would be perfect for this ... the
> workaround is closing and opening again the connection but I don't
> like this approach ...

A cleaner solution is to use the extended query protocol to limit the
total number of rows returned. For example, in my application I limit
the result set to 1000 rows. If I have not received all of the results,
I switch to a server-side cursor but still have the first 1000 rows for
immediate display to the end user.
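The row limit lives in the Execute step of the extended query protocol. A sketch of that message in Python (an empty portal name means the unnamed portal; a limit of 0 means no limit, while any other value suspends the portal after that many rows so you can Execute again or close it):

```python
import struct

def execute_message(portal=b"", max_rows=0):
    # Extended-query Execute: 'E', Int32 length (counting itself), a
    # NUL-terminated portal name, then an Int32 maximum row count.
    body = portal + b"\x00" + struct.pack("!i", max_rows)
    return b"E" + struct.pack("!i", len(body) + 4) + body

msg = execute_message(max_rows=1000)
assert msg[0:1] == b"E"
assert struct.unpack("!i", msg[-4:])[0] == 1000
```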



John DeSoi, Ph.D.





Re: Cancelling Requests Frontend/Backend Protocol TCP/IP

From
Tom Lane
Date:
Craig Ringer <craig@postnewspapers.com.au> writes:
> On 2/11/2009 5:15 PM, Raimon Fernandez wrote:
>> If for example I send  a SELECT * from myTable, it has 20000 rows, and
>> postgre starts sending the rows, how I can cancel this operation ?

> Assuming you're asking "is there any way to cancel a running query using
> the postgresql tcp/ip protocol" then, as you've noted, you can cancel
> the request until you start getting data.

> After that, you can still cancel the query by establishing another
> connection to the server and calling pg_cancel_backend( ) at the SQL
> level. This does, unfortunately, involve the overhead of setting up and
> tearing down a connection and the associated backend.

The above distinction is nonsense.  Query cancel works the same way
whether you have started receiving data or not --- it will open a
transient connection in any case.  Otherwise there would be race
condition problems if the backend is just about to start sending data.

            regards, tom lane

Re: Cancelling Requests Frontend/Backend Protocol TCP/IP

From
Raimon Fernandez
Date:
On 02/11/2009, at 15:38, Tom Lane wrote:

> Craig Ringer <craig@postnewspapers.com.au> writes:
>> On 2/11/2009 5:15 PM, Raimon Fernandez wrote:
>>> If for example I send  a SELECT * from myTable, it has 20000 rows,
>>> and
>>> postgre starts sending the rows, how I can cancel this operation ?
>
>> Assuming you're asking "is there any way to cancel a running query
>> using
>> the postgresql tcp/ip protocol" then, as you've noted, you can cancel
>> the request until you start getting data.
>
>> After that, you can still cancel the query by establishing another
>> connection to the server and calling pg_cancel_backend( ) at the SQL
>> level. This does, unfortunately, involve the overhead of setting up
>> and
>> tearing down a connection and the associated backend.
>
> The above distinction is nonsense.  Query cancel works the same way
> whether you have started receiving data or not --- it will open a
> transient connection in any case.  Otherwise there would be race
> condition problems if the backend is just about to start sending data.

So my approach of sending only the CancelRequest should work?

Always from a new connection, before sending the startup sequence:
just open, connect, and send it?


Quoted from the documentation:

"The cancellation signal might or might not have any effect — for
example, if it arrives after the
backend has finished processing the query, then it will have no
effect. If the cancellation is effective,
it results in the current command being terminated early with an error
message. "

Here I understand that maybe it will have NO effect, so postgresql
will keep sending rows and rows and rows ...


thanks,

regards,

raimon

Re: Cancelling Requests Frontend/Backend Protocol TCP/IP

From
Raimon Fernandez
Date:
On 02/11/2009, at 15:12, John DeSoi wrote:

>
> On Nov 2, 2009, at 4:15 AM, Raimon Fernandez wrote:
>
>> If for example I send  a SELECT * from myTable, it has 20000 rows,
>> and postgre starts sending the rows, how I can cancel this
>> operation ?
>>
>> I thought Cancelling Requests would be perfect for this ... the
>> workaround is closing and opening again the connection but I don't
>> like this approach ...
>
> A cleaner solution is to use the extended query protocol to limit
> the total number of rows returned. For example, in my application I
> limit the result set to 1000 rows. I have not received all of the
> results, I switch to a server side cursor but still have the first
> 1000 rows for immediate display to the end user.

This is another option, but first I at least want to get the
CancelRequest working ...

:-)

As an experiment, I'm doing it with LIMIT and OFFSET instead of cursors
at the moment ...

thanks,


regards,


raimon


Re: Cancelling Requests Frontend/Backend Protocol TCP/IP

From
Tom Lane
Date:
Raimon Fernandez <coder@montx.com> writes:
> Quoted from the documentation:
> "The cancellation signal might or might not have any effect — for
> example, if it arrives after the
> backend has finished processing the query, then it will have no
> effect.

> Here I understand that maybe it will have NO effect, so postgresql
> will keep sending rows and rows and rows ...

If you're too late, the backend has already sent all the rows.  There's
not much we can do about data that's already in flight.  There probably
won't be that much of it though, as TCP stacks don't buffer infinite
amounts of data.

            regards, tom lane

Re: Cancelling Requests Frontend/Backend Protocol TCP/IP

From
Craig Ringer
Date:
On 2/11/2009 10:38 PM, Tom Lane wrote:
> Craig Ringer <craig@postnewspapers.com.au> writes:
>> On 2/11/2009 5:15 PM, Raimon Fernandez wrote:
>>> If for example I send  a SELECT * from myTable, it has 20000 rows, and
>>> postgre starts sending the rows, how I can cancel this operation ?
>
>> Assuming you're asking "is there any way to cancel a running query using
>> the postgresql tcp/ip protocol" then, as you've noted, you can cancel
>> the request until you start getting data.
>
>> After that, you can still cancel the query by establishing another
>> connection to the server and calling pg_cancel_backend( ) at the SQL
>> level. This does, unfortunately, involve the overhead of setting up and
>> tearing down a connection and the associated backend.
>
> The above distinction is nonsense.  Query cancel works the same way
> whether you have started receiving data or not --- it will open a
> transient connection in any case.  Otherwise there would be race
> condition problems if the backend is just about to start sending data.


Whoops - and thanks for clearing that up. For some reason I thought it
used the existing connection if possible, but you've rather succinctly
pointed out why that wouldn't work sanely.
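For anyone implementing this by hand: the cancel packet sent on that transient connection is tiny. A sketch of my understanding of the layout (Python; the pid/secret values below are hypothetical, in reality they come from the BackendKeyData message received at connection startup):

```python
import struct

CANCEL_REQUEST_CODE = 80877102  # (1234 << 16) | 5678, per the protocol docs

def cancel_request(pid: int, secret_key: int) -> bytes:
    """Build the 16-byte CancelRequest packet, sent on a *new* connection.

    pid and secret_key are the values from the BackendKeyData message
    received at startup on the connection to be cancelled."""
    return struct.pack("!IIII", 16, CANCEL_REQUEST_CODE, pid, secret_key)

pkt = cancel_request(12345, 67890)
```

Note there is no message-type byte: just the length, the special cancel request code, and the two BackendKeyData fields.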

--
Craig Ringer

Re: Cancelling Requests Frontend/Backend Protocol TCP/IP

From
Raimon Fernandez
Date:
On 02/11/2009, at 17:35, Tom Lane wrote:

> Raimon Fernandez <coder@montx.com> writes:
>> Quoted from the documentation:
>> "The cancellation signal might or might not have any effect — for
>> example, if it arrives after the
>> backend has finished processing the query, then it will have no
>> effect.
>
>> Here I understand that maybe it will have NO effect, so postgresql
>> will still sending rows and rows and rows ...
>
> If you're too late, the backend has already sent all the rows.
> There's
> not much we can do about data that's already in flight.  There
> probably
> won't be that much of it though, as TCP stacks don't buffer infinite
> amounts of data.

Does the sentence 'backend has finished processing the query' mean that
postgresql has finished processing the select and has also sent all
the rows ?

I thought it meant only processing the request, with the rows not yet
all sent.

If the rows have been sent, and there are data in the TCP buffer,
that's another story ...

thanks,

raimon

Re: Cancelling Requests Frontend/Backend Protocol TCP/IP

From
Tom Lane
Date:
Raimon Fernandez <coder@montx.com> writes:
> The sentence 'backend has finished processing the query' means that
> postgresql has finished processing the select and also has sent all
> the rows ?

There is no distinction; rows are sent as they are generated.

            regards, tom lane

Re: Cancelling Requests Frontend/Backend Protocol TCP/IP

From
Raimon Fernandez
Date:
On 02/11/2009, at 17:53, Tom Lane wrote:

> Raimon Fernandez <coder@montx.com> writes:
>> The sentence 'backend has finished processing the query' means that
>> postgresql has finished processing the select and also has sent all
>> the rows ?
>
> There is no distinction; rows are sent as they are generated.

Yes, but  a SELECT can return 50000 rows, and as you say, postgresql
sends the rows as they are generated.

My question still remains unclear to me:

when does postgres finish processing the select: just before sending
the first row (1), in the middle (2), or at the end (3), when the last
row has been sent ?

If I send the CancelRequest when postgres is in point 3, I'm too late,
but if postgres is in 1 or 2, the CancelRequest will have some effect.

Am I still wrong here ?

thanks for clarifying the concept!

regards,

raimon

Re: Cancelling Requests Frontend/Backend Protocol TCP/IP

From
John DeSoi
Date:
On Nov 2, 2009, at 12:17 PM, Raimon Fernandez wrote:

> when postgres has finished processing the select, just before
> sending the first row(1), in the middle(2), or at the end(3), when
> the last row has been sent ?
>
> If I send the CancelRequest when postgres is in point 3, I'm too
> late, but if postgres is in 1 or 2, the CancelRequest will have some
> effect.
>
> I'm still wrong here ?
>
> thanks for clarification the concept!

Yes, it will have some effect in cases 1 and 2. You will know it
worked because you'll get error 57014 - canceling statement due to
user request.

An easy way to test this out is to call pg_sleep with a big number and
then cancel the query on another connection. You won't have to worry
about the timing of receiving all rows or not.




John DeSoi, Ph.D.





Re: Cancelling Requests Frontend/Backend Protocol TCP/IP

From
Raimon Fernandez
Date:
On 02/11/2009, at 20:01, John DeSoi wrote:

>
> On Nov 2, 2009, at 12:17 PM, Raimon Fernandez wrote:
>
>> when postgres has finished processing the select, just before
>> sending the first row(1), in the middle(2), or at the end(3), when
>> the last row has been sent ?
>>
>> If I send the CancelRequest when postgres is in point 3, I'm too
>> late, but if postgres is in 1 or 2, the CancelRequest will have
>> some effect.
>>
>> I'm still wrong here ?
>>
>> thanks for clarification the concept!
>
> Yes, it will have some effect in cases 1 and 2. You will know it
> worked because you'll get error 57014 - canceling statement due to
> user request.
>
> An easy way to test this out is to call pg_sleep with a big number
> and then cancel the query on another connection. You won't have to
> worry about the timing of receiving all rows or not.

thanks!

Now I can cancel them using the front end or pg_cancel_backend; I
had an error in my front-end function, now it's working ...

:-)

thanks for all,

regards,

raimon


Encoding using the Frontend/Backend Protocol TCP/IP

From
Raimon Fernandez
Date:
Hello,


I'm trying to send some strings that have characters outside standard ASCII, like çñàèó

Once I'm connected, the client and server both use UTF8 encoding.

And I'm sending all the strings encoded in UTF8.

At least the received ones are working, as I get the text exactly as it is, with special chars.

But when I'm trying to update a row using some of them, I'm getting an error:

ERROR
08P01
Invalid string in message
pqformat.c
691
pq_getmstring
Invalid Front End message type 0
postgres.c
408
socketbackend
you have been disconected

How should I encode ????


thanks,

regards,


r.

Re: Encoding using the Frontend/Backend Protocol TCP/IP

From
Kovalevski Andrei
Date:
Hi

could it be that you have errors in your UTF8 string? For example you
might be using UTF16 encoding; that would explain why some characters
cause errors but others do not. Can you post the string and its encoded
version here?

Raimon Fernandez wrote:
> Hello,
>
>
> I'm trying to send some strings that have chars outside from standar ascii, like çñàèó
>
> Once I'm connected, the client and server both uses UT8Encoding.
>
> And I'm sending all the strings encoded in UTF8.
>
> At least the received ones are working, as I get the text exactly as it is, with special chars.
>
> But when I'm trying to update a row using some of them, I'm getting an error:
>
> ERROR
> 08P01
> Invalid string in message
> pqformat.c
> 691
> pq_getmstring
> Invalid Front End message type 0
> postgres.c
> 408
> socketbackend
> you have been disconected
>
> How should I encode ????
>
>
> thanks,
>
> regards,
>
>
> r.
>
>


Re: Encoding using the Frontend/Backend Protocol TCP/IP

From
Raimon Fernandez
Date:
On 19/11/2009, at 17:27, Kovalevski Andrei wrote:

> Hi
>
> could it be that you have errors in your UTF8 string? For example you might use UTF16 encoding, it can explain why
> some characters force errors but others are not.

It only happens with values like àéïçñ. I think UTF8 can handle these ...



> Can you post here the string and its' encoded version?
>

Original string:
Q

Re: Encoding using the Frontend/Backend Protocol TCP/IP

From
Raimon Fernandez
Date:
On 19/11/2009, at 18:13, Raimon Fernandez wrote:

>
> On 19/11/2009, at 17:27, Kovalevski Andrei wrote:
>
>> Hi
>>
>> could it be that you have errors in your UTF8 string? For example you might use UTF16 encoding, it can explain why
> some characters force errors but others are not.
>
> It only happens with values like àéïçñ I think UTF8 can handle this ...


yes, It can handle it ...

if I send the encoding by hand in a very simple update, it works, so there's something with the UTF8 conversion that doesn't
work ...

for example, instead of sending Ç I send its equivalent in UTF8, &HC3+&H87, and it works ...

thanks,

regards,




Re: Encoding using the Frontend/Backend Protocol TCP/IP

From
Kovalevski Andrei
Date:
Hi,

the string is ok, but the problem is inside the message. The length of the message is incorrect:

your message:
5100000046557064617465207472616E73616374696F6E7320736574206465736372697074696F6E3D27546573742056616C75657364C387272077686572652069643D313133
it should be:
5100000045557064617465207472616E73616374696F6E7320736574206465736372697074696F6E3D27546573742056616C75657364C387272077686572652069643D313133

Raimon Fernandez wrote:
> On 19/11/2009, at 18:13, Raimon Fernandez wrote:
>
>> On 19/11/2009, at 17:27, Kovalevski Andrei wrote:
>>
>>> Hi
>>>
>>> could it be that you have errors in your UTF8 string? For example you might use UTF16 encoding, it can explain why some characters force errors but others are not.
>>
>> It only happens with values like àéïçñ I think UTF8 can handle this ...
>
> yes, It can handle it ...
>
> if I send the decoding by hand in a very simple update, it works, so there's something with UTF8 conversion that doesn't work ...
>
> for example, instead of sending Ç I send their equivalent in UTF8 &HC3+&H87 and it works ...
>
> thanks,
>
> regards,

Re: Encoding using the Frontend/Backend Protocol TCP/IP

From
Raimon Fernandez
Date:

On 19/11/2009, at 21:21, Kovalevski Andrei wrote:

> Hi,
>
> the string is ok, but the problem is inside the message. The length of the message is incorrect:
>
> your message:
> 5100000046557064617465207472616E73616374696F6E7320736574206465736372697074696F6E3D27546573742056616C75657364C387272077686572652069643D313133
> it should be:
> 5100000045557064617465207472616E73616374696F6E7320736574206465736372697074696F6E3D27546573742056616C75657364C387272077686572652069643D313133
ok, thanks.

Finally it's working; there was a mistake on my part sending the encoding ...

:-)

regards,

raimon

Extended Query using the Frontend/Backend Protocol 3.0

From
Raimon Fernandez
Date:
Hello again,


I'm trying to integrate the extended query protocol with my libraries.

I'm sending a simple SELECT to validate the method, but I'm getting an Invalid Message Format.


50 => P
00 00 00 29 => length
6D7973746174656D656E74 00 => mystatement + null
73656C656374202A2066726F6D206D797461626C653B 00 => select * from mytable; + null
00 00 => number of parameters, zero

any idea ?

thanks and regards,


raimon




Re: Extended Query using the Frontend/Backend Protocol 3.0

From
John DeSoi
Date:
On Dec 17, 2009, at 11:13 AM, Raimon Fernandez wrote:

> I'm trying to integrate the extended query protocol with my libraries.
>
> I'm sending a simple SELECT to validate the method, but I'm getting an Invalid Message Format.

I did not add up your byte count, but maybe this will help:


(write-byte p stream)
(write-int32 (+ int32-length (length name) 1 (length sql-string) 1 int16-length (* int32-length param-count)) stream)
(write-cstring name stream)
(write-cstring sql-string stream)
(write-int16 param-count stream)
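And the same arithmetic in Python, in case it helps to cross-check (a sketch, not taken from any driver):

```python
import struct

def parse_message(name: str, sql: str, param_oids=()) -> bytes:
    # Parse ('P'): length = 4 (itself) + name + NUL + sql + NUL
    #              + 2 (param count) + 4 per parameter OID
    body = (name.encode("utf-8") + b"\x00"
            + sql.encode("utf-8") + b"\x00"
            + struct.pack("!H", len(param_oids)))
    for oid in param_oids:
        body += struct.pack("!I", oid)
    return b"P" + struct.pack("!I", 4 + len(body)) + body

msg = parse_message("mystatement", "select * from mytable;")
```

Running this, the length field can be checked directly against a hex dump.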



John DeSoi, Ph.D.





Re: Extended Query using the Frontend/Backend Protocol 3.0

From
Raimon Fernandez
Date:
On 18/12/2009, at 2:26, John DeSoi wrote:

>
> On Dec 17, 2009, at 11:13 AM, Raimon Fernandez wrote:
>
>> I'm trying to integrate the extended query protocol with my libraries.
>>
>> I'm sending a simple SELECT to validate the method, but I'm getting an Invalid Message Format.
>
> I did not add up your byte count, but maybe this will help:
>
>
> (write-byte p stream)
> (write-int32 (+ int32-length (length name) 1 (length sql-string) 1 int16-length (* int32-length param-count)) stream)

> (write-cstring name stream)
> (write-cstring sql-string stream)
> (write-int16 param-count stream)

I'm doing as you say:

mystatement => 11
select * from mytable; => 22

> (write-byte p stream)
50 => P


> (write-int32 (+ int32-length (length name) 1 (length sql-string) 1 int16-length (* int32-length param-count)) stream)


4 + 11 + 1 + 22 + 1 + 2 + 0 (param count=0) => 41

00 00 00 29 => length

> (write-cstring name stream)

6D7973746174656D656E74 00 => mystatement + null

> (write-cstring sql-string stream)

73656C656374202A2066726F6D206D797461626C653B 00 => select * from mytable; + null

> (write-int16 param-count stream)


00 00 => number of parameters, zero


any idea ???????

thanks,

regards,

r.





Extended Query, flush or sync ?

From
Raimon Fernandez
Date:
Hello,

It's not clear to me whether I have to issue a flush or a sync after each step of an extended query.

It's almost working for me only when I send a sync, but not when I send a flush. With the flush, the connection seems
frozen, or at least, I don't get any data from Postgres.


- Send the Parse command
- sync
- Receive the ParseComplete
- sync
- Send the Bind
- sync
- Receive the BindComplete
- send the Execute
- receive an error => "portal xxxxxxx does not exist"


thanks,

regards,


r.

Re: Extended Query, flush or sync ?

From
John DeSoi
Date:
Hi Raimon,

On Dec 20, 2009, at 2:11 PM, Raimon Fernandez wrote:

>
> I'm not seeing my e-mails on the PostgreSQL General List ...
>
> ??????

Yes, my last message did not make it to the list yesterday (you obviously received it). I double checked and it was cc'd
to the list.


> I can pack all of them and send them at the same time, except the Parse, which will go at the connection beginning in
> my case.

I have two routines, prepare and exec_prepare.

To prepare a named statement for multiple uses, I use prepare (parse, describe, sync).

exec_prepare can take a statement from prepare OR you can pass it the unparsed SQL instead (along with the parameters).
In the second case it performs the parse first with the unnamed prepared statement (empty string) and then executes it.
This is nice because if you don't need multiple executions, you can build and execute with a single network write and
read. You get the safety of parameterized execution and you don't have a prepared statement to dispose of in another
operation.


John DeSoi, Ph.D.





Re: Extended Query, flush or sync ?

From
Raimon Fernandez
Date:
Hello,

On 19/12/2009, at 4:31, John DeSoi wrote:

>
> On Dec 18, 2009, at 4:44 PM, Raimon Fernandez wrote:
>
>> It's not clear for me if I have to issue a flush or sync after each process of an extended query.
>>
>> It's almost working for me only when I send a sync, but not when I send a flush. With the flush, the connection
>> seems frozen, or at least, I don't get any data from Postgres.
>>
>>
>> - Send the Parse command
>> - sync
>> - Receive the ParseComplete
>> - sync
>> - Send the Bind
>> - sync
>> - Receive the BindComplete
>> - send the Execute
>> - receive an error => "portal xxxxxxx does not exist"
>
>
> I send:
>
> parse
> bind
> describe
> execute
> sync
>
> and then loop on the connection stream to receive the responses.

And do you get the parseComplete after sending the parse or after sending the sync ?

I'm not getting parseComplete, bindComplete if I don't send a sync after each command.

If I follow your advice, after the sync, I get the parseComplete, bindComplete, and portalSuspended (because I've reached
the max rows).

I don't know if your approach is the correct one, but why send a Bind if we don't know whether the Parse has been
successfully created ...

From the docs:

 "A Flush must be sent after any extended-query command except Sync, if the frontend wishes to examine the results of
thatcommand before issuing more commands. 
Without Flush, messages returned by the backend will be combined into the minimum possible number of packets to
minimizenetwork overhead." 

Ok, I see that both approaches should work, but for me, sending a flush after each extended-query command like parse,
bind, ... doesn't do anything ...


And also from the docs:

"If Execute terminates before completing the execution of a portal (due to reaching a nonzero result- row count), it
willsend a PortalSuspended message; t 
he appearance of this message tells the frontend that another Execute should be issued against the same portal to
completethe operation. " 

If I execute with a row limit of 1000, and I know there are more than 1000 rows, I get the portalSuspended as
described.

But if I issue a new Execute, postgresql says that myPortal doesn't exist anymore.

How can I get those 1000 rows ?


thanks !

regards,


raimon




Re: Extended Query, flush or sync ?

From
John DeSoi
Date:
On Dec 19, 2009, at 2:40 AM, Raimon Fernandez wrote:

>> I send:
>>
>> parse
>> bind
>> describe
>> execute
>> sync
>>
>> and then loop on the connection stream to receive the responses.
>
> And do you get the parseComplete after sending the parse or after sending the sync ?

I don't really know or care. I send the entire sequence above and then read the results handling each possible case. In
other words, I don't read anything after each message; I only read after sending the sync.

> And also from the docs:
>
> "If Execute terminates before completing the execution of a portal (due to reaching a nonzero result- row count), it
willsend a PortalSuspended message; t 
> he appearance of this message tells the frontend that another Execute should be issued against the same portal to
completethe operation. " 
>
> If I execute with a row limit of 1000, and I know there are more than 1000 rows, I get the portalSuspended as
> described.
>
> But if I issue a new Execute, postgresql says that myPortal doesn't exist anymore.
>
> How can I get those 1000 rows ?

Are you using a named portal? Are you reading all responses until you receive a ready for query response? There are a
lot of details - it really helped me to look at the psql source.



John DeSoi, Ph.D.





Re: Extended Query, flush or sync ?

From
Raimon Fernandez
Date:
On 18/12/2009, at 22:55, Tom Lane wrote:

> Raimon Fernandez <coder@montx.com> writes:
>> It's not clear for me if I have to issue a flush or sync after each process of an extended query.
>
> Basically, you send one of these at the points where you're going to
> wait for an answer back.  Sync is different from Flush in that it also
> provides a resynchronization point after an error: when the backend hits
> an error while processing a given message, it ignores following messages
> up to the next Sync.

So I have to send one of these after sending a Parse command, a Bind command, and an Execute ?

Is it normal that I don't receive anything if I send a Flush instead of a Sync ?

regards and thanks,


raimon

Re: Extended Query, flush or sync ?

From
John DeSoi
Date:
On Dec 18, 2009, at 4:44 PM, Raimon Fernandez wrote:

> It's not clear for me if I have to issue a flush or sync after each process of an extended query.
>
> It's almost working for me only when I send a sync, but not when I send a flush. With the flush, the connection seems
> frozen, or at least, I don't get any data from Postgres.
>
>
> - Send the Parse command
> - sync
> - Receive the ParseComplete
> - sync
> - Send the Bind
> - sync
> - Receive the BindComplete
> - send the Execute
> - receive an error => "portal xxxxxxx does not exist"


I send:

parse
bind
describe
execute
sync

and then loop on the connection stream to receive the responses.
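Packed into one buffer it looks something like this (a Python sketch; message layouts per my reading of the protocol docs, using the unnamed statement and portal):

```python
import struct

def _msg(tag: bytes, body: bytes) -> bytes:
    # Every extended-protocol message: tag byte, Int32 length (incl. itself), body
    return tag + struct.pack("!I", 4 + len(body)) + body

def extended_query(sql: str, max_rows: int = 0) -> bytes:
    """Parse/Bind/Describe/Execute/Sync for the unnamed statement and
    portal, packed into a single buffer for one network write."""
    parse = _msg(b"P", b"\x00" + sql.encode("utf-8") + b"\x00" + struct.pack("!H", 0))
    # Bind: portal, statement, 0 param format codes, 0 params, 0 result format codes
    bind = _msg(b"B", b"\x00\x00" + struct.pack("!HHH", 0, 0, 0))
    describe = _msg(b"D", b"P\x00")                              # describe the unnamed portal
    execute = _msg(b"E", b"\x00" + struct.pack("!I", max_rows))  # 0 = no row limit
    sync = _msg(b"S", b"")
    return parse + bind + describe + execute + sync

buf = extended_query("select 1;")
```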



John DeSoi, Ph.D.





Re: Extended Query, flush or sync ?

From
Tom Lane
Date:
Raimon Fernandez <coder@montx.com> writes:
> It's not clear for me if I have to issue a flush or sync after each process of an extended query.

Basically, you send one of these at the points where you're going to
wait for an answer back.  Sync is different from Flush in that it also
provides a resynchronization point after an error: when the backend hits
an error while processing a given message, it ignores following messages
up to the next Sync.

            regards, tom lane

Re: Extended Query, flush or sync ?

From
Raimon Fernandez
Date:
Hi John,


I'm not seeing my e-mails on the PostgreSQL General List ...

??????

On 19/12/2009, at 16:32, John DeSoi wrote:

>
> On Dec 19, 2009, at 2:40 AM, Raimon Fernandez wrote:
>
>>> I send:
>>>
>>> parse
>>> bind
>>> describe
>>> execute
>>> sync
>>>
>>> and then loop on the connection stream to receive the responses.
>>
>> And do you get the parseComplete after sending the parse or after sending the sync ?
>
> I don't really know or care. I send the entire sequence above and then read the results handling each possible case.
> In other words, I don't read anything after each message; I only read after sending the sync.

I see. I don't know why I was sending each command in a separate communication; I can pack all of them and send them at
the same time, except the Parse, which will go at the connection beginning in my case.


>> And also from the docs:
>>
>> "If Execute terminates before completing the execution of a portal (due to reaching a nonzero result- row count), it
willsend a PortalSuspended message; t 
>> he appearance of this message tells the frontend that another Execute should be issued against the same portal to
completethe operation. " 
>>
>> If I execute with a row limit of 1000, and I know there are more than 1000 rows, I get the portalSuspended as
>> described.
>>
>> But if I issue a new Execute, postgresql says that myPortal doesn't exist anymore.
>>
>> How can I get those 1000 rows ?
>
> Are you using a named portal? Are you reading all responses until you receive a ready for query response? There are a
> lot of details - it really helped me to look at the psql source.

I'm using Portals with my own name; I'll give it a shot later ...

thanks !

regards,

r.

Re: Extended Query, flush or sync ?

From
Raimon Fernandez
Date:
On 19/12/2009, at 16:32, John DeSoi wrote:

>> If I execute with a row limit of 1000, and I know there are more than 1000 rows, I get the portalSuspended as
>> described.
>>
>> But if I issue a new Execute, postgresql says that myPortal doesn't exist anymore.
>>
>> How can I get those 1000 rows ?
>
> Are you using a named portal? Are you reading all responses until you receive a ready for query response? There are a
> lot of details - it really helped me to look at the psql source.

Yes, I'm using a named portal.

The new question is:

When I get the PortalSuspended, I get the 1000 rows, and for fetching the others, I have to send a new Execute to the
same portal:

"If Execute terminates before completing the execution of a portal (due to reaching a nonzero result- row count), it
willsend a PortalSuspended message; the appearance of this message tells the frontend that another Execute should be
issuedagainst the same portal to complete the operation. " 

But the portal isn't destroyed after a sync ?

I'm getting a "Portal 'myPortal' doesn't exist  "when sending the next Execute ...



1. Parse the Select with some $1, $2

2. Send a Bind + Describe + Execute + Sync

3. received the portalSuspended

4. Send the Execute

5. Receive the error "Portal 'myPortal' doesn't exist"


thanks,


regards,


raimon

Extended Query vs Simple Query

From
Raimon Fernandez
Date:
Hello again,


Now that I have the Extended Query working using the Front End Protocol 3.0, I'm getting better results with simple
queries than with extended queries.


table comptes:

Simple query:

 select * from comptes WHERE codi_empresa = '05' AND nivell=11 and clau_compte like '05430%' => 0.0273 seconds for 14
rows



Extended Query: 111074 rows

All three columns are indexed.


Parse: select * from comptes WHERE codi_empresa = $1 AND nivell=$2 and clau_compte like $3

Bind + Execute + Sync in the same packet connection: 05,11,05430% => 0.1046 for 10 rows

I measure the time when binding + executing + Sync.

I'm using prepared named statement and portals.

The difference is really big ...


In the docs I understand that the unnamed prepared statement with parameters is planned during the bind
phase, but I'm using a named prepared statement ...

And later, in a Note, I can read:

Note: Query plans generated from a parameterized query might be less efficient than query plans generated from an
equivalent query with actual parameter values substituted. The query planner cannot make decisions based on actual
parameter values (for example, index selectivity) when planning a parameterized query assigned to a named
prepared-statement object. This possible penalty is avoided when using the unnamed statement, since it is not planned
until actual parameter values are available. The cost is that planning must occur afresh for each Bind, even if the
query stays the same.

And now nothing is clear to me at all ...

What are the advantages of using the extended query ?

thanks,

regards,


raimon

Re: Extended Query, flush or sync ?

From
Tom Lane
Date:
Raimon Fernandez <coder@montx.com> writes:
> But the portal isn't destroyed after a sync ?

Not directly by a Sync, no.

> I'm getting a "Portal 'myPortal' doesn't exist" when sending the next Execute ...

End of transaction would destroy portals --- are you holding a
transaction open for this?  It's basically just like a cursor.

            regards, tom lane

Re: Extended Query, flush or sync ?

From
Raimon Fernandez
Date:
On 22/12/2009, at 18:15, Tom Lane wrote:

> Raimon Fernandez <coder@montx.com> writes:
>> But the portal isn't destroyed after a sync ?
>
> Not directly by a Sync, no.

ok,


>> I'm getting a "Portal 'myPortal' doesn't exist" when sending the next Execute ...
>
> End of transaction would destroy portals --- are you holding a
> transaction open for this?  It's basically just like a cursor.

not that I'm aware of ...

I'll investigate it further ...

thanks!


regards,


raimon

Re: Extended Query, flush or sync ?

From
Raimon Fernandez
Date:
On 22/12/2009, at 18:15, Tom Lane wrote:

> Raimon Fernandez <coder@montx.com> writes:
>> But the portal isn't destroyed after a sync ?
>
> Not directly by a Sync, no.
>
>> I'm getting a "Portal 'myPortal' doesn't exist" when sending the next Execute ...
>
> End of transaction would destroy portals --- are you holding a
> transaction open for this?  It's basically just like a cursor.


OK, after re-reading your email and the docs again and again, I see that portals must be inside a transaction; now it's
working ...

Here are my steps:

- parse the Selects
...
- start transaction
- bind using a prepared statement name and a portal name
- execute x n
- close transaction
...
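The Execute message from those steps, as I understand the layout (a Python sketch; the portal name and row limit are just the values from my tests):

```python
import struct

def execute_message(portal: str, max_rows: int) -> bytes:
    """Execute ('E') message: portal name + NUL, then Int32 max rows
    (0 = unlimited)."""
    body = portal.encode("utf-8") + b"\x00" + struct.pack("!I", max_rows)
    return b"E" + struct.pack("!I", 4 + len(body)) + body

msg = execute_message("myPortal", 1000)
```

Per my reading of the docs, after a PortalSuspended the same Execute can be sent again to continue the portal, as long as the surrounding transaction is still open.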



is this the correct way ?

And in the case where I limit the Execute, how can I get the pending rows ?

I'm using a CURSOR with the portal just created, and it works perfectly.

Using a new Execute, I'm getting the previous rows again plus the new ones, and with the CURSOR, only the pending rows
...

Is this the correct way ?


And where can I get more info about when it's better to use an extended query, a portal, a cursor, a simple query, ... ?

thanks!


regards,



raimon

SELECT is immediate but the UPDATE takes forever

From
Raimon Fernandez
Date:
Hi,


I want to understand why one of my postgresql functions takes an eternity to finish.

Here's an example:

UPDATE comptes SET belongs_to_compte_id=42009 WHERE (codi_compte LIKE '10000%' AND empresa_id=2 AND nivell=11); //
takes forever to finish

QUERY PLAN
--------------------------------------------------------------------------------------------
 Seq Scan on comptes  (cost=0.00..6559.28 rows=18 width=81)
   Filter: (((codi_compte)::text ~~ '10000%'::text) AND (empresa_id = 2) AND (nivell = 11))
(2 rows)


but the same SELECT count is immediate:

SELECT count(id) FROM comptes WHERE codi_compte LIKE '10000%' AND empresa_id=2 AND nivell=11;


What am I doing wrong ?

thanks,

regards,

r.

Re: SELECT is immediate but the UPDATE takes forever

From
Michał Roszka
Date:
Quoting Raimon Fernandez <coder@montx.com>:

> I want to understand why one of my postgresql functions takes an
> eternity to finish.
>
> Here's an example:
>
> UPDATE comptes SET belongs_to_compte_id=42009 WHERE (codi_compte LIKE
> '10000%' AND empresa_id=2 AND nivell=11); // takes forever to finish

[...]

> but the same SELECT count, it's immediate:
>
> SELECT count(id) FROM comptes WHERE codi_compte LIKE '10000%' AND
> empresa_id=2 AND nivell=11;

Maybe there is any check or constraint on belongs_to_compte_id.comptes that
might take longer?

Cheers,

    -Mike

--
Michał Roszka
mike@if-then-else.pl


Re: SELECT is immediate but the UPDATE takes forever

From
Alban Hertroys
Date:
On 7 Dec 2010, at 15:45, Michał Roszka wrote:
>> but the same SELECT count, it's immediate:
>>
>> SELECT count(id) FROM comptes WHERE codi_compte LIKE '10000%' AND
>> empresa_id=2 AND nivell=11;
>
> Maybe there is any check or constraint on belongs_to_compte_id.comptes that
> might take longer?


Or a foreign key constraint or an update trigger, to name a few other possibilities.

Alban Hertroys

--
Screwing up is an excellent way to attach something to the ceiling.





Re: SELECT is immediate but the UPDATE takes forever

From
Tom Lane
Date:
Michał Roszka <mike@if-then-else.pl> writes:
> Quoting Raimon Fernandez <coder@montx.com>:
>> I want to understand why one of my postgresql functions takes an
>> eternity to finish.

> Maybe there is any check or constraint on belongs_to_compte_id.comptes that
> might take longer?

Or maybe the UPDATE is blocked on a lock ... did you look into
pg_stat_activity or pg_locks to check?

            regards, tom lane

Re: SELECT is immediate but the UPDATE takes forever

From
Raimon Fernandez
Date:
On 7dic, 2010, at 15:45 , Michał Roszka wrote:

> Quoting Raimon Fernandez <coder@montx.com>:
>
>> I want to understand why one of my postgresql functions takes an
>> eternity to finish.
>>
>> Here's an example:
>>
>> UPDATE comptes SET belongs_to_compte_id=42009 WHERE (codi_compte LIKE
>> '10000%' AND empresa_id=2 AND nivell=11); // takes forever to finish
>
> [...]
>
>> but the same SELECT count, it's immediate:
>>
>> SELECT count(id) FROM comptes WHERE codi_compte LIKE '10000%' AND
>> empresa_id=2 AND nivell=11;
>
> Maybe there is any check or constraint on belongs_to_compte_id.comptes that
> might take longer?

no, there's no check or constraint (no foreign key, ...) on this field.

I'm using now another database with same structure and data and the delay doesn't exist there, there must be something
wrong in my current development database.

I'm checking this now ...

thanks,

r.


>
> Cheers,
>
>   -Mike
>
> --
> Michał Roszka
> mike@if-then-else.pl
>
>



Re: SELECT is immediate but the UPDATE takes forever

From
Vick Khera
Date:
2010/12/7 Raimon Fernandez <coder@montx.com>:
> I'm using now another database with same structure and data and the delay doesn't exist there, there must be
> something wrong in my current development database.
>

does autovacuum run on it? is the table massively bloated?  is your
disk system really, really slow to allocate new space?

Re: SELECT is immediate but the UPDATE takes forever

From
Raimon Fernandez
Date:
On 8dic, 2010, at 18:18 , Vick Khera wrote:

> 2010/12/7 Raimon Fernandez <coder@montx.com>:
>> I'm using now another database with same structure and data and the delay doesn't exist there, there must be
>> something wrong in my current development database.
>>
>
> does autovacuum run on it?

no

> is the table massively bloated?

no

> is your disk system really, really slow to allocate new space?

no


now:

well, after a VACUUM things are going faster ... I'm still trying to analyze the function as it seems there are other
bottlenecks, but at least the first update is now faster than before ...

thanks,

r.

Re: SELECT is immediate but the UPDATE takes forever

From
Raimon Fernandez
Date:
On 7dic, 2010, at 16:37 , Tom Lane wrote:

>> Quoting Raimon Fernandez <coder@montx.com>:
>>> I want to understand why one of my postgresql functions takes an
>>> eternity to finish.
>
>> Maybe there is any check or constraint on belongs_to_compte_id.comptes that
>> might take longer?
>
> Or maybe the UPDATE is blocked on a lock ... did you look into
> pg_stat_activity or pg_locks to check?

no, there's no lock, no blocking ... I'm the only user connected with my developer test database and I'm sure there are no
locks, and even more sure after looking at pg_locks :-)

thanks,

r.

use a variable name for an insert in a trigger for an audit

From
Raimon Fernandez
Date:
Hello,

I have to audit all the changes for all rows of one database.

I have a trigger that executes BEFORE any update or delete, and simply copies the row (INSERT INTO) into the replicated table.

For example, every table has the same name plus '_audit' at the end and belongs to the schema audit:

table public.persons => audit.persons_audit

I don't want to create specific triggers/functions for every table, so I want to modify the table name in the INSERT INTO using TG_TABLE_NAME, but I can't make it work.

Also I can't see a working solution in the archives, and some examples are quite messy, so maybe I have to rethink how I'm doing things or just create a specific trigger for each table.

Here is my function; for now I'm only testing the INSERT:

...
DECLARE
 tableRemote varchar;
BEGIN

IF TG_TABLE_NAME = 'assentaments' THEN
 tableRemote:='audit.'||TG_TABLE_NAME||'_audit';
END IF;

        --
        -- Create a row in table_audit to reflect the operation performed on emp,
        -- make use of the special variable TG_OP to work out the operation.
        --

        IF (TG_OP = 'DELETE') THEN
            EXECUTE 'INSERT INTO audit.assentaments_audit SELECT CURRVAL(''audit_id_seq''),5, OLD.*';
            RETURN OLD;
        ELSIF (TG_OP = 'UPDATE') THEN
            INSERT INTO tableRemote  SELECT CURRVAL('audit.audit_id_seq'),3,OLD.*;
            RETURN OLD;
        END IF;
        RETURN NULL; -- result is ignored since this is an AFTER trigger
    END;
...

thanks,

regards,




Re: SELECT is immediate but the UPDATE takes forever

From
Vick Khera
Date:
On Wed, Dec 8, 2010 at 10:58 PM, Raimon Fernandez <coder@montx.com> wrote:
> well, after a VACUUM things are going faster ... I'm still trying to analyze the function as it seems there are other bottlenecks, but at least the first update is now faster than before ...
>

If that's the case then your 'no' answer to "is the table bloated" was
probably incorrect, and your answer to "is your I/O slow to grow a
file" is also probably incorrect.
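One quick way to sanity-check the bloat answer is to compare on-disk size against the live row count (the table name is only an example):

```sql
SELECT pg_size_pretty(pg_relation_size('assentaments'))       AS heap_size,
       pg_size_pretty(pg_total_relation_size('assentaments')) AS total_size,
       (SELECT count(*) FROM assentaments)                    AS live_rows;
```

A heap many times larger than the live rows justify is the classic sign of bloat that VACUUM (or, for reclaiming the space, VACUUM FULL or CLUSTER) addresses.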

Re: SELECT is immediate but the UPDATE takes forever

From
Raimon Fernandez
Date:
On 9 Dec 2010, at 14:32, Vick Khera wrote:

>> well, after a VACUUM things are going faster ... I'm still trying to analyze the function as it seems there are other bottlenecks, but at least the first update is now faster than before ...
>>
>
> If that's the case then your 'no' answer to "is the table bloated" was probably incorrect,

here you maybe are right

> and your answer to "is your I/O slow to grow a file" is also probably incorrect.

not sure, as I'm not experiencing any slowness on the same machine with other postgresql databases that are also more or less the same size; I'm still a real newbie ...

thanks!

regards,

raimon

Re: use a variable name for an insert in a trigger for an audit

From
Raimon Fernandez
Date:
On 9 Dec 2010, at 04:40, Raimon Fernandez wrote:

> Hello,
>
> I have to audit all the changes for all rows of one database.
>
> I have a trigger that executes BEFORE any update or delete, and simply copies the row (INSERT INTO) into the replicated table.
>
> For example, every table has the same name plus '_audit' at the end and belongs to the schema audit:
>
> table public.persons => audit.persons_audit
>
> I don't want to create specific triggers/functions for every table, so I want to modify the table name in the INSERT INTO using TG_TABLE_NAME, but I can't make it work.
>
> Also I can't see a working solution in the archives, and some examples are quite messy, so maybe I have to rethink how I'm doing things or just create a specific trigger for each table.
>
> Here is my function; for now I'm only testing the INSERT:
>
> ...
> DECLARE
> tableRemote varchar;
> BEGIN
>
> IF TG_TABLE_NAME = 'assentaments' THEN
> tableRemote:='audit.'||TG_TABLE_NAME||'_audit';
> END IF;
>
>        --
>        -- Create a row in table_audit to reflect the operation performed on emp,
>        -- make use of the special variable TG_OP to work out the operation.
>        --
>
>         IF (TG_OP = 'DELETE') THEN
>            EXECUTE 'INSERT INTO audit.assentaments_audit SELECT CURRVAL(''audit_id_seq''),5, OLD.*';
>            RETURN OLD;
>        ELSIF (TG_OP = 'UPDATE') THEN
>            INSERT INTO tableRemote  SELECT CURRVAL('audit.audit_id_seq'),3,OLD.*;
>            RETURN OLD;
>        END IF;
>        RETURN NULL; -- result is ignored since this is an AFTER trigger
>    END;
> ...
>
> thanks,
>
> regards,
>

finally I've moved all the audit tables to a new schema called audit, and they now have the same names as the 'master' tables.

In the trigger function I want to change the default schema to audit so I can use the same table name, but it seems that I can't change the schema in the function.

Also, as the audit tables now belong to the audit schema and have the same names, I'm trying to use just TG_TABLE_NAME, like this:

 INSERT INTO TG_TABLE_NAME  SELECT CURRVAL('audit.audit_id_seq'),3,OLD.*;

but that isn't allowed either ...

Do I always have to specify a fixed table name for the INSERT INTO to work?

If I use:

 INSERT INTO assentaments  SELECT CURRVAL('audit.audit_id_seq'),3,OLD.*;

this works perfectly; as the trigger function belongs to the audit schema, I can use the same table name, but I can't use TG_TABLE_NAME, so I have only two options:

- use the same trigger function with IF/ELSIF to test which table invoked the trigger function,
- or just write a different trigger function for each table.

which is the best option?

thanks for your guide!

regards,

r.



also I'm trying to change the default schema
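If changing the default schema inside the function is the goal, one option on 8.3 and later is to pin a search_path on the trigger function itself, so unqualified table names resolve in the audit schema first (the function name below is hypothetical):

```sql
-- make unqualified names inside this function resolve in audit first
ALTER FUNCTION audit.audit_row() SET search_path = audit, public;
```

Note that this still does not make `INSERT INTO TG_TABLE_NAME` work; a table name held in a variable always needs EXECUTE with the statement built as a string.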



Re: use a variable name for an insert in a trigger for an audit

From
Merlin Moncure
Date:
On Thursday, December 9, 2010, Raimon Fernandez <coder@montx.com> wrote:
>
> On 9 Dec 2010, at 04:40, Raimon Fernandez wrote:
>
>> Hello,
>>
>> I have to audit all the changes for all rows of one database.
>>
>> I have a trigger that executes BEFORE any update or delete, and simply copies the row (INSERT INTO) into the replicated table.
>>
>> For example, every table has the same name plus '_audit' at the end and belongs to the schema audit:
>>
>> table public.persons => audit.persons_audit
>>
>> I don't want to create specific triggers/functions for every table, so I want to modify the table name in the INSERT INTO using TG_TABLE_NAME, but I can't make it work.
>>
>> Also I can't see a working solution in the archives, and some examples are quite messy, so maybe I have to rethink how I'm doing things or just create a specific trigger for each table.
>>
>> Here is my function; for now I'm only testing the INSERT:
>>
>> ...
>> DECLARE
>> tableRemote varchar;
>> BEGIN
>>
>> IF TG_TABLE_NAME = 'assentaments' THEN
>> tableRemote:='audit.'||TG_TABLE_NAME||'_audit';
>> END IF;
>>
>>        --
>>        -- Create a row in table_audit to reflect the operation performed on emp,
>>        -- make use of the special variable TG_OP to work out the operation.
>>        --
>>
>>        IF (TG_OP = 'DELETE') THEN
>>            EXECUTE 'INSERT INTO audit.assentaments_audit SELECT CURRVAL(''audit_id_seq''),5, OLD.*';
>>            RETURN OLD;
>>        ELSIF (TG_OP = 'UPDATE') THEN
>>            INSERT INTO tableRemote  SELECT CURRVAL('audit.audit_id_seq'),3,OLD.*;
>>            RETURN OLD;
>>        END IF;
>>        RETURN NULL; -- result is ignored since this is an AFTER trigger
>>    END;
>> ...
>>
>> thanks,
>>
>> regards,
>>
>
> finally I've moved all the audit tables to a new schema called audit, and they now have the same names as the 'master' tables.
>
> In the trigger function I want to change the default schema to audit so I can use the same table name, but it seems that I can't change the schema in the function.
>
> Also, as the audit tables now belong to the audit schema and have the same names, I'm trying to use just TG_TABLE_NAME, like this:
>
>  INSERT INTO TG_TABLE_NAME  SELECT CURRVAL('audit.audit_id_seq'),3,OLD.*;
>
> but that isn't allowed either ...
>
> Do I always have to specify a fixed table name for the INSERT INTO to work?
>
> If I use:
>
>  INSERT INTO assentaments  SELECT CURRVAL('audit.audit_id_seq'),3,OLD.*;
>
> this works perfectly; as the trigger function belongs to the audit schema, I can use the same table name, but I can't use TG_TABLE_NAME, so I have only two options:
>
> - use the same trigger function with IF/ELSIF to test which table invoked the trigger function,
> - or just write a different trigger function for each table.
>
> which is the best option?
>
> thanks for your guide!
>
> regards,
>
> r.
>
>
>
> also I'm trying to change the default schema
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-gener>

Use EXECUTE, passing the record through USING (or as text, recasting it), and expanding the
record in the query.
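A minimal sketch of that approach, assuming 8.4 or later (for EXECUTE ... USING) and that every audit table starts with the audit id and an operation code, as in the function quoted above; the function name here is hypothetical:

```sql
CREATE OR REPLACE FUNCTION audit.log_row() RETURNS trigger AS $$
DECLARE
    -- operation codes taken from the original function: 5 = delete, 3 = update
    op integer := CASE TG_OP WHEN 'DELETE' THEN 5 ELSE 3 END;
BEGIN
    -- the audit table has the same name as the source table, so the
    -- target is built from TG_TABLE_NAME; the old row is passed via
    -- USING and expanded with ($2).* inside the query
    EXECUTE 'INSERT INTO audit.' || quote_ident(TG_TABLE_NAME)
         || ' SELECT currval(''audit.audit_id_seq''), $1, ($2).*'
    USING op, OLD;
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;
```

One such function can then be attached to every audited table with CREATE TRIGGER ... BEFORE UPDATE OR DELETE ... FOR EACH ROW EXECUTE PROCEDURE audit.log_row();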

merlin