Thread: postgres wish list

postgres wish list

From
"Sameer Mahajan"
Date:

I recently worked on rewriting a fairly complex Berkeley DB based system using Postgres. Following is a wish list, with a brief explanation of each item. It is not necessarily a comparison of Postgres functionality with that of Berkeley DB, but rather some useful functionality currently lacking in Postgres. I would like some feedback, comments, and suggestions regarding these.

 

·         Shared memory based connectivity: Postgres has a client-server model, and the TCP/IP nature of its connectivity adds to the latency of this communication. It would be nice to have shared memory based connectivity between the libpq front end and the back end.

·         Extended SQL functionality

o   Unsigned data types: It would be nice to have native support for unsigned flavors of the simple data types (e.g. integer).

o   Nested transaction functionality: I followed quite a few discussions in which Postgres considered it at some point but later abandoned it due to complexity. Savepoints are believed to provide similar semantics. However, it would be useful and handy to have simple nested transaction syntax and functionality.

·         Cursors

o   It would be useful to have updatable cursors. Otherwise the application is required to figure out whether an INSERT or an UPDATE is needed, among other things.

o   Berkeley DB has various positioning (e.g. DB_SET_RANGE) as well as operational (e.g. DB_NOOVERWRITE) options in its cursors. Postgres could provide such functionality (e.g. using the underlying btree of an index to give DB_SET_RANGE-like positioning) very easily.

·         Configurable error handling control: in case of fatal errors the transaction is aborted by default. It would be useful to make configurable options available for the application to control error handling in such situations. Berkeley DB has an option whereby all exceptions can be turned off and the system returns only error codes, which are then handled by the application.

Thanks,

 

Sameer Mahajan
Sr Princ Software Engineer
Symantec Corporation 
www.symantec.com
                                                           

 

Office: +91 (20) 4075-4106


sameer_mahajan@symantec.com

                                                           


Re: postgres wish list

From
Craig Ringer
Date:
Sameer Mahajan wrote:
> I recently worked on rewriting a fairly complex Berkeley DB based system
> using Postgres DB. Following  is a wish list and a brief explanation for
> each of those items for postgres. It is not necessarily a comparison of
> postgres functionality with that of Berkeley DB but some required useful
> functionality currently lacking in postgres. I would like to have some
> feedback, comments, suggestions regarding these.
>
>
>
> *         Shared memory based connectivity: As such postgres has client
> - server model. The TCP-IP nature of its connectivity further adds to
> the latency of this communication. It will be nice to have a shared
> memory based connectivity between libpq front end and the back end.

Use UNIX domain sockets. You eliminate the need for TCP/IP completely,
and get rather lower latency and faster communication between client and
server. It's not shared memory, but it's really rather close in
performance terms.

Unlike with Berkeley DB, PostgreSQL's server runs at a different
privilege level to its clients. Using shared memory for client/server
communication would be more complex when you consider the security
issues involved. Additionally, my understanding is that the backend is
used to being able to arrange its memory in the most efficient way
without worrying about what the client expects to be where and when,
concerning itself with the accessibility of data the client doesn't
have permission to see, etc.

> o   Nested transaction funtionality: I followed quite a few discussions
> stating postgres considering it at some time but later abandoning it due
> to some complexity. The "savepoints" are also believed to provide
> similar semantics. However it is useful and handy to have the simple
> nested transaction syntax and functionality.

I guess I can see the appeal at the application programming level if
porting an app that's used to nested transactions. Personally I'm very
happy with savepoints, though - what about them doesn't satisfy your
particular needs?

I'd expect to be able to translate a BEGIN within an existing
transaction to a `SAVEPOINT <generated-name>', a COMMIT to `RELEASE
SAVEPOINT <generated-name>' and a ROLLBACK to `ROLLBACK TO SAVEPOINT
<generated-name>'. Wouldn't that achieve the desired effect? Or is the
problem that you don't want your application code to have to know
whether it's being called from within an existing transaction or not?

If the latter is the issue, you can provide a fairly simple
context-aware wrapper that does the right thing.

trans_begin(connection_object)
   -> examines transaction state of connection object
      and if no active transaction issues BEGIN, otherwise
      generates and stores a savepoint name.

etc.

You'd just use a simple wrapper structure around the libpq connection
object to track your state.
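For illustration, here is a minimal sketch of such a wrapper in Python. The class name, the savepoint naming scheme, and the `execute` callable are all invented for this example; a real version would wrap a libpq or psycopg connection and actually send each statement.

```python
# Sketch of a context-aware transaction wrapper, as described above.
# `execute` is any callable that runs one SQL statement; here the demo
# passes list.append so we can see the SQL that would be sent.

class NestableTransaction:
    """Maps nested begin/commit/rollback calls onto BEGIN and savepoints."""

    def __init__(self, execute):
        self._execute = execute      # callable that runs one SQL statement
        self._savepoints = []        # stack of generated savepoint names
        self._in_transaction = False
        self._counter = 0

    def begin(self):
        if not self._in_transaction:
            self._execute("BEGIN")
            self._in_transaction = True
        else:
            self._counter += 1
            name = "sp_%d" % self._counter
            self._savepoints.append(name)
            self._execute("SAVEPOINT %s" % name)

    def commit(self):
        if self._savepoints:
            self._execute("RELEASE SAVEPOINT %s" % self._savepoints.pop())
        else:
            self._execute("COMMIT")
            self._in_transaction = False

    def rollback(self):
        if self._savepoints:
            self._execute("ROLLBACK TO SAVEPOINT %s" % self._savepoints.pop())
        else:
            self._execute("ROLLBACK")
            self._in_transaction = False


# Demonstration: record the statements a nested sequence would generate.
sent = []
tx = NestableTransaction(sent.append)
tx.begin()        # outermost -> BEGIN
tx.begin()        # nested    -> SAVEPOINT sp_1
tx.rollback()     # nested    -> ROLLBACK TO SAVEPOINT sp_1
tx.commit()       # outermost -> COMMIT
print(sent)   # ['BEGIN', 'SAVEPOINT sp_1', 'ROLLBACK TO SAVEPOINT sp_1', 'COMMIT']
```

The same idea ports directly to C around a PGconn, with the savepoint stack tracked in the wrapper struct.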

> *         Cursors
>
> o   It would be useful to have updateable cursors. Otherwise the
> application is required to figure out a required INSERT versus UPDATE
> amongst other things.

Pg does have updatable cursors.

http://www.postgresql.org/docs/current/static/sql-declare.html

See "WHERE CURRENT OF" in:

http://www.postgresql.org/docs/current/static/sql-update.html
http://www.postgresql.org/docs/current/static/sql-delete.html

Also, for Pl/PgSQL:

http://www.postgresql.org/docs/current/static/plpgsql-cursors.html
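For example, an updatable cursor (PostgreSQL 8.3 or later) looks roughly like this; the table and column names are invented:

```sql
BEGIN;

-- FOR UPDATE locks each fetched row against concurrent changes.
DECLARE cur CURSOR FOR
    SELECT key, value FROM tablename FOR UPDATE;

FETCH NEXT FROM cur;

-- Modify the row the cursor is currently positioned on.
UPDATE tablename SET value = value + 1 WHERE CURRENT OF cur;

CLOSE cur;
COMMIT;
```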

> o   Berkeley DB has various positioning (e.g. DB_SET_RANGE ) as well as
> operational (e.g. DB_NOOVERWRITE)  options in its cursors. Postgres can
> provide such functionality (e.g. using underlying btree for an index to
> give DB_SET_RANGE like positioning) very easily.

Maybe you could explain what the options you find useful are, and
provide examples as to why existing PostgreSQL functionality doesn't do
what you need? It might help people understand what you want and why.

> *         Configurable error handling control: in case of fatal errors
> the transaction is aborted by default. It would be useful to make some
> configurable options available for the application to control the error
> handling in such situations. Berkeley DB has an option where all the
> exceptions can be turned off and the system returns only error codes
> which are then required to be handled by the application.

I've mostly worked with Pg via psycopg (Python), PL/PgSQL running within
Pg, and with JDBC (often wrapped by Hibernate). In all these cases error
handling has been pretty flexible and easy.

I assume what you're really getting at is that you want an implicit
savepoint before each statement that's released after successful
statement execution, or rolled back to then released if the statement
fails and before error information is reported to the application?

If so, while AFAIK there's no built-in facility for this, it's not
difficult to implement at the application level. Personally I'm dubious
about the value of including something like that in Pg itself, as it
encourages apps to throw queries at the DB and see if they work, rather
than issuing queries that're designed to always succeed. For example,
rather than (application pseudocode):

sql.execute("SAVEPOINT x; INSERT INTO tablename (key, value) VALUES (1,
9);");
if (sql.error == unique_violation)
{
     sql.execute("ROLLBACK TO SAVEPOINT x; UPDATE tablename SET value =
9 WHERE key = 1;");
}
sql.execute("RELEASE SAVEPOINT x;");


... it's probably better to do:

sql.execute("UPDATE tablename SET value = 9 WHERE key = 1; INSERT INTO
tablename (key, value) SELECT 1, 9 WHERE NOT EXISTS(SELECT 1 FROM
tablename WHERE key = 1);");

The above will update the value if it's already present, or failing that
insert a new record with that value. It can be done in a batch with far
fewer DB round trips and you avoid the overhead of creating and
releasing savepoints all the time.

In this particular case it'd be nice if Pg had a "MERGE" / "UPSERT"
feature to handle this case but you can probably see my point.
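To show the pattern end to end, here is the always-succeed upsert above as runnable code. SQLite is used purely as a stand-in so the sketch is self-contained; the SQL has the same shape against Pg, and the table/column names come from the pseudocode above.

```python
# The always-succeed upsert pattern, demonstrated with SQLite standing
# in for PostgreSQL (sqlite3 is stdlib, so the example runs anywhere).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tablename (key INTEGER PRIMARY KEY, value INTEGER)")

def upsert(conn, key, value):
    # Try the UPDATE first; if the key is absent, the guarded INSERT fires.
    # Neither statement can fail on a duplicate key, so no savepoint or
    # error handling is needed.
    conn.execute("UPDATE tablename SET value = ? WHERE key = ?", (value, key))
    conn.execute(
        "INSERT INTO tablename (key, value) "
        "SELECT ?, ? WHERE NOT EXISTS (SELECT 1 FROM tablename WHERE key = ?)",
        (key, value, key))

upsert(conn, 1, 9)    # key 1 absent  -> INSERT path
upsert(conn, 1, 10)   # key 1 present -> UPDATE path
print(conn.execute("SELECT key, value FROM tablename").fetchall())  # [(1, 10)]
```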

I've rewritten code that used to use try-and-see (either in client code
or Pl/PgSQL) to work in an always-succeed style and found that it often
runs literally hundreds of times faster. It's also much more amenable to
tricks like loading a temp table full of dirty data and running a
single query over it to do the work you want to, which can be VASTLY
faster than having the app send it all over piecemeal. Once you're
dealing with network latency with remote clients this starts to become
very significant.

--
Craig Ringer

Re: postgres wish list

From
"Sameer Mahajan"
Date:
Thanks Craig.

Comments inline.

> -----Original Message-----
> From: Craig Ringer [mailto:craig@postnewspapers.com.au]
> Sent: Thursday, February 19, 2009 12:56 PM
> To: Sameer Mahajan
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] postgres wish list
>
> Sameer Mahajan wrote:
> > I recently worked on rewriting a fairly complex Berkeley DB based system
> > using Postgres DB. Following is a wish list and a brief explanation for
> > each of those items for postgres. It is not necessarily a comparison of
> > postgres functionality with that of Berkeley DB but some required useful
> > functionality currently lacking in postgres. I would like to have some
> > feedback, comments, suggestions regarding these.
> >
> >
> >
> > *         Shared memory based connectivity: As such postgres has client
> > - server model. The TCP-IP nature of its connectivity further adds to
> > the latency of this communication. It will be nice to have a shared
> > memory based connectivity between libpq front end and the back end.
>
> Use UNIX domain sockets. You eliminate the need for TCP/IP completely,
> and get rather lower latency and faster communication between client
> and
> server. It's not shared memory, but it's really rather close in
> performance terms.
>
> Unlike with Berkeley DB, PostgreSQL's server runs at a different
> privilege level to its clients. Using shared memory for client/server
> communication would be more complex when you consider the security
> issues involved. Additionally, my understanding is that the backend is
> used to being able to arrange its memory in the most efficient way
> without worrying about what the client expects to be where and when,
> concerning its self over the accessibility of data the client doesn't
> have permission to see, etc.
>
[Sameer Mahajan] I will investigate how unix domain sockets help in
my case. Why isn't that the default for postgres installations? Or is
it not believed to be generic / straightforward enough to configure?
> > o   Nested transaction funtionality: I followed quite a few discussions
> > stating postgres considering it at some time but later abandoning it due
> > to some complexity. The "savepoints" are also believed to provide
> > similar semantics. However it is useful and handy to have the simple
> > nested transaction syntax and functionality.
>
> I guess I can see the appeal at the application programming level if
> porting an app that's used to nested transactions. Personally I'm very
> happy with savepoints, though - what about them doesn't satisfy your
> particular needs?
>
> I'd expect to be able to translate a BEGIN within an existing
> transaction to a `SAVEPOINT <generated-name>', a COMMIT to `RELEASE
> SAVEPOINT <generated-name>' and a ROLLBACK to `ROLLBACK TO SAVEPOINT
> <generated-name>'. Wouldn't that achieve the desired effect? Or is the
> problem that you don't want your application code to have to know
> whether it's being called from within an existing transaction or not?
>
> If the latter is the issue, you can provide a fairly simple
> context-aware wrapper that does the right thing.
>
> trans_begin(connection_object)
>    -> examines transaction state of connection object
>       and if no active transaction issues BEGIN, otherwise
>       generates and stores a savepoint name.
>
> etc.
>
> You'd just use a simple wrapper structure around the libpq connection
> object to track your state.
>
[Sameer Mahajan] all this is a little bit of baggage to carry around in
the application...
> > *         Cursors
> >
> > o   It would be useful to have updateable cursors. Otherwise the
> > application is required to figure out a required INSERT versus UPDATE
> > amongst other things.
>
> Pg does have updatable cursors.
>
> http://www.postgresql.org/docs/current/static/sql-declare.html
>
> See "WHERE CURRENT OF" in:
>
> http://www.postgresql.org/docs/current/static/sql-update.html
> http://www.postgresql.org/docs/current/static/sql-delete.html
>
> Also, for Pl/PgSQL:
>
> http://www.postgresql.org/docs/current/static/plpgsql-cursors.html
>
[Sameer Mahajan] hmm.... I guess I have to stick to an earlier version
for some other reasons and it doesn't have it...
> > o   Berkeley DB has various positioning (e.g. DB_SET_RANGE ) as well as
> > operational (e.g. DB_NOOVERWRITE)  options in its cursors. Postgres can
> > provide such functionality (e.g. using underlying btree for an index to
> > give DB_SET_RANGE like positioning) very easily.
>
> Maybe you could explain what the options you find useful are, and
> provide examples as to why existing PostgreSQL functionality doesn't do
> what you need? It might help people understand what you want and why.
>
[Sameer Mahajan] The most useful one that I found was DB_SET_RANGE, which
internally can use the efficient btree indexing. Of course I can perform
a serial scan in my application to achieve the same result, but it would
be a little inefficient.
> > *         Configurable error handling control: in case of fatal errors
> > the transaction is aborted by default. It would be useful to make some
> > configurable options available for the application to control the error
> > handling in such situations. Berkeley DB has an option where all the
> > exceptions can be turned off and the system returns only error codes
> > which are then required to be handled by the application.
>
> I've mostly worked with Pg via psycopg (Python), PL/PgSQL running within
> Pg, and with JDBC (often wrapped by Hibernate). In all these cases error
> handling has been pretty flexible and easy.
>
> I assume what you're really getting at is that you want an implicit
> savepoint before each statement that's released after successful
> statement execution, or rolled back to then released if the statement
> fails and before error information is reported to the application?
>
> If so, while AFAIK there's no built-in facility for this, it's not
> difficult to implement at the application level. Personally I'm dubious
> about the value of including something like that in Pg its self, as it
> encourages apps to throw queries at the DB and see if they work, rather
> than issuing queries that're designed to always succeed. For example,
> rather than (application pseudocode):
>
> sql.execute("SAVEPOINT x; INSERT INTO tablename (key, value) VALUES (1,
> 9);");
> if (sql.error == unique_violation)
> {
>      sql.execute("ROLLBACK TO SAVEPOINT x; UPDATE tablename SET value =
> 9 WHERE key = 1;");
> }
> sql.execute("RELEASE SAVEPOINT x;");
>
>
> ... it's probably better to do:
>
> sql.execute("UPDATE tablename SET value = 9 WHERE key = 1; INSERT INTO
> tablename (key, value) SELECT 1, 9 WHERE NOT EXISTS(SELECT 1 FROM
> tablename WHERE key = 1);");
>
> The above will update the value if it's already present, or failing that
> insert a new record with that value. It can be done in a batch with far
> fewer DB round trips and you avoid the overhead of creating and
> releasing savepoints all the time.
>
> In this particular case it'd be nice if Pg had a "MERGE" / "UPSERT"
> feature to handle this case but you can probably see my point.
>
> I've rewritten code that used to use try-and-see (either in client code
> or Pl/PgSQL) to work in an always-succeed style and found that it often
> runs literally hundreds of times faster. It's also much more amenable to
> tricks like loading a temp table full of dirty data and running a
> single query over it to do the work you want to, which can be VASTLY
> faster than having the app send it all over piecemeal. Once you're
> dealing with network latency with remote clients this starts to become
> very significant.
>
[Sameer Mahajan] I guess I would buy this argument, since the places
where I need it justify the amount of logic I can / need to add in the
application layer.
> --
> Craig Ringer

Re: postgres wish list

From
Grzegorz Jaśkiewicz
Date:
and I wish you didn't send emails with images attached to public lists...

Re: postgres wish list

From
John R Pierce
Date:
Sameer Mahajan wrote:
> [Sameer Mahajan] I will investigate how the unix domain sockets help in
> my case. Why isn't it the default for postgres installations? Or it
> isn't believed to be generic enough / straight forward enough to
> configure?
>

hmm?  Domain sockets -are- the default.  Just don't specify a host at
all; that's what all the command line tools use by default on unix.


Re: postgres wish list

From
Thomas Kellerer
Date:
Sameer Mahajan, 19.02.2009 07:38:
> *         Shared memory based connectivity: As such postgres has client
> - server model. The TCP-IP nature of its connectivity further adds to
> the latency of this communication. It will be nice to have a shared
> memory based connectivity between libpq front end and the back end.

I have never worked with an application that ran on the same server as the database.

Application server (e.g. for webapps) and database server are almost always on different machines (even more so for
client/server applications).

So an application in a production system doesn't really benefit from this. It will always have to go through tcp.

Or am I missing something?

Thomas

Re: postgres wish list

From
Chander Ganesan
Date:
Thomas Kellerer wrote:
> Sameer Mahajan, 19.02.2009 07:38:
>> *         Shared memory based connectivity: As such postgres has client
>> - server model. The TCP-IP nature of its connectivity further adds to
>> the latency of this communication. It will be nice to have a shared
>> memory based connectivity between libpq front end and the back end.
>
> I have never worked with an application that ran on the same server as
> the database.
> Application server (e.g. for webapps) and database server are almost
> always on different machines (even more so for client/server
> applications).
> So an application in production system doesn't really benefit from
> this. It will always have to go through tcp.
>
> Or am I missing something?
This isn't all that unusual when you talk about non-traditional
client-server applications.  For example, when you install a desktop app
(such as ACT, Peachtree, Quickbooks, etc.) that works in a "standalone"
mode without a dedicated server.  In such cases the stand-alone app
installs the database server locally (and it's started and stopped with
the app - or just started and left running, and the app connects
locally), but when it operates in a workgroup setting a centralized
database would be used.

Basically, if you try to use PostgreSQL as a local data store for a
desktop app, you'll likely want to use socket connections.
Unfortunately, if it's a windows app, I'm not sure if there is much
benefit to using a socket over tcp/ip.  I thought at one point there
was an issue with the windows socket implementation that made it less
than ideal... but that might have been a long time ago.  Does anyone know?

--
Chander Ganesan
Open Technology Group, Inc.
One Copley Parkway, Suite 210
Morrisville, NC  27560
919-463-0999/877-258-8987
http://www.otg-nc.com
Ask me about expert PostgreSQL, PostGIS & other Open Source Training!


Re: postgres wish list

From
Craig Ringer
Date:
Sameer Mahajan wrote:

> [Sameer Mahajan] I will investigate how the unix domain sockets help in
> my case. Why isn't it the default for postgres installations? Or it
> isn't believed to be generic enough / straight forward enough to
> configure?

It is the default. Just connect without a host parameter (the default
with most client library interfaces) to use a unix domain socket.

>> trans_begin(connection_object)
>>    -> examines transaction state of connection object
>>       and if no active transaction issues BEGIN, otherwise
>>       generates and stores a savepoint name.
>>
>> etc.
>>
>> You'd just use a simple wrapper structure around the libpq connection
>> object to track your state.
>>
> [Sameer Mahajan] all this is a little bit of baggage to carry around in
> the application...

I do agree that it'd be nice to have the ability to nest transactions or
just have the DB translate to savepoint usage, just to simplify these
cases. I don't think it's much baggage at all on the app side, though;
I'd expect something like this to be trivial to implement in most apps.

> [Sameer Mahajan] hmm.... I guess I have to stick to an earlier version
> for some other reasons and it doesn't have it...

"Some reason?"

If you don't know why you're using an old version, and the new one does
things you need...

> [Sameer Mahajan] the most useful one that I found was DB_SET_RANGE which
> internally can use the efficient btree indexing. Of course I can perform
> a serial scan in my application to achieve the same result but it would
> be a little in efficient.

I don't speak Berkeley DB, and I doubt all that many others do here
either. How would you explain what that does, and why you use it, to
somebody who doesn't use Berkeley DB?

The documentation:
http://www.oracle.com/technology/documentation/berkeley-db/db/api_c/dbc_get.html

suggests that it's for a very different DB access approach where
everything goes through a cursor. As far as I can tell the close
equivalent for Pg would be something like:

SELECT key, value
FROM table
WHERE key >= 'something'
ORDER BY key
LIMIT 1;

(possibly used in a DECLARE ... CURSOR statement)
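As a sketch of that equivalence, here is DB_SET_RANGE-style positioning emulated with an ordinary range query. SQLite stands in for Pg so the example is runnable, and the table and key names are invented; both databases can satisfy the `key >= ?` predicate from a btree index rather than a serial scan.

```python
# Emulating Berkeley DB's DB_SET_RANGE with an ordinary range query:
# position at the first row whose key is >= the search key.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE kv (key TEXT PRIMARY KEY, value TEXT)")
conn.executemany("INSERT INTO kv VALUES (?, ?)",
                 [("apple", "1"), ("cherry", "2"), ("plum", "3")])

def set_range(conn, search_key):
    # First row with key >= search_key, like DBC->c_get(..., DB_SET_RANGE).
    return conn.execute(
        "SELECT key, value FROM kv WHERE key >= ? ORDER BY key LIMIT 1",
        (search_key,)).fetchone()

print(set_range(conn, "banana"))   # lands on ("cherry", "2")
print(set_range(conn, "zebra"))    # past the end -> None
```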

Pg doesn't have cursors that can be scrolled using criteria like "the
first record with field `x' having a value equal to or greater than
`y'". It might be interesting - in the case of updatable cursors that
operate directly on tables - to support "seeking" a cursor this way.

I ported an app that used a very old shared-access ISAM table structure
to a Pg backend some time ago, and I faced issues somewhat similar to
yours. I ended up having to implement the required cursor behaviour at
the application level, mapping it onto ordinary SQL operations. Pg
didn't support scrollable read/write cursors at the time, but even if it
had they wouldn't have done the job because it wasn't possible to "seek"
the cursor based on WHERE-style matching criteria.

In all honesty, it was pretty trivial to do. Given that the main use of
all this is the fairly unusual use case of porting apps with "legacy"
database access methods and APIs, I'm not sure it'd be worth putting
much effort into.

> [Sameer Mahajan] I guess I would buy this argument since the places I
> need it justifies the amount of logic I can / need to add in the
> application layer.

My whole point was that you shouldn't *need* logic at the application
layer. Many of the cases where your code currently tries to do something
and handles an error if it fails, you should be able to do with SQL
that'll do the right thing in the first place, without any special error
handling at the application layer at all.

--
Craig Ringer

Re: postgres wish list

From
John R Pierce
Date:
Chander Ganesan wrote:
> Basically, if you try to use PostgreSQL as a local data store for a
> desktop app, you'll likely want to use socket connections.
> Unfortunately, if its a windows app, I'm not sure if there is much
> benefit to using a socket over tcp/ip .  I thought at one point there
> was an issue with the windows socket implementation that made it less
> than ideal...but that might have been a long time ago....does anyone
> know?
>

PG on Windows doesn't support socket connections at all.


Re: postgres wish list

From
Jasen Betts
Date:
On 2009-02-19, Sameer Mahajan <Sameer_Mahajan@symantec.com> wrote:
> Thanks Craig.
>
> Comments inline.
>
>>
> [Sameer Mahajan] I will investigate how the unix domain sockets help in
> my case. Why isn't it the default for postgres installations? Or it
> isn't believed to be generic enough / straight forward enough to
> configure?

it is enabled by default, but if you specify a host address when
connecting (eg 127.0.0.1) you'll get a TCP/IP connection instead.

By default the authentication for unix domain sockets is "ident", which
(AIUI) requires the caller to have a unix username that matches their
postgres username, but you can change that to "md5" (password based)
or "trust" (no authentication) in pg_hba.conf
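For reference, the relevant pg_hba.conf lines might look roughly like this (a sketch; the exact method spelling varies between PostgreSQL versions, e.g. older releases write local ident matching as "ident sameuser"):

```
# TYPE   DATABASE   USER   METHOD
# unix-domain socket connections; OS username must match the DB username
local    all        all    ident

# or, instead, ask every local user for a password:
#local   all        all    md5
```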


Re: postgres wish list

From
Martijn van Oosterhout
Date:
On Fri, Feb 20, 2009 at 09:50:25AM +0000, Jasen Betts wrote:
> by the default authentication for unix domain sockets is "ident" which
> (AIUI) requires the caller have the a unix username that matches their
> postgres username, but you can change that to "md5" (password based)
> or "trust" (no authentication) in pg_hba.conf

With ident auth you can also specify a mapping file, which lists which
UNIX users can connect as which postgresql users without password. An
extremely useful feature.
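A sketch of such a mapping (all names invented; recent versions attach the map with "map=" on the pg_hba.conf line, while older releases take the map name directly after "ident"):

```
# pg_ident.conf
# MAPNAME   SYSTEM-USERNAME   PG-USERNAME
webmap      www-data          appuser
webmap      backup            appuser

# pg_hba.conf line referencing the map:
# local   all   appuser   ident map=webmap
```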

Have a nice day,
--
Martijn van Oosterhout   <kleptog@svana.org>   http://svana.org/kleptog/
> Please line up in a tree and maintain the heap invariant while
> boarding. Thank you for flying nlogn airlines.
