Unanswered questions about Postgre - Mailing list pgsql-general
From    | Joe Kislo
Subject | Unanswered questions about Postgre
Date    |
Msg-id  | 3A2528CC.61505AB7@athenium.com
List    | pgsql-general
Sorry if this is a repost, but it appears the Postgres mailing list filtered my original mail without warning. I have a bunch of questions, and I've looked through the documentation without finding any answers. I also looked in the FAQ. I'm guessing that most of these questions -are- FAQs, so maybe they should be put there after some answers are assembled. Also, have you guys considered a searchable version of your documentation, or a version of the documentation all on one page, so it can be searched in the browser?

1) In psql, if I make a typo, the transaction is automatically aborted. For example:

----
apollodemo=# \d
List of relations
 Name | Type  | Owner
------+-------+-------
 boo  | table | kislo
(1 row)

apollodemo=# begin;
BEGIN
apollodemo=# create table moo (i int);
CREATE
apollodemo=# Oops this is a typeo;
ERROR: parser: parse error at or near "oops"
apollodemo=# insert into moo values (1);
NOTICE: mdopen: couldn't open moo: No such file or directory
NOTICE: RelationIdBuildRelation: smgropen(moo): No such file or directory
NOTICE: mdopen: couldn't open moo: No such file or directory
NOTICE: mdopen: couldn't open moo: No such file or directory
NOTICE: mdopen: couldn't open moo: No such file or directory
NOTICE: current transaction is aborted, queries ignored until end of transaction block
*ABORT STATE*
apollodemo=# \d
NOTICE: current transaction is aborted, queries ignored until end of transaction block
No relations found.
apollodemo=# \d
NOTICE: current transaction is aborted, queries ignored until end of transaction block
No relations found.
apollodemo=# commit;
COMMIT
apollodemo=# \d
List of relations
 Name | Type  | Owner
------+-------+-------
 boo  | table | kislo
(1 row)

apollodemo=#
---

Arr, that's really annoying when you're in psql and you've just trashed your entire transaction because you made a typo. It's even more ridiculous when your application is actually doing a transaction, and because an error occurred, the entire transaction is now trashed. Why exactly does Postgres abort the transaction if there is an error? It doesn't have to, and no other RDBMS I know of does this (Interbase/Oracle). For example: if you start a transaction, do some work, then try to insert a record which violates a data integrity constraint, the ENTIRE transaction is wiped out. Instead, the application should be able to parse the integrity constraint error and keep on going, or -choose- to roll back the transaction (a short JDBC sketch of what I mean follows question 3). I assume this -must- be user settable; how do I do this? And what was the rationale behind automatically aborting at the first sign of imperfection?

2) In the above example, when I issue the "insert into moo values (1);", Postgres spews out all manner of errors. Shouldn't one of the following happen instead?
  a) Given that the transaction is ABORTed, it should entirely ignore the insert request (as advertised), and thus not even bother asking the operating system for the database file.
  b) Given that the transaction has been ABORTed, it shouldn't bother querying the OS for the table, since it knows the table doesn't exist.
  c) Given that the table does not exist, and this is what you would expect, it should not spew operating system errors to the screen.

3) You have some nice documentation describing transaction isolation, but you fail to follow through and say how to CHANGE the transaction isolation mode. The docs are here:
http://postgresql.readysetnet.com/users-lounge/docs/v7.0/user/mvcc4564.htm
How do you change the isolation mode?
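For question 3, here is what I would have guessed from the standard JDBC API; whether the Postgres driver and backend actually honour either form is exactly what I'm asking:

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class IsolationDemo {
    // Switch an open connection to SERIALIZABLE for its transactions.
    public static void useSerializable(Connection conn) throws SQLException {
        conn.setAutoCommit(false);

        // Standard JDBC call; presumably the driver translates it for the backend.
        conn.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);

        // Or issue the SQL directly, right after the transaction starts
        // and before the first query is run:
        Statement st = conn.createStatement();
        st.execute("SET TRANSACTION ISOLATION LEVEL SERIALIZABLE");
        st.close();
    }
}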
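And going back to question 1, here is a rough JDBC sketch of the behaviour I'm after. The table, columns, and data are made up for illustration; the comments mark where Postgres currently forces a full rollback instead of letting me continue:

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class ConstraintDemo {
    // What I'd like to write: one failed INSERT is handled and the
    // rest of the transaction carries on.
    public static void insertBatch(Connection conn) throws SQLException {
        conn.setAutoCommit(false);
        Statement st = conn.createStatement();
        try {
            st.executeUpdate("INSERT INTO accounts (id, name) VALUES (1, 'joe')");
            try {
                // Suppose this one violates a unique constraint...
                st.executeUpdate("INSERT INTO accounts (id, name) VALUES (1, 'dup')");
            } catch (SQLException dup) {
                // ...I'd like to note the failure and keep going.  With
                // Postgres today the transaction is already aborted here,
                // so every later statement fails until I roll back.
            }
            st.executeUpdate("INSERT INTO accounts (id, name) VALUES (2, 'sue')");
            conn.commit();
        } catch (SQLException e) {
            conn.rollback();   // the only option Postgres leaves me
            throw e;
        } finally {
            st.close();
        }
    }
}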
4) I was doing some testing of my database abstraction layer, running the same code against MySQL, PostgreSQL, and Interbase, and I made an interesting discovery. After running the test sequence once on Postgres, each successive run of the test suite (or "benchmark", since I was timing it, though it really wasn't a benchmark) got progressively worse. In fact, on -each- successive run, Postgres's time-to-completion -doubled-. This was across the board: selects by key, selects not by key, inserts, updates. Dropping the table and recreating it got Postgres back to square one again. I assume this has to do with the fact that about 1000 records were inserted and about 1000 rows were deleted over the course of the test. Is this -normal- for Postgres? I realize that maybe some sort of transaction audit needs to be stored, but these transactions are committed; do we still need that audit? And should that "audit" so -massively- affect the entire database's operational speed? Why should running a query on a table be -so- much slower after records have repeatedly been inserted into and deleted from it? We're talking about a massive slowdown here: this test originally ran in about 7 seconds, and after a few runs it was taking 2 minutes. There are no rows left in the table at the end. Shouldn't pgsql just overwrite the dead space, and why is the dead space affecting performance? (I don't care about disk space concerns.) I did notice that vacuuming the database between runs kept Postgres working at a reasonable pace (a sketch of that step follows question 5). Do people need to vacuum their databases hourly? Can you vacuum while a database is in use? Any discussion of this curious phenomenon would be appreciated; it still boggles me.

5) BLOB support. Ugh? I cannot find anything in any of the manuals about BLOBs, yet I know pgsql supports them, because I've found lots of people on the mailing list telling others to use BLOBs when they complain that the max row size is 8k. Yet there are no docs (that I can find!). I did find -one- piece of documentation; however, it only described how to load a BLOB from -the hard drive of the server-, and how to retrieve a BLOB -directly onto the hard drive of the server-. That, of course, is entirely useless in a client/server application. Does pgsql have BLOB support that works over the wire (see the sketch below)? If so, where are the docs? If not, isn't this an incredibly massive limitation? How do people work around it?
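Regarding question 4, the vacuum step I mentioned boils down to something like this in my test harness; I'm assuming VACUUM ANALYZE is the right command and that it should be issued outside any transaction block:

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class Maintenance {
    // Reclaim dead rows and refresh statistics between test runs.
    // Run with autocommit on so the command isn't wrapped in a transaction.
    public static void vacuum(Connection conn) throws SQLException {
        conn.setAutoCommit(true);
        Statement st = conn.createStatement();
        st.execute("VACUUM ANALYZE");
        st.close();
    }
}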
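Regarding question 5, this is roughly what I would expect to be able to write using the standard JDBC streaming calls. The "images" table and its columns are invented; whether the Postgres driver actually backs these methods with something that works over the wire is precisely what I can't find documented:

import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class BlobDemo {
    // Store a byte[] from the client through the standard JDBC call.
    public static void store(Connection conn, int id, byte[] data)
            throws SQLException {
        PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO images (id, img) VALUES (?, ?)");
        ps.setInt(1, id);
        ps.setBinaryStream(2, new ByteArrayInputStream(data), data.length);
        ps.executeUpdate();
        ps.close();
    }

    // Read it back as a stream on the client side -- no server-side
    // file paths involved.  (Caller reads it before closing the statement.)
    public static InputStream load(Connection conn, int id) throws SQLException {
        PreparedStatement ps = conn.prepareStatement(
                "SELECT img FROM images WHERE id = ?");
        ps.setInt(1, id);
        ResultSet rs = ps.executeQuery();
        return rs.next() ? rs.getBinaryStream(1) : null;
    }
}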
6) Two things about the JDBC driver for Postgres. First, if you create a PreparedStatement and are populating its fields, the driver will crash if you use setObject to store a null object. So if you write something like this:

void sillyFunction (Integer i, Integer ii) {
  [..]
  preparedStatement.setObject(1, i);
  preparedStatement.setObject(2, ii);
}

and some wise ass passes in a null Integer, the JDBC driver crashes with a NullPointerException. Instead you must first test whether the object is null; if it isn't, insert it with setObject, otherwise insert it with setNull (see the P.S. below). Is this right? I've used quite a few other JDBC drivers, and none of them crash in this manner when you pass in a null; I've never had to call setNull explicitly before. Second, if you have a ResultSet and try to fetch an int (or anything) from column 0, the entire JVM goes down. I am entirely baffled as to how the entire JVM could crash (with a seg fault). Does your JDBC code use native code? I tried two different JVMs (IBM JDK 1.3 and Blackdown 1.2.2), both with and without JITs active. I also tried the JDBC driver fresh off the site, as well as the one shipped with Postgres 7.0.3. All combinations crash. Do you guys know why this might be? What error checking do you have for this case?

Thanks guys,
-Joe
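P.S. For completeness, the null workaround from question 6 looks like this. Note that JDBC parameter and column indexes start at 1, so my column-0 fetch was my own mistake, but I'd still expect an SQLException rather than a JVM crash:

import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Types;

public class NullableParams {
    // The workaround I described: test for null myself and fall back
    // to setNull() instead of relying on setObject(index, null).
    public static void bindNullable(PreparedStatement ps, int index, Integer value)
            throws SQLException {
        if (value == null) {
            ps.setNull(index, Types.INTEGER);
        } else {
            ps.setObject(index, value);
        }
    }

    // Columns are numbered from 1, not 0 -- a bad index should raise
    // an SQLException, not take down the whole JVM.
    public static int firstColumn(ResultSet rs) throws SQLException {
        return rs.getInt(1);
    }
}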