Thread: Trap Constraint Errors from perl DBI
Hi,

Is there a list of error codes and messages posted somewhere? I couldn't
find one. I'm trying to trap errors caused by database constraints using
Perl DBI, e.g.:

    create table foo (
        id       integer,
        "user"   text,
        name     text,
        constraint foo_user_name_ind unique ("user", name)
    );

    ...
    my $insert = q{insert into foo ("user", name) values ('dlink', 'fish')};
    my $err = "";
    $dbh->do($insert) or $err = $dbh->err;
    if ($err) {
        print "Error: $err\n";
        print $dbh->errstr, "\n";
    }

By doing this I see that "ERROR: Cannot insert a duplicate key into
unique index X" has error number 7.

Thanks.
At the company I work at we have undergone a fairly large consolidation
plan. Due to Oracle's new licensing schemes, we have opted for a model
with many Oracle instances on shared hardware (IBM RS/6000 AIX) where we
pay a per-CPU license.

In this model, we allocate memory to each Oracle instance by setting its
System Global Area (SGA) to a fixed value. I'm no Oracle DBA, so I don't
know all the details, but this essentially limits the amount of memory
each instance can consume.

My question, then: does PostgreSQL have any similar feature that would
allow hosting multiple instances on the same hardware within fixed
memory allocations?

I am just curious whether PostgreSQL could replace some of our smaller
Oracle instances in this environment, with an eye to expanding to the
larger instances in the future.

Thanks,
Keehan
Keehan Mallon wrote:
> At the company I work at we have undergone a fairly large consolidation
> plan. Due to Oracle's new licensing schemes, we have opted for a model
> with many Oracle instances on shared hardware (IBM RS/6000 AIX) where
> we pay a per-CPU license.
>
> In this model, we allocate memory to each Oracle instance by setting
> its System Global Area (SGA) to a fixed value. I'm no Oracle DBA, so I
> don't know all the details, but this essentially limits the amount of
> memory each instance can consume.
>
> My question, then: does PostgreSQL have any similar feature that would
> allow hosting multiple instances on the same hardware within fixed
> memory allocations?
>
> I am just curious whether PostgreSQL could replace some of our smaller
> Oracle instances in this environment, with an eye to expanding to the
> larger instances in the future.

We do allow limiting how much _shared_ memory an instance can have, but
each session allocates its own memory independently as a Unix process.
We do support multiple databases and, in 7.3beta, schemas.

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073
> We do allow limiting how much _shared_ memory an instance can have, but
> each session allocates its own memory independently as a Unix process.
> We do support multiple databases and, in 7.3beta, schemas.

Wouldn't it be pretty easy to run postgres through a process that has a
setrlimit on its memory? I assume that postgres handles memory errors
gracefully. I think the shared memory limit combined with setrlimit
would give him what he needs.

It might be a good idea to try to run all the databases under one
postmaster, since that way you can set the shared memory higher, and all
the backends would benefit.

Regards,
Jeff Davis
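The mechanism Jeff describes can be sketched concretely. The following is a minimal, hypothetical Python illustration (not PostgreSQL code): a parent forks a child, caps the child's address space with setrlimit(RLIMIT_AS), and a deliberately oversized allocation in the child then fails. The 256 MB cap and exit code 42 are arbitrary example values, and enforcement of RLIMIT_AS assumes Linux.

```python
import os
import resource

def run_with_memory_cap(limit_bytes):
    """Fork a child, cap its address space, and attempt an oversized allocation."""
    pid = os.fork()
    if pid == 0:
        # Child: the cap applies to this process and anything it exec()s,
        # just as it would to a postmaster launched under the same limit.
        resource.setrlimit(resource.RLIMIT_AS, (limit_bytes, limit_bytes))
        try:
            bytearray(2 * limit_bytes)  # deliberately exceeds the cap
            os._exit(0)                 # cap was not enforced
        except MemoryError:
            os._exit(42)                # allocation refused; process survived to here
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)

if __name__ == "__main__":
    # On Linux, where RLIMIT_AS is enforced, this reports 42.
    print(run_with_memory_cap(256 * 1024 * 1024))
```

The non-programmatic equivalent is applying the same limit in the shell that launches the postmaster (e.g. with ulimit), so the cap is inherited by every backend.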
Jeff Davis wrote:
> > We do allow limiting how much _shared_ memory an instance can have,
> > but each session allocates its own memory independently as a Unix
> > process. We do support multiple databases and, in 7.3beta, schemas.
>
> Wouldn't it be pretty easy to run postgres through a process that has
> a setrlimit on its memory? I assume that postgres handles memory
> errors gracefully. I think the shared memory limit combined with
> setrlimit would give him what he needs.

Yes, you could do that. Of course, a backend that exceeds the limit is
going to die, but it seems that's what he wants.

--
  Bruce Momjian
Bruce Momjian wrote:
> Jeff Davis wrote:
>>> We do allow limiting how much _shared_ memory an instance can have,
>>> but each session allocates its own memory independently as a Unix
>>> process. We do support multiple databases and, in 7.3beta, schemas.
>>
>> Wouldn't it be pretty easy to run postgres through a process that has
>> a setrlimit on its memory? I assume that postgres handles memory
>> errors gracefully. I think the shared memory limit combined with
>> setrlimit would give him what he needs.
>
> Yes, you could do that. Of course, a backend that exceeds the limit is
> going to die, but it seems that's what he wants.

I think the question from the original poster is really this: how can I
have multiple databases on one server and control the resources
consumed? The way the question is phrased is based on some familiarity
with Oracle and none with Postgres.

In Oracle, *each* database gets an entire copy of Oracle running, with
its own system global area (SGA), on its own listener port. Hence, if
you have three databases running on one server, you need to run three
copies of Oracle and limit the resource consumption of each one so the
total does not exceed what you have. It would be like *requiring* three
postmasters running on three ports in order to have three databases with
Postgres.

The point is that there is no need to run three copies of Postgres in
order to have three databases -- you just need one. If PostgreSQL's
shared_buffers is set to something reasonable, and the total of the
Oracle SGAs was too, you ought to be just fine.

Joe
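Joe's advice, expressed as configuration: one postmaster, one pool of shared memory, many databases. The following is a hypothetical postgresql.conf sketch for the 7.2/7.3 era; the values are illustrative only (in that era shared_buffers was counted in 8 kB pages and sort_mem in kB, allocated per sort, per backend).

```
# One postmaster serves every database in the cluster.
shared_buffers = 8192      # 8192 x 8 kB pages = 64 MB of shared memory, total
max_connections = 128      # across all databases combined
sort_mem = 4096            # per-sort working memory (kB), allocated per backend
```

Each database is then just a createdb away under the same postmaster, rather than a separate server process with its own memory budget.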
> > Wouldn't it be pretty easy to run postgres through a process that
> > has a setrlimit on its memory? I assume that postgres handles memory
> > errors gracefully. I think the shared memory limit combined with
> > setrlimit would give him what he needs.
>
> Yes, you could do that. Of course, a backend that exceeds the limit is
> going to die, but it seems that's what he wants.

Can postgres handle out-of-memory errors gracefully? I don't even know
if it would really make sense, but I suppose it could just give an error
and ROLLBACK the current transaction. If it had a minimal amount of
memory to work with (just enough to perform basic tasks), it seems it
could handle the situation somewhat gracefully, although this might be
tough to do in a cross-platform way.

Regards,
Jeff Davis
Jeff Davis wrote:
> > > Wouldn't it be pretty easy to run postgres through a process that
> > > has a setrlimit on its memory? I assume that postgres handles
> > > memory errors gracefully. I think the shared memory limit combined
> > > with setrlimit would give him what he needs.
> >
> > Yes, you could do that. Of course, a backend that exceeds the limit
> > is going to die, but it seems that's what he wants.
>
> Can postgres handle out-of-memory errors gracefully? I don't even know
> if it would really make sense, but I suppose it could just give an
> error and ROLLBACK the current transaction. If it had a minimal amount
> of memory to work with (just enough to perform basic tasks), it seems
> it could handle the situation somewhat gracefully, although this might
> be tough to do in a cross-platform way.

Out of memory throws FATAL, which just exits the backend.

--
  Bruce Momjian
Bruce Momjian <pgman@candle.pha.pa.us> writes:
> Out of memory throws FATAL, which just exits the backend.

Au contraire --- in most places it just throws ERROR.

There are a few poorly coded spots that throw FATAL if malloc fails.
That should be cleaned up someday, at least for the ones that are likely
to occur while running (startup code, I think, isn't an issue).

			regards, tom lane
Tom Lane wrote:
> Bruce Momjian <pgman@candle.pha.pa.us> writes:
> > Out of memory throws FATAL, which just exits the backend.
>
> Au contraire --- in most places it just throws ERROR.
>
> There are a few poorly coded spots that throw FATAL if malloc fails.
> That should be cleaned up someday, at least for the ones that are
> likely to occur while running (startup code, I think, isn't an issue).

Oh, I knew some spots threw FATAL, but yes, the common ones just throw
an ERROR on the query. Once malloc returns an error, any attempt to
increase our memory will fail, but we continue with the memory we have
already allocated.

--
  Bruce Momjian
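The ERROR-versus-FATAL distinction can be shown in miniature. This is a hypothetical Python sketch (not PostgreSQL code, and the function name is invented) that treats an allocation failure the way a well-behaved backend treats elog(ERROR): the current "query" fails, but the process keeps its existing memory and continues serving requests.

```python
def handle_request(size_bytes):
    """Fail one request on allocation error instead of exiting the process."""
    try:
        buf = bytearray(size_bytes)    # stands in for a query's working memory
        return "ok (%d bytes)" % len(buf)
    except MemoryError:
        # Analogous to ERROR + ROLLBACK: this request is abandoned, but the
        # process survives with whatever memory it had already allocated.
        return "ERROR: out of memory"

if __name__ == "__main__":
    print(handle_request(1024))        # small request succeeds
    print(handle_request(1 << 62))     # absurd request fails with an error...
    print(handle_request(1024))        # ...and the process keeps serving
```

A FATAL response, by contrast, would be the equivalent of calling sys.exit() inside the except branch: the whole process, not just the one request, goes away.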