Thread: Effective usage without unique key -- suggestion
Hi,

I am using PostgreSQL 7.2.1 on Red Hat Linux 7.3.

I have a table in a very busy production database which turns over several
thousand records per day (i.e. the row count never exceeds several
thousand). For technical reasons I don't have a unique key on this table,
but it should contain unique records. I know that, without a unique index,
this can be achieved by either of the following methods.

Method 1
--------
* Check for the record.
* If it exists, update; else insert.

Method 2
--------
* Delete the record (trigger on before insert).
* Insert the record.

So can you please suggest the better of the two methods?

BTW, the internals of the two methods differ. For example, in method 2
the frequent deletion of records calls for vacuuming the database. FYI, I
do a VACUUM ANALYZE every day. So which of the two methods suits me best
and uses Postgres most effectively? A suggestion or a link will suffice.
TIA.

regards,
bhuvaneswaran
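[Editor's note: the two methods above can be sketched as follows. This is
Python with SQLite standing in for PostgreSQL, purely for illustration; the
`sessions` table and its columns are hypothetical, not from the thread.]

```python
import sqlite3

# Hypothetical schema: rows are logically keyed by user_id, but the table
# has no unique constraint, mirroring the poster's situation.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sessions (user_id INTEGER, payload TEXT)")

def upsert_method1(conn, user_id, payload):
    """Method 1: check for the record; update if it exists, else insert."""
    cur = conn.execute("SELECT 1 FROM sessions WHERE user_id = ?", (user_id,))
    if cur.fetchone():
        conn.execute("UPDATE sessions SET payload = ? WHERE user_id = ?",
                     (payload, user_id))
    else:
        conn.execute("INSERT INTO sessions (user_id, payload) VALUES (?, ?)",
                     (user_id, payload))

def upsert_method2(conn, user_id, payload):
    """Method 2: delete any existing record, then insert unconditionally."""
    conn.execute("DELETE FROM sessions WHERE user_id = ?", (user_id,))
    conn.execute("INSERT INTO sessions (user_id, payload) VALUES (?, ?)",
                 (user_id, payload))

upsert_method1(conn, 1, "a")
upsert_method1(conn, 1, "b")   # record exists, so this updates in place
upsert_method2(conn, 2, "x")
upsert_method2(conn, 2, "y")   # deletes the old row, inserts a fresh one

rows = conn.execute(
    "SELECT user_id, payload FROM sessions ORDER BY user_id").fetchall()
print(rows)  # → [(1, 'b'), (2, 'y')] — one row per logical key either way
```

Note that neither sketch is concurrency-safe on its own: without a unique
index, two sessions running the check (or delete) at the same time can
still both insert.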
On Wednesday 07 Aug 2002 9:27 am, Bhuvan A wrote:
> I am using PostgreSQL 7.2.1 on Red Hat Linux 7.3.
>
> I have a table in a very busy production database which turns over
> several thousand records per day. For technical reasons I don't have a
> unique key on this table, but it should contain unique records. I know
> that, without a unique index, this can be achieved by either of the
> following methods.

Why don't you create a unique index on the fields concerned? You can
always create a functional index if uniqueness is a complicated thing for
you:

CREATE TABLE foo (a int, b int);

CREATE FUNCTION foo_add(int, int) RETURNS int AS '
  SELECT $1 + $2;
' LANGUAGE 'sql' WITH (iscachable);

CREATE UNIQUE INDEX foo_uniq_idx ON foo ( foo_add(a,b) );

richardh=> INSERT INTO foo VALUES (1,4);
INSERT 7024674 1
richardh=> INSERT INTO foo VALUES (2,4);
INSERT 7024675 1
richardh=> INSERT INTO foo VALUES (2,3);
ERROR:  Cannot insert a duplicate key into unique index foo_uniq_idx

> Method 1
> * Check for the record.
> * If it exists, update; else insert.
>
> Method 2
> * Delete the record (trigger on before insert).
> * Insert the record.
>
> So can you please suggest the better of the two methods?

I don't think it matters which way you do it: with MVCC, an update is
equivalent to a delete + insert. You say this table has a high rate of
change, so you might want to VACUUM it more often than daily.

- Richard Huxton
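[Editor's note: the functional unique index in Richard's reply can be
demonstrated outside PostgreSQL too. The sketch below uses Python with
SQLite, which allows a unique index directly on an expression, so no helper
function like `foo_add` is needed; the table and index names follow his
example.]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (a INTEGER, b INTEGER)")
# SQLite supports expression indexes, so the sum can be indexed directly.
conn.execute("CREATE UNIQUE INDEX foo_uniq_idx ON foo (a + b)")

conn.execute("INSERT INTO foo VALUES (1, 4)")  # sum 5: accepted
conn.execute("INSERT INTO foo VALUES (2, 4)")  # sum 6: accepted
try:
    conn.execute("INSERT INTO foo VALUES (2, 3)")  # sum 5 again: duplicate
    duplicate_rejected = False
except sqlite3.IntegrityError:
    # The unique index rejects the row, just as in Richard's psql session.
    duplicate_rejected = True

row_count = conn.execute("SELECT count(*) FROM foo").fetchone()[0]
print(duplicate_rejected, row_count)  # → True 2
```

The point of the index-based approach is that the database itself enforces
uniqueness, so the application no longer needs either of the two
check-or-delete methods and is safe against concurrent inserts.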
Bhuvan A wrote:
> I have a table in a very busy production database which turns over
> several thousand records per day. For technical reasons I don't have a
> unique key on this table, but it should contain unique records. [...]
>
> So can you please suggest the better of the two methods?

Not really - internally, an update does pretty much the same thing as a
delete + insert, so either way you won't be able to get away from
vacuuming it... I *think* the second method should be (a little) more
efficient...

I hope it helps,

Dima