Thread: rotate records
Hello.
I'm using PostgreSQL 7.3 on Linux.
I created a sequence:
CREATE SEQUENCE event_id_seq
INCREMENT 1
MINVALUE 1
MAXVALUE 5
START 1
CACHE 1
CYCLE;
And I created a table which uses this sequence for its primary key.
CREATE TABLE hw_messages
(
event_id INT NOT NULL DEFAULT nextval('event_id_seq'),
device_name varchar(50) NOT NULL,
CONSTRAINT hw_messages_pkey PRIMARY KEY (event_id)
)
WITH OIDS;
My question is: how can I rotate the records in the table? The maximum number of records in the table is defined by the sequence. Every time I try to insert a new record, I get an error about a duplicate key. Even if I manually delete a record somewhere from the middle, I still might get this error. If I state the CYCLE attribute in the sequence, doesn't it mean that while inserting new records into the database, once the maximum is met the old records should be deleted? If that is not correct, how can I rotate the records in the table?
Thank you in advance,
Natasha.
On Fri, Feb 24, 2006 at 04:31:48PM -0800, Natasha Galkina wrote:
> I created sequence
>
> CREATE SEQUENCE event_id_seq
> INCREMENT 1
> MINVALUE 1
> MAXVALUE 5
> START 1
> CACHE 1
> CYCLE;
[...]
> My question is how I can rotate the records in the table. I have maximum
> number of records in the table defined by sequence.

Sequences are just number generators; they don't impose constraints on
tables that use them. The sequence's maximum value has no bearing on
the number of records that a table can hold.

> Every time when I try to insert a new record, I get an error about
> duplicate key. Even if I manually delete a record somewhere from
> the middle I still might get this error. If I state CYCLE attribute
> in the sequence, doesn't it mean that while inserting new records
> into database if the maximum is met the old records should be deleted?

No, CYCLE affects only the values returned by the sequence.

> If it is not correct, how can I rotate the records in the table?

One way would be to use a trigger to delete records having the same
event_id as the record being inserted. Here's an example:

CREATE SEQUENCE foo_seq MINVALUE 1 MAXVALUE 3 CYCLE;

CREATE TABLE foo (
    id   integer PRIMARY KEY DEFAULT nextval('foo_seq')
         CHECK (id BETWEEN 1 AND 3),
    val  text NOT NULL
);

CREATE FUNCTION foo_rotate() RETURNS trigger AS '
BEGIN
    DELETE FROM foo WHERE id = NEW.id;
    RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER foo_insert BEFORE INSERT ON foo
    FOR EACH ROW EXECUTE PROCEDURE foo_rotate();

INSERT INTO foo (val) VALUES ('one');
INSERT INTO foo (val) VALUES ('two');
INSERT INTO foo (val) VALUES ('three');

SELECT * FROM foo;
 id |  val
----+-------
  1 | one
  2 | two
  3 | three
(3 rows)

INSERT INTO foo (val) VALUES ('four');
INSERT INTO foo (val) VALUES ('five');

SELECT * FROM foo;
 id |  val
----+-------
  3 | three
  1 | four
  2 | five
(3 rows)

The CHECK constraint on the primary key ensures that you can't exceed
the record limit by explicitly setting the primary key value.
If you do something like this then be sure to vacuum the table
frequently, as each insert that "rotates" creates a dead tuple due to
the delete.

-- 
Michael Fuhr
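As a concrete sketch of that advice (the table name foo matches the earlier example; the schedule is an assumption, since the autovacuum daemon was not available in releases of that era):

```sql
-- Reclaim dead tuples left behind by the rotating deletes and
-- refresh planner statistics; on old releases without autovacuum
-- this would typically be run from cron, e.g. every few minutes
-- for a busy table.
VACUUM ANALYZE foo;
```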
On Sat, Feb 25, 2006 at 10:12:38AM -0700, Michael Fuhr wrote:
> > If it is not correct, how can I rotate the records in the table?
>
> One way would be to use a trigger to delete records having the same
> event_id as the record being inserted.

I should mention that with the example I posted you can still get
duplicate key violations if enough concurrent transactions insert
into the table at the same time. I'll have to think a little more
about the best way to avoid that.

-- 
Michael Fuhr
Michael,

Thank you very much for your response. I tried your solution, but it
still looks like it doesn't work when I delete random records.

select * from foo;
 id | val
----+-----
  1 | 13
  2 | 14
  3 | 15
(3 rows)

delete from foo where val = '13';
DELETE 1
delete from foo where val = '15';
DELETE 1

select * from foo;
 id | val
----+-----
  2 | 14
(1 row)

insert into foo (val) values ('16');
INSERT 34533 1

select * from foo;
 id | val
----+-----
  2 | 14
  1 | 16
(2 rows)

insert into foo (val) values ('17');
INSERT 34534 1

select * from foo;
 id | val
----+-----
  1 | 16
  2 | 17

As you can see, the record with value '14' is gone without an explicit
delete, which is not what I expected. Do you have any ideas on how to
avoid this?

Natasha Galkina.
Hi all,

I am facing performance issues even with fewer than 3000 records. I am
using triggers/stored procedures in all the tables. What could be the
problem? Is it a good idea to use triggers with respect to performance?

Regards,
Jeeva.K
On Tue, Feb 28, 2006 at 09:14:59 +0530,
  "Jeevanandam, Kathirvel (IE10)" <Jeevanandam.Kathirvel@honeywell.com> wrote:
> Hi all,

Please don't hijack existing threads to start new ones. This can cause
people to miss your question and messes up the archives. Performance
questions should generally be posted to the performance list. I have
redirected followups to there.

> I am facing performance issues even with less than 3000 records, I am
> using Triggers/SPs in all the tables. What could be the problem.
> Any idea it is good to use triggers w.r.t performance?

A common cause of this kind of thing is not running vacuum often
enough, leaving you with a lot of dead tuples. You should probably
start by doing a vacuum full analyse and then showing the list some
problem query sources along with explain analyse output for them.
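The diagnostic steps suggested above can be sketched as follows (the table and column names are hypothetical placeholders, not from the poster's schema):

```sql
-- Remove dead tuples and refresh planner statistics for all tables.
VACUUM FULL ANALYZE;

-- Then, for each slow query, capture the actual plan with row counts
-- and timings to post to the list:
EXPLAIN ANALYZE SELECT * FROM some_table WHERE some_column = 42;
```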
On Mon, Feb 27, 2006 at 07:39:22PM -0800, Natasha Galkina wrote:
> I tried your solutions but still it looks like it doesn't work when I
> delete random records.
[...]
> As you can see the record with value '14' is gone without explicit
> delete, which is not what I expected. Do you have any ideas on how to
> avoid this?

Deletes are indeed a problem with the simple example I posted. It
simply re-uses the next value in the sequence, deleting any row that
currently has that value; it doesn't count the number of rows and
delete the excess ones.

I recall past discussions about mechanisms to limit the number of rows
in a table. Here's a recent message with a suggestion:

http://archives.postgresql.org/pgsql-novice/2005-12/msg00323.php

Before putting more thought into it I'd want to search for other past
discussion. A couple of problems are efficiency and concurrency: how
to quickly know which, if any, old rows to delete, and how to ensure
that inserts in concurrent transactions can't result in the table
having more rows than it should.

-- 
Michael Fuhr
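One possible sketch along the lines of that linked suggestion (an assumption of mine, not Michael's posted solution): instead of tying row identity to the cycling sequence, add a timestamp column and trim the oldest rows after each insert. This handles deletes from the middle, but it does not address the concurrency caveat raised above, and ties on the timestamp could trim imprecisely:

```sql
-- Hypothetical sketch: keep at most 3 rows, dropping the oldest.
-- Assumes an inserted_at column; concurrent inserts could still
-- briefly exceed the limit, as discussed above.
CREATE TABLE foo2 (
    id          serial PRIMARY KEY,
    val         text NOT NULL,
    inserted_at timestamptz NOT NULL DEFAULT now()
);

CREATE FUNCTION foo2_trim() RETURNS trigger AS '
BEGIN
    -- Delete every row older than the third-newest one.
    DELETE FROM foo2
    WHERE inserted_at < (SELECT inserted_at FROM foo2
                         ORDER BY inserted_at DESC
                         LIMIT 1 OFFSET 2);
    RETURN NULL;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER foo2_trim_trg AFTER INSERT ON foo2
    FOR EACH ROW EXECUTE PROCEDURE foo2_trim();
```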
Jeevanandam, Kathirvel (IE10) schrieb:
> Hi all,
>
> I am facing performance issues even with less than 3000 records, I am
> using Triggers/SPs in all the tables. What could be the problem.
> Any idea it is good to use triggers w.r.t performance?

Much too general. What triggers? (What are they doing, when are they
invoked...?) Please provide much more detail with your request, or
nobody can help.

Regards,
Tino

PS: and try not to steal threads