Thread: Re: Re: Monitoring new records

Re: Re: Monitoring new records

From: messias
Date:

--- Sean Davis <sdavis2@mail.nih.gov> wrote (2008.02.28 22:56:08): ---



> 2008/2/28 messias <messias@fw.hu>:
> > Hello
> >
> > I am new to SQL programming and I have a problem. I would like to
> > monitor the new records that are inserted into an SQL table.
> >
> > I found a solution that uses a queue (log) table filled by a
> > trigger. The client would read from that queue table and delete
> > the data it has read.
> >
> > I think this solution works with only one client. But I would have
> > more than one client at the same time, and then I do not know how
> > I can delete the data from the queue table.
> >
> > I could make a queue table for every client, but I do not think
> > that is a nice solution, and I am afraid of a lost client...
> >
> > I am interested in any solution that can solve my problem.
>
> Why not simply add a timestamp column to each row?  Then, your client
> could query for all rows created or modified after a certain
> time/date.

Yes, it could be a good solution, but that table can have more than
100000 records.



> I don't think a trigger and a separate table would be necessary,
> but it is hard to know without more information.
>
> Sean



I think that if I copied the new records into a separate table, the
selection would be more efficient. However, somebody then has to
delete from that table.
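
For example, something like this is what I mean (the table and column
names are made up, and the exact trigger syntax varies between
databases; this is roughly MySQL-style):

  -- queue table that only holds pointers to newly inserted rows
  CREATE TABLE new_record_queue (
      item_id    INTEGER   NOT NULL,
      queued_at  TIMESTAMP NOT NULL
  );

  -- copy the key of every newly inserted row into the queue table
  CREATE TRIGGER trg_queue_new_items
  AFTER INSERT ON items
  FOR EACH ROW
  INSERT INTO new_record_queue (item_id, queued_at)
  VALUES (NEW.id, CURRENT_TIMESTAMP);

Each client would then SELECT from new_record_queue and DELETE the rows
it has read, and that delete is exactly the step I do not know how to
do safely with more than one client.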



Ferenc



Re: Monitoring new records

From: "Sean Davis"
Date:

2008/2/29 messias <messias@fw.hu>:
> --- Sean Davis <sdavis2@mail.nih.gov> wrote (2008.02.28 22:56:08): ---
>
> > 2008/2/28 messias <messias@fw.hu>:
> > > Hello
> > >
> > > I am new to SQL programming and I have a problem. I would like to
> > > monitor the new records that are inserted into an SQL table.
> > >
> > > I found a solution that uses a queue (log) table filled by a
> > > trigger. The client would read from that queue table and delete
> > > the data it has read.
> > >
> > > I think this solution works with only one client. But I would have
> > > more than one client at the same time, and then I do not know how
> > > I can delete the data from the queue table.
> > >
> > > I could make a queue table for every client, but I do not think
> > > that is a nice solution, and I am afraid of a lost client...
> > >
> > > I am interested in any solution that can solve my problem.
> >
> > Why not simply add a timestamp column to each row?  Then, your client
> > could query for all rows created or modified after a certain
> > time/date.
>
> Yes, it could be a good solution, but that table can have more than
> 100000 records.

100000 records is not large by any means.  I am routinely using tables
with 500 million rows and sometimes even larger.


> > I don't think a trigger and a separate table would be necessary,
> > but it is hard to know without more information.
> >
> > Sean
>
> I think that if I copied the new records into a separate table, the
> selection would be more efficient. However, somebody then has to
> delete from that table.

I think what you are suggesting with triggers, deletes, etc., is
probably "premature optimization" and is actually likely to be slower
for many operations than the single table; it is definitely more
complicated.  If I were you, I would simply make a single table, load
some test data, and then try running your queries.  I was talking
about a timestamp column as a way to get the most recent records, so
you will want to index that column.  The index for 100000 timestamps
is likely to be small enough to fit into memory on even a laptop, so I
would expect performance to be very good when selecting the most
recent records.
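
Roughly, all I have in mind is something like this (made-up table and
column names, and the exact timestamp/default syntax will vary a bit
between databases):

  CREATE TABLE items (
      id          INTEGER PRIMARY KEY,
      payload     VARCHAR(200),
      created_at  TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
  );

  -- index so that "recent rows" queries do not scan the whole table
  CREATE INDEX idx_items_created_at ON items (created_at);

  -- each client remembers the newest created_at value it has already
  -- seen and only asks for rows newer than that (:last_seen stands for
  -- whatever bind-parameter syntax your client library uses)
  SELECT id, payload, created_at
    FROM items
   WHERE created_at > :last_seen
   ORDER BY created_at;

Each client just keeps its own "last seen" timestamp, so nothing ever
has to be deleted and the clients cannot interfere with each other.  If
you also care about modified rows, a last-modified column indexed the
same way can be handled identically.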

Sean