On Thursday 01 March 2001 21:33, Tom Lane wrote:
> Denis Perchine <dyp@perchine.com> writes:
> > I declare a cursor on a table of approx. 1 million rows
> > and start fetching data 1000 rows at a time.
> > Data processing can take quite a long time (3-4 days).
> > Theoretically the postgres process should remain the same size,
> > but it grows... By the end of the 3rd day it becomes 256MB!!!!
>
> Query details please? You can't expect any results from such a
> vague report.
:-)))
Fair enough. Here are the details:
declare senders_c cursor for
    select email, first_name, last_name
    from senders
    order by email;

fetch 1000 from senders_c;
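For completeness, the whole run sits inside one transaction, roughly
like this (just a sketch of the pattern; the per-batch processing
happens in the application between fetches):

begin;

-- a plain cursor only exists within its transaction block, so the
-- cursor, every fetch, and the processing all share one long-lived
-- transaction
declare senders_c cursor for
    select email, first_name, last_name
    from senders
    order by email;

-- issued repeatedly until it returns fewer than 1000 rows
fetch 1000 from senders_c;

close senders_c;
commit;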
db=# explain declare senders_c cursor for select email, first_name, last_name
from senders order by email;
NOTICE: QUERY PLAN:
Index Scan using senders_email_key on senders (cost=0.00..197005.37 rows=928696 width=36)
db=# \d senders
Table "senders"
Attribute | Type | Modifier
------------+-----------+----------
email | text |
first_name | text |
last_name | text |
stamp | timestamp |
Index: senders_email_key
db=# \d senders_email_key
Index "senders_email_key"
Attribute | Type
-----------+------
email | text
unique btree
That's all. I couldn't imagine anything simpler...
--
Sincerely Yours,
Denis Perchine
----------------------------------
E-Mail: dyp@perchine.com
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
----------------------------------