On 10/5/07, Cláudia Macedo Amorim <claudia.amorim@pcinformatica.com.br> wrote:
>
> I'm new to PostgreSQL and I need some help.
>
> I have a table with ~2 million records. Queries on this table are too slow
> and some do not complete.
> I think it must be a simple problem to solve, but I've been trying without
> success. I'm worried because next week I will need to work with tables
> with ~100 million records.
>
> I'm using:
> O.S.: Windows XP;
> PostgreSQL 8.2;
> Index type: btree.
>
> I have 2 GB of RAM.
>
> POSTGRESQL XXX.LOG:
>
> <2007-10-05 09:01:42%SELECT> LOG: could not send data to client: Unknown
> winsock error 10061
> <2007-10-05 09:03:03%idle> LOG: could not receive data from client: Unknown
> winsock error 10061
> <2007-10-05 09:03:03%idle> LOG: unexpected EOF on client connection
This looks like your client is dying on receiving too much data. You
can either try to fix the client to handle more data, which isn't the
best way to proceed, or you can retrieve your data with a cursor, a
chunk at a time.
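As a rough sketch of the cursor approach (the table and cursor names
here are placeholders, not from your schema):

```sql
-- Cursors must live inside a transaction in PostgreSQL.
BEGIN;

-- Declare a cursor over the full query instead of fetching it all at once.
DECLARE big_cur CURSOR FOR SELECT * FROM big_table;

-- Pull rows in manageable chunks; repeat until FETCH returns no rows.
FETCH 1000 FROM big_cur;
FETCH 1000 FROM big_cur;

CLOSE big_cur;
COMMIT;
```

With psqlODBC specifically, you may also be able to get similar
chunked behavior by enabling the driver's declare/fetch mode instead
of changing your application code.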
> PSQLODBC.LOG:
> [13236.501]CONN ERROR: func=SC_execute, desc='(null)', errnum=109,
> errmsg='Out of memory while reading tuples.'
Assuming this is the client-side error, yes, you're simply reading too
many rows at once.
> POSTGRESQL.CONF:
> shared_buffers = 512MB # min 128kB or max_connections*16kB
Reasonable for a machine with 2 GB of RAM.
> work_mem = 256MB # min 64kB
This is OK only if you have one or two users. Otherwise it's a bit
high, since work_mem can be allocated per sort or hash operation, per
connection, so many concurrent queries could together exhaust your RAM.
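One common alternative (a sketch, not the only way to do it) is to
keep the global setting modest and raise work_mem only in the session
that runs the big query:

```sql
-- In postgresql.conf, keep a conservative default, e.g. work_mem = 16MB.
-- Then, in the session that needs it:
SET work_mem = '256MB';

-- ... run the large sort/aggregate query here ...

-- Restore the configured default for the rest of the session.
RESET work_mem;
```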
Take a look at cursors; here's the DECLARE reference page:
http://www.postgresql.org/docs/8.2/static/sql-declare.html