Hard to say. Here are the initial questions I have:
What is the nature of the query? Returns single row or 10,000? Single table
or multiple joined tables? Aggregates? Grouping?
Is the database updated often? Is it vacuumed regularly to manage on-disk size?
Is data delivered locally or over the wire? What is the client?
What platform (CPU, RAM, Disk, OS) and PG version?
What is the system load: a single PG user, or hundreds of simultaneous queries
while the machine also runs the web server and mail server? What does sar
report for CPU and disk load?
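
The quickest way to answer the query-side questions is EXPLAIN ANALYZE, which
runs the query and prints the plan the planner chose along with actual row
counts and timings. A rough sketch (table and column names here are made up;
substitute your own):

    -- Runs the query for real and shows the chosen plan with actual timings,
    -- so you can see whether the btree index is being used at all.
    EXPLAIN ANALYZE
    SELECT *
    FROM mytable
    WHERE keyfield = 'some value';

If that shows a sequential scan over 4.5 million rows instead of an index
scan, that's the first thing to chase (often a missing VACUUM ANALYZE, or a
condition that can't use the index).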
If you are using a 7.3 release, what does psql report when you run the query
with \timing turned on?
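
For example, from the psql prompt (again, the query is just a stand-in for
yours):

    -- \timing toggles display of the elapsed time after each statement.
    \timing
    SELECT * FROM mytable WHERE keyfield = 'some value';

The time psql reports is measured at the client, so it includes planning,
execution, and delivery of the result.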
Out of curiosity, I just did a quick search for all the records from a single
hour in a table that holds part of our phone bill from last year (1.8 million
records), and the result (810 records) came back in under 0.5 seconds. That
table resides on my desktop, so I'm the only user, but it's also far from a
"server-class" machine.
Cheers,
Steve
On Friday 18 April 2003 11:24 am, Derek Hamilton wrote:
> Hello all,
>
> We're using PostgreSQL with a fairly large database (about 2GB). I have
> one table that currently exceeds 4.5 million records and will probably grow
> to well over 5 million fairly soon. Searching of this table is done almost
> entirely on one field, on which I have set up a btree index. My question is:
> if I search this table and get the results back in about 6-7 seconds, is
> that pretty good, not so good...? What are the things I should look at when
> evaluating the performance of this?
>
> BTW, forgive the lack of information. I'd be happy to post more info on
> the table, hardware, etc. I just didn't want to overwhelm the initial
> question.
>
> Thanks,
> Derek Hamilton
>
>