First thing to try is to run
explain select * from weather where weatherid = 12372;
and see whether it's doing a sequential scan on your table.
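As a rough sketch (the cost numbers will differ, and weather_pkey is
just the name postgres typically gives a primary key's index), a
sequential scan shows up in the plan as a line like

Seq Scan on weather  (cost=... rows=... width=...)

whereas an index lookup shows up as

Index Scan using weather_pkey on weather  (cost=... rows=... width=...)

You can also confirm the primary key index actually exists with
\d weather
from the psql prompt.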
If it is, the index and table statistics may be out of date;
you can fix those by running vacuum analyze.
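For example, to redo the stats for just that one table:

vacuum analyze weather;

A bare "vacuum analyze;" does the same for every table in the
database.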
One other thing: with only 1100 rows of the sizes you describe,
even a full table scan shouldn't take as long as you say unless
the machine is under serious load, doesn't have enough memory to
run postgres without swapping, or the disk is freaking out with
I/O errors.
Alex.
On Mon, 29 Jan 2001, chris markiewicz wrote:
> hello.
>
> this might be as much of a general database question as it is a postgres
> question...
>
> i have a table with 5 columns...a primary key (integer), three small (10
> character) text fields, and one semi-large (1400 characters) text field.
> note that only a small percentage (5% ?) of the rows contain 1400 characters
> in the 5th column...the other 95% have approx 10 characters. it has 1100
> rows.
>
> the problem is this - queries (command line) often take a very long time -
> anywhere from 5-15 seconds - to execute. the queries use only the primary
> key and nothing else in the where clause. no joins. a sample query is:
>
> select * from weather where weatherid = 12372;
>
> from the command line, it seems that the first query can take a very long
> time but subsequent queries happen quickly ( < 1 sec). i'm guessing that
> this is the result of caching or something.
>
> do the long times make sense? what can i do to shorten them? would a
> smaller text field help? i have no reason to think that this would be
> faster or slower in another db, so it might be unrelated to postgres itself.
>
> i greatly appreciate your help.
>
> chris
>