Re: Creating tons of tables to support a query - Mailing list pgsql-general

From Richard Huxton
Subject Re: Creating tons of tables to support a query
Date
Msg-id 200209091649.51570.dev@archonet.com
In response to Re: Creating tons of tables to support a query  (Jan Ploski <jpljpl@gmx.de>)
List pgsql-general
On Monday 09 Sep 2002 3:22 pm, Jan Ploski wrote:
[snipped - trying partial indexes]
>  But: 500 iterations of the following code
>
>     loop
>         select msgnum from message into v_cnt
>             where sectionID = 241 order by dateSent desc limit 1;
>     end loop;
>
> report execution time 00:00:00.03179, while 500 iterations of the following
> (which is closer to what I need):
>
>     v_sid := 241;
>
>     loop
>         select msgnum from message into v_cnt
>             where sectionID = v_sid order by dateSent desc limit 1;
>     end loop;
>
> take a whopping 00:00:31.442402. Needless to say, I don't have this problem
> with select msgnum from scnt_241 into v_cnt order by dateSent desc limit 1.
> But I'd really like to drop all these extra tables.
>
> How can I find out what is going on behind the scenes?

Look at the section on logging/debugging in your postgresql.conf file (also
the SET/SHOW commands and the manuals).
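For example, both can be poked at per session without editing the config file (a minimal sketch; parameter names such as debug_print_plan vary between releases, so check the manual for your version):

    -- Per-session debugging via SET/SHOW rather than postgresql.conf:
    SET debug_print_plan = on;   -- dump the chosen plans to the server log
    SHOW debug_print_plan;

    -- For ad-hoc statements EXPLAIN shows the plan directly.  Note it only
    -- explains the statement you give it; it can't show you the plan a
    -- plpgsql function has cached internally.
    EXPLAIN SELECT msgnum FROM message
             WHERE sectionID = 241
             ORDER BY dateSent DESC LIMIT 1;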

However, if that code is plpgsql then the query plan is fixed at compile time,
which might well mean that it isn't using the partial indexes. Normally you'd
use EXECUTE to get around this, but you can't do that with SELECT...INTO.
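
For what it's worth, on releases where EXECUTE can hand rows back to plpgsql
(FOR ... IN EXECUTE, and in later versions EXECUTE ... INTO), the dynamic form
looks roughly like this. The table and column names come from the query quoted
above; the function name latest_msgnum and parameter p_sid are just
illustrative:

    -- Sketch only: the query string is built and planned on every call, so
    -- the planner sees the literal section id and can match a partial index.
    CREATE FUNCTION latest_msgnum(p_sid integer) RETURNS integer AS $$
    DECLARE
        r RECORD;
    BEGIN
        FOR r IN EXECUTE 'SELECT msgnum FROM message'
                      || ' WHERE sectionID = ' || p_sid
                      || ' ORDER BY dateSent DESC LIMIT 1'
        LOOP
            RETURN r.msgnum;
        END LOOP;
        RETURN NULL;   -- no messages in that section
    END;
    $$ LANGUAGE plpgsql;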

There are only two options I can think of at the moment:
1. Rewrite the plpgsql as an SQL function, plperl, etc. (anything dynamic).
2. Take a step back and keep a separate table of messages per
minute/hour/whatever, kept up to date with triggers (see the sketch below).
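
One way to read option 2 (assuming "messages per minute/hour" means per-bucket
message counts, which is my reading rather than something stated in the
thread) is a single summary table plus an insert trigger. All of the names
here (message_counts, hour_start, bump_message_count) are invented for the
sketch; only message, sectionID and dateSent come from the thread:

    -- Hypothetical summary table: one row per section per hour.
    CREATE TABLE message_counts (
        sectionID   integer,
        hour_start  timestamp,          -- dateSent truncated to the hour
        msg_count   integer NOT NULL DEFAULT 0,
        PRIMARY KEY (sectionID, hour_start)
    );

    -- Keep it current on INSERT.  (Very old releases spell the return type
    -- 'opaque' rather than 'trigger' and need the quoted-body syntax.)
    CREATE FUNCTION bump_message_count() RETURNS trigger AS $$
    BEGIN
        UPDATE message_counts
           SET msg_count = msg_count + 1
         WHERE sectionID = NEW.sectionID
           AND hour_start = date_trunc('hour', NEW.dateSent);
        IF NOT FOUND THEN
            -- first message for this section in this hour
            INSERT INTO message_counts (sectionID, hour_start, msg_count)
            VALUES (NEW.sectionID, date_trunc('hour', NEW.dateSent), 1);
        END IF;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER message_count_trg
        AFTER INSERT ON message
        FOR EACH ROW EXECUTE PROCEDURE bump_message_count();

Note the update-then-insert isn't safe under concurrent inserts (two sessions
can both miss the UPDATE), so a real version needs locking or a retry, and
UPDATE/DELETE on message would need their own triggers.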

Something smarter might occur to me later or (more likely) to someone else,
but I've got to finish something off at the moment.

HTH

- Richard Huxton
