The problem with this approach is TTFB (Time to First Byte). The
initial query is very slow, but additional requests are fast. In most
situations we don't want the user to wait a disproportionate amount of
time for that initial query; if this is their first time using the
system, the slow first response is the impression that will stick with
them. I guess we could experiment and see how much extra time creating
a cache table will take...
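
A quick way to run that experiment is psql's \timing with a throwaway
table (the table name, predicate and ordering below are placeholders
for the real search query):

    \timing
    CREATE TEMP TABLE cache_test AS
        SELECT id
        FROM items                     -- hypothetical data table
        WHERE title ILIKE '%foo%'      -- stand-in for the search predicate
        ORDER BY relevance DESC;       -- stand-in for the result ordering

That reports the wall-clock cost of materializing the full result set,
which is the extra overhead the first request would pay.
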
Alex Turner
NetEconomist
On Wed, 26 Jan 2005 13:58:18 +0100, PFC <lists@boutiquenumerique.com> wrote:
>
> Supposing your searches display results which are rows coming from one
> specific table, you could create a cache table:
>
>     search_id   serial primary key
>     index_n     position of this result in the global result set
>     result_id   id of the resulting row
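>
> A minimal sketch of that table (integer types for index_n and
> result_id are assumed):
>
>     CREATE TABLE cache (
>         search_id  serial PRIMARY KEY,
>         index_n    integer,  -- position in the global result set
>         result_id  integer   -- id of the resulting row
>     );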
>
> Then a search yielding 50k results would INSERT INTO cache ... SELECT
> FROM the search query, with some way to set the index_n column, which
> can be a temporary sequence...
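>
> For instance (a sketch only; "items", the predicate, the ordering and
> the sequence name all stand in for the real thing):
>
>     CREATE TEMP SEQUENCE result_pos;
>
>     INSERT INTO cache (index_n, result_id)
>     SELECT nextval('result_pos'), id
>     FROM (SELECT id
>           FROM items
>           WHERE title ILIKE '%foo%'
>           ORDER BY relevance DESC) AS sorted;  -- fix the order first,
>                                                -- then number the rows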
>
> Then, to display a page, SELECT from the cache table with index_n
> BETWEEN so and so, and join to the data table.
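>
> For example, page 2 at 50 results per page:
>
>     SELECT d.*
>     FROM cache c
>     JOIN items d ON d.id = c.result_id   -- "items" is the data table
>     WHERE c.index_n BETWEEN 51 AND 100
>     ORDER BY c.index_n;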
>
> If you're worried that it might take up too much space: store an integer
> array of result_id instead of just a result_id; this way you insert fewer
> rows and save on disk space. Generate it with a custom aggregate... then
> just grab a row from this table; it contains all the ids of the rows to
> display.
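>
> A sketch of that variant; array_accum is the example aggregate from
> the PostgreSQL documentation, not a built-in:
>
>     CREATE AGGREGATE array_accum (
>         basetype = anyelement,
>         sfunc    = array_append,
>         stype    = anyarray,
>         initcond = '{}'
>     );
>
>     -- one row per page of 50 result ids
>     CREATE TABLE cache_pages AS
>     SELECT (index_n - 1) / 50 AS page_n,
>            array_accum(result_id) AS result_ids
>     FROM cache
>     GROUP BY (index_n - 1) / 50;
>
> (Note that without extra care the order of ids inside each array is
> not guaranteed; sorting on index_n before aggregating usually works in
> practice.)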
>