Thread: caching query results

From: "alex b."
hello all


I've been wondering if it's possible to cache query results...

the reason I ask is because of a script I wrote recently... each request
takes up to 4 seconds... that's OK, because it's quite a bit of data... but
instead of collecting the data again and again, some kind of cache
wouldn't be all too bad.

assuming that all queries are always the same - except for the OFFSET...
LIMIT stays the same.

cheers, alex


Re: caching query results

From: "scott.marlowe"
On Wed, 21 May 2003, alex b. wrote:

> hello all
>
>
> I've been wondering, if it was possible to cache the query results...
>
> the reason I ask is because of a script I wrote recently... each request
> takes up to 4 seconds... that ok, because its quite a bit of data... but
> instead of always collecting the data again and again some kind of cache
> wouldn't be all too bad.
>
> assuming that all queries are always the same - except for the OFFSET..
> - LIMIT stays the same.

Query caching is not likely to be added any time soon, as it is generally
believed to be the job of the application to cache the data it needs cached.
Not that a good query cache patch would never be accepted; it's just not
on anyone's list of things to do.

Does your application development environment allow persistent
connections?  If so, then maybe you could declare a cursor and use that?
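The application-side caching suggested above could be sketched roughly like this in Python (the `run_query` callback and the TTL value are illustrative assumptions, not part of any real driver): cache the full result set keyed by query text, then serve OFFSET/LIMIT slices from memory instead of re-running the query.

```python
import time

_cache = {}          # query text -> (timestamp, rows)
CACHE_TTL = 60       # seconds before a cached result set is considered stale

def cached_slice(query, offset, limit, run_query):
    """Return rows[offset:offset+limit], re-running the query only when
    the cache entry is missing or older than CACHE_TTL.
    `run_query` is a hypothetical callback that executes the query and
    returns all rows as a list."""
    now = time.time()
    entry = _cache.get(query)
    if entry is None or now - entry[0] > CACHE_TTL:
        _cache[query] = (now, run_query(query))
    return _cache[query][1][offset:offset + limit]
```

Since all of alex's queries are identical except for OFFSET, one cache entry serves every page of the result.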


Re: caching query results

From: Darko Prenosil
On Wednesday 21 May 2003 16:34, alex b. wrote:
> hello all
>
>
> I've been wondering, if it was possible to cache the query results...
>
> the reason I ask is because of a script I wrote recently... each request
> takes up to 4 seconds... that ok, because its quite a bit of data... but
> instead of always collecting the data again and again some kind of cache
> wouldn't be all too bad.
>
> assuming that all queries are always the same - except for the OFFSET..
> - LIMIT stays the same.
>
> cheers, alex
>

The only way is to use a cursor or a temp table.
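The temp table route could look roughly like this (table and column names are made up for illustration): the expensive query runs once, and later pages are cheap SELECTs against the temporary copy, which lives for the rest of the session.

```sql
BEGIN;
-- run the expensive query once, into a session-local table
CREATE TEMP TABLE result_cache AS
    SELECT * FROM big_table WHERE expensive_condition;
-- page through the cached rows cheaply
SELECT * FROM result_cache ORDER BY id LIMIT 50 OFFSET 100;
COMMIT;
```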



Re: caching query results

From: "alex b."
hello dear people without shaved necks!

as many of you have already told me, cursors are the way to go - now I know!

here it is, kindly provided by BILL G.:

   BEGIN;
   DECLARE <cursorname> CURSOR FOR <query>;
   FETCH <number_of_rows> FROM <cursorname>;
   MOVE {FORWARD|BACKWARD} <number_of_rows> IN <cursorname>;


THANK YOU ALL VERY VIEL ("viel" is German for "much")!!!

I will now have to implement session IDs into my CGIs...

oh, by the way... let's say a transaction has begun and was never
committed... what will happen to it?

is there an automatic rollback after a certain time?
or would there be tons of open transactions?
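The session-ID bookkeeping alex mentions could be sketched like this in Python (purely illustrative; a real CGI would also need to expire old sessions): the first request stores the full result set under a fresh session ID, and follow-up requests pass the ID back and page through the stored rows by offset.

```python
import secrets

_sessions = {}  # session id -> cached rows

def start_session(rows):
    """Cache a result set and hand back a fresh session ID for it."""
    sid = secrets.token_hex(8)
    _sessions[sid] = rows
    return sid

def fetch_page(sid, offset, limit):
    """Return one page of a previously cached result set."""
    rows = _sessions.get(sid)
    if rows is None:
        raise KeyError("unknown or expired session")
    return rows[offset:offset + limit]
```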



Darko Prenosil wrote:
> On Wednesday 21 May 2003 16:34, alex b. wrote:
>
>>hello all
>>
>>
>>I've been wondering, if it was possible to cache the query results...
>>
>>the reason I ask is because of a script I wrote recently... each request
>>takes up to 4 seconds... that ok, because its quite a bit of data... but
>>instead of always collecting the data again and again some kind of cache
>>wouldn't be all too bad.
>>
>>assuming that all queries are always the same - except for the OFFSET..
>>- LIMIT stays the same.
>>
>>cheers, alex
>>
>
>
> The only way is to use cursor or temp table.
>
>
>
>
>


Re: caching query results

From: Darko Prenosil
On Wednesday 21 May 2003 20:26, alex b. wrote:
> hello dear people without shaved necks!
>
> as many of you have already told me cursors are the way to go - now I know!
>
> there it is, kindly provided my BILL G.:
>
>    BEGIN;
>    DECLARE <cursorname> FOR <query>;
>    FETCH <number_of_rows> FROM <cursorname>;
>    MOVE {FORWARD|BACKWARD} <number_of_rows> IN <cursorname>;
>
>
> THANK YOU ALL VERY VIEL (much in german)!!!
>
> I will now have to implement session ID's into my CGI's...
>
> oh by the way... lets say a transaction has begun and was never
> commited.. what will happen to it?
>
> is there a automatic rollback after a certain time?
> or would there be ton's of open transactions?
>

Within a single connection, you cannot start the next transaction until the
previous one has been rolled back or committed. An open transaction is
closed when the connection closes (maybe someone from the hackers team can
tell you more).

> Darko Prenosil wrote:
> > On Wednesday 21 May 2003 16:34, alex b. wrote:
> >>hello all
> >>
> >>
> >>I've been wondering, if it was possible to cache the query results...
> >>
> >>the reason I ask is because of a script I wrote recently... each request
> >>takes up to 4 seconds... that ok, because its quite a bit of data... but
> >>instead of always collecting the data again and again some kind of cache
> >>wouldn't be all too bad.
> >>
> >>assuming that all queries are always the same - except for the OFFSET..
> >>- LIMIT stays the same.
> >>
> >>cheers, alex
> >
> > The only way is to use cursor or temp table.
> >
>
> ---------------------------(end of broadcast)---------------------------
> TIP 3: if posting/reading through Usenet, please send an appropriate
> subscribe-nomail command to majordomo@postgresql.org so that your
> message can get through to the mailing list cleanly


Re: caching query results

From: "Nigel J. Andrews"
List-Maintainers:

Seems to be an issue with the mailing list: it's very quiet for a while and
then quite a few messages turn up at once. In this instance
developers.postgresql.org kept hold of the message for a few hours and then
relay3.phsql.com kept hold of it for a while. Just so you know.

Now to the message:

On Wed, 21 May 2003, alex b. wrote:

> hello dear people without shaved necks!
>
> as many of you have already told me cursors are the way to go - now I know!
>
> there it is, kindly provided my BILL G.:
>
>    BEGIN;
>    DECLARE <cursorname> FOR <query>;
>    FETCH <number_of_rows> FROM <cursorname>;
>    MOVE {FORWARD|BACKWARD} <number_of_rows> IN <cursorname>;
>
>
> THANK YOU ALL VERY VIEL (much in german)!!!
>
> I will now have to implement session ID's into my CGI's...

I had a feeling you'd be doing something like that. I think cursors can't
remain open across a transaction close, and definitely not across a
connection close. But I'll look forward to hearing the opposite.


> oh by the way... lets say a transaction has begun and was never
> commited.. what will happen to it?

It will roll back when the connection closes.

>
> is there a automatic rollback after a certain time?
> or would there be ton's of open transactions?

If the connection never closes then the transaction will remain in progress.


--
Nigel Andrews



Re: caching query results

From: "scott.marlowe"
On Wed, 21 May 2003, alex b. wrote:

> hello dear people without shaved necks!
>
> as many of you have already told me cursors are the way to go - now I know!
>
> there it is, kindly provided my BILL G.:
>
>    BEGIN;
>    DECLARE <cursorname> FOR <query>;
>    FETCH <number_of_rows> FROM <cursorname>;
>    MOVE {FORWARD|BACKWARD} <number_of_rows> IN <cursorname>;
>
>
> THANK YOU ALL VERY VIEL (much in german)!!!
>
> I will now have to implement session ID's into my CGI's...
>
> oh by the way... lets say a transaction has begun and was never
> commited.. what will happen to it?
>
> is there a automatic rollback after a certain time?
> or would there be ton's of open transactions?

Well, transactions can't stay open across a disconnect / reconnect, so
you'll have to use some kind of connection pooling to ensure they stay
open between invocations of your CGI scripts.

What environment are you developing in?  Java has pretty good connection
pooling, and so does Perl.  PHP, not so good, but you can work around it
if you understand the underlying, uber-simple persistent connection
methodology.

There may be some open source connection pooling packages that can hold
the transactions open for your CGI scripts, but I've not played with
actual CGI in a few years now, so I have no clue how well anything like
that would work.
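A toy Python sketch of the pooling idea (this is not a production pooler, and `make_conn` stands in for a real driver's connect call): a fixed set of connections is created up front, and callers borrow one and put it back instead of reconnecting on every request.

```python
import queue

class Pool:
    """Minimal fixed-size connection pool sketch."""

    def __init__(self, make_conn, size=4):
        # pre-open all connections; they are reused, never re-created
        self._free = queue.Queue()
        for _ in range(size):
            self._free.put(make_conn())

    def acquire(self):
        # blocks until some caller releases a connection
        return self._free.get()

    def release(self, conn):
        self._free.put(conn)
```

As scott notes, a real pool generally won't hand the *same* connection back to the same client each time, which is exactly what an open transaction or cursor would require.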


Re: caching query results

From: "alex b."
hello!

yes, I have already found out that a transaction is over with each
CGI connection and its disconnection...

now, I'd be interested in that connection pooling... any ideas where
I'd have to start?

TIA


scott.marlowe wrote:
> On Wed, 21 May 2003, alex b. wrote:
>
>
>>hello dear people without shaved necks!
>>
>>as many of you have already told me cursors are the way to go - now I know!
>>
>>there it is, kindly provided my BILL G.:
>>
>>   BEGIN;
>>   DECLARE <cursorname> FOR <query>;
>>   FETCH <number_of_rows> FROM <cursorname>;
>>   MOVE {FORWARD|BACKWARD} <number_of_rows> IN <cursorname>;
>>
>>
>>THANK YOU ALL VERY VIEL (much in german)!!!
>>
>>I will now have to implement session ID's into my CGI's...
>>
>>oh by the way... lets say a transaction has begun and was never
>>commited.. what will happen to it?
>>
>>is there a automatic rollback after a certain time?
>>or would there be ton's of open transactions?
>
>
> Well, transactions can't stay open across a disconnect / reconnect, so
> you'll have to use some kind of connection pooling to ensure they stay
> open between invocations of your cgi scripts.
>
> What environment are you developing in?  Java has pretty good connection
> pooling, and so does PERL.  PHP, not so good, but you can work around it
> if you understand the underlying, uber simple persistant connection
> methodology.
>
> There may be some open source connection pooling packages that can hold
> the transactions open for your cgi scripts, but I've not played with
> actual CGI in a few years now, so I have no clue how well any thing like
> that would work.
>
>


Re: caching query results

From: "scott.marlowe"
Before investing a great deal of time in connection pooling, make sure you
actually need it.  You may get better performance for less work / hassle
by just using a temporary table to hold the result set you wanna walk
through.

Persistent connections are, generally, 1000 times (or more) faster to
establish than non-persistent ones, but when non-persistent connects only
take about 1/10,000 of a second, it's no great gain speed-wise to have
persistent connects.

And most pooling systems still don't guarantee that you'll get the same
connection back each time.

On Fri, 23 May 2003, alex b. wrote:

> hello!
>
> yes, I have already found out, that a transaction is over with each
> CGI-connection and its disconnection...
>
> now, I'd be interested in that connection pooling... any ideas, where
> I'd have to start?
>
> TIA
>
>
> scott.marlowe wrote:
> > On Wed, 21 May 2003, alex b. wrote:
> >
> >
> >>hello dear people without shaved necks!
> >>
> >>as many of you have already told me cursors are the way to go - now I know!
> >>
> >>there it is, kindly provided my BILL G.:
> >>
> >>   BEGIN;
> >>   DECLARE <cursorname> FOR <query>;
> >>   FETCH <number_of_rows> FROM <cursorname>;
> >>   MOVE {FORWARD|BACKWARD} <number_of_rows> IN <cursorname>;
> >>
> >>
> >>THANK YOU ALL VERY VIEL (much in german)!!!
> >>
> >>I will now have to implement session ID's into my CGI's...
> >>
> >>oh by the way... lets say a transaction has begun and was never
> >>commited.. what will happen to it?
> >>
> >>is there a automatic rollback after a certain time?
> >>or would there be ton's of open transactions?
> >
> >
> > Well, transactions can't stay open across a disconnect / reconnect, so
> > you'll have to use some kind of connection pooling to ensure they stay
> > open between invocations of your cgi scripts.
> >
> > What environment are you developing in?  Java has pretty good connection
> > pooling, and so does PERL.  PHP, not so good, but you can work around it
> > if you understand the underlying, uber simple persistant connection
> > methodology.
> >
> > There may be some open source connection pooling packages that can hold
> > the transactions open for your cgi scripts, but I've not played with
> > actual CGI in a few years now, so I have no clue how well any thing like
> > that would work.
> >
> >
>
>


Re: caching query results

From: "scott.marlowe"
by the way, JSP supports connection pooling quite well.

Do you have a specific development environment you have to work in?

On Fri, 23 May 2003, alex b. wrote:

> hello!
>
> yes, I have already found out, that a transaction is over with each
> CGI-connection and its disconnection...
>
> now, I'd be interested in that connection pooling... any ideas, where
> I'd have to start?
>
> TIA
>
>
> scott.marlowe wrote:
> > On Wed, 21 May 2003, alex b. wrote:
> >
> >
> >>hello dear people without shaved necks!
> >>
> >>as many of you have already told me cursors are the way to go - now I know!
> >>
> >>there it is, kindly provided my BILL G.:
> >>
> >>   BEGIN;
> >>   DECLARE <cursorname> FOR <query>;
> >>   FETCH <number_of_rows> FROM <cursorname>;
> >>   MOVE {FORWARD|BACKWARD} <number_of_rows> IN <cursorname>;
> >>
> >>
> >>THANK YOU ALL VERY VIEL (much in german)!!!
> >>
> >>I will now have to implement session ID's into my CGI's...
> >>
> >>oh by the way... lets say a transaction has begun and was never
> >>commited.. what will happen to it?
> >>
> >>is there a automatic rollback after a certain time?
> >>or would there be ton's of open transactions?
> >
> >
> > Well, transactions can't stay open across a disconnect / reconnect, so
> > you'll have to use some kind of connection pooling to ensure they stay
> > open between invocations of your cgi scripts.
> >
> > What environment are you developing in?  Java has pretty good connection
> > pooling, and so does PERL.  PHP, not so good, but you can work around it
> > if you understand the underlying, uber simple persistant connection
> > methodology.
> >
> > There may be some open source connection pooling packages that can hold
> > the transactions open for your cgi scripts, but I've not played with
> > actual CGI in a few years now, so I have no clue how well any thing like
> > that would work.
> >
> >
>
>


Re: caching query results

From: "alex b."
scott.marlowe wrote:
> by the way, jsp supports connection pooling quite well.
>
> Do you have a specific development environment you have to work in?

oh yeah, sorry, my bad.
I am working with Perl.

besides: temporary tables also last only one connection, as the
documentation indicates.
the script does connect, and after printing everything to the browser it
dies... as does the connection to postgres.

opening the script with the commands to show the next few rows would
again start the same script, but with a different offset...

I'm beginning to give up and use a very simple solution: temporary
files - tab-delimited. the script would parse the SQL output (from the
file), and I bet this would be almost as fast as querying a table, or
faster... this is just a thought I had, but I think the pure database
solution is a lot fancier and I'd really want to go for that instead...

>
> On Fri, 23 May 2003, alex b. wrote:
>
>
>>hello!
>>
>>yes, I have already found out, that a transaction is over with each
>>CGI-connection and its disconnection...
>>
>>now, I'd be interested in that connection pooling... any ideas, where
>>I'd have to start?
>>
>>TIA
>>
>>
>>scott.marlowe wrote:
>>
>>>On Wed, 21 May 2003, alex b. wrote:
>>>
>>>
>>>
>>>>hello dear people without shaved necks!
>>>>
>>>>as many of you have already told me cursors are the way to go - now I know!
>>>>
>>>>there it is, kindly provided my BILL G.:
>>>>
>>>>  BEGIN;
>>>>  DECLARE <cursorname> FOR <query>;
>>>>  FETCH <number_of_rows> FROM <cursorname>;
>>>>  MOVE {FORWARD|BACKWARD} <number_of_rows> IN <cursorname>;
>>>>
>>>>
>>>>THANK YOU ALL VERY VIEL (much in german)!!!
>>>>
>>>>I will now have to implement session ID's into my CGI's...
>>>>
>>>>oh by the way... lets say a transaction has begun and was never
>>>>commited.. what will happen to it?
>>>>
>>>>is there a automatic rollback after a certain time?
>>>>or would there be ton's of open transactions?
>>>
>>>
>>>Well, transactions can't stay open across a disconnect / reconnect, so
>>>you'll have to use some kind of connection pooling to ensure they stay
>>>open between invocations of your cgi scripts.
>>>
>>>What environment are you developing in?  Java has pretty good connection
>>>pooling, and so does PERL.  PHP, not so good, but you can work around it
>>>if you understand the underlying, uber simple persistant connection
>>>methodology.
>>>
>>>There may be some open source connection pooling packages that can hold
>>>the transactions open for your cgi scripts, but I've not played with
>>>actual CGI in a few years now, so I have no clue how well any thing like
>>>that would work.
>>>
>>>
>>
>>
>
>


Re: caching query results

From: "scott.marlowe"
On Fri, 23 May 2003, alex b. wrote:

>
> scott.marlowe wrote:
> > by the way, jsp supports connection pooling quite well.
> >
> > Do you have a specific development environment you have to work in?
>
> oh yeah, sorry, my bad.
> I am working with perl.
>
> besides: temporary tables also last only one connection, as the
> documentation indicates.
> the script does connect, and after printing everything to the browser it
> dies... as well as the connection to postgres.
>
> opening the script with the commands to show the next few rows would
> again start the same script, but with a different offset...
>
> I'm beginning to give it up, and use a very simple solution: temporary
> files - tab delimited. the script would parse the SQL output (from the
> file) and I bet this would be almost as fast as querying a table, or
> faster... this is just a thought I had, but I think the pure database
> solution is a lot fancier and I'd really want to go for that instead...

I meant to use a regular table that you use as though it were a temp
table.  But your temp file system should work just as well too.
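Using a regular table as a per-session cache, as suggested here, might look roughly like this (all names are illustrative): each CGI invocation reads its own rows back by session ID, and a periodic job can delete stale sessions.

```sql
-- a persistent table that outlives any one CGI connection
CREATE TABLE result_cache (
    session_id text,
    row_num    serial,
    data       text
);

-- run the expensive query once per session, storing its output
INSERT INTO result_cache (session_id, data)
    SELECT 'abc123', some_column FROM big_table WHERE expensive_condition;

-- later invocations page through the stored rows
SELECT data FROM result_cache
    WHERE session_id = 'abc123'
    ORDER BY row_num LIMIT 50 OFFSET 100;

-- cleanup once the session is done (or from a cron job)
DELETE FROM result_cache WHERE session_id = 'abc123';
```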