Thread: Re: Used Memory

Re: Used Memory

From: "Kevin Grittner"
In addition to what Mark pointed out, there is the possibility that a
query is running which is scanning a large table or otherwise bringing
in a large number of pages from disk.  That would first use up all
available unused cache space, and then may start replacing some of your
frequently used data.  This can cause slowness for some time after the
process which flushed the cache has finished, as the displaced pages
are reread and recached.

Keep in mind that the cache could be flushed by some external process,
such as copying disk files.

The use of free memory for caching is not slowing you down; but if it
coincides with slowness, it could be a useful clue.

-Kevin


Re: Used Memory

From: "Craig A. James"
Kevin Grittner wrote:
> In addition to what Mark pointed out, there is the possibility that a
> query
> is running which is scanning a large table or otherwise bringing in a
> large number of pages from disk.  That would first use up all available
> unused cache space, and then may start replacing some of your
> frequently used data.

An LRU cache is often a bad strategy for database applications.  There are two illustrations that show why.

1. You have an index that's used for EVERY search, but each search returns a large and unique set of rows.  If it
happens that the rows returned exceed the system's cache size, part or all of your index will be flushed with EVERY
query.

2. You do a sequential scan of a table that's one block bigger than the file system cache, then you do it again.  At
the beginning of the second scan, the first block of the table will have just been swapped out because it was the
oldest, so the file system brings it back in, replacing the second block, which is now the oldest.  As you scan the
table, each block of the table is swapped out JUST BEFORE you get to it.  At the start of your query, the file system
might have had 99.9% of the relevant data in memory, but it swaps out 100% of it as your query progresses.

Scenario 2 above is interesting because a system that is performing very well can suddenly experience a catastrophic
performance decline when the size of the data exceeds a critical limit - the file system's available cache.
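
To make the cliff concrete, here is a toy simulation of the eviction pattern in scenario 2 (Python; the block counts
are made up, and this models only a strict LRU cache, not any particular file system):

    from collections import OrderedDict

    # Toy model of scenario 2: an LRU "cache" one block smaller than the table.
    # The block counts are invented; only the eviction pattern matters.
    CACHE_BLOCKS = 100
    TABLE_BLOCKS = 101          # one block bigger than the cache

    cache = OrderedDict()       # block number -> present, kept in LRU order
    hits = misses = 0

    def read_block(b):
        global hits, misses
        if b in cache:
            hits += 1
            cache.move_to_end(b)            # mark as most recently used
        else:
            misses += 1
            if len(cache) >= CACHE_BLOCKS:
                cache.popitem(last=False)   # evict the least recently used block
            cache[b] = True

    for scan in range(2):                   # two full sequential scans
        for b in range(TABLE_BLOCKS):
            read_block(b)

    print("hits=%d misses=%d" % (hits, misses))   # prints hits=0 misses=202

Even though 100 of the table's 101 blocks are cached at every instant, the block needed next is always the one that
was evicted a moment ago, so the second scan gets zero hits.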

LRU works well if your frequently-used data is used often enough to keep it in memory.  But many applications don't
have that luxury.  It's often the case that a single query will exceed the file system's cache size.  The file system
cache is "dumb" -- its strategy is too simple for a relational database.

What's needed is a way for the application developer to explicitly say, "This object is frequently used, and I want it
kept in memory."
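
The closest thing a userland process has today is only a hint, not a guarantee.  As a rough sketch (Python on Linux;
the path is made up, and posix_fadvise(2) is purely advisory -- real pinning would need something like mlock(2)):

    import os

    def hint_keep_cached(path):
        """Ask the kernel to read a file's pages in and favour keeping them cached."""
        fd = os.open(path, os.O_RDONLY)
        try:
            size = os.fstat(fd).st_size
            # Advisory only: the kernel may prefetch and retain these pages,
            # but nothing stops another process from pushing them out later.
            os.posix_fadvise(fd, 0, size, os.POSIX_FADV_WILLNEED)
        finally:
            os.close(fd)

    hint_keep_cached("/path/to/hot/table/file")   # hypothetical path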

Craig

Re: Used Memory

From: Scott Marlowe
On Mon, 2005-10-24 at 12:00, Craig A. James wrote:
> Kevin Grittner wrote:
> > In addition to what Mark pointed out, there is the possibility that a
> > query
> > is running which is scanning a large table or otherwise bringing in a
> > large number of pages from disk.  That would first use up all available
> > unused cache space, and then may start replacing some of your
> > frequently used data.
>
> An LRU cache is often a bad strategy for database applications.  There are two illustrations that show why.
>
> 1. You have an index that's used for EVERY search, but each search returns a large and unique set of rows.  If it
> happens that the rows returned exceed the system's cache size, part or all of your index will be flushed with EVERY
> query.
>
> 2. You do a sequential scan of a table that's one block bigger than the file system cache, then you do it again.  At
> the beginning of the second scan, the first block of the table will have just been swapped out because it was the
> oldest, so the file system brings it back in, replacing the second block, which is now the oldest.  As you scan the
> table, each block of the table is swapped out JUST BEFORE you get to it.  At the start of your query, the file system
> might have had 99.9% of the relevant data in memory, but it swaps out 100% of it as your query progresses.
>
> Scenario 2 above is interesting because a system that is performing very well can suddenly experience a catastrophic
> performance decline when the size of the data exceeds a critical limit - the file system's available cache.
>
> LRU works well if your frequently-used data is used often enough to keep it in memory.  But many applications don't
> have that luxury.  It's often the case that a single query will exceed the file system's cache size.  The file system
> cache is "dumb" -- its strategy is too simple for a relational database.
>
> What's needed is a way for the application developer to explicitly say, "This object is frequently used, and I want
> it kept in memory."

There's an interesting conversation happening on the linux kernel
hackers mailing list right about now that applies:

http://www.gossamer-threads.com/lists/linux/kernel/580789

Re: Used Memory

From: "Craig A. James"
Scott Marlowe wrote:
>>What's needed is a way for the application developer to explicitly say,
>> "This object is frequently used, and I want it kept in memory."
>
> There's an interesting conversation happening on the linux kernel
> hackers mailing list right about now that applies:
>
> http://www.gossamer-threads.com/lists/linux/kernel/580789

Thanks for the pointer.  If you're a participant in that mailing list, maybe you could forward this comment...

A fundamental flaw in the kernel, which goes WAY back to early UNIX implementations, is that the nice(1) setting of a
program only applies to CPU usage, not to other resources.  In this case, the file-system cache has no priority, so even
if I set postmaster's nice(1) value to a very high priority, any pissant process with the lowest priority possible can
come along with a "cat some-big-file >/dev/null" and trash my cached file-system pages.  It's essentially a
denial-of-service mechanism that's built in to the kernel.
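
The low-priority offender could at least be polite about it.  As a sketch (Python; the file name is invented, and
posix_fadvise(2) is again only advisory), a bulk reader can tell the kernel to drop the pages it has just consumed,
so it churns through the file without evicting anyone else's cached data:

    import os

    def cache_friendly_read(path, chunk=1 << 20):
        """Read a big file without trying to keep its pages in the page cache."""
        fd = os.open(path, os.O_RDONLY)
        try:
            done = 0
            while True:
                data = os.read(fd, chunk)
                if not data:
                    break
                done += len(data)
                # We won't need these pages again; let the kernel reclaim them
                # first instead of evicting other processes' cached data.
                os.posix_fadvise(fd, 0, done, os.POSIX_FADV_DONTNEED)
        finally:
            os.close(fd)

    cache_friendly_read("/tmp/some-big-file")   # the hypothetical "cat" offender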

The kernel group's discussion on the heuristics of how and when to toss stale cache pages should have a strong nice(1)
component to it.  A process with a low priority should not be allowed to toss memory from a higher-priority process
unless there is no other source of memory.

Getting back to Postgres, the same points that the linux kernel group are discussing apply to Postgres.  There is
simply no way to devise a heuristic that comes even close to what the app developer can tell you.  A mechanism that
allowed an application to say, "Keep this table in memory" is the only way.  App developers should be advised to use it
sparingly, because most of the time the system is pretty good at memory management, and such a mechanism hobbles the
system's ability to manage.  But when it's needed, there is no substitute.

Craig


Re: Used Memory

From: "Christian Paul B. Cosinas"
Hi to all those who replied.  Thank you.

I monitored my database server a while ago and found out that memory is
used extensively when I am fetching records from the database.  I use the
command "fetch all" in my VB code and put the result in a recordset.  This
command also drives CPU utilization up heavily.

Is there something wrong with my code, or is this just the way PostgreSQL
behaves and something I cannot do anything about?

I monitored just one workstation connecting to the database server, and it
is already eating up about 20% of the database server's CPU.

I don't think that will be workable for our system, since we expect about
25 PCs connecting to the database server most of the time.

-----Original Message-----
From: pgsql-performance-owner@postgresql.org
[mailto:pgsql-performance-owner@postgresql.org] On Behalf Of Craig A. James
Sent: Monday, October 24, 2005 9:47 PM
To: Scott Marlowe
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Used Memory

Scott Marlowe wrote:
>>What's needed is a way for the application developer to explicitly
>>say,  "This object is frequently used, and I want it kept in memory."
>
> There's an interesting conversation happening on the linux kernel
> hackers mailing list right about now that applies:
>
> http://www.gossamer-threads.com/lists/linux/kernel/580789

Thanks for the pointer.  If you're a participant in that mailing list, maybe
you could forward this comment...

A fundamental flaw in the kernel, which goes WAY back to early UNIX
implementations, is that the nice(1) setting of a program only applies to
CPU usage, not to other resources.  In this case, the file-system cache has
no priority, so even if I set postmaster's nice(1) value to a very high
priority, any pissant process with the lowest priority possible can come
along with a "cat some-big-file >/dev/null" and trash my cached file-system
pages.  It's essentially a denial-of-service mechanism that's built in to
the kernel.

The kernel group's discussion on the heuristics of how and when to toss
stale cache pages should have a strong nice(1) component to it.  A process
with a low priority should not be allowed to toss memory from a
higher-priority process unless there is no other source of memory.

Getting back to Postgres, the same points that the linux kernel group are
discussing apply to Postgres.  There is simply no way to devise a heuristic
that comes even close to what the app developer can tell you.  A mechanism
that allowed an application to say, "Keep this table in memory" is the only
way.  App developers should be advised to use it sparingly, because most of
the time the system is pretty good at memory management, and such a
mechanism hobbles the system's ability to manage.  But when it's needed,
there is no substitute.

Craig






Re: Used Memory

From: Mark Kirkwood
Christian Paul B. Cosinas wrote:
> Hi to all those who replied.  Thank you.
>
> I monitored my database server a while ago and found out that memory is used
> extensively when I am fetching records from the database.  I use the command
> "fetch all" in my VB code and put the result in a recordset.  This command
> also drives CPU utilization up heavily.
>
> Is there something wrong with my code, or is this just the way PostgreSQL
> behaves and something I cannot do anything about?
>
> I monitored just one workstation connecting to the database server, and it
> is already eating up about 20% of the database server's CPU.
>
> I don't think that will be workable for our system, since we expect about
> 25 PCs connecting to the database server most of the time.
>

Could you post the query and the output of EXPLAIN ANALYZE?

In addition, have you run ANALYZE on all the tables in that database?
(Sorry, have to ask :-) ...).
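
Also, "fetch all" into a client-side recordset forces the whole result set to be built and shipped at once, which by
itself accounts for a lot of memory and CPU on big results.  If the result really is big, a server-side cursor fetched
in batches is usually kinder to both ends; most client libraries have an equivalent.  A rough sketch of the idea
(shown here in Python/psycopg2 rather than VB; the connection string, table name and batch size are invented):

    import psycopg2

    # Sketch only: connection details, table name and batch size are invented.
    conn = psycopg2.connect("dbname=mydb user=me")

    # A *named* cursor is a server-side cursor: psycopg2 pulls rows from the
    # backend in batches of `itersize` instead of fetching the whole result
    # set into client memory the way a plain "fetch all" does.
    cur = conn.cursor(name="big_read")
    cur.itersize = 1000
    cur.execute("SELECT * FROM some_big_table")

    for row in cur:      # batches are fetched transparently as we iterate
        pass             # placeholder for real per-row work

    cur.close()
    conn.close()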

cheers

Mark

Re: Used Memory

From: "Christian Paul B. Cosinas"
Hi Mark,

I have quite a few functions, more than 100 in the database :)  And I am
dealing with about 3 million records in one database.
And about 100 databases :)


-----Original Message-----
From: Mark Kirkwood [mailto:markir@paradise.net.nz]
Sent: Tuesday, October 25, 2005 3:07 AM
To: Christian Paul B. Cosinas
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Used Memory

Christian Paul B. Cosinas wrote:
> Hi to all those who replied.  Thank you.
>
> I monitored my database server a while ago and found out that memory is
> used extensively when I am fetching records from the database.  I use
> the command "fetch all" in my VB code and put the result in a recordset.
> This command also drives CPU utilization up heavily.
>
> Is there something wrong with my code, or is this just the way PostgreSQL
> behaves and something I cannot do anything about?
>
> I monitored just one workstation connecting to the database server, and
> it is already eating up about 20% of the database server's CPU.
>
> I don't think that will be workable for our system, since we expect about
> 25 PCs connecting to the database server most of the time.
>

Could you post the query and the output of EXPLAIN ANALYZE?

In addition, have you run ANALYZE on all the tables in that database?
(Sorry, have to ask :-) ...).

cheers

Mark




Re: Used Memory

From: Mark Kirkwood
Christian Paul B. Cosinas wrote:
> Hi Mark,
>
> I have quite a few functions, more than 100 in the database :)  And I am
> dealing with about 3 million records in one database.
> And about 100 databases :)
>

LOL - sorry, I misunderstood your previous message to mean you had
identified *one* query where 'fetch all' was causing the problem!

Having said that, to make much more progress, you probably want to
identify the queries that are consuming your resources, pick one or two
of the particularly bad ones and post 'em.

There are a number of ways to perform said identification; enabling
stats collection might be worth a try.
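
For example, with stats collection on you can poll pg_stat_activity while the workload runs and see which statements
keep showing up.  A rough sketch (Python/psycopg2, connection details invented; on the PostgreSQL releases of this
era the query text only appears when stats_command_string is enabled, idle backends report '<IDLE>', and the column
names differ on newer versions):

    import psycopg2

    # Sketch only: show what each backend is currently running.
    conn = psycopg2.connect("dbname=mydb user=me")
    cur = conn.cursor()
    cur.execute("""
        SELECT usename, query_start, current_query
          FROM pg_stat_activity
         WHERE current_query <> '<IDLE>'
         ORDER BY query_start
    """)
    for usename, started, query in cur.fetchall():
        print(started, usename, query[:80])
    cur.close()
    conn.close()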

regards

Mark