Thread: How to reduce writing on disk ? (90 gb on pgsql_tmp)
Hi all,

We have a big database (more than 300 GB) and we run a lot of queries each minute. However, once an hour, the (very complex) query writes A LOT to disk (more than 95 GB!).

We have 64 GB of RAM and this is our config:

And my error on the query is:

Do you know how to solve this problem?

Best regards,
Benjamin.

--
View this message in context: http://postgresql.nabble.com/How-to-reduce-writing-on-disk-90-gb-on-pgsql-tmp-tp5852321.html
Sent from the PostgreSQL - performance mailing list archive at Nabble.com.
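[Editor's note: for readers hitting the same symptom, the temporary spill files in question live under `pgsql_tmp` inside the data directory, so their size can be watched directly while the query runs. The path below is an assumption for a Debian-style 9.3 install; point `PGDATA` at your actual cluster.]

```shell
# Point this at your actual data directory; the default here is an assumption
PGDATA="${PGDATA:-/var/lib/postgresql/9.3/main}"

# Total size of temporary spill files right now. The directory may not
# exist at all when no query is currently spilling to disk.
du -sh "$PGDATA"/base/pgsql_tmp 2>/dev/null || echo "no temporary files in use"
```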
You should repost this directly and not through Nabble. It has wrapped your code in raw tags which the PostgreSQL mailing list software strips.
On Wednesday, June 3, 2015, ben.play <benjamin.cohen@playrion.com> wrote:
> Hi all,
> We have a big database (more than 300 GB) and we run a lot of queries each
> minute.
> However, once an hour, the (very complex) query writes A LOT to disk
> (more than 95 GB!)
> We have 64 GB of RAM and this is our config:
> And my error on the query is:
> Do you know how to solve this problem?
> Best regards,
> Benjamin.
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
SQLSTATE[53100]: Disk full: 7 ERROR: could not write block 1099247 of temporary file

It looks like there is no room to write the temporary file. Try limiting temporary file size by setting the temp_file_limit GUC.
Hi Benjamin,
It looks like you are facing a disk space issue with these queries. To avoid it, you can do the following:
1) Increase the work_mem parameter at the session level before executing the queries.
2) If you observe the disk space issue only for a particular user's queries, increase the work_mem parameter at the user level.
3) Check with the developer to tune the query.
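[Editor's note: for reference, suggestions (1) and (2) look like this in SQL; the role name `batch_user` and the 512MB value are placeholders, not recommendations.]

```sql
-- (1) Session level: affects only the current connection
SET work_mem = '512MB';

-- (2) User level: stored in the catalog and picked up by future sessions
--     of that role (batch_user is a hypothetical role name)
ALTER ROLE batch_user SET work_mem = '512MB';
```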
On Wed, Jun 3, 2015 at 6:41 PM, amulsul <sul_amul@yahoo.co.in> wrote:
> SQLSTATE[53100]: Disk full: 7 ERROR: could not write block 1099247 of
> temporary file
>
> It looks like there is no room to write the temporary file. Try limiting
> temporary file size by setting the temp_file_limit GUC.
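[Editor's note: the temp_file_limit suggestion can be sketched as follows; the 20GB cap is an arbitrary example value. Note that a query exceeding the limit is cancelled with an error rather than throttled.]

```sql
-- Per-session cap on total temporary file space
SET temp_file_limit = '20GB';

-- Or cluster-wide in postgresql.conf (takes effect on reload):
-- temp_file_limit = '20GB'    # default of -1 means no limit
```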
On 06/03/15 15:27, chiru r wrote:
> Hi Benjamin,
>
> It looks like you are facing a disk space issue with these queries.
> To avoid it, you can do the following:
> 1) Increase the work_mem parameter at the session level before
> executing the queries.
> 2) If you observe the disk space issue only for a particular user's
> queries, increase the work_mem parameter at the user level.

The suggestion to increase work_mem is a bit naive, IMHO. The query is writing ~95GB to disk, and it usually takes more space to keep the same data in memory. They only have 64GB of RAM ...

In the good case, it will crash just like now. In the worst case, the OOM killer will intervene, possibly crashing the whole database.

> 3) Check with the developer to tune the query.

That's a better possibility. Sadly, we don't know what the query is doing, so we can't judge how much it can be optimized.

--
Tomas Vondra                      http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
The query is (unfortunately) generated by Doctrine 2 (Symfony 2).
We can’t change the query easily.
This is my config:

max_connections = 80
shared_buffers = 15GB
work_mem = 384MB
maintenance_work_mem = 1GB
#temp_buffers = 8MB
#temp_file_limit = -1
effective_cache_size = 44GB

If I set a temp_file_limit ... will all my queries that have to write to disk crash?

As you can see ... I have 64 GB of RAM, but less than 3 GB is used!
ben@bdd:/home/benjamin# free -m
             total       used       free     shared    buffers     cached
Mem:         64456      64141        315      15726         53      61761
-/+ buffers/cache:       2326      62130
Swap:         1021         63        958
Thanks guys for your help :)
On 3 June 2015, at 15:51, Tomas Vondra-4 [via PostgreSQL] <[hidden email]> wrote:
On 06/03/15 15:27, chiru r wrote:
> Hi Benjamin,
>
> It looks like you are facing a disk space issue with these queries.
> To avoid it, you can do the following:
> 1) Increase the work_mem parameter at the session level before executing
> the queries.
> 2) If you observe the disk space issue only for a particular user's
> queries, increase the work_mem parameter at the user level.
The suggestion to increase work_mem is a bit naive, IMHO. The query is
writing ~95GB to disk, and it usually takes more space to keep the same
data in memory. They only have 64GB of RAM ...
In the good case, it will crash just like now. In the worst case, the
OOM killer will intervene, possibly crashing the whole database.
> 3) Check with the developer to tune the query.
That's a better possibility. Sadly, we don't know what the query is
doing, so we can't judge how much it can be optimized.
--
Tomas Vondra http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 06/03/15 16:06, ben.play wrote:
> The query is (unfortunately) generated by Doctrine 2 (Symfony 2).
> We can’t change the query easily.

Well, then you'll probably have to buy more RAM, apparently.

> This is my config:
>
> max_connections = 80
> shared_buffers = 15GB
> work_mem = 384MB
> maintenance_work_mem = 1GB
> #temp_buffers = 8MB
> #temp_file_limit = -1
> effective_cache_size = 44GB
>
> If I set a temp_file_limit ... will all my queries that have to write to
> disk crash?
>
> As you can see ... I have 64 GB of RAM, but less than 3 GB is used!
>
> ben@bdd:/home/benjamin# free -m
>              total       used       free     shared    buffers     cached
> Mem:         64456      64141        315      15726         53      61761
> -/+ buffers/cache:       2326      62130
> Swap:         1021         63        958

I don't see why you think you have less than 3GB used. The output you posted clearly shows there's only ~300MB memory free - there's 15GB shared buffers and ~45GB of page cache (file system cache).

But you still haven't shown us the query (the EXPLAIN ANALYZE of it), so we can't really give you advice.

--
Tomas Vondra                      http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Wed, Jun 3, 2015 at 8:56 AM, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:
> On 06/03/15 16:06, ben.play wrote:
>> As you can see ... I have 64 GB of RAM, but less than 3 GB is used!
>
> I don't see why you think you have less than 3GB used. The output you
> posted clearly shows there's only ~300MB memory free - there's 15GB
> shared buffers and ~45GB of page cache (file system cache).

Because you subtract cached from used to see how much real spare memory you have. The kernel will dump cached mem as needed to free up space for memory usage. So 64141 - 61761 = 2380MB used.
On Wed, Jun 3, 2015 at 11:56 AM, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:
> On 06/03/15 16:06, ben.play wrote:
>> The query is (unfortunately) generated by Doctrine 2 (Symfony 2).
>> We can’t change the query easily.
>
> Well, then you'll probably have to buy more RAM, apparently.

There's an easy way to add disk space for this kind of thing. Add a big fat rotational HD (temp tables are usually written and scanned sequentially, so rotational performs great), format it of course, and create a tablespace pointing to it. Then set it as the default in temp_tablespaces (postgresql.conf), or do it in the big query's session (I'd recommend the global option if you don't already use a separate tablespace for temporary tables).

Not only will it give you the necessary space, it will also be substantially faster. You'll have to be careful about backups, though (the move from one filesystem to two filesystems always requires changes to backup strategies).
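[Editor's note: the steps above might look like this; the tablespace name and device path are assumptions, and the GRANT is needed so ordinary roles can actually use the tablespace.]

```sql
-- Directory must already exist and be owned by the postgres OS user
CREATE TABLESPACE temp_hdd LOCATION '/mnt/bighdd/pg_temp';

-- Allow all roles to create (temporary) objects in it
GRANT CREATE ON TABLESPACE temp_hdd TO PUBLIC;

-- Then, to make it the default spill target, in postgresql.conf:
-- temp_tablespaces = 'temp_hdd'
```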
On 06/03/15 17:09, Scott Marlowe wrote:
> Because you subtract cached from used to see how much real spare
> memory you have. The kernel will dump cached mem as needed to free up
> space for memory usage. So 64141 - 61761 = 2380MB used.

Well, except that 15GB of that is shared_buffers, and I wouldn't call that 'free'. Also, I don't see page cache as entirely free - you probably want at least some caching at this level.

In any case, even if all 64GB were free, this would not be enough for the query that needs >95GB for temp files.

--
Tomas Vondra                      http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Wed, Jun 3, 2015 at 1:24 PM, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:
> Well, except that 15GB of that is shared_buffers, and I wouldn't call that
> 'free'. Also, I don't see page cache as entirely free - you probably want
> at least some caching at this level.
>
> In any case, even if all 64GB were free, this would not be enough for the
> query that needs >95GB for temp files.

You can argue all you want, but this machine has plenty of free memory right now, and unless the OP goes crazy and cranks up work_mem to some much higher level it'll stay that way, which is good. There's far, far more than 300MB free here. At the drop of a hat there can be ~60G freed up as needed, either for shared_buffers or work_mem or other things to happen.

Cache doesn't count as "used" in terms of real memory pressure. I.e. you're not gonna start getting swapping because you need more memory; it'll just come from the cache. Cache is free memory. If you think of it any other way when you're looking at memory usage and pressure on things like swap, you're gonna make some bad decisions.
On Wed, Jun 3, 2015 at 6:18 PM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
> You can argue all you want, but this machine has plenty of free memory
> right now, and unless the OP goes crazy and cranks up work_mem to some
> much higher level it'll stay that way, which is good. There's far, far
> more than 300MB free here. At the drop of a hat there can be ~60G
> freed up as needed, either for shared_buffers or work_mem or other
> things to happen. Cache doesn't count as "used" in terms of real
> memory pressure. I.e. you're not gonna start getting swapping because
> you need more memory; it'll just come from the cache.

In my experience, dumping the buffer cache as heavily as that counts as thrashing. Either concurrent or future queries will have to go to disk, and that will throw performance out the window, which is never quite so OK.

It is generally better to let pg use that temporary file (unless the in-memory strategy happens to be much faster, say using a hash instead of a sort, which usually doesn't happen in my experience for those sizes anyway) and let the OS handle the pressure those dirty buffers cause. The OS will usually handle it better than work_mem.
On 06/03/15 23:18, Scott Marlowe wrote:
> You can argue all you want, but this machine has plenty of free memory
> right now, and unless the OP goes crazy and cranks up work_mem to some
> much higher level it'll stay that way, which is good. There's far, far
> more than 300MB free here. At the drop of a hat there can be ~60G
> freed up as needed, either for shared_buffers or work_mem or other
> things to happen.

Please, could you explain how you free 60GB 'as needed' when 15GB of that is actually used for shared buffers? Also, we don't know how much of that cache is 'dirty', which makes it more difficult to free.

What is more important, though, is the amount of memory. The OP reported that the query writes ~95GB of temp files (and dies because of a full disk, so there may be more). The on-disk format is usually more compact than the in-memory representation - for example, an on-disk sort often needs 3x less space than an in-memory qsort. So we can assume the query needs >95GB of data. Can you explain how that's going to fit into the 64GB RAM?

> Cache is free memory. If you think of it any other way when you're
> looking at memory usage and pressure on things like swap, you're
> gonna make some bad decisions.

Cache is not free memory - it's there for a purpose and usually plays a significant role in performance. Sure, it may be freed and used for other purposes, but that has consequences - e.g. it impacts performance of other queries etc. You generally don't want to do that on production.

--
Tomas Vondra                      http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 06/03/2015 03:16 PM, Tomas Vondra wrote:
> What is more important, though, is the amount of memory. The OP reported
> that the query writes ~95GB of temp files (and dies because of a full
> disk, so there may be more). The on-disk format is usually more compact
> than the in-memory representation - for example, an on-disk sort often
> needs 3x less space than an in-memory qsort. So we can assume the query
> needs >95GB of data. Can you explain how that's going to fit into the
> 64GB RAM?
>
> Cache is not free memory - it's there for a purpose and usually plays a
> significant role in performance. Sure, it may be freed and used for
> other purposes, but that has consequences - e.g. it impacts performance
> of other queries etc. You generally don't want to do that on production.

Exactly. If your cache is reduced, your performance is reduced, because fewer things are in cache. It is not free memory. Also, the command "free" is not useful in this scenario. It is almost always better to use sar, so you can see where the data points are that free is using.

Sincerely,

JD

--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Announcing "I'm offended" is basically telling the world you can't
control your own emotions, so everyone else should do it for you.
On 2015-06-03 16:29, Joshua D. Drake wrote:
> Exactly. If your cache is reduced, your performance is reduced, because
> fewer things are in cache. It is not free memory. Also, the command
> "free" is not useful in this scenario. It is almost always better to use
> sar, so you can see where the data points are that free is using.

It's one thing to consciously keep free memory for the OS cache, but you should not take the "free" column from the first line of output of the program free as meaning that's all there is left, or that you need all that memory.

You should look at "used" from the second line ("-/+ buffers/cache"). That value is what the kernel and all the apps are using on your machine. Add whatever you want to have for OS cache, and this is the total amount of memory you want in your machine.

Note that for a machine that has run long enough, and done enough I/O ops, "free" from the first line will always be close to 0, because the OS tries to use as much memory as possible for caching; do enough I/O and you'll fill that up.

--
http://yves.zioup.com
gpg: 4096R/32B0F416
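[Editor's note: one way to put numbers on this on Linux, without relying on free's formatting, is to read /proc/meminfo directly. The sum below treats all of Buffers and Cached as reclaimable, which, as discussed in this thread, is optimistic.]

```shell
# Approximate memory available for new work: truly free pages plus the
# buffers and page cache the kernel could reclaim under pressure.
awk '/^(MemFree|Buffers|Cached):/ { sum += $2 }
     END { printf "%.0f MB potentially available\n", sum / 1024 }' /proc/meminfo
```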
On 06/04/15 01:54, Yves Dorfsman wrote:
> It's one thing to consciously keep free memory for the OS cache, but
> you should not take the "free" column from the first line of output of
> the program free as meaning that's all there is left, or that you need
> all that memory.

No one suggested using the 'free' column this way, so I'm not sure what you're responding to?

> You should look at "used" from the second line ("-/+ buffers/cache").
> That value is what the kernel and all the apps are using on your
> machine. Add whatever you want to have for OS cache, and this is the
> total amount of memory you want in your machine.

Except that the second line is not particularly helpful either, because it does not clearly account for the shared buffers, nor does it show what part of the page cache is dirty, etc.

> Note that for a machine that has run long enough, and done enough
> I/O ops, "free" from the first line will always be close to 0,
> because the OS tries to use as much memory as possible for caching;
> do enough I/O and you'll fill that up.

That's generally true, but the assumption is that on a 300GB database the page cache has a significant benefit for performance. What makes this approach utterly futile, however, is the fact that the OP has only 64GB of RAM (and only ~45GB of that in page cache), and the query writes >95GB of temp files to disk (and then fails). So even if you drop the whole page cache, the query will fail anyway.

--
Tomas Vondra                      http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Wed, Jun 3, 2015 at 4:29 PM, Joshua D. Drake <jd@commandprompt.com> wrote:
> Exactly. If your cache is reduced, your performance is reduced, because
> fewer things are in cache. It is not free memory. Also, the command
> "free" is not useful in this scenario. It is almost always better to use
> sar, so you can see where the data points are that free is using.

But if that WAS happening he wouldn't still HAVE 60G of cache! That's my whole point. He's NOT running out of memory. He's not even having to dump cache right now.
On Wed, Jun 3, 2015 at 6:53 PM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
> But if that WAS happening he wouldn't still HAVE 60G of cache! That's
> my whole point. He's NOT running out of memory. He's not even having
> to dump cache right now.

Further, if he started using a few gig here for this one query it wouldn't have a big impact on cache (60G - 1G, etc.) but might make it much faster, as spilling to disk is a lot less intrusive when you've got a bigger chunk of RAM to work in. OTOH, doing something like setting work_mem to 60G would likely be fatal.

But he's not down to 3GB of memory by any kind of imagination. Any working machine will slowly, certainly fill its caches, since it's not using the memory for anything else. That's normal. As long as you're not blowing out the cache you're fine.
On 04/06/15 12:58, Scott Marlowe wrote:
> Further, if he started using a few gig here for this one query it
> wouldn't have a big impact on cache (60G - 1G, etc.) but might make it
> much faster, as spilling to disk is a lot less intrusive when you've
> got a bigger chunk of RAM to work in. OTOH, doing something like
> setting work_mem to 60G would likely be fatal.
>
> But he's not down to 3GB of memory by any kind of imagination. Any
> working machine will slowly, certainly fill its caches, since it's
> not using the memory for anything else. That's normal. As long as
> you're not blowing out the cache you're fine.

I agree with Scott's analysis here. It seems to me that the issue is the query(s) using too much disk space. As others have said, it may not be practical to raise work_mem to the point where it all happens in memory... so you probably need to:

- get more disk, or
- tweak postgres params to get a less disk-hungry plan (we need to see that explain analyze)!

Cheers

Mark
On 06/04/15 02:58, Scott Marlowe wrote:
> But if that WAS happening he wouldn't still HAVE 60G of cache!
> That's my whole point. He's NOT running out of memory. He's not
> even having to dump cache right now.

No one claimed he's running out of memory ... What I claimed is that considering page cache equal to free memory is not really appropriate, because it is used for caching data, which plays a significant role.

Regarding the "free" output, we have no clue when the "free" command was executed. It might have been executed while the query was running, right after it failed, or long after that. That has significant impact on the interpretation of the output.

Also, we have no clue what happens on the machine, so it's possible there are other queries competing for the page cache, quickly reusing free memory (used by the large query moments ago) for page cache.

And finally, we have no clue what plan the query is using, so we don't know how much memory it uses before it starts spilling to disk. For example, it might easily be a single sort node, taking only 384MB (the work_mem) of RAM before it starts spilling to disk.

> Further, if he started using a few gig here for this one query it
> wouldn't have a big impact on cache (60G - 1G, etc.) but might make
> it much faster, as spilling to disk is a lot less intrusive when
> you've got a bigger chunk of RAM to work in. OTOH, doing something
> like setting work_mem to 60G would likely be fatal.

It'd certainly be fatal, because this query is spilling >95G to disk, and keeping that in memory would easily require 2-3x more space.

> But he's not down to 3GB of memory by any kind of imagination. Any
> working machine will slowly, certainly fill its caches, since it's
> not using the memory for anything else. That's normal. As long as
> you're not blowing out the cache you're fine.

Once again, what about the 15GB shared buffers? Not that it'd change anything, really.

--
Tomas Vondra                      http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Hi,

Thank you a lot for your answer. I've done that (created a tablespace on another HD with the postgres role, and put it as the main temp_tablespace in the conf).

But ... my command ~# df shows me that all queries use the default tablespace ...

These were my commands (the directory is owned by postgres):

CREATE TABLESPACE hddtablespace LOCATION '/media/hdd/pgsql';
ALTER TABLESPACE hddtablespace OWNER TO postgres;

SHOW temp_tablespaces;
> hddtablespace

In /media/hdd/pgsql I have only one empty directory (PG_9.3_201306121).

Do you have any tips?

Thanks a lot guys ...
On Tue, Jun 9, 2015 at 12:58 PM, ben.play <benjamin.cohen@playrion.com> wrote:
> I've done that (created a tablespace on another HD with the postgres
> role, and put it as the main temp_tablespace in the conf).
>
> But ... my command ~# df shows me that all queries use the default
> tablespace ...
>
> These were my commands (the directory is owned by postgres):
> CREATE TABLESPACE hddtablespace LOCATION '/media/hdd/pgsql';
> ALTER TABLESPACE hddtablespace OWNER TO postgres;
>
> SHOW temp_tablespaces;
>> hddtablespace
>
> In /media/hdd/pgsql I have only one empty directory (PG_9.3_201306121).

You have to grant public CREATE permissions on the tablespace, otherwise no one will have permission to use it.
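[Editor's note: using the tablespace name from the commands quoted above, the missing grant would be:]

```sql
-- Without this, only the tablespace owner may place objects
-- (including temp files) in it
GRANT CREATE ON TABLESPACE hddtablespace TO PUBLIC;
```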
Of course! I'm an idiot ... Thank you a lot!

A question: is it possible with Postgres to change the temp_tablespace for only a session (or page)? I have a cron job which takes a lot of memory. I would like to tell PostgreSQL to use this temp_tablespace only for that command, and not affect my user experience.

Thank you a lot :)
On Thu, Jun 11, 2015 at 5:56 AM, ben.play <benjamin.cohen@playrion.com> wrote:
> A question: is it possible with Postgres to change the temp_tablespace
> for only a session (or page)?
> I have a cron job which takes a lot of memory. I would like to tell
> PostgreSQL to use this temp_tablespace only for that command, and not
> affect my user experience.

You can do it with the PGOPTIONS environment variable:

PGOPTIONS="-c temp_tablespaces=blabla" psql some_db -f some_script.sql
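[Editor's note: an alternative, if the cron job runs a SQL script, is to set the GUC inside the session itself, so only that session spills to the slow disk; the tablespace name matches the one created earlier in the thread.]

```sql
-- At the top of the batch script; the setting lasts until the session
-- ends (or use SET LOCAL inside a transaction to scope it even tighter)
SET temp_tablespaces = 'hddtablespace';
```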