RE: Protect syscache from bloating with negative cache entries - Mailing list pgsql-hackers
From: Ideriha, Takeshi
Subject: RE: Protect syscache from bloating with negative cache entries
Msg-id: 4E72940DA2BF16479384A86D54D0988A6F426207@G01JPEXMBKW04
In response to: RE: Protect syscache from bloating with negative cache entries ("Ideriha, Takeshi" <ideriha.takeshi@jp.fujitsu.com>)
Responses: RE: Protect syscache from bloating with negative cache entries
List: pgsql-hackers
>From: Ideriha, Takeshi [mailto:ideriha.takeshi@jp.fujitsu.com]
>But at the same time, I did some benchmarks with only the hard limit option enabled
>and the time-related option disabled, because the figures for this case are not
>provided in this thread. So let me share them.

I'm sorry, but I'm taking back the results for the patch and correcting them.

I configured postgres (master) with only 'CFLAGS=-O2', but I misconfigured postgres
(patch applied) with --enable-cassert --enable-debug --enable-tap-tests 'CFLAGS=-O0'.
These debug options (especially --enable-cassert) caused enormous overhead.
(I thought I had checked the configure options... I was maybe tired.)
So I changed these to only 'CFLAGS=-O2' and re-measured.

>I did two experiments. One is to show that negative cache bloat is suppressed.
>This thread originated from the issue that the negative cache of pg_statistic
>bloats as creating and dropping a temp table is repeatedly executed.
>https://www.postgresql.org/message-id/20161219.201505.11562604.horiguchi.kyotaro%40lab.ntt.co.jp
>
>Using the script attached to the first email in this thread, I repeated create and
>drop temp table 10000 times.
>(The experiment was repeated 5 times. catalog_cache_max_size = 500kB.
> Compared master branch and patch with the hard memory limit.)
>
>Here are TPS and CacheMemoryContext 'used' memory (total - freespace), calculated
>by MemoryContextPrintStats(), at 100, 1000, and 10000 create-and-drop
>transactions. The result shows that cache bloating is suppressed after exceeding
>the limit (at 10000), but TPS declines regardless of the limit.
>
>number of tx (create and drop)   | 100    | 1000    | 10000
>-----------------------------------------------------------
>used CacheMemoryContext (master) | 610296 | 2029256 | 15909024
>used CacheMemoryContext (patch)  | 755176 | 880552  | 880592
>-----------------------------------------------------------
>TPS (master) | 414 | 407 | 399
>TPS (patch)  | 242 | 225 | 220

Correct one:

number of tx (create and drop) | 100 | 1000 | 10000
-----------------------------------------------------------
TPS (master) | 414 | 407 | 399
TPS (patch)  | 447 | 415 | 409

The results for master and patch are almost the same.

>The other experiment used Tomas's script posted a while ago. The scenario is to do
>'select 1' from multiple tables chosen randomly (uniform distribution).
>(The experiment was repeated 5 times. catalog_cache_max_size = 10MB.
> Compared master branch and patch with only the hard memory limit enabled.)
>
>Before doing the benchmark, I checked with a debug option that pruning happens
>only at 10000 tables. The result shows degradation regardless of whether it is
>before or after pruning. I personally still need the hard size limitation, but
>I'm surprised that the difference is so significant.
>
>number of tables | 100   | 1000  | 10000
>-----------------------------------------------------------
>TPS (master) | 10966 | 10654 | 9099
>TPS (patch)  | 4491  | 2099  | 378

Correct one:

number of tables | 100         | 1000        | 10000
-----------------------------------------------------------
TPS (master)     | 10966       | 10654       | 9099
TPS (patch)      | 11137 (+1%) | 10710 (+0%) | 772 (-91%)

It seems that before the cache exceeds the limit (no pruning at 100 and 1000
tables), the results are almost the same as master, but after exceeding the limit
(at 10000) the decline happens.

Regards,
Takeshi Ideriha
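P.S. The percentages in the corrected second table appear to be the patch's TPS
relative to master, truncated toward zero. A quick sanity check of that arithmetic
(illustrative Python; the dictionaries just restate the numbers from the table):

```python
# TPS from the corrected second experiment (key: number of tables).
master = {100: 10966, 1000: 10654, 10000: 9099}
patched = {100: 11137, 1000: 10710, 10000: 772}

for tables in (100, 1000, 10000):
    # Relative difference in percent, truncated toward zero (int()).
    delta = int((patched[tables] - master[tables]) / master[tables] * 100)
    print(f"{tables} tables: {patched[tables]} TPS ({delta:+d}%)")
# prints:
# 100 tables: 11137 TPS (+1%)
# 1000 tables: 10710 TPS (+0%)
# 10000 tables: 772 TPS (-91%)
```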