Thread: "Slow" query or just "Bad hardware"?
Hi,

I have a table with around 10 million entries. The webpage being rendered
hits at most 200 records, which are well distributed over the 10M, with an
average of 2 "references" per entry.

Is there any way to speed this query up more than it already is? Yes, run
subsequently it is blazingly fast, but since any one view hits only around
200 of the 10M rows, we most often don't hit the same query again.

# explain analyze SELECT "me"."created", "me"."created_initials",
"me"."updated", "me"."updated_initials", "me"."start_time",
"me"."end_time", "me"."notes", "me"."id", "me"."sequence_id",
"me"."database", "me"."name", "numbers"."reference_id",
"numbers"."evidence" FROM "reference" "me" LEFT JOIN "number" "numbers" ON
( "numbers"."reference_id" = "me"."id" ) WHERE ( "me"."sequence_id" IN (
34284, 41503, 42274, 42285, 76847, 78204, 104721, 126279, 274770, 274790,
274809, 305346, 307383, 307411, 309691, 311362, 344930, 352530, 371033,
371058, 507790, 517521, 517537, 517546, 526883, 558976, 4894317, 4976383,
1676203, 4700800, 688803, 5028679, 5028694, 5028696, 5028684, 5028698,
5028701, 5028676, 5028682, 5028686, 5028692, 5028689, 3048683, 5305427,
5305426, 4970187, 4970216, 4970181, 4970208, 4970196, 4970226, 4970232,
4970201, 4970191, 4970222, 4350307, 4873618, 1806537, 1817367, 1817432,
4684270, 4981822, 3172776, 4894299, 4894304, 4700798, 1120990, 4981817,
4831109, 4831036, 4831068, 4831057, 4831105, 4831038, 4831044, 4831081,
4831063, 4831051, 4831086, 4831049, 4831071, 4831075, 4831114, 4831093,
2635142, 4660208, 4660199, 4912338, 4660150, 4662011, 5307782, 4894286,
4894292, 4894296, 4894309, 4894313, 1428388, 1932290, 5306082, 2010148,
3979647, 4382006, 4220374, 1880794, 1526588, 774838, 1377100, 969316,
1796618, 1121046, 4662009, 963535, 5302610, 1121105, 688700, 688743,
688836, 688763, 688788, 1056859, 2386006, 2386015, 2386023, 4265832,
4231262, 4265743, 5302612, 1121056, 1121090, 1121074, 688659, 688650 ) )
ORDER BY "numbers"."reference_id";

                                          QUERY PLAN
--------------------------------------------------------------------------------
 Sort  (cost=56246.18..56275.20 rows=11606 width=299) (actual time=2286.900..2287.215 rows=389 loops=1)
   Sort Key: numbers.reference_id
   ->  Nested Loop Left Join  (cost=388.48..55462.63 rows=11606 width=299) (actual time=475.071..2284.502 rows=389 loops=1)
         ->  Bitmap Heap Scan on reference me  (cost=388.48..23515.97 rows=11606 width=191) (actual time=451.245..1583.966 rows=389 loops=1)
               Recheck Cond: (sequence_id = ANY ('{34284,41503, ... ,688659,688650}'::integer[]))
               ->  Bitmap Index Scan on reference_seq_idx  (cost=0.00..385.58 rows=11606 width=0) (actual time=422.691..422.691 rows=450 loops=1)
                     Index Cond: (sequence_id = ANY ('{34284,41503, ... ,688659,688650}'::integer[]))
         ->  Index Scan using ecn_ref_idx on number numbers  (cost=0.00..2.74 rows=1 width=108) (actual time=1.794..1.795 rows=0 loops=389)
               Index Cond: (numbers.reference_id = me.id)
 Total runtime: 2287.701 ms
(10 rows)

.. subsequent run: 32.367 ms

On an X4600 server with 32GB of RAM and an EqualLogic iSCSI SAN attached.

Jesper

--
Jesper Krogh
On Thu, 27 Mar 2008, Jesper Krogh wrote:

> # explain analyze SELECT "me"."created", "me"."created_initials",
> "me"."updated", "me"."updated_initials", "me"."start_time",
> "me"."end_time", "me"."notes", "me"."id", "me"."sequence_id",
> "me"."database", "me"."name", "numbers"."reference_id",
> "numbers"."evidence" FROM "reference" "me" LEFT JOIN "number" "numbers" ON
> ( "numbers"."reference_id" = "me"."id" ) WHERE ( "me"."sequence_id" IN (
> [long list of sequence ids snipped] ) )
> ORDER BY "numbers"."reference_id";

That looks like very reasonable performance, given that the database is
having to seek nearly a thousand times to collect the data from where it is
scattered over the disc. We had a thread a while ago about using aio or
fadvise to speed this sort of thing up (with some really, really good
initial test results).
Greg, is this still in active consideration?

You don't say whether there is much write traffic, or in what order the data
gets written to the tables. It may be a significant benefit to cluster the
tables on sequence_id or reference_id. If you have lots of write traffic,
make sure you re-cluster every now and again. Experiment with that, and see
if it helps.

Matthew

--
The only secure computer is one that's unplugged, locked in a safe, and
buried 20 feet under the ground in a secret location... and I'm not even too
sure about that one.                                   --Dennis Hughes, FBI
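The clustering suggested above could be sketched as follows, using the table and index names from the EXPLAIN output earlier in the thread. The `USING` form assumes PostgreSQL 8.3 or later; older releases spell it `CLUSTER indexname ON tablename`:

```sql
-- Rewrite each table in index order, so rows that are fetched together
-- sit together on disk; repeat after heavy write traffic.
CLUSTER reference USING reference_seq_idx;
CLUSTER "number" USING ecn_ref_idx;

-- The physical-order statistics (correlation) are stale after a rewrite,
-- so refresh them for the planner.
ANALYZE reference;
ANALYZE "number";
```

Note that CLUSTER takes an exclusive lock and rewrites the whole table, so on a 10M-row table it is something to schedule, not to run casually.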
Hm, so this table has 10 million entries and it does not fit in 32GB of RAM?

Could you investigate:

- the average size of the rows in both tables;
- a quick description of your table columns, especially the average size of
  your TEXT fields, in particular the large one(s) like comments etc. (don't
  bother about INTs unless you have something like 50 int columns);
- which fields get TOASTed and which don't, and the number of accesses to
  TOASTed fields in this query -- these could add 1 seek per field per
  fetched row if they're not cached;
- what other stuff in your database is using those gigabytes of RAM
  (indexes which are used often do count).

I would tend to think that you are not going to display 200 kilobytes of
text on your listing webpage; most likely something like 100 or 200 bytes of
text from each row, right? If that is the case, 10M rows * 200 bytes = 2GB
to keep cached in RAM, plus overhead, so it should work fast.

You may want to partition your table in two: one holding the fields that are
often used in bulk, search, and listings (especially when you list 200
rows), and the other holding the large fields which are only displayed on
the "show details" page.

Note that one (or several) large text fields will not kill your performance;
postgres will store them offline (TOAST) for you without you needing to ask,
so your main table stays small and well cached. Of course, if you grab that
large 10 kB text field 200 times to display the first 80 characters of it
followed by "..." in your listing page, then you're screwed ;) -- that's one
of the things to avoid.

However, if your "comments" field is small enough that PG doesn't want to
TOAST it offline (say, 500 bytes), but it still represents the bulk of your
table size (for instance you have just a few INTs beside it that you want to
search on quickly), then you may tell postgres to store the large field
offline (EXTERNAL, check the docs), and also please enable automatic
compression.
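A quick way to check the sizes asked about above (a sketch; `reference` is one of the table names from the thread, and these catalog queries assume a reasonably recent 8.x release with up-to-date statistics):

```sql
-- Heap size vs. total size (total includes TOAST data and indexes):
SELECT pg_size_pretty(pg_relation_size('reference'))       AS heap,
       pg_size_pretty(pg_total_relation_size('reference')) AS total;

-- Average stored width of each column, from ANALYZE statistics:
SELECT attname, avg_width
FROM pg_stats
WHERE tablename = 'reference'
ORDER BY avg_width DESC;

-- Per-column storage strategy
-- (p = plain, m = main, x = extended, e = external):
SELECT attname, attstorage
FROM pg_attribute
WHERE attrelid = 'reference'::regclass AND attnum > 0;
```

A large gap between `heap` and `total` indicates a lot of TOASTed or indexed data; `avg_width` shows which columns dominate the row size.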
If, however, you have something like 200 INT columns, or a few dozen small
TEXTs, or just way too many columns, TOAST is no help. In this case you must
fight bloat by identifying which columns of your table need to be accessed
often (for searches, listing, reporting, etc.) and which are not accessed
often (i.e. details page only, monthly reports, etc.). If you are lucky, the
columns in the first group will form a much smaller subset of your gigabytes
of data. Then you partition your table in two (vertically), so the small
table stays small.

EXAMPLE, on a community site:

- members table: huge; search is slow; the join to the forum tables to get
  the user's name is horribly slow because the cache is full and it seeks.
- push the members' profiles and other data that only show up on the details
  page to a second table: the main members table is much smaller, fits in
  RAM now, search is fast, and joins to members are also fast.

Words to remember: working set ;)
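The vertical split in that example could be sketched like this (the column names are hypothetical; the point is that the hot table keeps only what searches and listings touch):

```sql
-- Hot table: narrow rows, stays cached, used by search and listing pages.
CREATE TABLE members (
    id       serial PRIMARY KEY,
    username text NOT NULL,
    joined   timestamp NOT NULL DEFAULT now()
);

-- Cold table: wide, rarely-read fields, fetched only on the details page.
CREATE TABLE members_detail (
    member_id integer PRIMARY KEY REFERENCES members(id),
    bio       text,
    signature text
);
```

Listings then join only against `members`; the details page does one extra single-row lookup in `members_detail`.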
Also, sometimes offline TOASTing is evil. Say you have a forum: you want the
posts table to be CLUSTERed on (topic_id, post_id), so that displaying one
page with 30 posts on it uses 1 seek, not 30 seeks. But CLUSTER doesn't
touch the data that has been pushed offline into the toast table. So, in
that case, it can pay (big time, actually) to disable toasting, store the
data inline, and benefit from the clustering.

So, basically:

- Data that is seldom used, or used only in queries returning/examining 1
  row, but otherwise eats cache -> push it away (TOAST or partition).
- Data that is used very often in queries that return/examine lots of rows,
  especially if said rows are in sequence (cluster...) -> keep it inline.
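Keeping a column inline, as suggested here, could be sketched as follows (hypothetical table, column, and index names; note that a later message in this thread disputes whether CLUSTER actually skips TOASTed data):

```sql
-- MAIN keeps the value in the main heap, compressed, pushing it to the
-- TOAST table only as a last resort; PLAIN would also skip compression.
ALTER TABLE posts ALTER COLUMN body SET STORAGE MAIN;

-- Then lay the posts out in page order, assuming an index
-- posts_topic_post_idx on (topic_id, post_id) exists (8.3+ syntax).
CLUSTER posts USING posts_topic_post_idx;
```

SET STORAGE only affects newly stored values, so a rewrite (e.g. the CLUSTER itself, or an UPDATE) is needed before existing rows move inline.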
In response to "Jesper Krogh" <jesper@krogh.cc>:

> Hi
>
> I have a table with around 10 million entries The webpage rendered hits
> at most 200 records which are distributed well in the 10m with an average
> of 2 "references" pr. entry.
>
> Is there anyway to speed this query more up than allready. .. yes running
> it subsequenctly it is blazingly fast, but with view of around 200/10m we
> most often dont hit the same query again.

While all the other advice is good, what you really need to do to address
this is figure out what's in your cache and whether it's the right things.
Once you _know_ that (and aren't just speculating), you can start to use the
solutions that others have suggested to improve the situation. If you just
start trying things at random, you'll probably figure it out eventually
anyway, but I'm assuming you'll want a direct route.

So, I'm going to repeat something that I say on this mailing list about
twice a month: install MRTG or some equivalent and start graphing critical
database statistics. In your case, install the pg_buffercache addon and use
it to track how much of your shared buffers each table is using. Based on
your description of the problem, I doubt it will take more than a few days
to get a clear view of exactly what's going on (i.e. you'll probably see
table X clearing table Y out of the buffers, or something).

From there you can start making all kinds of decisions:

* Do you need more RAM overall?
* Is enough RAM allocated to shared_buffers? (You don't provide any details
  on config settings, so I can't guess at this.)
* Are there queries that can be better optimized to not fill up the cache
  with data they don't really need?
* Can switching up storage methods for TEXT fields help you out?
* Are your demands simply too high for what a SAN can provide, and would you
  be better off with a big RAID-10 of SCSI disks?
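The kind of pg_buffercache query meant here could look like this (a sketch based on the example in the contrib module's documentation; the module must be installed first):

```sql
-- Count shared_buffers pages per relation in the current database,
-- biggest cache consumers first.
SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c
  ON b.relfilenode = c.relfilenode
 AND b.reldatabase IN (0, (SELECT oid FROM pg_database
                           WHERE datname = current_database()))
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;
```

Sampling this periodically (e.g. from MRTG) shows which tables and indexes are actually resident, and whether the working set is being evicted between queries.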
HTH

> [original query and EXPLAIN ANALYZE output quoted in full; snipped]

--
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/
wmoran@collaborativefusion.com
Phone: 412-422-3463x4023
You might try turning "enable_bitmapscan" off; that will avoid the full index scan and the creation of the bitmap.
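Trying that setting is session-local and easy to undo, so the experiment could be sketched as:

```sql
-- Disable bitmap scans for this session only and compare the plans.
SET enable_bitmapscan = off;

EXPLAIN ANALYZE
SELECT ...;   -- re-run the query from the original post here

-- Restore the default behaviour.
RESET enable_bitmapscan;
```

With bitmap scans disabled the planner would typically fall back to a plain index scan on `reference_seq_idx`, which may or may not be cheaper for ~130 scattered keys; the EXPLAIN ANALYZE comparison is what decides it.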
- Luke
On 3/27/08 8:34 AM, "Jesper Krogh" <jesper@krogh.cc> wrote:
> [original message quoted in full; snipped]
PFC wrote:

> Also, sometimes offline TOASTing is evil:
> Say you have a forum, you want the posts table to be CLUSTER'ed on
> (topic_id, post_id) so displaying 1 page with 30 posts on it uses 1 seek,
> not 30 seeks. But CLUSTER doesn't touch the data that has been pushed
> offline in the toast table. So, in that case, it can pay (big time
> actually) to disable toasting, store the data inline, and benefit from
> cluster.

This claim is false -- CLUSTER does process the TOAST table along with the
main heap.

--
Alvaro Herrera                         http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support